| id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
59,623,695 | https://en.wikipedia.org/wiki/Dermatotrophy | Dermatotrophy is a rare reproductive behaviour in which the young feed on the skin of their parents. It has been observed in several species of caecilian, including Boulengerula taitana, and is claimed to exist in the newly discovered, formally undescribed species Dermophis donaldtrumpi.
References
Caecilians
Amphibian anatomy
Reproduction in animals | Dermatotrophy | Biology | 76 |
856,444 | https://en.wikipedia.org/wiki/Df%20%28Unix%29 | df (an abbreviation for disk free) is a standard Unix command used to display the amount of available disk space for file systems on which the invoking user has appropriate read access. df is typically implemented using the statfs or statvfs system calls.
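Because df is typically a thin wrapper over statvfs, its core output is easy to approximate. The following minimal Python sketch is an illustration only, not the coreutils implementation; the helper name report is made up. It prints 1K-block figures for one mount point in the style of df -k:

import os

def report(path="/"):
    # statvfs reports sizes in multiples of f_frsize, the fundamental block size
    st = os.statvfs(path)
    total = st.f_blocks * st.f_frsize // 1024   # total size, 1K blocks
    free = st.f_bfree * st.f_frsize // 1024     # free space, including reserved blocks
    avail = st.f_bavail * st.f_frsize // 1024   # free space for unprivileged users
    used = total - free
    pct = -(-used * 100 // (used + avail)) if used + avail > 0 else 0  # rounded up, like df
    print(f"{path:<20}{total:>12}{used:>12}{avail:>12}{pct:>4}%")

report("/")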
History
df for Unix-like systems has been part of the X/Open Portability Guide since issue 2 of 1987. It was inherited into the first version of POSIX and the Single Unix Specification. df first appeared in Version 1 AT&T Unix.
The version of df bundled in GNU coreutils was written by Torbjorn Granlund, David MacKenzie, and Paul Eggert. The command is available as a separate package for Microsoft Windows as part of the UnxUtils collection of native Win32 ports of common GNU Unix-like utilities.
Usage
The Single UNIX Specification synopsis for df is:
df [-k] [-P|-t] [file...]
-k: Use 1024-byte units, instead of the default 512-byte units, when writing space figures.
-P: Use a standard, portable, output format.
file: Write the amount of free space of the file system containing the specified file.
Most implementations of df in Unix and Unix-like operating systems include extra options. The BSD and GNU coreutils versions include -h, which lists free space in human-readable format, displaying units with the appropriate SI prefix (e.g. 10 MB); -i, which lists inode usage; and -l, which restricts the display to local file systems. GNU df includes -T as well, listing file system type information, and shows sizes in 1K blocks by default.
Specification
The Single Unix Specification (SUS) specifies that, by default, space is reported in blocks of 512 bytes, and that the output includes, at a minimum, the file system names and the amount of free space.
The use of 512-byte units is historical practice and maintains compatibility with ls and other utilities. This does not mandate that the file system itself be based on 512-byte blocks. The -k option was added as a compromise measure. It was agreed by the standard developers that 512 bytes was the best default unit because of its complete historical consistency on System V (versus the mixed 512/1024-byte usage on BSD systems), and that a -k option to switch to 1024-byte units was a good compromise. Users who prefer the more logical 1024-byte quantity can easily alias df to df -k without breaking many historical scripts relying on the 512-byte units.
The output with -P consists of one line of information for each specified file system. These lines are formatted as follows:
"%s %d %d %d %d%% %s\n", <file system name>, <total space>, <space used>, <space free>, <percentage used>, <file system root>
In the following list, all quantities expressed in 512-byte units (1024-byte units when -k is specified) are rounded up to the next higher unit. The fields are:
<file system name>: The name of the file system, in an implementation-defined format.
<total space>: The total size of the file system in 512-byte units. The exact meaning of this figure is implementation-defined, but should include <space used>, <space free>, plus any space reserved by the system not normally available to a user.
<space used>: The total amount of space allocated to existing files in the file system, in 512-byte units.
<space free>: The total amount of space available within the file system for the creation of new files by unprivileged users, in 512-byte units. When this figure is less than or equal to zero, it shall not be possible to create any new files on the file system without first deleting others, unless the process has appropriate privileges. The figure written may be less than zero.
<percentage used>: The percentage of the normally available space that is currently allocated to all files on the file system. This shall be calculated using the fraction <space used> / (<space used> + <space free>), expressed as a percentage. This percentage may be greater than 100 if <space free> is less than zero. The percentage value shall be expressed as a positive integer, with any fractional result causing it to be rounded to the next highest integer.
<file system root>: The directory below which the file system hierarchy appears.
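As a worked check of the percentage calculation, take the /dev/sda2 line from the sample output below: with 723,009,800 blocks used and 1,008,791,744 available, the fraction is 723,009,800 / (723,009,800 + 1,008,791,744) ≈ 41.75%, which rounds up to the 42% shown in the Use% column.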
Example
Example outputs of the df command:
$ df
Filesystem 1K-blocks Used Available Use% Mounted on
udev 48764976 0 48764976 0% /dev
tmpfs 9757068 173100 9583968 2% /run
/dev/sda2 1824504008 723009800 1008791744 42% /
tmpfs 48785328 0 48785328 0% /dev/shm
tmpfs 5120 0 5120 0% /run/lock
tmpfs 48785328 0 48785328 0% /sys/fs/cgroup
/dev/sda1 523248 3672 519576 1% /boot/efi
$ df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
udev 12191244 500 12190744 1% /dev
tmpfs 12196332 702 12195630 1% /run
/dev/sda2 115859456 2583820 113275636 3% /
tmpfs 12196332 1 12196331 1% /dev/shm
tmpfs 12196332 5 12196327 1% /run/lock
tmpfs 12196332 16 12196316 1% /sys/fs/cgroup
/dev/sda1 0 0 0 - /boot/efi
$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 47G 0 47G 0% /dev
tmpfs 9.4G 170M 9.2G 2% /run
/dev/sda2 1.7T 690G 963G 42% /
tmpfs 47G 0 47G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 47G 0 47G 0% /sys/fs/cgroup
/dev/sda1 511M 3.6M 508M 1% /boot/efi
See also
List of Unix commands
du (Unix)
References
External links
Manual pages
df — manual page from GNU coreutils
The df Command – by The Linux Information Project (LINFO)
Standard Unix programs
Unix SUS2008 utilities | Df (Unix) | Technology | 1,300 |
361,331 | https://en.wikipedia.org/wiki/Rubik%27s%20Magic | Rubik's Magic, like the Rubik's Cube, is a mechanical puzzle invented by Ernő Rubik and first manufactured by Matchbox in the mid-1980s.
The puzzle consists of eight black square tiles (changed to red squares with goldish rings in 1997) arranged in a 2 × 4 rectangle; diagonal grooves on the tiles hold wires that connect them, allowing them to be folded onto each other and unfolded again in two perpendicular directions (assuming that no other connections restrict the movement) in a manner similar to a Jacob's ladder toy. The front side of the puzzle shows, in the initial state, three separate, rainbow-colored rings; the back side consists of a scrambled picture of three interconnected rings. The goal of the game is to fold the puzzle into a heart-like shape and unscramble the picture on the back side, thus interconnecting the rings.
Numerous ways to accomplish this exist, and experienced players can transform the puzzle from its initial state into the solved state in less than 2 seconds. Other challenges for Rubik's Magic include reproducing given shapes (which are often three-dimensional), sometimes with certain tiles required to be in certain positions and/or orientations.
History
Rubik's Magic was first manufactured by Matchbox in 1986. Professor Rubik holds both a Hungarian patent (HU 1211/85, issued 19 March 1985) and a US patent (US 4,685,680, issued 11 August 1987) on the mechanism of Rubik's Magic.
In 1987, Rubik's Magic: Master Edition was published by Matchbox; it consisted of 12 silver tiles arranged in a 2 × 6 rectangle, showing 5 interlinked rings that had to be unlinked by transforming the puzzle into a shape reminiscent of a W. Around the same time, Matchbox also produced Rubik's Magic Create the Cube, a "Level Two" version of Rubik's Magic, in which the puzzle is solved when folded into a cube with a base of two tiles, and the tile colors match at the corners of the cube. It did not have as wide a release, and is rare to find.
In 1996, the original version of Rubik's Magic was re-released by Oddzon, this time with yellow rings on a red background; other versions (for example, a variant of the original with silver tiles instead of black ones) were also produced, and there also was a strategy game based on Rubik's Magic. An unlicensed 2 × 8 version was also produced, with spheres printed on its tiles instead of rings. Custom versions as large as 2 × 12 have been built using kits available from Oddzon.
Details
It can be seen that the total number of 2 × 4 rectangles that can possibly be created using Rubik's Magic is only thirty-two; these can be created from eight distinct chains. The easiest way to classify chains is by means of the middle tile of the puzzle's finished form (the only tile that has segments of all three rings) and the tile next to it featuring a yellow/orange ring segment (the indicator tile).
Every chain either has the middle tile on the outside (O) or the inside (I) of the chain; if it is arranged so that the indicator tile is to the right of the middle tile, then the position of the ring segment on the indicator tile can either be the upper left (UL), upper right (UR), lower left (LL), or lower right (LR) corner. The position and orientation of the remaining tiles are then determined by the middle and indicator tiles, and eight distinct chains (OUL to ILR) are obtained, although the naming convention is not standardized.
Similarly, the 2 × 4 rectangle forms can be categorized. Each of these forms has exactly one chain associated with it, and each chain yields four different rectangle forms, depending on the position of the folding edge with regard to the middle tile. By appending one of the numbers 0, 1, 2, or 3 to the chain's name, according to the number of tiles to the right of the middle tile before the folding edge, a categorization of the rectangle forms is obtained. The starting position, for example, is rectangle form OUR2.
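Because the classification is purely combinatorial, the 32 rectangle-form names can be generated mechanically. A short Python sketch of the (non-standardized) naming convention described above:

# 8 chains: middle tile outside (O) or inside (I) the chain, and the
# indicator ring segment in one of four corners of the indicator tile
chains = [side + corner for side in "OI" for corner in ("UL", "UR", "LL", "LR")]
# 4 rectangle forms per chain: 0-3 tiles right of the middle tile before the fold
rectangles = [chain + str(n) for chain in chains for n in range(4)]
assert len(chains) == 8 and len(rectangles) == 32
print(rectangles)   # includes 'OUR2', the starting position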
A more recent version of the puzzle is rainbow-colored and has silver rings; an additional challenge is to match the silver rings and the colored squares, which makes the puzzle more complicated.
A similar classification can also be obtained for the heart-shaped forms of the puzzle, of which 64 exist.
Analysis
One question when analyzing Rubik's Magic concerns its state space: What is the set of configurations that can be reached from the initial state? This question is harder to answer than for Rubik's Cube, because the set of operations on Rubik's Magic does not form a mathematical group.
The basic operation (move) consists of transferring a hinge between two tiles T1 and T2, from one pair of edges (E11 of T1 and E21 on T2) to another pair E12 and E22.
Here, edges E11 and E12 are adjacent on tile T1, and so are edges E21 and E22 on tile T2, but in opposite order. See the figure below for an example, where E11 is the East edge of the yellow tile, E21 is the West edge of the red tile, and both E12 and E22 are the North edges.
In order to carry out such a move, the hinge being moved cannot cross another hinge. Thus, the two hinges on a tile can take up one of five relative positions (see figure below). The positions are encoded as a number in the range from -2 to +2, called the wrap. The difference between wrap -2 and wrap +2 is the order of the neighboring tiles (which one is on top). The total wrap of a configuration is calculated as the alternating sum of the wraps of the individual tiles in the chain.
The total wrap is invariant under a move. Thus, one can calculate the number of theoretically possible shapes of the chain (disregarding the patterns on the individual tiles) as 1351.
Furthermore, the other tiles in the chain have to move through space appropriately to allow the folding and unfolding needed to carry out a move. This limits the practically reachable number of configurations further; that number also depends on how much stretching of the wires is tolerated.
Records
The world record for a single solve of the Magic is 0.69 seconds, set by Yuxuan Wang, who also holds the record for an average of five solves, 0.76 seconds, set at the Beijing Summer Open 2011 competition. Because the World Cube Association stopped recognizing Rubik's Magic as an official event in 2012, Yuxuan Wang holds the permanent world record for this puzzle.
Top 5 Magic singles
Top 5 solvers by average of 5 solves
Rubik's Magic: Master Edition
Rubik's Magic: Master Edition (most commonly known as Master Magic) was manufactured by Matchbox in 1987. It is a modification of the original Rubik's Magic, with 12 tiles instead of the original's 8. The puzzle has 12 panels interconnected with nylon wires in a 2 × 6 rectangular shape, measuring approximately 4.25 inches (10.5 cm) by 13 inches (32 cm). The goal of the game is the same as for Rubik's Magic: to fold the puzzle from a 2 × 6 rectangular shape into a W-like shape with a certain tile arrangement. Initially, the front side shows a set of 5 linked rings. Once solved, the puzzle takes the shape of the letter W and shows 5 unlinked rings on what was the back side in the initial state.
As a puzzle, the Master Edition is actually simpler than the original Rubik's Magic. With more hinges, the player can work on one part, mostly ignoring the other parts. The minimal solution involves 16 quarter-turn moves. There are multiple solutions. The puzzle was an official World Cube Association (WCA) event from 2003 to 2012.
Top 5 singles
Top 5 solvers by average of 5 solves
Reviews
Jeux & Stratégie #42
1986 Games 100
See also
Pocket Cube
Rubik's Cube
Rubik's Revenge
Professor's Cube
V-Cube 6
V-Cube 7
V-Cube 8
Combination puzzles
Mechanical puzzles
Jacob's ladder (toy)
References
External links
Pictures of Rubik's Magic in various configurations
Detailed description and analysis
List of all 1351 theoretically possible shapes (Legend: = stands for wrap -2; - stands for wrap -1; 0 stands for wrap 0; + stands for wrap +1; # stands for wrap +2)
Categorising folding plate puzzles (plus tips)
New themes and different (solving-wise) mechanical types of folding plate puzzles
Mechanical puzzles
Combination puzzles
Hungarian inventions
1985 works
1985 introductions
1980s toys | Rubik's Magic | Mathematics | 1,856 |
417,815 | https://en.wikipedia.org/wiki/Relativistic%20rocket | Relativistic rocket means any spacecraft that travels close enough to light speed for relativistic effects to become significant. The meaning of "significant" is a matter of context, but often a threshold velocity of 30% to 50% of the speed of light (0.3c to 0.5c) is used. At 30% c, the difference between relativistic mass and rest mass is only about 5%, while at 50% it is 15% (at 0.75c the difference is over 50%); so above such speeds special relativity is needed to accurately describe motion, while below this range Newtonian physics and the Tsiolkovsky rocket equation usually give sufficient accuracy.
In this context, a rocket is defined as an object carrying all of its reaction mass, energy, and engines with it.
No known technology can bring a rocket to relativistic speed. Relativistic rockets require huge advances in spacecraft propulsion, energy storage, and engine efficiency, which may or may not ever be possible. Nuclear pulse propulsion could theoretically reach 0.1c using currently known technology, but would still require many engineering advances to achieve this. The relativistic gamma factor at 10% of light velocity is 1.005; a rocket at 0.1c is thus considered non-relativistic, since its motion is still quite accurately described by Newtonian physics alone.
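The thresholds quoted above follow directly from the Lorentz factor γ = 1/√(1 − v²/c²); a quick numerical check in Python:

from math import sqrt

def gamma(beta):
    # Lorentz factor for a speed given as a fraction of c
    return 1 / sqrt(1 - beta ** 2)

for beta in (0.1, 0.3, 0.5, 0.75):
    print(f"{beta}c: gamma = {gamma(beta):.3f}, mass increase = {gamma(beta) - 1:.1%}")
# 0.1c: 1.005; 0.3c: about 5%; 0.5c: about 15%; 0.75c: just over 51%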
Relativistic rockets are usually discussed in the context of interstellar travel, since most would need a great deal of space to reach such speeds. They also appear in thought experiments such as the twin paradox.
Relativistic rocket equation
As with the classical rocket equation, one wants to calculate the velocity change $\Delta v$ that a rocket can achieve depending on the exhaust speed $v_e$ and the mass ratio, i.e. the ratio of starting rest mass $m_0$ to rest mass at the end of the acceleration phase (dry mass) $m_1$.
In order to make the calculations simpler, we assume that the acceleration is constant (in the rocket's reference frame) during the acceleration phase; the result is nonetheless valid if the acceleration varies, as long as the exhaust velocity is constant.
In the nonrelativistic case, one knows from the (classical) Tsiolkovsky rocket equation that
$$\Delta v = v_e \ln \frac{m_0}{m_1}$$
Assuming constant acceleration $a$, the time span $t$ during which the acceleration takes place is
$$t = \frac{v_e}{a} \ln \frac{m_0}{m_1}$$
In the relativistic case, the equation is still valid if $a$ is the acceleration in the rocket's reference frame and $t$ is the rocket's proper time, because at velocity 0 the relationship between force and acceleration is the same as in the classical case. Solving this equation for the ratio of initial mass to final mass gives
$$\frac{m_0}{m_1} = \exp\left(\frac{a t}{v_e}\right)$$
where "exp" is the exponential function. Another related equation gives the mass ratio in terms of the end velocity $\Delta v$ relative to the rest frame (i.e. the frame of the rocket before the acceleration phase):
$$\frac{m_0}{m_1} = \left(\frac{1 + \Delta v/c}{1 - \Delta v/c}\right)^{\frac{c}{2 v_e}}$$
For constant acceleration, $\Delta v / c = \tanh(a t / c)$ (with $a$ and $t$ again measured on board the rocket), so substituting this equation into the previous one and using the hyperbolic function identity $\tanh x = \frac{e^{2x} - 1}{e^{2x} + 1}$ returns the earlier equation $\frac{m_0}{m_1} = e^{\,a t / v_e}$.
By applying the Lorentz transformation, one can calculate the end velocity $\Delta v$ as a function of the rocket frame acceleration $a$ and the rest frame time $t'$; the result is
$$\Delta v = \frac{a t'}{\sqrt{1 + \left(a t' / c\right)^2}}$$
The time in the rest frame relates to the proper time $t$ by the hyperbolic motion equation:
$$t' = \frac{c}{a} \sinh\left(\frac{a t}{c}\right)$$
Substituting the proper time from the Tsiolkovsky equation and substituting the resulting rest frame time $t'$ in the expression for $\Delta v$, one gets the desired formula:
$$\Delta v = c \tanh\left(\frac{v_e}{c} \ln \frac{m_0}{m_1}\right)$$
The formula for the corresponding rapidity $\Delta r$ (the inverse hyperbolic tangent of the velocity divided by the speed of light) is simpler:
$$\Delta r = \frac{v_e}{c} \ln \frac{m_0}{m_1}$$
Since rapidities, contrary to velocities, are additive, they are useful for computing the total rapidity, and hence the final velocity, of a multistage rocket.
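Because rapidities add, a multistage calculation reduces to a sum. The Python sketch below uses purely illustrative stage values to compute the final speed of a hypothetical two-stage rocket from the per-stage rapidity formula above:

from math import log, tanh

def stage_rapidity(ve_over_c, mass_ratio):
    # rapidity gained by one stage: (ve/c) * ln(m0/m1)
    return ve_over_c * log(mass_ratio)

# hypothetical example: two stages, each with ve = 0.5c and m0/m1 = 3
stages = [(0.5, 3.0), (0.5, 3.0)]
total_rapidity = sum(stage_rapidity(ve, mr) for ve, mr in stages)
beta = tanh(total_rapidity)   # final speed as a fraction of c
print(f"total rapidity = {total_rapidity:.4f}, final speed = {beta:.3f}c")  # 0.800c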
Matter-antimatter annihilation rockets
It is clear from the above calculations that a relativistic rocket would likely need to be antimatter-fired. Other antimatter rockets in addition to the photon rocket that can provide a 0.6c specific impulse (studied for basic hydrogen-antihydrogen annihilation, no ionization, no recycling of the radiation) needed for interstellar flight include the "beam core" pion rocket. In a pion rocket, frozen antihydrogen is stored inside electromagnetic bottles. Antihydrogen, like regular hydrogen, is diamagnetic which allows it to be electromagnetically levitated when refrigerated. Temperature control of the storage volume is used to determine the rate of vaporization of the frozen antihydrogen, up to a few grams per second (hence several petawatts when annihilated with equal amounts of matter). It is then ionized into antiprotons which can be electromagnetically accelerated into the reaction chamber. The positrons are usually discarded since their annihilation only produces harmful gamma rays with negligible effect on thrust. However, non-relativistic rockets may exclusively rely on these gamma rays for propulsion. This process is necessary because un-neutralized antiprotons repel one another, limiting the number that may be stored with current technology to less than a trillion.
Design notes on a pion rocket
The pion rocket has been studied independently by Robert Frisbee and Ulrich Walter, with similar results. Pions, short for pi-mesons, are produced by proton-antiproton annihilation. The antihydrogen, or the antiprotons extracted from it, will be mixed with a mass of regular protons pumped into the magnetic confinement nozzle of a pion rocket engine, usually as part of hydrogen atoms. The resulting charged pions have a speed of 0.94c (i.e. β = 0.94) and a Lorentz factor of 2.93, which extends their lifespan enough to travel 21 meters through the nozzle before decaying into muons. 60% of the pions will have either a negative or a positive electric charge; the remaining 40% will be neutral. The neutral pions decay immediately into gamma rays. These cannot be reflected by any known material at the energies involved, though they can undergo Compton scattering. They can be absorbed efficiently by a shield of tungsten placed between the pion rocket engine reaction volume and the crew modules and various electromagnets, protecting them from the gamma rays. The consequent heating of the shield will make it radiate visible light, which could then be collimated to increase the rocket's specific impulse. The remaining heat will also require the shield to be refrigerated. The charged pions would travel in helical spirals around the axial electromagnetic field lines inside the nozzle, and in this way the charged pions could be collimated into an exhaust jet moving at 0.94c. In realistic matter/antimatter reactions, this jet represents only a fraction of the reaction's mass-energy: over 60% of it is lost as gamma rays, collimation is not perfect, and some pions are not reflected backward by the nozzle. Thus, the effective exhaust speed for the entire reaction drops to just 0.58c. Alternate propulsion schemes include physical confinement of hydrogen atoms in an antiproton- and pion-transparent beryllium reaction chamber, with collimation of the reaction products achieved with a single external electromagnet; see Project Valkyrie.
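The 21-metre pion flight path can be verified from the figures in the text, using the charged pion's mean proper lifetime of about 2.6 × 10⁻⁸ s (a standard value, assumed here since it is not stated in the article):

# time-dilated flight distance of a charged pion before it decays into a muon
c = 2.998e8                # speed of light, m/s
tau = 2.6e-8               # charged pion mean proper lifetime, s (assumed standard value)
beta, gamma = 0.94, 2.93   # speed and Lorentz factor from the text
print(f"{gamma * beta * c * tau:.1f} m")   # about 21.5 m, matching the ~21 m quoted above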
See also
The Bussard ramjet
General references
The star flight handbook, Matloff & Mallove, 1989.
Mirror matter: pioneering antimatter physics, Dr. Robert L Forward, 1986
References
External links
Physics FAQs: The Relativistic Rocket
Javascript that calculates the Relativistic Rocket Equation
Spacetime Physics: Introduction to Special Relativity (1992). W. H. Freeman,
The Relativistic Photon Rocket
Interstellar travel
Rocket propulsion | Relativistic rocket | Astronomy | 1,597 |
6,142,327 | https://en.wikipedia.org/wiki/Solaris%20Trusted%20Extensions | Solaris Trusted Extensions is a set of security extensions incorporated in the Solaris 10 operating system by Sun Microsystems, featuring a mandatory access control model. It succeeds Trusted Solaris, a family of security-evaluated operating systems based on earlier versions of Solaris.
Solaris 10 5/09 is Common Criteria certified at Evaluation Assurance Level EAL4+ against the CAPP, RBACPP, and LSPP protection profiles.
Overview
Certain Trusted Solaris features, such as fine-grained privileges, are now part of the standard Solaris 10 release. Beginning with Solaris 10 11/06, Solaris includes a component called Solaris Trusted Extensions, which gives it the additional features necessary to position it as the successor to Trusted Solaris. Inclusion of these features in the mainstream Solaris release marks a significant change from Trusted Solaris, as it is no longer necessary to use a different Solaris release with a modified kernel for labeled security environments. Solaris Trusted Extensions is an OpenSolaris project.
Trusted Extensions additions and enhancements include:
Accounting
Role-Based Access Control
Auditing
Device Allocation
Mandatory Access Control Labeling
Solaris Trusted Extensions enforces a mandatory access control policy on all aspects of the operating system, including device access, file, networking, print and window management services. This is achieved by adding sensitivity labels to objects, thereby establishing explicit relationships between these objects. Only appropriate (and explicit) authorization allows applications and users read and/or write access to the objects.
The component also provides labeled security features in a desktop environment. Apart from extending support for the Common Desktop Environment from the Trusted Solaris 8 release, it delivers the first labeled environment based on GNOME. Solaris Trusted Extensions facilitate the access of data at multiple classification levels through a single desktop environment.
Solaris Trusted Extensions also delivers labeled device access and labeled network communication (through the CIPSO standard).
CIPSO is used to pass security information within and between labeled zones.
Solaris Trusted Extensions complies with the Federal Information Processing Standard (FIPS).
Trusted Solaris history
1999 Trusted Solaris 7
1996 Trusted Solaris 2.5.1 - ITSEC Certified for E3 / F-B1
1995 Trusted Solaris 1.2 - ITSEC Certified for E3 / F-B1
1992 SunOS Compartmented Mode Workstation 1.0 - ITSEC Certified for E3 / F-B1
1990 SunOS Multilevel Security 1.0 - TCSEC Conformance (1985 Orange Book)
References
External links
Solaris Trusted Extensions Official Website
OpenSolaris: Solaris Trusted Extensions project
Solaris Trusted Extensions press release
Operating system security
Sun Microsystems software
Proprietary operating systems | Solaris Trusted Extensions | Technology | 534 |
33,612,621 | https://en.wikipedia.org/wiki/Vdio | Vdio Inc. was an internet television service created by Skype and Rdio co-founder Janus Friis in 2011. On April 2, 2013, Vdio was officially launched for Rdio premium subscribers. Vdio's platform was a pay-per-view system, in contrast to Rdio's unlimited streaming. Like the main players in the video streaming market, Amazon and Netflix, Vdio offered a varied catalog that ranged from cult classic titles to new releases from major studios. From April 2013, the service was available in the United States and the United Kingdom (it has since been discontinued). Current Rdio subscribers were given US$25 in credit to spend on Vdio.
On August 6, 2013, the Vdio service was relaunched in Canada, as announced on the Rdio Canada blog. The service remained available in the United States, the United Kingdom, and Canada until it was ultimately suspended in all three countries. After closing, Vdio's URL redirected to the main Rdio website.
Several of Friis' other companies, including the video-focused startup Joost, used peer-to-peer technology to achieve lower cost content delivery.
On December 27, 2013, Vdio announced over email that it was discontinuing its beta program, citing that it was not able to provide the "differentiated customer experience we had hoped for". They also posted a short document for existing customers.
References
External links
Rdio
Online mass media companies of the United States
Defunct video on demand services
Internet properties established in 2013
Internet properties disestablished in 2013 | Vdio | Technology | 326 |
41,270,451 | https://en.wikipedia.org/wiki/Broadband%20acoustic%20resonance%20dissolution%20spectroscopy | Broadband acoustic resonance dissolution spectroscopy (BARDS) is a technique in analytical chemistry. Developed in the late 2000s, it involves the analysis of the changes in sound frequency generated when a solute dissolves in a solvent, by harnessing the hot chocolate effect.
The technique is partly based on the solubility difference of gas in pure solvents and in solutions. The dissolution of a compound in a pure solvent results in the generation of gas bubbles in the solvent, due to the lowering of gas solubility in the resulting solution, as well as the introduction of gases with the solute. The presence of these gas bubbles increases the compressibility of the solution, thereby lowering the velocity of sound in the solution. This effect can be monitored by means of the frequency change of acoustic resonances that are mechanically produced in the solvent.
Principles of the BARDS response
Water is approximately 800 times more dense than air. However, air is approximately 15,000 times more compressible than water. The velocity of sound, υ, in a homogeneous liquid or gas is given by the following equation:
$$\upsilon = \sqrt{\frac{1}{\rho K}}$$
where ρ is the mass density and K the compressibility of the gas or liquid. K is given as:
$$K = \frac{1}{V}\,\frac{dV}{dp}$$
where V is the volume of the medium, and dV is the volume decrease due to the pressure increase dp of the sound wave. When water is filled with air bubbles, the fluid density is essentially the density of water, and the air will contribute significantly to the compressibility. Crawford derived the relationship between fractional bubble volume and sound velocity in water, and hence the sound frequency in water, given as:
$$\left(\frac{f_w}{f}\right)^2 = \left(\frac{\upsilon_w}{\upsilon}\right)^2 = 1 + \alpha V_a$$
where υw and υ are the velocities of sound in pure and bubble-filled water, respectively, fw and f are the frequencies of sound in pure and bubble-filled water, respectively, Va is defined as the fractional volume occupied by gas bubbles, and α is a constant. When the solvent is water and the gas is air, the value of α is 1.49 × 104.
The effects of changes in solution density and solution compressibility are additive and reinforce the phenomenon, causing a significant decrease in the velocity of sound and, therefore, a significant decrease in the frequency of sound passing through an aerated solution.
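Under Crawford's relation above, even minute gas fractions produce large frequency shifts. A minimal Python sketch, using the value α = 1.49 × 10⁴ for air in water given above:

from math import sqrt

ALPHA = 1.49e4   # air in water, as given in the text

def resonance_frequency(f_pure, gas_fraction):
    # Crawford's relation: f = f_w / sqrt(1 + alpha * Va)
    return f_pure / sqrt(1 + ALPHA * gas_fraction)

# a gas volume fraction of about 0.02% is enough to halve a 10 kHz resonance
for va in (0.0, 1e-5, 1e-4, 2e-4, 1e-3):
    print(f"Va = {va:.0e}: f = {resonance_frequency(10_000, va):7.0f} Hz")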
Applications
BARDS has significant potential as an analytical technique. Applications researched so far include:
Batch consistency analysis
Blend uniformity analysis
Polymorph and pseudopolymorph discrimination
Monitoring of supersaturation of solutions and rates of outgassing
See also
The hot chocolate effect, the physical phenomenon on which the technique is based.
Acoustic resonance spectroscopy
References
Spectroscopy | Broadband acoustic resonance dissolution spectroscopy | Physics,Chemistry | 527 |
13,213,116 | https://en.wikipedia.org/wiki/Captology | Captology is the study of computers as persuasive technologies. This area of inquiry explores the overlapping space between persuasion in general (influence, motivation, behavior change, etc.) and computing technology. This includes the design, research, and program analysis of interactive computing products (such as the Web, desktop software, specialized devices, etc.) created for the purpose of changing people's attitudes or behaviors.
B. J. Fogg derived the term captology in 1996 from an acronym: Computers As Persuasive Technologies. In 2003, he published the first book on captology, entitled Persuasive Technology: Using Computers to Change What We Think and Do.
According to Fogg, who coined both terms and created the foundation for both areas, captology is not the same thing as behavior design.
See also
Is Google Making Us Stupid?
Humu (software)
References
Further reading
External links
The Stanford University Persuasive Technology Lab
The Web Credibility Project
Persuasive Computers: Perspectives and Research Directions
Computing culture
1990s neologisms
Persuasion | Captology | Technology | 220 |
4,898,505 | https://en.wikipedia.org/wiki/Bureau%20of%20Ships | The United States Navy's Bureau of Ships (BuShips) was established by Congress on 20 June 1940, by a law which consolidated the functions of the Bureau of Construction and Repair (BuC&R) and the Bureau of Engineering (BuEng). The new bureau was to be headed by a chief and deputy-chief, one selected from the Engineering Corps (Marine Engineer) and the other from the Construction Corps (Naval Architect). The chief of the former Bureau of Engineering, Rear Admiral Samuel M. "Mike" Robinson, was named BuShips' first chief, while the former chief of the Bureau of Construction & Repair, Rear Admiral Alexander H. Van Keuren, was named as BuShips' first Deputy-Chief. The bureau's responsibilities included supervising the design, construction, conversion, procurement, maintenance, and repair of ships and other craft for the Navy; managing shipyards, repair facilities, laboratories, and shore stations; developing specifications for fuels and lubricants; and conducting salvage operations.
BuShips was abolished by DOD Order of 9 March 1966, as part of the general overhaul of the Navy's bureau system of material support. BuShips was succeeded by the Naval Ship Systems Command (NAVSHIPS), known as the Naval Sea Systems Command or NAVSEA since 1974.
Origins
The Bureau of Ships had its origins when the first ship of a new destroyer class to be delivered was found to be heavier than designed and dangerously top-heavy in early 1939. It was determined that an underestimate by BuEng of the weight of a new machinery design was responsible, and that BuC&R did not have sufficient authority to detect or correct the error during the design process. Initially, Acting Secretary of the Navy Charles Edison proposed consolidation of the design divisions of the two bureaus. When the bureau chiefs could not agree on how to do this, he replaced both chiefs in September 1939. The consolidation was finally effected by a law passed by Congress on 20 June 1940.
History
The Bureau of Ships was initially organized in five divisions by 15 August 1940: Design, War Plans, Shipbuilding, Maintenance, and Administration. At the start it was tasked with implementing the massive Fiscal Year 1940 (FY40) naval procurement plan, which included 11 aircraft carriers, nine battleships, six large cruisers, 57 other cruisers, 95 destroyers, 73 submarines, and dozens of auxiliary vessels (most of the battleships and large cruisers were never completed). By late 1942 a reorganization subordinated Design as a branch of Shipbuilding, a Radio division (which included sonar) was created from the former Radio branch of the Design division, and Finance became a division. By mid-1945 the Radio division had become the Electronics division, and Shore and Contracts divisions had been added. The entry of the US into World War II on 7 December 1941 resulted in the FY42 procurement plan and its component war emergency programs, which dwarfed FY40 by projecting 20 aircraft carriers, 50 escort carriers, 35 cruisers, 144 destroyers, 750 destroyer escorts, 127 submarines, and many other ships. The escort carriers and destroyer escorts were ship types that had not been built before, and many of the projected ships were cancelled in 1944–45. From its inception in 1940 BuShips supervised the building of a larger navy than any previous one in the space of five years. A media release on 22 May 1945 stated that 8 million tons of new ships costing 17 billion dollars (in 1945 money) had been built during the war, and a further 5 million tons of existing ships had been acquired or converted. On 7 December 1941 the total tonnage of the fleet was 2,680,000 tons. In numbers of ships, 7,695 vessels were on hand in December 1941, including landing craft. Over 100,000 vessels and landing craft were built during the war, including 1,150 combatants, 557 auxiliary ships, and 82,266 landing craft.
After 1947, BuShips purchased ships for the Departments of the Army and the Air Force, coordinated Department of Defense (DOD) shipbuilding activities, and coordinated navy repair and conversion programs with other federal agencies. By 1949 the Naval Reactors branch of BuShips had been established under Hyman G. Rickover, which resulted in the highly successful naval nuclear power program. By 1955 Naval Reactors had developed the first nuclear-powered submarine, followed by the first nuclear-powered ballistic missile submarine in 1960, with other BuShips branches responsible for the non-nuclear portions of those submarines. In the 1950s BuShips was responsible for procuring the first supercarriers and for developing new ship types to carry naval surface-to-air missiles, notably guided missile "frigates" (hull classification symbol DLG). The Bureau established a formal program of value engineering (VE) in 1957, overseen by Lawrence D. Miles, an engineer who had launched VE at the General Electric Company in 1947, and Raymond Fountain, also from G.E.
In 1966 BuShips was succeeded by the Naval Ship Systems Command (NAVSHIPS), known as the Naval Sea Systems Command or NAVSEA since 1974.
Chiefs of the Bureau
The following is an incomplete list of individuals who served as chief of the Bureau of Ships.
Chief, Rear Admiral Samuel M. "Mike" Robinson, July 1940 – January 1942 (1st Chief, Bureau of Ships); Deputy Chief, RAdm. Alexander H. Van Keuren
Chief, Rear Admiral Alexander H. Van Keuren, January 1942 – November 1942; Deputy Chief, RAdm. Claud Ashton Jones (Medal of Honor recipient)
Chief, Rear Admiral Edward L. "Ned" Cochrane, November 1942 – 1946; Deputy Chief, RAdm. Earle W. Mills
Chief, Rear Admiral Earle W. Mills, 1946-
Chief, Rear Admiral Homer N. Wallin, 1951-1953
Rear Admiral Nathan Sonenshein, early 1970s
References
External links
Archives of the Bureau of Ships at NARA
1940 establishments in the United States
1966 disestablishments in the United States
Ships
Marine engineering organizations
Military units and formations established in 1940
Government agencies disestablished in 1966 | Bureau of Ships | Engineering | 1,247 |
1,021,628 | https://en.wikipedia.org/wiki/Chernobyl%20exclusion%20zone | The Chernobyl Nuclear Power Plant Zone of Alienation, also called the 30-Kilometre Zone or simply The Zone, was established shortly after the 1986 Chernobyl disaster in the Ukrainian SSR of the Soviet Union.
Initially, Soviet authorities declared an exclusion zone spanning a 30-kilometre radius around the Chernobyl Nuclear Power Plant, designating the area for evacuations and placing it under military control. Its borders have since been altered to cover a larger area of Ukraine: it includes the northernmost part of Vyshhorod Raion in Kyiv Oblast, and also adjoins the Polesie State Radioecological Reserve in neighbouring Belarus. The Chernobyl exclusion zone is managed by an agency of the State Emergency Service of Ukraine, while the power plant and its sarcophagus and the New Safe Confinement are administered separately.
The current area of approximately 2,600 km² in Ukraine is where radioactive contamination is the highest, and public access and habitation are accordingly restricted. Other areas of compulsory resettlement and voluntary relocation not part of the restricted exclusion zone exist in the surrounding areas and throughout Ukraine. In February 2019, it was revealed that talks were underway to re-adjust the exclusion zone's boundaries to reflect the declining radioactivity of its outer areas.
Public access to the exclusion zone is restricted in order to prevent access to hazardous areas, reduce the spread of radiological contamination, and conduct radiological and ecological monitoring activities. Today, the Chernobyl exclusion zone is one of the most radioactively contaminated areas on Earth and draws significant scientific interest for the high levels of radiation exposure in the environment, as well as increasing interest from disaster tourists. It has become a thriving sanctuary, with natural flora and fauna and some of the highest biodiversity and thickest forests in all of Ukraine. This is primarily due to the lack of human activity in the exclusion zone since 1986, in spite of the radioactive fallout.
Since the beginning of the Russian invasion of Ukraine in February 2022, the Chernobyl exclusion zone has been the site of fighting with neighbouring Russia, which captured Chernobyl on 24 February 2022. By April 2022, however, as the Kyiv offensive failed, the Russian military withdrew from the region. Ukrainian authorities have continued to keep the exclusion zone closed to tourists, pending the eventual cessation of hostilities in the Russo-Ukrainian War.
History
Pre-1986: Before the Chernobyl nuclear disaster
Historically and geographically, the zone is the heartland of the Polesia region. This predominantly rural woodland and marshland area was once home to 120,000 people living in the cities of Chernobyl and Pripyat as well as 187 smaller communities, but is now mostly uninhabited. All settlements remain designated on geographic maps, but are marked as "uninhabited". The woodland in the area around Pripyat was a focal point of partisan resistance during the Second World War; the same woods would later allow evacuated residents to evade guards and return. In the woodland near the Chernobyl Nuclear Power Plant stood the "Partisan's Tree" or "Cross Tree", which was used to hang captured partisans. The tree fell down due to age in 1996 and a memorial now stands at its location.
1986: Soviet exclusion zones
10-kilometre and 30-kilometre radii
The Exclusion Zone was established soon after the Chernobyl disaster, when a Soviet government commission headed by Nikolai Ryzhkov decided on a "rather arbitrary" area of a 30-kilometre radius from Reactor 4 as the designated evacuation area. The 30 km Zone was initially divided into three subzones: the area immediately adjacent to Reactor 4, an area of approximately 10 km radius from the reactor, and the remaining 30 km zone. Protective clothing and available facilities varied between these subzones.
Later in 1986, after updated maps of the contaminated areas were produced, the zone was split into three areas to designate further evacuation areas based on the revised dose limit of 100 mSv.
the "Black Zone" (over 200 μSv·h−1), to which evacuees were never to return
the "Red Zone" (50–200 μSv·h−1), where evacuees might return once radiation levels normalized
the "Blue Zone" (30–50 μSv·h−1), where children and pregnant women were evacuated starting in the summer of 1986
Special permission for access and full military control was put in place in late 1986. Although evacuations were not immediate, 91,200 people were eventually evacuated from these zones.
In November 1986, control over activities in the zone was given to the new production association Kombinat. Based in the evacuated city of Chernobyl, the association's responsibility was to operate the power plant, decontaminate the 30 km zone, supply materials and goods to the zone, and construct housing outside the new town of Slavutych for the power plant personnel and their families.
In March 1989, a "Safe Living Concept" was created for people living in contaminated zones beyond the Exclusion Zone in Belarus, Ukraine, and Russia. In October 1989, the Soviet government requested assistance from the International Atomic Energy Agency (IAEA) to assess the "Soviet Safe Living Concept" for inhabitants of contaminated areas. "Throughout the Soviet period, an image of containment was partially achieved through selective resettlements and territorial delineations of contaminated zones."
Post-1991: Independent Ukraine
In February 1991, the law On The Legal Status of the Territory Exposed to the Radioactive Contamination resulting from the ChNPP Accident was passed, updating the borders of the Exclusion Zone and defining obligatory and voluntary resettlement areas, and areas for enhanced monitoring. The borders were based on soil deposits of strontium-90, caesium-137, and plutonium as well as the calculated dose rate (sieverts/h) as identified by the National Commission for Radiation Protection of Ukraine. Responsibility for monitoring and coordination of activities in the Exclusion Zone was given to the Ministry of Chernobyl Affairs.
In-depth studies were conducted from 1992 to 1993, culminating in the updating of the 1991 law, followed by further evacuations from the Polesia area. A number of evacuation zones were determined: the "Exclusion Zone", the "Zone of Absolute (Mandatory) Resettlement", and the "Zone of Guaranteed Voluntary Resettlement", as well as many areas throughout Ukraine designated as areas for radiation monitoring. The evacuation of contaminated areas outside of the Exclusion Zone continued in both the compulsory and voluntary resettlement areas, with 53,000 people evacuated from areas in Ukraine from 1990 to 1995.
After Ukrainian Independence, funding for the policing and protection of the zone was initially limited, resulting in even further settling by samosely (returnees) and other illegal intrusion.
In 1997, the areas of Poliske and Narodychi, which had been evacuated, were added to the existing area of the Exclusion Zone; the zone now encompasses the original exclusion zone and parts of the zone of Absolute (Mandatory) Resettlement, an area of approximately 2,600 km². This Zone was placed under management of the 'Administration of the exclusion zone and the zone of absolute (mandatory) resettlement' within the Ministry of Emergencies.
On 15 December 2000, all nuclear power production at the power plant ceased after an official ceremony with then-President Leonid Kuchma when the last remaining operational reactor, number 3, was shut down.
Russian invasion of Ukraine (2022–present)
The Chernobyl Exclusion Zone was the site of fighting between Russian and Ukrainian forces during the Battle of Chernobyl on 24 February 2022, as part of the Russian invasion of Ukraine. Russian forces reportedly captured the plant the same day.
Facilities at Chernobyl still require ongoing management, in part to ensure the continued cooling of spent nuclear fuel. An estimated 100 plant workers and 200 Ukrainian guards who were at the Chernobyl Nuclear Power Plant when the Russians arrived had been unable to leave. Normally they would change shifts daily and would not live at the site. They had limited supplies of medication, food, and electricity.
According to Ukrainian reports, the radiation levels in the exclusion zone increased after the invasion. The higher levels are believed to be a result of disturbance of radioactive dust by the military activity or possibly incorrect readings caused by cyberattacks.
On 10 March, the International Atomic Energy Agency stated that it had lost all contact with Chernobyl.
On 22 March, the Ukrainian state agency responsible for the Chernobyl exclusion zone reported that Russian forces had destroyed a new laboratory at the Chernobyl nuclear power plant. The laboratory, which opened in 2015, worked to improve the management of radioactive waste, among other things. "The laboratory contained highly active samples and samples of radionuclides that are now in the hands of the enemy, which we hope will harm itself and not the civilized world", the agency said in its statement.
On 27 March, Lyudmila Denisova, then–Verkhovna Rada Commissioner for Human Rights, said that 31 known individual fires covering 10,000 hectares were burning in the zone. These fires caused "...an increased level of radioactive air pollution", according to Denisova. Firefighters were unable to reach the fires due to the Russian forces in the area. These wildfires are seasonal; one fire that was 11,500 hectares in size took place in 2020, and a series of several smaller fires occurred throughout the 2010s.
On 31 March, it was reported that most of the Russian troops occupying Chernobyl withdrew. An Exclusion Zone employee made a post on Facebook suggesting that Russian troops were suffering from acute radiation sickness, based on a photo of military buses unloading near a radiation hospital in Belarus. Chernobyl operator Energoatom claimed that Russian troops had dug trenches in the most contaminated part of the Chernobyl exclusion zone, receiving "significant doses" of radiation. BBC News reported unconfirmed reports that some were being treated in Belarus.
On 3 April, Ukrainian forces retook the Chernobyl power plant.
Population
The 30-kilometre zone is estimated to be home to 197 samosely living in 11 villages as well as in the town of Chernobyl. This number is in decline, down from previous estimates of 314 in 2007 and 1,200 in 1986. These residents are senior citizens, with an average age of 63. After repeated attempts at expulsion, the authorities have accepted their presence and allowed them to stay with limited supporting services. Residence is now informally permitted by the Ukrainian government.
Approximately 3,000 people work in the Zone of Alienation on various tasks, such as the construction of the New Safe Confinement, the ongoing decommissioning of the reactors, and assessment and monitoring of the conditions in the zone. Employees do not live inside the zone, but work shifts there. Some of the workers work "4-3" shifts (four days on, three days off), while others work 15 days on and 15 days off. Other workers commute into the zone daily from Slavutych. The duration of shifts is counted strictly for reasons involving pension and healthcare. Everyone employed in the Zone is monitored for internal bioaccumulation of radioactive elements.
The town of Chernobyl, located outside of the 10-kilometre Exclusion Zone, was evacuated following the accident but now serves as a base to support the workers within the Exclusion Zone. Its amenities include administrative buildings, general stores, a canteen, a hotel, and a bus station. Unlike other areas within the Exclusion Zone, the town is actively maintained by workers, such as lawn areas being mowed and autumn leaves being collected.
Access and tourism
Prior to the COVID-19 pandemic and Russian invasion there were many visitors to the Exclusion Zone annually, and daily tours from Kyiv. In addition, multiple-day excursions can be easily arranged with Ukrainian tour operators. Most overnight tourists stay in a hotel within the town of Chernobyl, which is located within the Exclusion Zone. According to an exclusion area tour guide, as of 2017, there are approximately 50 licensed exclusion area tour guides in total, working for approximately nine companies. Visitors must present their passports when entering the Exclusion Zone and are screened for radiation when exiting, both at the 10 km checkpoint and at the 30 km checkpoint.
The Exclusion Zone can also be entered if an application is made directly to the zone administration department.
Some evacuated residents of Pripyat have established a remembrance tradition, which includes annual visits to former homes and schools. In the Chernobyl zone, there is one operating Eastern Orthodox church, St. Elijah Church. According to Chernobyl disaster liquidators, the radiation levels there are "well below the level across the zone", a fact that president of the Ukrainian Chernobyl Union Yury Andreyev considers miraculous.
The Chernobyl Exclusion Zone has been accessible to interested parties such as scientists and journalists since the zone was created. An early example was Elena Filatova's online account of her alleged solo bike ride through the zone. This gained her Internet fame, but was later alleged to be fictional, as a guide claimed Filatova was part of an official tour group. Regardless, her story drew the attention of millions to the nuclear catastrophe. After Filatova's visit in 2004, a number of papers such as The Guardian and The New York Times began to produce reports on tours to the zone.
Tourism to the area became more common after Pripyat was featured in popular video games S.T.A.L.K.E.R.: Shadow of Chernobyl and Call of Duty 4: Modern Warfare. Fans of the S.T.A.L.K.E.R. franchise, who refer to themselves as "stalkers", often gain access to the Zone. ("The Zone" and "stalker" derive from Arkady and Boris Strugatsky's science fiction novel Roadside Picnic, which preceded the accident but which described the evacuation of part of Russia after the appearance of dangerous alien artifacts. It served as the basis for the classic film Stalker.) Prosecution of trespassers became more severe after a significant increase in trespassing in the Exclusion Zone. An article in the penal code of Ukraine was specially introduced, and horse patrols were added to protect the zone's perimeter.
In 2012, journalist Andrew Blackwell published Visit Sunny Chernobyl: And Other Adventures in the World's Most Polluted Places. Blackwell recounts his visit to the Exclusion Zone, when a guide and driver took him through the zone and to the reactor site.
On 14 April 2013, the 32nd episode of the wildlife documentary TV program River Monsters (Atomic Assassin, Season 5, Episode 1) was broadcast, featuring the host Jeremy Wade catching a wels catfish in the cooling pools of the Chernobyl power plant at the heart of the Exclusion Zone.
On 16 February 2014, an episode of the British motoring TV programme Top Gear was broadcast, featuring two of the presenters, Jeremy Clarkson and James May, driving into the Exclusion Zone.
A portion of the finale of the Netflix documentary Our Planet, released in 2019, was filmed in the Exclusion Zone. The area was used as the primary example of how quickly an ecosystem can recover and thrive in the absence of human interference.
In 2019, Chernobyl Spirit Company released Atomik Vodka, the first consumer product made from materials grown and cultivated in the exclusion zone.
On 11 April 2022, the zone administration department suspended the validity of passes that allowed access to the exclusion zone, for the duration of martial law in Ukraine.
Illegal activities
The poaching of game, illegal logging, and metal salvage have been problems within the zone. Despite police control, intruders started infiltrating the perimeter to remove potentially contaminated materials, from televisions to toilet seats, especially in Pripyat, where the residents of about 30 high-rise apartment buildings had to leave all of their belongings behind. In 2007, the Ukrainian government adopted more severe criminal and administrative penalties for illegal activities in the alienation zone, as well as reinforced units assigned to these tasks. The population of Przewalski's horse, introduced to the Exclusion Zone in 1998, has reportedly fallen since 2005 due to poaching.
Administration
Government agencies
In April 2011, the State Agency of Ukraine on the Exclusion Zone Management (SAUEZM) became the successor to the State Department – Administration of the exclusion zone and the zone of absolute (mandatory) resettlement according to presidential decree. The SAUEZM is, as its predecessor, an agency within the State Emergency Service of Ukraine.
Policing of the Zone is conducted by special units of the Ministry of Internal Affairs of Ukraine and, along the border with Belarus, by the State Border Guard Service of Ukraine.
The SAUEZM is tasked with:
Conducting environmental and radioactivity monitoring in the zone
Management of long-term storage and disposal of radioactive waste
Leasing of land in the exclusion zone and the zone of absolute (mandatory) resettlement
Administering of state funds for radioactive waste management
Monitoring and preservation of documentation on the subject of radioactivity
Coordination of the decommissioning of the nuclear power plant
Maintenance of a register of persons who have suffered as a result of the disaster
The Chernobyl Nuclear Power Plant is located inside the zone but is administered separately. Plant personnel, some 3,800 workers, reside primarily in Slavutych, a specially built remote city in Kyiv Oblast outside of the Exclusion Zone, east of the accident site.
Checkpoints
There are 11 checkpoints.
Dytiatky, near the village of Dytiatky
Stari Sokoly, near the village of the same name
Zelenyi Mys, near the settlement of the same name
Poliske, near the village of the same name
Ovruch, near the village of Davydky, Narodychi settlement hromada, Korosten Raion
Vilcha, near the village of the same name
Dibrova, near the village of the same name
Benivka, near the city of Pripyat
The city of Pripyat itself
Leliv, near the city of Chernobyl
Paryshiv, between the city of Chernobyl and the border with Belarus (route P56)
Development and recovery projects
The Chernobyl Exclusion Zone is an environmental recovery area, with efforts devoted to remediation and safeguarding of the reactor site. At the same time, projects for wider economic and social revival of the territories around the disaster zone have been envisioned or implemented.
In November 2007, the United Nations General Assembly adopted a resolution calling for "recovery and sustainable development" of the areas affected by the Chernobyl accident. Commenting on the issue, UN Development Programme officials mentioned the plans to achieve "self-reliance" of the local population, "agriculture revival" and development of ecotourism.
However, it is not clear whether such plans, made by the UN and then-President Victor Yushchenko, deal with the zone of alienation proper, or only with the other three zones around the disaster site where contamination is less intense and restrictions on the population are looser (such as the district of Narodychi in Zhytomyr Oblast).
Since 2011, tour operators have been bringing tourists inside the Exclusion Zone (illegal tours may have started even before). Tourists are accompanied by tour guides at all times and are not able to wander too far on their own due to the presence of several radioactive "hot spots". Pripyat was deemed safe for tourists to visit for a short period of time in the late 2010s, although certain precautions must be taken.
In 2016, the Ukrainian government declared the part of the exclusion zone on its territory the Chernobyl Radiation and Environmental Biosphere Reserve.
It was reported in 2016 that "A heavily contaminated area within a 10-kilometer radius" of the plant would be used for the storage of nuclear waste. The IAEA carried out a feasibility study in 2018 to assess the prospect of expanding the local waste management infrastructure.
In 2017, three companies were reported developing plans for solar farms within the Chernobyl Exclusion Zone. The high feed-in tariffs offered, the availability of land, and easy access to transmission lines (which formerly ran to the nuclear power station) have all been noted as beneficial to siting a solar farm. The solar plant began operations in October 2018.
In 2019, following a three-year research project into the transfer of radioactivity to crops grown in the exclusion zone conducted by scientists from UK and Ukrainian universities, one bottle of vodka using grain from the zone was produced. The vodka did not contain abnormal levels of radiation because of the distillation process. The researchers consider the production of vodka, and its sales profits, a means to aid economic recovery of the communities most adversely affected by the disaster. The project later switched to producing and exporting "Atomik" apple spirit, made from apples grown in the Narodychi District.
Radioactive contamination
The territory of the zone is polluted unevenly. Spots of hyperintensive pollution were created first by wind and rain spreading radioactive dust at the time of the accident, and subsequently by numerous burial sites for various material and equipment used in decontamination. Zone authorities pay attention to protecting such spots from tourists, scrap hunters, and wildfires, but admit that some dangerous burial sites remain unmapped, and only recorded in the memories of the (aging) Chernobyl liquidators.
Flora and fauna
There has been an ongoing scientific debate about the extent to which flora and fauna of the zone were affected by the radioactive contamination that followed the accident. As noted by Baker and Wickliffe, one of many issues is differentiating between negative effects of Chernobyl radiation and effects of changes in farming activities resulting from human evacuation.
Near the facility, a dense cloud of radioactive dust killed off a large area of Scots pine trees; the rusty orange color of the dead trees led to the nickname "The Red Forest" (Рудий ліс). The Red Forest was among the world's most radioactive places; to reduce the hazard, the Red Forest was bulldozed and the highly radioactive wood was buried, though the soil continues to emit significant radiation. Other species in the same area, such as birch trees, survived, indicating that plant species may vary considerably in their sensitivity to radiation.
Cases of mutant deformity in animals of the zone include partial albinism and other external malformations in swallows, as well as insect mutations. A study of several hundred birds belonging to 48 different species also demonstrated that birds inhabiting highly radioactively contaminated areas had smaller brains compared to birds from clean areas.
A reduction in the density and the abundance of animals in highly radioactively contaminated areas has been reported for several taxa, including birds, insects, spiders, and mammals. In birds, which are an efficient bioindicator, a negative correlation has been reported between background radiation and bird species richness. Scientists such as Anders Pape Møller (University of Paris-Sud) and Timothy Mousseau (University of South Carolina) report that birds and smaller animals such as voles may be particularly affected by radioactivity.
Møller is the first author on 9 of the 20 most-cited articles relating to the ecology, evolution and non-human biology in the Chernobyl area. However, some of Møller's research has been criticized as flawed. Prior to his work at Chernobyl, Møller was accused of falsifying data in a 1998 paper about asymmetry in oak leaves, which he retracted in 2001. In 2004, the Danish Committees on Scientific Dishonesty (DCSD) reported that Møller was guilty of "scientific dishonesty". The French National Centre for Scientific Research (CNRS) subsequently concluded that there was insufficient evidence to establish either guilt or innocence. Strongly held opinions about Møller and his work have contributed to the difficulty of reaching a scientific consensus on the effects of radiation on wildlife in the Exclusion Zone.
More recently, the populations of large mammals have increased due to a significant reduction of human interference. The populations of traditional Polesian animals (such as the gray wolf, badger, wild boar, roe deer, white-tailed eagle, black stork, western marsh harrier, short-eared owl, red deer, moose, great egret, whooper swan, least weasel, common kestrel, and beaver) have multiplied enormously and begun expanding outside the zone. The zone is considered a classic example of an involuntary park.
The return of wolves and other animals to the area is being studied by scientists such as Marina Shkvyria (National Academy of Sciences of Ukraine), Sergey Gaschak (Chernobyl Centre in Ukraine), and Jim Beasley (University of Georgia). Camera traps have been installed and are used to record the presence of species. Studies of wolves, which are concentrated in higher-radiation areas near the center of the exclusion zone, may enable researchers to better assess relationships between radiation levels, animal health, and population dynamics.
The area also houses herds of European bison (native to the area) and Przewalski's horses (foreign to the area, as the extinct tarpan was the native wild horse) released there after the accident. Some accounts refer to the reappearance of extremely rare native lynx, and there are videos of brown bears and their cubs, an animal not seen in the area for more than a century. Special game warden units are organized to protect and control them. No scientific study has been conducted on the population dynamics of these species.
The rivers and lakes of the zone pose a significant threat of spreading polluted silt during spring floods. They are systematically secured by dikes.
Grass and forest fires
It is known that fires can make contamination mobile again. In particular, V. I. Yoschenko et al. reported on the possibility of increased mobility of caesium, strontium, and plutonium due to grass and forest fires. As an experiment, fires were set and the levels of radioactivity in the air downwind of these fires were measured.
Grass and forest fires have happened inside the contaminated zone, releasing radioactive fallout into the atmosphere. In 1986, a series of fires destroyed of forest, and several other fires have since burned within the zone. A serious fire in early May 1992 affected of land, including of forest. This resulted in a great increase in the levels of caesium-137 in airborne dust.
In 2010, a series of wildfires affected contaminated areas, specifically the surroundings of Bryansk and border regions with Belarus and Ukraine. The Russian government claimed that there was no discernible increase in radiation levels, while Greenpeace accused the government of denial.
On 4 April 2020, a fire broke out in the Zone, covering at least 20 hectares of Ukrainian forest. Approximately 90 firefighters were deployed to extinguish the blaze, along with a helicopter and two aircraft. Radiation still present in these forests made firefighting more difficult; authorities stated that there was no danger to the surrounding population. The previous reported fire had occurred in June 2018.
Current state of the ecosystem
Despite the negative effect of the disaster on human life, many scientists see an overall beneficial effect on the ecosystem. Though the immediate effects of the accident were negative, the area quickly recovered and is today seen as very healthy. The lack of people in the area has increased the biodiversity of the Exclusion Zone in the years since the disaster.
In the aftermath of the disaster, radioactive contamination in the air had a decidedly negative effect on the fauna, vegetation, rivers, lakes, and groundwater of the area. The radiation resulted in deaths among coniferous plants, soil invertebrates, and mammals, as well as a decline in reproductive numbers among both plants and animals.
The surrounding forest was covered in radioactive particles, killing the pine trees across roughly 400 hectares closest to the reactor, though radiation damage can be found across tens of thousands of hectares. An additional concern is that as the dead trees in the Red Forest (named for the color of the dead pines) decay, contamination is leaking into the groundwater.
Despite all this, Professor Nick Beresford, an expert on Chernobyl and ecology, said that "the overall effect was positive" for the wildlife in the area.
The impact of radiation on individual animals has not been studied, but cameras in the area have captured evidence of a resurgence of the mammalian population – including rare animals such as the lynx and the vulnerable European bison.
Research on the health of Chernobyl's wildlife is ongoing, and there is concern that the wildlife still suffers from some of the negative effects of the radiation exposure. Though it will be years before researchers collect the necessary data to fully understand the effects, for now, the area is essentially one of Europe's largest nature preserves. Overall, an assessment by plant biochemist Stuart Thompson concluded, "the burden brought by radiation at Chernobyl is less severe than the benefits reaped from humans leaving the area." In fact, the ecosystem around the power plant "supports more life than before".
Infrastructure
The industrial, transport, and residential infrastructure has been largely crumbling since the 1986 evacuation. There are at least 800 known "burial grounds" (Ukrainian singular: mohyl'nyk) for the contaminated vehicles with hundreds of abandoned military vehicles and helicopters. River ships and barges lie in the abandoned port of Chernobyl. The port can easily be seen in satellite images of the area. The Jupiter Factory, one of the largest buildings in the zone, was in use until 1996 but has since been abandoned and its condition is deteriorating.
The infrastructure immediately used by the existing nuclear-related installations is maintained and developed, such as the railway link to the outside world from the Semykhody station used by the power plant.
Chernobyl-2
The Chernobyl-2 site (a.k.a. the "Russian Woodpecker") is a former Soviet military installation relatively close to the power plant, consisting of a gigantic transmitter and receiver belonging to the Duga-1 over-the-horizon radar system. Some distance from the surface installations of Chernobyl-2 lies a large underground complex that was used for anti-missile defense, space surveillance and communication, and research. Military units were stationed there.
In popular culture
Immediately after the explosion on 26 April 1986, Russian photographer Igor Kostin photographed and reported on the event, taking the first pictures from the air. For the next 20 years he continued visiting the area to document the political and personal stories of those affected by the disaster, publishing a book of photographs, Chernobyl: Confessions of a Reporter.
In 2014, the official video for Pink Floyd's "Marooned" features scenes of the town of Pripyat.
In an opening scene of the 1998 film Godzilla, the main character, scientist Nick Tatopoulos, is in the Chernobyl Exclusion Zone, researching the effects of environmental radiation on earthworms.
British photographer John Darwell was among the first foreigners to photograph within the Chernobyl Exclusion Zone for three weeks in late 1999, including in Pripyat, in numerous villages, a landfill site, and people continuing to live within the Zone. This resulted in an exhibition and book Legacy: Photographs inside the Chernobyl Exclusion Zone. Stockport: Dewi Lewis, 2001. . Visits have since been made by numerous other documentary and art photographers.
In A Good Day to Die Hard, a 2013 American action thriller film, the protagonists steal a car and drive to Pripyat to retrieve a file from a safe deposit box, only to find many men loading containers into vehicles. The safe deposit box turns out to conceal a secret passage to a Chernobyl-era vault containing €1 billion worth of weapons-grade uranium: there is no secret file, and the antagonists have concocted a scheme to steal the uranium and sell it on the black market.
In a 2014 episode of Top Gear, the hosts were challenged with making their cars run out of fuel before they could reach the Exclusion Zone.
Jeremy Wade, of the fishing documentary River Monsters, risks his life to catch a river monster that supposedly lives near or in the cooling ponds of the Chernobyl power plant near Pripyat.
A large fraction of Martin Cruz Smith's 2004 crime novel Wolves Eat Dogs (the fifth in his series starring Russian detective Arkady Renko) is set in the Exclusion Zone.
The opening scene of the 2005 horror film Return of the Living Dead: Necropolis takes place within Chernobyl, where canisters of the zombie chemical 2-4-5 Trioxin are found to be held.
The video game franchise S.T.A.L.K.E.R., released in 2007, recreates parts of the zone from source photographs and in-person visits (bridges, railways, buildings, compounds, abandoned vehicles), albeit taking some artistic license regarding the geography of the Zone for gameplay reasons.
In the 2007 video game Call of Duty 4: Modern Warfare, two missions, i.e. "All Ghillied Up" and "One Shot, One Kill" take place in Pripyat.
A 2009 episode of Destination Truth depicts Josh Gates and the Destination Truth team exploring the ruins of Pripyat for signs of paranormal activity.
In 2011, Guillaume Herbaut and Bruno Masi created the web documentary La Zone, funded by CNC, LeMonde.fr and Agat Films. The documentary explores the communities and individuals that still inhabit or visit the Exclusion Zone.
The PBS program Nature aired on 19 October 2011, its documentary Radioactive Wolves which explores the return to nature which has occurred in the Exclusion Zone among wolves and other wildlife.
In the 2011 film Transformers: Dark of the Moon, Chernobyl is depicted when the Autobots investigate suspected alien activity.
The award-winning short film Seven Years of Winter was filmed in 2011 under the direction of Marcus Schwenzel. It tells the story of the orphan Andrej, who is sent into the contaminated zone by his brother Artjom to ransack the abandoned homes. In 2015 the film received the Award for Best Film from the Uranium International Film Festival.
The 2012 film Chernobyl Diaries is set in the Exclusion Zone. The horror movie follows a tour group stranded in Pripyat and their encounters with creatures mutated by radiation exposure.
The 2015 documentary The Russian Woodpecker, which won the Grand Jury Prize for World Documentary at the Sundance Film Festival, has extensive footage from the Chernobyl Exclusion Zone and focuses on a conspiracy theory behind the disaster and the nearby Duga radar installation.
Markiyan Kamysh's 2015 book, Stalking the Atomic City: Life Among the Decadent and the Depraved of Chornobyl, recounts his illegal pilgrimages into the Chernobyl Exclusion Zone.
The 2015 documentary The Babushkas Of Chernobyl directed by Anne Bogart and Holly Morris focuses on elderly residents who remain in the Exclusion Zone. These people, a majority of whom are women, are self-sufficient farmers who receive routine visits from officials to check on their health and radiation levels. The film won several awards.
The five-part HBO miniseries Chernobyl was aired in 2019, dramatizing the events of the explosion and relief efforts after the fact. It was primarily shot in Lithuania.
In 2019, the video game Spintires received a DLC in which players drive a Russian truck around the Exclusion Zone, hunting down prize logging sites while trying to avoid radiation. The power plant, Pripyat, the Red Forest, Kupsta Lake, and the Duga radar have all been recreated, so players can also take a sightseeing tour from the truck.
The survival horror video game Chernobylite by The Farm 51 is set in the Chernobyl Exclusion Zone.
In the Chris Tarrant: Extreme Railways episode "Extreme Nuclear Railway: A Journey Too Far?" (series 5, episode 22), Chris Tarrant visits Chernobyl on his journey through Ukraine.
See also
2020 Chernobyl Exclusion Zone wildfires
Effects of the Chernobyl disaster
List of Chernobyl-related articles
Polesie State Radioecological Reserve
Area 51
Notes
References
External links
State Agency of Ukraine on Exclusion Zone Management (SAUEZM) website – the central executive body over the zone (formerly under the Ministry of Emergencies of Ukraine)
Conservation, Optimization and Management of Carbon and Biodiversity in the Chornobyl Exclusion Zone – a project of SAUEZM, UNEP, GEF, and the Ministry of Ecology and Natural Resources of Ukraine
Chernobyl Radiation and Ecological Biosphere Reserve
Chernobyl Center – research institution working in the zone
Official radiation measurements – SAUEZM. Online map
News and publications
Wildlife defies Chernobyl radiation – BBC News, 20 April 2006
Radioactive Wolves – PBS documentary aired in the U.S. on 19 October 2011
Inside the Forbidden Forests – 1993 The Guardian article about the zone
The zone as a wildlife reserve
Images from inside the Zone
ChernobylGallery.com - Photographs of Chernobyl and Pripyat
Lacourphotos.com - Pripyat in Wintertime (Urban photos)
Exclusion Zone
Environment of Ukraine
Administrative divisions of Ukraine
Radioactively contaminated areas
Belarus–Ukraine border
1986 establishments in Ukraine
History of Kyiv Oblast
History of Zhytomyr Oblast | Chernobyl exclusion zone | Chemistry,Technology | 7,609 |
35,763,931 | https://en.wikipedia.org/wiki/Multipartition | In number theory and combinatorics, a multipartition of a positive integer n is a way of writing n as a sum, each element of which is in turn an integer partition. The concept is also found in the theory of Lie algebras.
r-component multipartitions
An r-component multipartition of an integer n is an r-tuple of partitions λ(1), ..., λ(r) where each λ(i) is a partition of some ai and the ai sum to n. The number of r-component multipartitions of n is denoted Pr(n). Congruences for the function Pr(n) have been studied by A. O. L. Atkin.
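As an illustration of the definition, Pr(n) is the r-fold convolution of the ordinary partition numbers, since an r-component multipartition distributes n across r independent partitions. The following is a minimal Python sketch (function names are invented for illustration):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def partitions(n, smallest=1):
    """Number of partitions of n into parts of size >= smallest."""
    if n == 0:
        return 1
    if smallest > n:
        return 0
    # Either use a part of size `smallest`, or restrict to larger parts.
    return partitions(n - smallest, smallest) + partitions(n, smallest + 1)

def multipartitions(n, r):
    """P_r(n): the r-fold convolution of the partition numbers."""
    counts = [1] + [0] * n  # base case: one (empty) multipartition of 0
    for _ in range(r):
        counts = [sum(counts[m - a] * partitions(a) for a in range(m + 1))
                  for m in range(n + 1)]
    return counts[n]

assert multipartitions(3, 1) == partitions(3) == 3  # P_1 is the ordinary partition function
assert multipartitions(3, 2) == 10                  # the ten 2-component multipartitions of 3
```

For example, P2(3) = 10: the component sizes (a1, a2) can be (0,3), (1,2), (2,1) or (3,0), contributing 3 + 2 + 2 + 3 pairs of partitions.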
References
Number theory
Combinatorics | Multipartition | Mathematics | 163 |
27,990,947 | https://en.wikipedia.org/wiki/Digital%20Humanities%20conference | The Digital Humanities conference is an academic conference for the field of digital humanities. It is hosted by Alliance of Digital Humanities Organizations and has been held annually since 1989.
History
The first joint conference was held in 1989 at the University of Toronto; it was simultaneously the 16th annual meeting of the ALLC and the ninth annual meeting of the ACH-sponsored International Conference on Computers and the Humanities (ICCH).
The Chronicle of Higher Education has called the conference "highly competitive" but "worth the price of admission," praising its participants' focus on best practices, the intellectual community it has fostered, and the tendency of its organizers to sponsor attendance of early-career scholars (important given the relative expense of attending it, as compared to other academic conferences).
An analysis of the Digital Humanities conference abstracts between 2004 and 2014 highlights some trends evident in the evolution of the conference (such as the increasing rate of new authors entering the field, and the continuing disproportional predominance of authors from North America represented in the abstracts). An extended study (2000–2015) offers a feminist and critical engagement with Digital Humanities conferences, along with solutions for a more inclusive culture. Scott B. Weingart has also published detailed analyses of submissions to Digital Humanities 2013, 2014, 2015, and 2016 on his blog.
Conferences
References
External links
Alliance of Digital Humanities Organizations official website
Humanities conferences
Digital humanities
Computer science conferences | Digital Humanities conference | Technology | 285 |
40,761,526 | https://en.wikipedia.org/wiki/MR-2096 | MR-2096 is an opioid analgesic drug related to oxymorphone. It has an unusual chiral tetrahydrofuran-2-ylmethyl substitution on the nitrogen which determines the character of effects, with the (R) enantiomer MR-2096 being an opioid agonist, while the (S) enantiomer MR-2097 has similarly potent opioid antagonist effects. This mix of activities has made these two enantiomers useful for characterising the binding site of the mu opioid receptor.
See also
N-Phenethylnormorphine
Ro4-1539
References
4,5-Epoxymorphinans
Hydroxyarenes
Ketones
Ethers
Mu-opioid receptor agonists
Semisynthetic opioids | MR-2096 | Chemistry | 175 |
25,992,485 | https://en.wikipedia.org/wiki/Fujitsu%20iPAD | The Fujitsu iPAD is a lightweight handheld device that was introduced by Fujitsu in 2002. It runs Microsoft's Windows CE .NET operating system. It supports 802.11b wireless LAN to connect wirelessly with other company infrastructure. The device can support inventory management as well as credit card payments. In January 2010, when Apple announced the Apple iPad, there was a naming controversy between the two devices. To settle the trademark infringement allegation, Apple purchased the trademark rights from Fujitsu. Some trademark analysts estimate that Apple paid Fujitsu over US$4 million in exchange for the March 17, 2010 assignment of Fujitsu's iPad trademark rights to Apple.
References
Retail point of sale systems
Fujitsu computers
Products introduced in 2002
Mobile computers | Fujitsu iPAD | Technology | 150 |
62,723,168 | https://en.wikipedia.org/wiki/Daniel%20McGillivray%20Brown | Daniel McGillivray Brown FRS (3 February 1923 – 24 April 2012) was a Scottish nucleic acid chemist.
Early life and career
Daniel McGillivray Brown was born in Giffnock on 3 February 1923, son of David Cunninghame Brown, a restaurateur, and Catherine Stewart (née McGillivray), a teacher. After Giffnock Primary School he attended Glasgow Academy and then, at age 17, Glasgow University where he studied chemistry, and received an honours degree.
In 1945 Brown moved to the Chester Beatty Research Institute, then in Chelsea, where he worked on the synthesis of heterocyclic stilbene derivatives for his PhD.
Then, in 1948, Brown moved to Cambridge to join Alexander Todd’s group. He gained his second PhD in 1952. Brown was appointed lecturer in the chemistry department in 1959, and reader in 1967. He was Visiting Professor at University of California, Los Angeles 1959–60; and at Brandeis University 1966–67. He received the Sc.D. in 1968, and eventually became Vice-Provost at King's College in 1974.
In 1981 Brown took a sabbatical at the Laboratory of Molecular Biology (LMB), and then moved there permanently one year later. He retired formally from the LMB in 2002 but continued publishing until 2008.
Contributions
At Cambridge, Brown set out to confirm the furanose chemical structure of the sugar part of nucleosides in natural nucleic acids, which had only been inferred at the time. He and Basil Lythgoe proved this to be the case. He later worked on the selective phosphorylation of nucleosides to form nucleotides. This was the beginning of a lifelong career, and led to the chemical structures of RNA and, by inference, DNA. He later worked on phosphoinositides and the mutagenesis of nucleotides.
Personal life
Daniel Brown met Margaret Joyce Herbert at Scottish Highland Dancing classes at the CUSRC in 1952. They married in Lincolnshire the following year. Dan and Margaret had four children: Catherine (1954), David (1955), Frances (1961) and Moira (1962).
Daniel McGillivray Brown died at his home in Cambridge on 24 April 2012. He was survived by his wife, three daughters, four grandchildren and a great-granddaughter.
Honors
Brown became a Fellow of the Royal Society in 1982.
References
1923 births
2012 deaths
Fellows of the Royal Society
People from East Renfrewshire
Alumni of the University of Glasgow
Alumni of King's College, Cambridge
Fellows of King's College, Cambridge
Fellows of the Royal Society of Chemistry
British organic chemists
20th-century Scottish chemists | Daniel McGillivray Brown | Chemistry | 550 |
30,819,190 | https://en.wikipedia.org/wiki/Home%20ultrasound | Home ultrasound is the provision of therapeutic ultrasound via the use of a portable or home ultrasound machine. This method of medical ultrasound therapy can be used for various types of pain relief and physical therapy.
In physics, the term "ultrasound" applies to all acoustic energy with a frequency above the audible range of human hearing. Since the audible range of sound is 20 hertz to 20 kilohertz, ultrasound comprises frequencies greater than 20 kilohertz.
Machines
Ultrasound energy is transferred based on the frequency and power output of the ultrasonic waves that an ultrasound machine or device creates. Home ultrasound machines and doctor's office machines both operate between 1 and 5 megahertz; however, home machines use pulsed ultrasonic waves, while professional machines in a doctor's office use continuous waves.
Typically, a home ultrasound machine is used more frequently than the ultrasound treatments given at a therapist's office, but the end results are the same as using a continuous-wave machine less frequently (see the duty-cycle sketch below). Treatments are used to warm deep muscle before a workout and to relieve tendon and joint conditions such as arthritis, frozen shoulder, strains, and sprains.
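One way to read this trade-off is as duty-cycle arithmetic: a pulsed unit emits energy only a fraction of the time, so matching a continuous session takes proportionally longer or more frequent treatment. In the sketch below, the 20% duty cycle, intensity, and session length are illustrative assumptions, not manufacturer specifications:

```python
# Compare a pulsed home unit against a continuous clinical unit by
# time-averaged acoustic energy (all figures are illustrative assumptions).
peak_intensity = 1.5        # W/cm^2, assumed peak output of both units
duty_cycle = 0.20           # assumed: the pulsed unit is "on" 20% of the time
continuous_minutes = 8      # assumed length of one continuous clinical session

average_intensity = peak_intensity * duty_cycle    # 0.3 W/cm^2 time-averaged
pulsed_minutes = continuous_minutes / duty_cycle   # 40 minutes of pulsed treatment
print(average_intensity, pulsed_minutes)
# Matching the total delivered energy takes 5x the time, spread over sessions.
```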
Home ultrasound machines are available for purchase at prices ranging from about US$46 to US$5,000.
Benefits
Home ultrasound machines may offer several benefits: long-term cost savings, portable physical therapy treatment, long-term pain relief for multiple symptoms, a possible decrease in healing time, and a reduction in chronic inflammation. Ultrasound has also been reported to increase knee range of motion in conditions such as osteoarthritis (OA), the most common joint disorder, whose incidence increases with age. Treatment of OA aims to reduce joint pain and stiffness and to preserve and improve joint mobility, and ultrasound therapy has been associated with improvements on pain, function, and quality-of-life scales.
Types of ultrasound therapy
Home ultrasound machines operate within the range of frequencies of therapeutic ultrasound, as opposed to the more commonly known diagnostic ultrasound, or diagnostic sonography. Typical diagnostic ultrasound machines operate in the frequency range of 2–18 megahertz, whereas home ultrasound machines and therapeutic ultrasound machines operate in the frequency range of 0.7–3.3 megahertz. Diagnostic sonography uses reflected sound to create an image, for example to visualize the developing baby during pregnancy.
Phonophoresis
Phonophoresis, also known as sonophoresis, is the use of ultrasound to enhance the delivery of topically applied drugs. Home ultrasound allows the application of topically applied analgesics and anti-inflammatory agents through the therapeutic application of ultrasound. It is widely used in hospitals to deliver drugs through the skin. Pharmacists compound the drugs by mixing them with a coupling agent (gel, cream, ointment) that transfers ultrasonic energy from the ultrasound transducer to the skin. The ultrasound potentially enhances drug transport by cavitation, microstreaming, and heating.
Pregnancy
In pregnancy, ultrasonic waves are used to create an image of the baby growing inside the mother's uterus. Ultrasound also serves as a monitoring tool and is used to validate predictions of ovulation during intrauterine insemination (IUI) cycles.
References
Further reading
External links
American Institute of Ultrasound in Medicine Professional Association for Ultrasound in Medicine
Does Ultrasound Therapy Work?
https://www.tensunits.com/category/ultrasound.html
Acoustics
Medical equipment
Medical physics
Medical ultrasonography
Athletic training
Physical therapy | Home ultrasound | Physics,Biology | 715 |
8,561,047 | https://en.wikipedia.org/wiki/Wingsail | A wingsail, twin-skin sail or double skin sail is a variable-camber aerodynamic structure that is fitted to a marine vessel in place of conventional sails. Wingsails are analogous to airplane wings, except that they are designed to provide lift on either side to accommodate being on either tack. Whereas wings adjust camber with flaps, wingsails adjust camber with a flexible or jointed structure (for hard wingsails). Wingsails are typically mounted on an unstayed spar—often made of carbon fiber for lightness and strength. The geometry of wingsails provides more lift, and a better lift-to-drag ratio, than traditional sails. Wingsails are more complex and expensive than conventional sails.
Introduction
Wingsails are of two basic constructions that create an airfoil, "soft" and "hard", both mounted on an unstayed rotating mast. Whereas hard wingsails are rigid structures that are stowed only upon removal from the boat, soft wingsails can be furled or stowed on board.
L. Francis Herreshoff pioneered a precursor rig that had jib and main, each with a two-ply sail with leading edges attached to a rotating spar. The C Class Catamaran class has been experimenting with and refining wingsails in a racing context since the 1960s. The Englishman John Walker explored the use of wingsails in cargo ships and developed the first practical application for sailing yachts in the 1990s. Wingsails have been applied to small vessels, like the Optimist dinghy and Laser, to cruising yachts, and most notably to high-performance multihull racing sailboats, like USA-17. The smallest craft have a unitary wing that is manually stepped. Cruising rigs have a soft rig that can be lowered when not in use. High-performance rigs are often assembled of rigid components and must be stepped (installed) and unstepped by shore-side equipment.
Camber adjustment
Wingsails change camber (the asymmetry between the top and the bottom surfaces of the aerofoil), depending on tack and wind speed. A wingsail becomes more efficient with greater curvature on the downwind side. Since the windward side changes with each tack, so must sail curvature change. This happens passively on a conventional sail, as it fills in with wind on each tack. On a wingsail, a change in camber requires a mechanism. Wingsails also change camber to adjust for windspeed. On an aircraft, flaps increase the camber or curvature of the wing, raising the maximum lift coefficient—the lift a wing can generate—at lower air speeds (speed of the air passing over it). A wingsail has the same need for camber adjustment, as windspeed changes—a straighter camber curvature as windspeed increases, more curved as it decreases.
Mechanisms for camber adjustment are similar for soft and hard wingsails. Each employs independent leading and trailing airfoil segments that are adjusted independently for camber. More sophisticated rigs allow for variable adjustment of camber with height above the water to account for increased windspeed.
Comparison with conventional sailing rigs
The presence of rigging supporting the mast of a conventional fore-and-aft rig limits sail geometry to shapes that are less efficient than the narrow chord of the wingsail. However, conventional sails are simple to adjust for windspeed by reefing, whereas wingsails typically have a fixed surface area. Conventional sails can be furled easily; some flexible wingsails can be dropped when not in use; rigid wingsails must be removed when exposure to wind is undesirable.
Points of sail
Nielsen summarised the efficiencies of wingsails, compared with conventional sails, for different points of sail, as follows (a trim-angle sketch appears after the list):
Close-hauled: At 30° apparent wind, the wingsail has a 10° angle of attack and more lift, compared to the conventional sail plan and its angles of attack of 15° for the jib and 20° for the mainsail.
Beam reach: At 90° apparent wind, the wingsail, positioned across the boat, functions efficiently as a wing, providing forward lift, whereas the jib of the conventional sail plan suffers from being difficult to shape as a wing (the main sail is still relatively efficient).
Broad reach: At 135° apparent wind, the wingsail may be eased in such a manner that it still functions efficiently as a wing, whereas the jib and main sail no longer provide lift—instead they present themselves perpendicular to the wind and provide force from drag only.
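The close-hauled comparison reduces to simple geometry: the sail's chord must sit at the apparent-wind angle minus the angle of attack off the centerline, so a lower angle of attack allows a more open trim. A small Python sketch using the figures quoted above (treat them as illustrative):

```python
# Trim angle off the boat's centerline implied by an apparent-wind angle
# and an angle of attack (chord angle = apparent wind - angle of attack).
cases = [
    ("wingsail, close-hauled", 30, 10),
    ("jib, close-hauled", 30, 15),
    ("mainsail, close-hauled", 30, 20),
]
for rig, apparent_wind_deg, angle_of_attack_deg in cases:
    trim = apparent_wind_deg - angle_of_attack_deg
    print(f"{rig}: chord trimmed {trim} degrees off the centerline")
```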
References
External links
Marine propulsion
Sailboat components
Sailing rigs and rigging
Wind-powered vehicles | Wingsail | Engineering | 966 |
32,226,715 | https://en.wikipedia.org/wiki/Early%20growth%20response%20proteins | Early growth response proteins are a family of zinc finger transcription factors.
Members of the family include:
EGR1, EGR2, EGR3 and EGR4
References
External links
Transcription factors
Zinc proteins
Protein families | Early growth response proteins | Chemistry,Biology | 45 |
63,412,780 | https://en.wikipedia.org/wiki/Signed%20set | In mathematics, a signed set is a set of elements together with an assignment of a sign (positive or negative) to each element of the set.
Representation
Signed sets may be represented mathematically as an ordered pair of disjoint sets, one set for their positive elements and another for their negative elements. Alternatively, they may be represented as a Boolean function, a function whose domain is the underlying unsigned set (possibly specified explicitly as a separate part of the representation) and whose range is a two-element set representing the signs.
Signed sets may also be called $\mathbb{Z}_2$-graded sets.
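A minimal Python sketch of the pair-of-disjoint-sets representation (the class and method names are invented for illustration, not drawn from any library); the sign method gives the equivalent Boolean-function view:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SignedSet:
    """A signed set stored as two disjoint frozensets of elements."""
    positive: frozenset
    negative: frozenset

    def __post_init__(self):
        if self.positive & self.negative:
            raise ValueError("positive and negative parts must be disjoint")

    def sign(self, x):
        """The Boolean-function view: +1, -1, or None if x is absent."""
        if x in self.positive:
            return +1
        if x in self.negative:
            return -1
        return None

    def support(self):
        """The underlying unsigned set."""
        return self.positive | self.negative

s = SignedSet(frozenset({1, 3}), frozenset({2}))
print(s.sign(1), s.sign(2), s.sign(4))  # 1 -1 None
```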
Application
Signed sets are fundamental to the definition of oriented matroids.
They may also be used to define the faces of a hypercube. If the hypercube consists of all points in Euclidean space of a given dimension whose Cartesian coordinates are in the interval [-1, +1], then a signed subset of the coordinate axes can be used to specify the points whose coordinates within the subset are -1 or +1 (according to the sign in the signed subset) and whose other coordinates may be anywhere in the interval [-1, +1]. This subset of points forms a face, whose codimension is the cardinality of the signed subset.
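As a concrete sketch of this correspondence (the function name and sampling grid are illustrative assumptions), the following code fixes the signed coordinates of a cube at -1 or +1 and lets the remaining coordinates vary; the face's codimension equals the number of signed coordinates:

```python
from itertools import product

def face_points(n, signed, grid=(-1.0, 0.0, 1.0)):
    """Sample points of the hypercube face selected by a signed subset.

    `signed` maps a coordinate index to +1 or -1; every other
    coordinate ranges freely over [-1, +1], sampled here on `grid`.
    """
    free = [i for i in range(n) if i not in signed]
    for values in product(grid, repeat=len(free)):
        point = [0.0] * n
        for i, s in signed.items():
            point[i] = float(s)
        for i, v in zip(free, values):
            point[i] = v
        yield tuple(point)

# The signed subset {0: +1, 2: -1} of the 3-cube has cardinality 2,
# so it selects a face of codimension 2: the edge x0 = +1, x2 = -1.
print(list(face_points(3, {0: +1, 2: -1})))
```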
Combinatorics
Enumeration
The number of signed subsets of a given finite set of $n$ elements is $3^n$, a power of three, because there are three choices for each element: it may be absent from the subset, present with positive sign, or present with negative sign. For the same reason, the number of signed subsets of cardinality $k$ is $2^k\binom{n}{k}$, and summing these over all $k$ gives an instance of the binomial theorem:
$$\sum_{k=0}^{n} 2^k\binom{n}{k} = (1+2)^n = 3^n.$$
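These counts are easy to confirm by brute force; a short Python check (helper names invented for illustration):

```python
from itertools import combinations, product
from math import comb

def signed_subsets(elements):
    """Yield every signed subset as a (positive, negative) pair of frozensets."""
    for k in range(len(elements) + 1):
        for subset in combinations(elements, k):
            for signs in product((+1, -1), repeat=k):
                pos = frozenset(e for e, s in zip(subset, signs) if s > 0)
                neg = frozenset(e for e, s in zip(subset, signs) if s < 0)
                yield pos, neg

n = 4
all_signed = list(signed_subsets(range(n)))
assert len(all_signed) == 3 ** n                # 3^n signed subsets in total
for k in range(n + 1):
    count = sum(1 for pos, neg in all_signed if len(pos) + len(neg) == k)
    assert count == 2 ** k * comb(n, k)         # 2^k * C(n, k) of cardinality k
```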
Intersecting families
An analogue of the Erdős–Ko–Rado theorem on intersecting families of sets holds also for signed sets. The intersection of two signed sets is defined to be the signed set of elements that belong to both and have the same sign in both. According to this theorem, for a collection of signed subsets of an $n$-element set, all having cardinality $k$ and all pairs having a non-empty intersection, the number of signed subsets in the collection is at most $2^{k-1}\binom{n-1}{k-1}$.
For instance, an intersecting family of this size can be obtained by choosing the sign of a single fixed element, and taking the family to be all signed subsets of cardinality $k$ that contain this element with this sign. For $k \le n/2$ this theorem follows immediately from the unsigned Erdős–Ko–Rado theorem, as the unsigned versions of the subsets form an intersecting family and each unsigned set can correspond to at most $2^{k-1}$ signed sets in the collection. However, for larger values of $k$ a different proof is needed.
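The extremal family described above is easy to exhibit directly. The sketch below (helper names invented) builds it for n = 5, k = 3 and checks both its size and that every pair intersects with consistent signs; it illustrates the construction, not the proof of the upper bound:

```python
from itertools import combinations, product
from math import comb

def signed_ksets(n, k):
    """All signed subsets of {0, ..., n-1} of cardinality k,
    represented as frozensets of (element, sign) pairs."""
    for subset in combinations(range(n), k):
        for signs in product((+1, -1), repeat=k):
            yield frozenset(zip(subset, signs))

n, k = 5, 3
# The construction from the text: fix element 0 with sign +1.
family = [s for s in signed_ksets(n, k) if (0, +1) in s]
assert len(family) == 2 ** (k - 1) * comb(n - 1, k - 1)   # 4 * 6 = 24
# Signed intersection = common (element, sign) pairs; all pairs share (0, +1).
assert all(a & b for a, b in combinations(family, 2))
```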
References
Set theory | Signed set | Mathematics | 517 |
1,000,474 | https://en.wikipedia.org/wiki/PLATO%20%28computer%20system%29 | PLATO (Programmed Logic for Automatic Teaching Operations), also known as Project Plato and Project PLATO, was the first generalized computer-assisted instruction system. Starting in 1960, it ran on the University of Illinois's ILLIAC I computer. By the late 1970s, it supported several thousand graphics terminals distributed worldwide, running on nearly a dozen different networked mainframe computers. Many modern concepts in multi-user computing were first developed on PLATO, including forums, message boards, online testing, email, chat rooms, picture languages, instant messaging, remote screen sharing, and multiplayer video games.
PLATO was designed and built by the University of Illinois and functioned for four decades, offering coursework (elementary through university) to UIUC students, local schools, prison inmates, and other universities. Courses were taught in a range of subjects, including Latin, chemistry, education, music, Esperanto, and primary mathematics. The system included a number of features useful for pedagogy, including text overlaying graphics, contextual assessment of free-text answers, depending on the inclusion of keywords, and feedback designed to respond to alternative answers.
Rights to market PLATO as a commercial product were licensed by Control Data Corporation (CDC), the manufacturer on whose mainframe computers the PLATO IV system was built. CDC President William Norris planned to make PLATO a force in the computer world, but found that marketing the system was not as easy as hoped. PLATO nevertheless built a strong following in certain markets, and the last production PLATO system was in use until 2006.
Innovations
PLATO was either the first or an early example of many now-common technologies:
Hardware
. Donald Bitzer
. Donald Bitzer
Display Graphics
storing in downloadable fonts.
Online communities
Notesfiles (precursor to newsgroups), 1973.
Term-talk (1:1 chat)
Screen software sharing: , used by instructors to help students, precursor of Timbuktu.
Common Computer Game Genres, including many of the earliest (possibly the first) real-time multiplayer games
Multiplayer Games
. Rick Blomme
Dungeon Games
. Included the first video game boss.
, likely the first graphical dungeon computer game.
Space combat
Flight Simulation: ; this probably inspired UIUC student Bruce Artwick to start Sublogic which was acquired and later became Microsoft Flight Simulator.
Military simulations: .
3D Maze games: , based on a story by J. G. Ballard, the first PLATO 3-D walkthru maze game.
Quest Simulation: , like Trek with monsters, trees, treasures.
Solitaire: solitaire,
Educational
Training systems; an ambitious ICAI programming system featuring partial-order plans, used to train Con Edison steam plant operators.
History
Impetus
Before the 1944 G.I. Bill that provided free college education to World War II veterans, higher education was limited to a minority of the US population, though only 9% of the population was in the military. The trend towards greater enrollment was notable by the early 1950s, and the problem of providing instruction for the many new students was a serious concern to university administrators. To wit, if computerized automation increased factory production, it could do the same for academic instruction.
The USSR's 1957 launching of the Sputnik I artificial satellite energized the United States' government into spending more on science and engineering education. In 1958, the U.S. Air Force's Office of Scientific Research had a conference about the topic of computer instruction at the University of Pennsylvania; interested parties, notably IBM, presented studies.
Genesis
Around 1959, Chalmers W. Sherwin, a physicist at the University of Illinois, suggested a computerised learning system to William Everett, the engineering college dean, who, in turn, recommended that Daniel Alpert, another physicist, convene a meeting about the matter with engineers, administrators, mathematicians, and psychologists. After weeks of meetings they were unable to agree on a single design. Before conceding failure, Alpert mentioned the matter to laboratory assistant Donald Bitzer, who had been thinking about the problem, suggesting he could build a demonstration system.
Project PLATO was established soon afterwards, and in 1960, the first system, PLATO I, operated on the local ILLIAC I computer. It included a television set for display and a special keyboard for navigating the system's function menus; PLATO II, in 1961, featured two users at once, one of the first implementations of multi-user time-sharing.
The PLATO system was redesigned between 1963 and 1969; PLATO III allowed "anyone" to design new lesson modules using the TUTOR programming language, conceived in 1967 by biology graduate student Paul Tenczar. Built on a CDC 1604, given to them by William Norris, PLATO III could simultaneously run up to 20 terminals, and was used by local facilities in Champaign–Urbana that could enter the system with their custom terminals. The only remote PLATO III terminal was located near the state capitol in Springfield, Illinois at Springfield High School. It was connected to the PLATO III system by a video connection and a separate dedicated line for keyboard data.
PLATO I, II, and III were funded by small grants from a combined Army-Navy-Air Force funding pool. By the time PLATO III was in operation, everyone involved was convinced it was worthwhile to scale up the project. Accordingly, in 1967, the National Science Foundation granted the team steady funding, allowing Alpert to set up the Computer-based Education Research Laboratory (CERL) at the University of Illinois Urbana–Champaign campus. The system was capable of supporting 20 time-sharing terminals.
Multimedia experiences (PLATO IV)
In 1972, with the introduction of PLATO IV, Bitzer declared general success, claiming that the goal of generalized computer instruction was now available to all. However, the terminals were very expensive (about $12,000). The PLATO IV terminal had several major innovations:
Plasma Display Screen: Bitzer's orange plasma display incorporated both memory and bitmapped graphics into one display. The display was a 512×512 bitmap, with both character and vector plotting done by hardwired logic. It included fast vector line drawing capability and ran at 1260 baud, rendering 60 lines or 180 characters per second (a quick consistency check of these figures follows this list). Users could provide their own characters to support rudimentary bitmap graphics.
Touch panel: A 16×16 grid infrared touch panel, allowing students to answer questions by touching anywhere on the screen.
Microfiche images: Compressed air powered a piston-driven microfiche image selector that permitted colored images to be projected on the back of the screen under program control.
A hard drive for Audio snippets: The random-access audio device used a magnetic disc with a capacity to hold 17 total minutes of pre-recorded audio. It could retrieve for playback any of 4096 audio clips within 0.4 seconds. By 1980, the device was being commercially produced by Education and Information Systems, Incorporated with a capacity of just over 22 minutes.
A Votrax voice synthesizer
The Gooch Synthetic Woodwind (named after inventor Sherwin Gooch), a synthesizer that offered four-voice music synthesis to provide sound in PLATO courseware. This was later supplanted on the PLATO V terminal by the Gooch Cybernetic Synthesizer, which had sixteen voices that could be programmed individually, or combined to make more complex sounds.
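The line rate and drawing rates quoted for the plasma display above are mutually consistent under a simple framing assumption of roughly seven bits per character code; the per-line bit budget below is likewise derived arithmetic, not a documented protocol detail:

```python
# Sanity-check the PLATO IV terminal throughput figures quoted above.
line_rate_bps = 1260        # quoted line speed, bits per second
bits_per_char = 7           # assumption for illustration; real framing may differ
lines_per_sec = 60          # quoted vector-line drawing rate

print(line_rate_bps / bits_per_char)   # 180.0 characters per second, as quoted
print(line_rate_bps / lines_per_sec)   # 21.0 bits available per drawn line
```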
Bruce Parello, a student at the University of Illinois in 1972, created the first digital emojis on the PLATO IV system.
Influence on PARC and Apple
Early in 1972, researchers from Xerox PARC were given a tour of the PLATO system at the University of Illinois. At this time, they were shown parts of the system, such as the Insert Display/Show Display (ID/SD) application generator for pictures on PLATO (later translated into a graphics-draw program on the Xerox Star workstation); the Charset Editor for "painting" new characters (later translated into a "Doodle" program at PARC); and the Term Talk and Monitor Mode communications programs. Many of the new technologies they saw were adopted and improved upon, when these researchers returned to Palo Alto, California. They subsequently transferred improved versions of this technology to Apple Inc.
CDC years
As PLATO IV reached production quality, William Norris (CDC) became increasingly interested in it as a potential product. His interest was twofold. From a strict business perspective, he was evolving Control Data into a service-based company instead of a hardware one, and was increasingly convinced that computer-based education would become a major market in the future. At the same time, Norris was troubled by the unrest of the late 1960s, and felt that much of it was due to social inequalities that needed to be addressed. PLATO offered a solution by providing higher education to segments of the population that would otherwise never be able to afford a university education.
Norris provided CERL with machines on which to develop their system in the late 1960s. In 1971, he set up a new division within CDC to develop PLATO "courseware", and eventually many of CDC's own initial training and technical manuals ran on it. In 1974, PLATO was running on in-house machines at CDC headquarters in Minneapolis, and in 1976, they purchased the commercial rights in exchange for a new CDC Cyber machine.
CDC announced the acquisition soon after, claiming that by 1985, 50% of the company's income would be related to PLATO services. Through the 1970s, CDC tirelessly promoted PLATO, both as a commercial tool and one for re-training unemployed workers in new fields. Norris refused to give up on the system, and invested in several non-mainstream courses, including a crop-information system for farmers, and various courses for inner-city youth. CDC even went as far as to place PLATO terminals in some shareholder's houses, to demonstrate the concept of the system.
In the early 1980s, CDC started heavily advertising the service, apparently due to increasing internal dissent over the now $600 million project, taking out print and even radio ads promoting it as a general tool. The Minneapolis Tribune was unconvinced by their ad copy and started an investigation of the claims. In the end, they concluded that while it was not proven to be a better education system, everyone using it nevertheless enjoyed it, at least. An official evaluation by an external testing agency ended with roughly the same conclusions, suggesting that everyone enjoyed using it, but it was essentially equal to an average human teacher in terms of student advancement.
Of course, a computerized system equal to a human should have been a major achievement, the very concept for which the early pioneers in CBT were aiming. A computer could serve all the students in a school for the cost of maintaining it, and wouldn't go on strike. However, CDC charged $50 an hour for access to their data center, in order to recoup some of their development costs, making it considerably more expensive than a human on a per-student basis. PLATO was, therefore, a failure as a profitable commercial enterprise, although it did find some use in large companies and government agencies willing to invest in the technology.
An attempt to mass-market the PLATO system was introduced in 1980 as Micro-PLATO, which ran the basic TUTOR system on a CDC "Viking-721" terminal and various home computers. Versions were built for the TI-99/4A, Atari 8-bit computers, Zenith Z-100 and, later, Radio Shack TRS-80, and IBM Personal Computer. Micro-PLATO could be used stand-alone for normal courses, or could connect to a CDC data center for multiuser programs. To make the latter affordable, CDC introduced the Homelink service for $5 an hour.
Norris continued to praise PLATO, announcing that it would be only a few years before it represented a major source of income for CDC as late as 1984. In 1986, Norris stepped down as CEO, and the PLATO service was slowly killed off. He later claimed that Micro-PLATO was one of the reasons PLATO got off-track. They had started on the TI-99/4A, but then Texas Instruments pulled the plug and they moved to other systems like the Atari, who soon did the same. He felt that it was a waste of time anyway, as the system's value was in its online nature, which Micro-PLATO lacked initially.
Bitzer was more forthright about CDC's failure, blaming their corporate culture for the problems. He noted that development of the courseware was averaging $300,000 per delivery hour, many times what the CERL was paying for similar products. This meant that CDC had to charge high prices in order to recoup their costs, prices that made the system unattractive. The reason, he suggested, for these high prices was that CDC had set up a division that had to keep itself profitable via courseware development, forcing them to raise the prices in order to keep their headcount up during slow periods.
PLATO V: multimedia
Intel 8080 microprocessors were introduced in the new PLATO V terminals. They could download small software modules and execute them locally. It was a way to augment the PLATO courseware with rich animation and other sophisticated capabilities.
Online community
Although PLATO was designed for computer-based education, perhaps its most enduring legacy is its place in the origins of online community. This was made possible by PLATO's groundbreaking communication and interface capabilities, features whose significance is only lately being recognized by computer historians. PLATO Notes, created by David R. Woolley in 1973, was among the world's first online message boards, and years later became the direct progenitor of Lotus Notes.
PLATO's plasma panels were well suited to games, although its I/O bandwidth (180 characters per second or 60 graphic lines per second) was relatively slow. By virtue of 1500 shared 60-bit variables per game (initially), it was possible to implement online games. Because it was an educational computer system, most of the user community were keenly interested in games.
In much the same way that the PLATO hardware and development platform inspired advances elsewhere (such as at Xerox PARC and MIT), many popular commercial and Internet games ultimately derived their inspiration from PLATO's early games. As one example, Castle Wolfenstein by PLATO alum Silas Warner was inspired by PLATO's dungeon games (see below), in turn inspiring Doom and Quake. Thousands of multiplayer online games were developed on PLATO from around 1970 through the 1980s, with the following notable examples:
Daleske's Empire, a top-view multiplayer space game based on Star Trek. Either Empire or Colley's Maze War is the first networked multiplayer action game. It was ported to Trek82, Trek83, ROBOTREK, Xtrek, and Netrek, and also adapted (without permission) for the Apple II computer by fellow PLATO alum Robert Woodhead (of Wizardry fame), as a game called Galactic Attack.
The original Freecell by Alfille (from Baker's concept).
Fortner's Airfight, probably the direct inspiration for (PLATO alum) Bruce Artwick's Microsoft Flight Simulator.
Haefeli and Bridwell's Panther (a vector graphics-based tankwar game, anticipating Atari's Battlezone).
Many other first-person shooters, most notably Bowery's Spasim and Witz and Boland's Futurewar, believed to be the first FPS.
Countless games inspired by the role-playing game Dungeons & Dragons, including the original Rutherford/Whisenhunt and Wood dnd (later ported to the PDP-10/11 by Lawrence, who earlier had visited PLATO), believed to be the first dungeon crawl game. It was followed by Moria, Rogue, Dry Gulch (a western-style variation), and Bugs-n-Drugs (a medical variation)—all presaging MUDs (Multi-User Domains) and MOOs (MUDs, Object Oriented) as well as popular first-person shooters like Doom and Quake, and MMORPGs (Massively multiplayer online role-playing games) like EverQuest and World of Warcraft. Avatar, PLATO's most popular game, is one of the world's first MUDs and has over 1 million hours of use. The games Doom and Quake can trace part of their lineage back to PLATO programmer Silas Warner.
PLATO's communication tools and games formed the basis for an online community of thousands of PLATO users, which lasted for well over twenty years. PLATO's games became so popular that a program called "The Enforcer" was written to run as a background process to regulate or disable game play at most sites and times – a precursor to parental-style control systems that regulate access based on content rather than security considerations.
In September 2006 the Federal Aviation Administration retired its PLATO system, the last system that ran the PLATO software system on a CDC Cyber mainframe, from active duty. Existing PLATO-like systems now include NovaNET and Cyber1.org.
By early 1976, the original PLATO IV system had 950 terminals giving access to more than 3500 contact hours of courseware, and additional systems were in operation at CDC and Florida State University. Eventually, over 12,000 contact hours of courseware were developed, much of it by university faculty for higher education. PLATO courseware covers a full range of high-school and college courses, as well as topics such as reading skills, family planning, Lamaze training and home budgeting. In addition, authors at the University of Illinois School of Basic Medical Sciences (now the University of Illinois College of Medicine) devised a large number of basic science lessons and a self-testing system for first-year students. However, the most popular "courseware" remained their multi-user games and role-playing video games such as dnd, although it appears CDC was uninterested in this market. As the value of a CDC-based solution disappeared in the 1980s, interested educators ported the engine first to the IBM PC, and later to web-based systems.
Custom character sets
In the early 1970s, some people working in the modern foreign languages group at the University of Illinois began working on a set of Hebrew lessons, originally without good system support for leftward writing. In preparation for a PLATO demo in Tehran, Sherwood worked with Don Lee to implement support for leftward writing, including Persian (Farsi), which uses the Arabic script. There was no funding for this work, which was undertaken only due to Sherwood's personal interest, and no curriculum development occurred for either Persian or Arabic. However, Peter Cole, Robert Lebowitz, and Robert Hart used the new system capabilities to re-do the Hebrew lessons. The PLATO hardware and software supported the design and use of one's own 8-by-16 characters, so most languages could be displayed on the graphics screen (including those written right-to-left).
University of Illinois School of Music PLATO Project (Technology and Research-based Chronology)
A PLATO-compatible music language known as OPAL (Octave-Pitch-Accent-Length) was developed for these synthesizers, as well as a compiler for the language, two music text editors, a filing system for music binaries, programs to play the music binaries in real time, and print musical scores, and many debugging and compositional aids. A number of interactive compositional programs have also been written. Gooch's peripherals were heavily used for music education courseware as created, for example, by the University of Illinois School of Music PLATO Project.
From 1970 to 1994, the University of Illinois (U of I) School of Music explored the use of the Computer-based Education Research Laboratory (CERL) PLATO computer system to deliver online instruction in music. Led by G. David Peters, music faculty and students worked with PLATO’s technical capabilities to produce music-related instructional materials and experimented with their use in the music curriculum.
Peters began his work on PLATO III. By 1972, the PLATO IV system made it technically possible to introduce multimedia pedagogies that were not available in the marketplace until years later.
Between 1974 and 1988, 25 U of I music faculty participated in software curriculum development and more than 40 graduate students wrote software and assisted the faculty in its use. In 1988, the project broadened its focus beyond PLATO to accommodate the increasing availability and use of microcomputers. The broader scope resulted in renaming the project to The Illinois Technology-based Music Project. Work in the School of Music continued on other platforms after the CERL PLATO system shutdown in 1994. Over the 24-year life of the music project, its many participants moved into educational institutions and into the private sector. Their influence can be traced to numerous multimedia pedagogies, products, and services in use today, especially by musicians and music educators.
Significant early efforts
Pitch recognition/performance judging
In 1969, G. David Peters began researching the feasibility of using PLATO to teach trumpet students to play with increased pitch and rhythmic precision. He created an interface for the PLATO III terminal. The hardware consisted of (1) filters that could determine the true pitch of a tone, and (2) a counting device to measure tone duration. The device accepted and judged rapid notes, two notes trilled, and lip slurs. Peters demonstrated that judging instrumental performance for pitch and rhythmic accuracy was feasible in computer-assisted instruction.
Rhythm notation and perception
By 1970, a random access audio device was available for use with PLATO III.
In 1972, Robert W. Placek conducted a study that used computer-assisted instruction for rhythm perception. Placek used the random access audio device attached to a PLATO III terminal for which he developed music notation fonts and graphics. Students majoring in elementary education were asked to (1) recognize elements of rhythm notation, and (2) listen to rhythm patterns and identify their notations. This was the first known application of the PLATO random-access audio device to computer-based music instruction.
Study participants were interviewed about the experience and found it both valuable and enjoyable. Of particular value was PLATO’s immediate feedback. Though participants noted shortcomings in the quality of the audio, they generally indicated that they were able to learn the basic skills of rhythm notation recognition.
The new PLATO IV terminal included many new devices, which yielded two notable music projects:
Visual diagnostic skills for instrumental music educators
By the mid-1970s, James O. Froseth (University of Michigan) had published training materials that taught instrumental music teachers to visually identify typical problems demonstrated by beginning band students. For each instrument, Froseth developed an ordered checklist of what to look for (e.g., posture, embouchure, hand placement, and instrument position) and a set of 35mm slides of young players demonstrating those problems. In timed class exercises, trainees briefly viewed slides and recorded their diagnoses on the checklists, which were reviewed and evaluated later in the training session.
In 1978, William H. Sanders adapted Froseth’s program for delivery using the PLATO IV system. Sanders transferred the slides to microfiche for rear-projection through the PLATO IV terminal’s plasma display. In timed drills, trainees viewed the slides, then filled in the checklists by touching them on the display. The program gave immediate feedback and kept aggregate records. Trainees could vary the timing of the exercises and repeat them whenever they wished.
Sanders and Froseth subsequently conducted a study to compare traditional classroom delivery of the program to delivery using PLATO. The results showed no significant difference between the delivery methods for a) student post-test performance and b) their attitudes toward the training materials. However, students using the computer appreciated the flexibility to set their own practice hours, completed significantly more practice exercises, and did so in significantly less time.
Musical instrument identification
In 1967, Allvin and Kuhn used a four-channel tape recorder interfaced to a computer to present pre-recorded models to judge sight-singing performances.
In 1969, Ned C. Deihl and Rudolph E. Radocy conducted a computer-assisted instruction study in music that included discriminating aural concepts related to phrasing, articulation, and rhythm on the clarinet. They used a four-track tape recorder interfaced to a computer to provide pre-recorded audio passages. Messages were recorded on three tracks and inaudible signals on the fourth track with two hours of play/record time available. This research further demonstrated that computer-controlled audio with four-track tape was possible.
In 1979, Williams used a digitally controlled cassette tape recorder that had been interfaced to a minicomputer (Williams, M.A. "A comparison of three approaches to the teaching of auditory-visual discrimination, sight singing and music dictation to college music students: A traditional approach, a Kodaly approach, and a Kodaly approach augmented by computer-assisted instruction," University of Illinois, unpublished). This device worked, yet was slow with variable access times.
In 1981, Nan T. Watanabe researched the feasibility of computer-assisted music instruction using computer-controlled pre-recorded audio. She surveyed audio hardware that could interface with a computer system.
Random-access audio devices interfaced to PLATO IV terminals were also available. There were issues with sound quality due to dropouts in the audio. Regardless, Watanabe deemed consistent fast access to audio clips critical to the study design and selected this device for the study.
Watanabe’s computer-based drill-and-practice program taught elementary music education students to identify musical instruments by sound. Students listened to randomly selected instrument sounds, identified the instrument they heard, and received immediate feedback. Watanabe found no significant difference in learning between the group who learned through computer-assisted drill programs and the group receiving traditional instruction in instrument identification. The study did, however, demonstrate that use of random-access audio in computer-assisted instruction in music was feasible.
The Illinois Technology-based music project
By 1988, with the spread of micro-computers and their peripherals, the University of Illinois School of Music PLATO Project was renamed The Illinois Technology-based Music Project. Researchers subsequently explored the use of emerging, commercially available technologies for music instruction until 1994.
Influences and impacts
Educators and students used the PLATO System for music instruction at other educational institutions including Indiana University, Florida State University, and the University of Delaware. Many alumni of the University of Illinois School of Music PLATO Project gained early hands-on experience in computing and media technologies and moved into influential positions in both education and the private sector.
The goal of this system was to provide tools for music educators to use in the development of instructional materials, which might possibly include music dictation drills, automatically graded keyboard performances, envelope and timbre ear-training, interactive examples or labs in musical acoustics, and composition and theory exercises with immediate feedback. One ear-training application, Ottaviano, became a required part of certain undergraduate music theory courses at Florida State University in the early 1980s.
Another peripheral was the Votrax speech synthesizer, and a "say" instruction (with "saylang" instruction to choose the language) was added to the Tutor programming language to support text-to-speech synthesis using the Votrax.
Other efforts
One of CDC's greatest commercial successes with PLATO was an online testing system developed for National Association of Securities Dealers (now the Financial Industry Regulatory Authority), a private-sector regulator of the US securities markets. During the 1970s Michael Stein, E. Clarke Porter and PLATO veteran Jim Ghesquiere, in cooperation with NASD executive Frank McAuliffe, developed the first "on-demand" proctored commercial testing service. The testing business grew slowly and was ultimately spun off from CDC as Drake Training and Technologies in 1990. Applying many of the PLATO concepts used in the late 1970s, E. Clarke Porter led the Drake Training and Technologies testing business (today Thomson Prometric) in partnership with Novell, Inc. away from the mainframe model to a LAN-based client server architecture and changed the business model to deploy proctored testing at thousands of independent training organizations on a global scale. With the advent of a pervasive global network of testing centers and IT certification programs sponsored by, among others, Novell and Microsoft, the online testing business exploded. Pearson VUE was founded by PLATO/Prometric veterans E. Clarke Porter, Steve Nordberg and Kirk Lundeen in 1994 to further expand the global testing infrastructure. VUE improved on the business model by being one of the first commercial companies to rely on the Internet as a critical business service and by developing self-service test registration. The computer-based testing industry has continued to grow, adding professional licensure and educational testing as important business segments.
A number of smaller testing-related companies also evolved from the PLATO system. One of the few survivors of that group is The Examiner Corporation. Dr. Stanley Trollip (formerly of the University of Illinois Aviation Research Lab) and Gary Brown (formerly of Control Data) developed the prototype of The Examiner System in 1984.
In the early 1970s, James Schuyler developed a system at Northwestern University called HYPERTUTOR as part of Northwestern's MULTI-TUTOR computer assisted instruction system. This ran on several CDC mainframes at various sites.
Between 1973 and 1980, a group under the direction of Thomas T. Chen at the Medical Computing Laboratory of the School of Basic Medical Sciences at the University of Illinois at Urbana-Champaign ported PLATO's TUTOR programming language to the MODCOMP IV minicomputer. Douglas W. Jones, A.B. Baskin, Tom Szolyga, Vincent Wu and Lou Bloomfield did most of the implementation. This was the first port of TUTOR to a minicomputer, and it was largely operational by 1976. In 1980, Chen founded Global Information Systems Technology of Champaign, Illinois, to market this as the Simpler system. GIST eventually merged with the Government Group of Adayana Inc. Vincent Wu went on to develop the Atari PLATO cartridge.
CDC eventually sold the "PLATO" trademark and some courseware marketing segment rights to the newly formed The Roach Organization (TRO) in 1989. In 2000, TRO changed its name to PLATO Learning and continued to sell and service PLATO courseware running on PCs. In late 2012, PLATO Learning brought its online learning solutions to market under the name Edmentum.
CDC continued development of the basic system under the name CYBIS (CYber-Based Instructional System) after selling the trademarks to Roach, in order to service its commercial and government customers. CDC later sold off its CYBIS business to University Online, a descendant of IMSATT. University Online was later renamed VCampus.
The University of Illinois also continued development of PLATO, eventually setting up a commercial on-line service called NovaNET in partnership with University Communications, Inc. CERL was closed in 1994, with the maintenance of the PLATO code passing to UCI. UCI was later renamed NovaNET Learning, which was bought by National Computer Systems (NCS). Shortly after that, NCS was bought by Pearson, and after several name changes now operates as Pearson Digital Learning.
The Evergreen State College received several grants from CDC to implement computer language interpreters and associated programming instruction. Royalties received from the PLATO computer-aided instruction materials developed at Evergreen support technology grants and an annual lecture series on computer-related topics.
Other versions
In South Africa
During the period when CDC was marketing PLATO, the system began to be used internationally. South Africa was one of the biggest users of PLATO in the early 1980s. Eskom, the South African electrical power company, had a large CDC mainframe at Megawatt Park in the northwest suburbs of Johannesburg. Mainly this computer was used for management and data processing tasks related to power generation and distribution, but it also ran the PLATO software. The largest PLATO installation in South Africa during the early 1980s was at the University of the Western Cape, which served the "native" population, and at one time had hundreds of PLATO IV terminals all connected by leased data lines back to Johannesburg. There were several other installations at educational institutions in South Africa, among them Madadeni College in the Madadeni township just outside Newcastle.
This was perhaps the most unusual PLATO installation anywhere. Madadeni had about 1,000 students, all of them members of the indigenous population and 99.5% of Zulu ancestry. The college was one of 10 teacher-preparation institutions in KwaZulu, most of them much smaller. In many ways Madadeni was very primitive: none of the classrooms had electricity, and there was only one telephone for the whole college, which one had to crank for several minutes before an operator might come on the line. So an air-conditioned, carpeted room with 16 computer terminals was a stark contrast to the rest of the college. At times the only way a person could communicate with the outside world was through PLATO term-talk.
For many of the Madadeni students, most of whom came from very rural areas, the PLATO terminal was the first time they encountered any kind of electronic technology. Many of the first-year students had never seen a flush toilet before. There initially was skepticism that these technologically illiterate students could effectively use PLATO, but those concerns were not borne out. Within an hour or less most students were using the system proficiently, mostly to learn math and science skills, although a lesson that taught keyboarding skills was one of the most popular. A few students even used on-line resources to learn TUTOR, the PLATO programming language, and a few wrote lessons on the system in the Zulu language.
PLATO was also used fairly extensively in South Africa for industrial training. Eskom successfully used PLM (PLATO learning management) and simulations to train power plant operators, South African Airways (SAA) used PLATO simulations for cabin attendant training, and a number of other large companies were exploring the use of PLATO as well.
The South African subsidiary of CDC invested heavily in the development of an entire secondary school curriculum (SASSC) on PLATO, but unfortunately as the curriculum was nearing the final stages of completion, CDC began to falter in South Africa—partly because of financial problems back home, partly because of growing opposition in the United States to doing business in South Africa, and partly due to the rapidly evolving microcomputer, a paradigm shift that CDC failed to recognize.
Cyber1
In August 2004, a version of PLATO corresponding to the final release from CDC was resurrected online. This version of PLATO runs on a free and open-source software emulation of the original CDC hardware called Desktop Cyber. Within six months, by word of mouth alone, more than 500 former users had signed up to use the system. Many of the students who used PLATO in the 1970s and 1980s felt a special social bond with the community of users who came together using the powerful communications tools (talk programs, records systems and notesfiles) on PLATO.
The PLATO software used on Cyber1 is the final release (99A) of CYBIS, by permission of VCampus. The underlying operating system is NOS 2.8.7, the final release of the NOS operating system, by permission of Syntegra (now British Telecom [BT]), which had acquired the remainder of CDC's mainframe business. Cyber1 runs this software on the Desktop Cyber emulator. Desktop Cyber accurately emulates in software a range of CDC Cyber mainframe models and many peripherals.
Cyber1 offers free access to the system, which contains over 16,000 of the original lessons, in an attempt to preserve the original PLATO communities that grew up at CERL and on CDC systems in the 1980s. The load average of this resurrected system is about 10–15 users, sending personal and notesfile notes, and playing inter-terminal games such as Avatar and Empire (a Star Trek-like game), which had both accumulated more than 1.0 million contact hours on the original PLATO system at UIUC.
See also
:Category:PLATO (computer system) games
The Mother of All Demos (1968)
References
Further reading
External links
Discusses Donald Bitzer's relationship with Control Data Corporation (CDC) during the development of PLATO, a computer-assisted instruction system. Describes the interest in PLATO of Harold Brooks, a CDC salesman, and his help in procuring a 1604 computer for Bitzer's use. Recalls the commercialization of PLATO by CDC and Bitzer's disagreements with CDC over marketing strategy and the creation of courseware for PLATO.
A program officer at the National Science Foundation (NSF) describes the impact of Don Bitzer and the PLATO system, grants related to the classroom use of computers, and NSF's Regional Computing Program.
Archival collection containing internal reports and external reports and publications related to the development of PLATO and the operations of CERL.
The CBE series documents CDC's objective of creating, marketing and distributing PLATO courseware internally within various CDC departments and divisions, and externally.
Cyber1: online preservation of the PLATO system.
Computer-based Education Research Laboratory
PLATO
Control Data Corporation software
History of electronic engineering | PLATO (computer system) | Engineering | 7,582 |
21,576,279 | https://en.wikipedia.org/wiki/3D%20television | 3D television (3DTV) is television that conveys depth perception to the viewer by employing techniques such as stereoscopic display, multi-view display, 2D-plus-depth, or any other form of 3D display. Most modern 3D television sets use an active shutter 3D system or a polarized 3D system, and some are autostereoscopic without the need of glasses. , most 3D TV sets and services are no longer available from manufacturers.
History
The stereoscope was first invented by Sir Charles Wheatstone in 1838. It showed that when two pictures are viewed stereoscopically, they are combined by the brain to produce 3D depth perception. The stereoscope was improved by Louis Jules Duboscq, and a famous picture of Queen Victoria was displayed at The Great Exhibition in 1851. In 1855 the Kinematoscope was invented. In the late 1890s, the British film pioneer William Friese-Greene filed a patent for a 3D movie process. On 10 June 1915, former Edison Studios chief director Edwin S. Porter and William E. Waddell presented tests in red-green anaglyph to an audience at the Astor Theater in New York City and in 1922 the first public 3D movie The Power of Love was displayed.
Stereoscopic 3D television was demonstrated for the first time on 10 August 1928, by John Logie Baird in his company's premises at 133 Long Acre, London. Baird pioneered a variety of 3D television systems using electro-mechanical and cathode-ray tube techniques. The first 3D TV was produced in 1935, and stereoscopic 3D still cameras for personal use had already become fairly common by the Second World War. Many 3D movies were produced for theatrical release in the US during the 1950s just when television started to become popular. The first such movie was Bwana Devil from United Artists that could be seen all across the US in 1952. One year later, in 1953, came the 3D movie House of Wax which also featured stereophonic sound. Alfred Hitchcock produced his film Dial M for Murder in 3D, but for the purpose of maximizing profits the movie was released in 2D because not all cinemas were able to display 3D films. In 1946 the Soviet Union also developed 3D films, with Robinzon Kruzo being its first full-length 3D movie. People were excited to view the 3D movies, but were put off by their poor quality. Because of this, their popularity declined quickly. There was another attempt in the 1970s and 1980s to make 3D movies more mainstream with the releases of Friday the 13th Part III (1982) and Jaws 3-D (1983).
Matsushita Electric (now Panasonic) developed a 3D television that employed an active shutter 3D system in the late 1970s. They unveiled the television in 1981, while at the same time adapting the technology for use with the first stereoscopic video game, Sega's arcade game SubRoc-3D (1982). 3D film showings became more popular throughout the 2000s, culminating in the success of 3D presentations of Avatar in December 2009 and January 2010.
Though 3D movies were generally well received by the public, 3D television did not become popular until after the CES 2010 trade show, when major manufacturers began selling a full lineup of 3D televisions, following the success of Avatar. Shortly thereafter, consumer and professional 3D camcorders were released to the public by Sony and Panasonic. These used two lenses, one for each eye. Around the same time, the LG Optimus 3D, the Fujifilm FinePix Real 3D series, and the Nintendo 3DS were released. According to DisplaySearch, 3D television shipments totaled 41.45 million units in 2012, compared with 24.14 million in 2011 and 2.26 million in 2010. In late 2013, the number of 3D TV viewers started to decline, and by 2016, development of 3D TV was limited to a few premium models. Production of 3D TVs ended in 2016.
Technologies
There are several techniques to produce and display 3D moving pictures. The following are some of the technical details and methodologies employed in some of the more notable 3D movie systems that have been developed.
New technologies such as WindowWalls (wall-size displays) and visible light communication were expected to be incorporated into 3D television as demand grew. Scott Birnbaum, vice president of Samsung's LCD business, predicted that demand for 3D TV would skyrocket within a couple of years, fueled by televised sports (but this did not happen). Visible light communication could in principle deliver information directly to the television, because LED lights can transmit data by flickering at high frequencies.
Displaying technologies
The basic requirement is to display offset images that are filtered separately to the left and right eye. Two strategies have been used to accomplish this: have the viewer wear eyeglasses to filter the separately offset images to each eye, or have the light source split the images directionally into the viewer's eyes (no glasses required). Common 3D display technologies for projecting stereoscopic image pairs to the viewer include:
With filters/lenses:
Anaglyph 3D – with passive color filters (a compositing sketch appears at the end of this subsection)
Polarized 3D system – with passive polarization filters
Active shutter 3D system – with active shutters
Head-mounted display – with a separate display positioned in front of each eye, and lenses used primarily to relax eye focus
Without lenses: Autostereoscopic displays, sometimes referred to commercially as Auto 3D.
Others:
In a CEATEC 2011 exhibition, Hitachi released glasses-free 3D projection systems that use a set of 24 projectors, lenses, and translucent half mirrors to superimpose 3D images with a horizontal viewing angle of 60 degrees and a vertical viewing angle of 30 degrees. Besides Hitachi, Sony is also working on similar technologies.
Single-view displays project only one stereo pair at a time. Multi-view displays either use head tracking to change the view depending on the viewing angle, or simultaneous projection of multiple independent views of a scene for multiple viewers (automultiscopic). Such multiple views can be created on the fly using the 2D-plus-depth format.
Various other display techniques have been described, such as holography, volumetric display, and the Pulfrich effect, which was used by Doctor Who in Dimensions in Time in 1993, by 3rd Rock from the Sun in 1997, and by the Discovery Channel's Shark Week in 2000.
3D glasses may reduce image brightness.
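As referenced in the anaglyph entry above, color-filter compositing is simple enough to sketch directly. The following minimal Python example (illustrative only, not any particular product's implementation) assumes red/cyan glasses and 8-bit RGB input views: the red channel is taken from the left view and the green and blue channels from the right view, so each colored filter passes one eye's image.

```python
import numpy as np

def make_anaglyph(left, right):
    """left, right: HxWx3 uint8 RGB views -> one red/cyan anaglyph frame."""
    out = right.copy()          # green and blue channels from the right eye
    out[..., 0] = left[..., 0]  # red channel from the left eye
    return out
```

The color distortion and brightness loss inherent in this scheme are one reason the polarized and active shutter systems listed above displaced anaglyph for 3D TV.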
Producing technologies
Stereoscopy is the most widely accepted method for capturing and delivering 3D video. It involves capturing stereo pairs in a two-view setup, with cameras mounted side by side and separated by the same distance as a person's pupils. If we imagine projecting an object point in a scene along the line of sight for each eye, in turn, to a flat background screen, we may describe the location of this point mathematically using simple algebra. In rectangular coordinates, with the screen lying in the Y–Z plane, the Z axis upward, the Y axis to the right, and the viewer centered along the X axis, the screen coordinates are simply the sum of two terms: one accounting for perspective and the other for binocular shift. Perspective modifies the Z and Y coordinates of the object point by a factor of D/(D–x), while binocular shift contributes an additional term (to the Y coordinate only) of s·x/(2·(D–x)), where D is the distance from the selected system origin to the viewer (right between the eyes), s is the eye separation (about 7 centimeters), and x is the true x coordinate of the object point. The binocular shift is positive for the left-eye view and negative for the right-eye view. For very distant object points, the eyes will be looking along essentially the same line of sight. For very near objects, the eyes may become excessively "cross-eyed". However, for scenes in the greater portion of the field of view, a realistic image is readily achieved by superposition of the left and right images (using the polarization method or synchronized shutter-lens method), provided the viewer is not too near the screen and the left and right images are correctly positioned on the screen. Digital technology has largely eliminated the inaccurate superposition that was a common problem during the era of traditional stereoscopic films.
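A small numeric sketch in Python of the two terms just described (the sample values D = 300 cm and s = 7 cm follow the text; everything else is illustrative):

```python
def screen_coords(x, y, z, D=300.0, s=7.0):
    """Screen (Y, Z) coordinates of the object point (x, y, z), in cm,
    for the left and right eye. The screen lies in the Y-Z plane and the
    viewer is centered on the X axis at distance D (requires x < D)."""
    persp = D / (D - x)              # perspective factor D/(D - x)
    shift = s * x / (2 * (D - x))    # binocular shift (Y coordinate only)
    left = (y * persp + shift, z * persp)    # positive shift: left eye
    right = (y * persp - shift, z * persp)   # negative shift: right eye
    return left, right

# A point lying on the screen plane (x = 0) produces zero disparity:
assert screen_coords(0.0, 10.0, 5.0)[0] == screen_coords(0.0, 10.0, 5.0)[1]
```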
Multi-view capture uses arrays of many cameras to capture a 3D scene through multiple independent video streams. Plenoptic cameras, which capture the light field of a scene, can also be used to capture multiple views with a single main lens. Depending on the camera setup, the resulting views can either be displayed on multi-view displays, or passed along for further image processing.
After capture, stereo or multi-view image data can be processed to extract 2D plus depth information for each view, effectively creating a device-independent representation of the original 3D scene. These data can be used to aid inter-view image compression or to generate stereoscopic pairs for multiple different view angles and screen sizes.
2D plus depth processing can be used to recreate 3D scenes even from a single view and convert legacy film and video material to a 3D look, though a convincing effect is harder to achieve and the resulting image will likely look like a cardboard miniature.
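A toy Python sketch of the idea (a minimal forward-warping renderer, not a production algorithm; it assumes an 8-bit depth map in which 255 means nearest):

```python
import numpy as np

def render_stereo(image, depth, max_disparity=12):
    """image: HxWx3 uint8; depth: HxW uint8 (255 = nearest).
    Returns (left, right) views produced by per-pixel horizontal shifts."""
    h, w = depth.shape
    disp = (depth.astype(np.float32) / 255.0 * max_disparity).astype(int)
    left, right = np.zeros_like(image), np.zeros_like(image)
    for row in range(h):
        for col in range(w):
            d = disp[row, col]
            left[row, min(w - 1, col + d // 2)] = image[row, col]   # shift right
            right[row, max(0, col - d // 2)] = image[row, col]      # shift left
    return left, right
```

Pixels uncovered by the shifts (disocclusions) are simply left black here; real converters must in-paint them, which is one reason a convincing 2D-to-3D conversion is hard to achieve.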
3D production
Production of events such as live sports broadcasts in 3D differs from the methods used for 2D broadcasting. A high technical standard must be maintained because any mismatch in color, alignment, or focus between the two cameras of a stereo pair may destroy the 3D effect or produce discomfort in the viewer. Zoom lenses for each camera of a stereo pair must track together over their full range of focal lengths.
Addition of graphical elements (such as a scoreboard, timers, or logos) to a 3D picture must place the synthesized elements at a suitable depth within the frame, so that viewers can comfortably view the added elements as well as the main picture. This requires more powerful computers to calculate the correct appearance of the graphical elements. For example, the line of scrimmage that appears as a projected yellow line on the field during an American football broadcast requires about one thousand times more processing power to produce in 3D compared to a 2D image.
Since 3D images are effectively more immersive than 2D broadcasts, fewer fast cuts between camera angles are needed. 3D National Football League broadcasts cut between cameras about one-fifth as often as in 2D broadcasting. Rapid cuts between two different viewpoints can be uncomfortable for the viewer, so directors may lengthen the transition or provide images with intermediate depth between two extremes to "rest" the viewer's eyes. 3D images are most effective if the cameras are at a low angle of view, simulating presence of the viewer at the event; this can present problems with people or structures blocking the view of the event. While fewer camera locations are required, the overall number of cameras is similar to a 2D broadcast because each position needs two cameras. Other live sport events have additional factors that affect production; for example, an ice rink presents few cues for depth due to its uniform appearance.
TV sets
These TV sets were high-end and generally included Ethernet, USB player and recorder, Bluetooth and USB Wi-Fi.
3D-ready TV sets
3D-ready TV sets are those that can operate in 3D mode (in addition to regular 2D mode) using one of several display technologies to recreate a stereoscopic image. These TV sets usually supported HDMI 1.4 and a minimum output refresh rate of 120 Hz; glasses may be sold separately.
Philips was developing a 3D television set that would be available for the consumer market by about 2011 without the need for special glasses (autostereoscopy). However, the project was canceled because customers were slow to move from 2D to 3D.
In August 2010, Toshiba announced plans to bring a range of autostereoscopic TVs to market by the end of the year.
The Chinese manufacturer TCL Corporation developed an LCD 3D TV called the TD-42F, which was sold in China. This model uses a lenticular system and does not require any special glasses (autostereoscopy). It sold for approximately $20,000.
Onida, LG, Samsung, Sony, and Philips intended to increase their 3D TV offerings, with plans to make 3D TV sales account for over 50% of their respective TV distribution in 2012. It was expected that the screens would use a mixture of technologies until there was standardization across the industry. Samsung offered the LED 7000, LCD 750, and PDP 7000 TV sets and the Blu-ray 6900.
Full 3D TV sets
Full 3D TV sets included Samsung Full HD 3D (1920×1080p, 60 Hz) and Panasonic Full HD 3D (1920×1080p, 60 Hz).
A September 2011 CNET review touted Toshiba's 55ZL2 as "the future of television". Because of the demanding nature of autostereoscopic 3D technology, the set features a 3840×2160 panel; however, there was at the time no video content available at this resolution. It uses a multi-core processor to provide excellent upscaling to the "4k2k" resolution. Using a directional lenticular lenslet filter, the display generates nine 3D views. This technology commonly creates dead spots, which Toshiba avoids by using an eye-tracking camera to adjust the image. The reviewers also noted that the 3D resolution for a 1080p signal looks more like 720p and lacks parallax, which reduces immersion.
Standardization efforts
The entertainment industry was expected to adopt a common and compatible standard for 3D in home electronics. Unresolved standards issues included faster high-definition frame rates (to avoid judder, i.e. non-smooth motion) for 3D film, television and broadcasting, the type of 3D glasses (passive or active), bandwidth considerations, subtitles, recording formats, and a Blu-ray standard. With improvements in digital technology in the late 2000s, 3D movies became more practical to produce and display, putting competitive pressure behind the creation of 3D television standards. There are several techniques for stereoscopic video coding and stereoscopic distribution formatting, including anaglyph, quincunx, and 2D plus Delta. Serial digital interface is used to carry 3D TV signals within TV stations.
Content providers, such as Disney, DreamWorks, and other Hollywood studios, and technology developers, such as Philips, asked SMPTE to develop a 3DTV standard in order to avoid a battle of formats, to guarantee consumers that they would be able to view the 3D content they purchased, and to provide them with 3D home solutions for all budgets. In August 2008, SMPTE established the "3-D Home Display Formats Task Force" to define the parameters of a stereoscopic 3D mastering standard for content viewed on any fixed device in the home, no matter the delivery channel. It explored the standards that needed to be set for 3D content distributed via broadcast, cable, satellite, packaged media, and the Internet to be played out on televisions, computer screens and other tethered displays. After six months, the committee produced a report defining the issues and challenges, minimum standards, and evaluation criteria, which the Society said would serve as a working document for the SMPTE 3D standards efforts to follow. A follow-on effort to draft a standard for 3D content formats was expected to take another 18 to 30 months.
Production studios were developing an increasing number of 3D titles for the cinema and as many as a dozen companies were actively working on the core technology behind the product. Many had technologies available to demonstrate, but no clear road forward for a mainstream offering emerged.
Under these circumstances, SMPTE's inaugural meeting was essentially a call for proposals for 3D television; more than 160 people from 80 companies signed up for this first meeting. Vendors that presented their respective technologies at the task force meeting included SENSIO Technologies, Philips, Dynamic Digital Depth (DDD), TDVision, and Real D, all of which had 3D distribution technologies.
There were many active 3D projects in SMPTE for both TV and filmmakers in the late 2000s. The SMPTE 35PM40 Working Group decided (without influence from the SMPTE Board or any other external influence) that the good progress being made on 3D standards within other SMPTE groups (including the IMF Interoperable Master Format) meant that its "overview" project would be best published as an Engineering Report. However, by 2011, the SMPTE board had "abandoned all further work on 3D television".
However, SMPTE was not the only 3D standards group. Other organizations such as the Consumer Electronics Association (CEA), 3D@Home Consortium, ITU and the Entertainment Technology Center (ETC), at USC School of Cinematic Arts have created their own investigation groups and have already offered to collaborate to reach a common solution. The Digital TV Group (DTG), has committed to profiling a UK standard for 3DTV products and services. Other standard groups such as DVB, BDA, ARIB, ATSC, DVD Forum, IEC and others were involved in the process.
MPEG has been researching multi-view, stereoscopic, and 2D plus depth 3D video coding since the mid-1990s; the first result of this research is the Multiview Video Coding extension for MPEG-4 AVC that is currently undergoing standardization. MVC has been chosen by the Blu-ray disc association for 3D distribution. The format offers backwards compatibility with 2D Blu-ray players.
HDMI version 1.4, released in June 2009, defines a number of 3D transmission formats. The format "Frame Packing" (left and right image packed into one video frame with twice the normal bandwidth) is mandatory for HDMI 1.4 3D devices. All three resolutions (720p50, 720p60, and 1080p24) have to be supported by display devices, and at least one of those by playback devices. Other resolutions and formats are optional. While HDMI 1.4 devices will be capable of transmitting 3D pictures in full 1080p, HDMI 1.3 does not include such support. As an out-of-spec solution for the bitrate problem, a 3D image may be displayed at a lower resolution, like interlaced or at standard definition.
DVB 3D-TV standard
DVB has established the DVB 3D-TV Specification. The following 3D-TV consumer configurations will be available to the public:
3D-TV connected to 3D Blu-ray Player for packaged media.
3D-TV connected to HD Games Console, e.g. PS3 for 3D gaming.
3D-TV connected to HD STB for broadcast 3D-TV.
3D-TV receiving a 3D-TV broadcast directly via a built-in tuner and decoder.
For the two broadcast scenarios above, initial requirements were for pay-TV broadcasters to deliver 3D-TV services over existing HD broadcasting infrastructures, and to use existing receivers (with firmware upgrade, as required) to deliver 3D content to 3D-TV sets, via an HDMI or equivalent connection, if needed. This is termed Frame Compatible. There is a range of Frame Compatible formats. They include the Side by Side (SbS) format, the Top and Bottom (TaB) format, and others.
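A minimal Python sketch (illustrative only) of the two packings just named: each view gives up half of its horizontal (SbS) or vertical (TaB) resolution so that the stereo pair fits into one ordinary HD frame and can pass through an unmodified HD broadcast chain.

```python
import numpy as np

def pack_side_by_side(left, right):
    """left, right: HxWx3 arrays -> one HxWx3 Side by Side frame."""
    return np.concatenate([left[:, ::2], right[:, ::2]], axis=1)  # drop alternate columns

def pack_top_and_bottom(left, right):
    """left, right: HxWx3 arrays -> one HxWx3 Top and Bottom frame."""
    return np.concatenate([left[::2, :], right[::2, :]], axis=0)  # drop alternate rows
```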
Broadcasts
3D channels
In 2008, 3D programming was broadcast on Japanese satellite BS11 approximately four times per day.
Cablevision launched a 3D version of its MSG channel on 24 March 2010; the limited service was available only to Cablevision subscribers, on channel 1300. The channel was dedicated primarily to sports broadcasts, including MSG's 3D broadcast of a New York Rangers-New York Islanders game, limited coverage of the 2010 Masters Tournament, and (in cooperation with YES Network) a game between the New York Yankees and Seattle Mariners.
The first Australian program broadcast in high-definition 3D was Fox Sports coverage of the soccer game Australia-New Zealand on 24 May 2010.
Also in Australia, the Nine Network and the Special Broadcasting Service respectively brought the State of Origin matches (26 May, 16 June and 7 July 2010, on Nine) and the FIFA World Cup (on SBS) to viewers in 3D on Channel 40.
In early 2010, Discovery Communications, Imax, and Sony announced plans to launch a 3D TV channel in the US in early 2011. At the same time, the Russian company Platform HD and its partners, General Satellite and Samsung Electronics, announced their 3D television project, the first such project in Russia.
In Brazil, RedeTV! became the first terrestrial television network to transmit a 3D signal free to all 3D-enabled viewers, on 21 May 2010.
Starting on 11 June 2010, ESPN launched a new channel, ESPN 3D, dedicated to 3D sports with up to 85 live events a year in 3D.
On 1 January 2010, the world's first 3D channel, SKY 3D, was launched nationwide in South Korea by Korea Digital Satellite Broadcasting. The channel's slogan is "World No.1 3D Channel". This 24/7 channel uses side-by-side technology at a resolution of 1920×1080i. Its 3D content includes education, animation, sport, documentaries and performances.
A full 24-hour broadcast channel was announced at the 2010 Consumer Electronics show as a joint venture from IMAX, Sony, and the Discovery channel. The intent was to launch the channel in the United States by year end 2010. However, this did not materialize in time.
DirecTV and Panasonic launched two broadcast channels and one video-on-demand channel with 3D content in June 2010. DirecTV previewed a live demo of its 3D feed at the Consumer Electronics Show held 7–10 January 2010.
In Europe, British Sky Broadcasting (Sky) launched a limited 3D TV broadcast service on 3 April 2010. Transmitting from the Astra 2A satellite at 28.2° east, Sky 3D broadcast a selection of live English Premier League football matches to over 1000 British pubs and clubs equipped with a Sky+HD Digibox and 3D Ready TVs, and preview programmes provided for free to top-tier Sky HD subscribers with 3D TV equipment. This was later expanded to include a selection of films, sports, and entertainment programming launched to Sky subscribers on 1 October 2010.
On 28 September 2010, Virgin Media launched a 3D TV on Demand service.
Several other European pay-TV networks were also planning 3D TV channels, and some started test transmissions on other Astra satellites; the French pay-TV operator Canal+ announced that its first 3D channel would launch in December 2010. The Spanish Canal+ started its first broadcasts on 18 May 2010 and included 2010 FIFA World Cup matches on the new Canal+ 3D channel. Satellite operator SES started a free-to-air 3D demonstration channel on the Astra satellite at 23.5° east on 4 May 2010 for the opening of the 2010 ANGA Cable international trade fair, using 3D programming supplied by 3D-ready TV manufacturer Samsung under an agreement between Astra and Samsung to co-promote 3D TV.
By November 2010, there were eight 3D channels broadcasting to Europe from three Astra satellite positions, including demonstrations provided by Astra, pay-TV from BSkyB, Canal+ and others, and the Dutch Brava3D cultural channel, which provides a mix of classical music, opera and ballet free-to-air across Europe from Astra 23.5°E.
In April 2011, HIGH TV (a 3D family entertainment channel) launched. Headquartered in New York with offices in Hong Kong and London, the channel broadcasts through eight satellites around the world, covering Europe, Asia, the Nordic region, Russia, South America, Africa, the Middle East and North America.
3flow is a 3D channel that began broadcasting on Freebox in France on 1 April 2011. It is made up entirely of native stereoscopic programming produced and owned by WildEarth and Sasashani (WildEarth's parent company). Initially the focus was mostly safari content; it has since widened to include underwater, extreme sports and other 3D content from around the world. WildEarth and Sasashani also distribute 3D series and shows through 3D Content Hub.
On 1 January 2012, China's first 3D Test Channel launched on China Central Television and 5 other networks.
On 1 February 2012, the Extreme Sports Channel launched in Italy on Sky Italia, marking its international début in high definition (HD).
The channel's HD feed is a simulcast of the standard-definition feed launched in 1999, which broadcasts to subscribers in 66 territories and in 12 languages across Europe, the Middle East and Africa (EMEA). The launch on Italy's Sky platform marked the channel's entrance into the HD market, from which it planned to roll out to operators across the EMEA region.
In February 2012 Telecable de Tricom, a major Dominican cable TV provider, announced the launch of the first 3D TV programming package in Latin America. As of 3 July 2012, the only 3D channels available were 3flow and HIGH TV 3D.
In July 2013, the BBC announced that it would indefinitely suspend 3D programming due to a lack of uptake. Only half of the estimated 1.5 million households in the UK with a 3D-enabled television watched the opening ceremony of the 2012 Summer Olympics in 3D.
In 2013, in the US, ESPN 3D was shut down due to lack of demand, followed by Xfinity 3D and all DirecTV 3D programming in 2014.
List of 3D TV channels
Standard HD channels have also broadcast in 3D. BBC HD occasionally broadcast high-profile events in 3D, including the Wimbledon men's and ladies' singles finals and the opening and closing ceremonies of the 2012 Summer Olympics. However, the BBC abandoned 3D broadcasting following the 2013 Wimbledon tennis championships.
3D episodes and shows
There have been several notable examples in television where 3D episodes have been produced, typically as one-hour specials or special events.
1980s
The first-ever 3D broadcast in the UK was an episode of the weekly science magazine The Real World, made by Television South and screened in the UK in February 1982. The program included excerpts of test footage shot by Philips in the Netherlands. Red/green 3D glasses were given away free with copies of the TV Times listings magazine, but the 3D sections of the program were shown in monochrome. The experiment was repeated nationally in December 1982, with red/blue glasses allowing color 3D to be shown for the first time. The program was repeated the following weekend, followed by a rare screening of the Western Fort Ti starring George Montgomery and Joan Vohs.
In 1985 Portugal's national TV channel RTP 1 broadcast the movie Creature from the Black Lagoon in anaglyph format. Red/cyan 3D glasses were sold with magazines.
1990s
In November 1993, the BBC announced a one-off week of 3D programming filmed using the pioneering Pulfrich 3D technique. 3D glasses were sold in shops around the UK, a percentage of the sales going to the Children In Need charity. The week's programming concluded with a screening of the 3D Doctor Who special "Dimensions In Time" as well as specially shot segments of Noel's House Party and the annual Children In Need charity appeal.
3D television episodes were a brief fad on U.S. television during the May 1997 sweeps. The sitcom 3rd Rock from the Sun showed a two-part episode, "Nightmare On Dick Street", where several of the characters' dreams are shown in 3D. The episode cued its viewers to put on their 3D glasses (which used the Pulfrich effect) by including "3D on" and "3D off" icons in the corner of the screen as a way to alert them as to when the 3D sequences would start and finish. Customers were given free glasses courtesy of a joint venture between Little Caesars pizza and Barq's Root Beer. Also in May 1997, ABC had a special line-up of shows that showcased specific scenes in 3D. The shows included Home Improvement, Spin City, The Drew Carey Show, Ellen, Family Matters, Step by Step, Sabrina, The Teenage Witch, and America's Funniest Home Videos. Similar to 3rd Rock, an icon alerted viewers when to put on the 3D glasses. Customers were given free anaglyph glasses at Wendy's for the promotion. Nickelodeon had a special lineup of shows in 1997 that also showcased specific scenes in 3D promoted as Nogglevision; ChromaDepth was the technology of choice for Nickelodeon's 3D.
2000s
Television shows including the drama Medium and the comedy Chuck (Season 2, episode 12) used 3D television.
Channel 4 in the UK ran a short season of 3D programming in November 2009, including Derren Brown and The Queen in 3D. Unlike previous British 3D TV experiments, the programs were transmitted in ColorCode 3D.
In May 2006 Portugal's national TV channel RTP 1 broadcast several shows in anaglyph format ("Real 3D") for a week. Red/cyan 3D glasses were sold exclusively by a hypermarket chain.
2010s
On 31 January 2010, BSkyB became the first broadcaster in the world to show a live sports event in 3D, when Sky Sports screened a football match between Manchester United and Arsenal to a public audience in several selected pubs.
On 31 January 2010, the 52nd Grammy Awards featured a Michael Jackson Tribute Sequence in 3D, using anaglyph format.
The first stereoscopic indie live-action comedy, the one-hour show Safety Geeks: SVI 3D, made specifically for 3D TV and 3D VOD, was produced and released in March 2010 through Dynamic Digital Depth's Yabazam website portal. Safety Geeks: SVI follows the comic adventures of an elite force of safety experts, the P.O.S.H. (Professional Occupational Safety Hazard) team. Obsessed with making the world safer, the CSI-like team investigates accidents to find out what went wrong and who is to blame. It won best 3D pilot or series at the Los Angeles 3D Film Festival in 2010.
In April 2010, the Masters Tournament was broadcast in live 3D on DirecTV, Comcast, and Cox.
The Roland Garros tennis tournament in Paris, from 23 May to 6 June 2010, was filmed in 3D (center court only) and broadcast live via ADSL and fiber to Orange subscribers throughout France in a dedicated Orange TV channel.
Fox Sports broadcast the first program in 3D in Australia when the Socceroos played the New Zealand All Whites at the MCG on 24 May 2010.
The Nine Network aired the first free-to-air 3D telecast when the Queensland Maroons faced the New South Wales Blues at ANZ Stadium on 26 May 2010.
On 29 May 2010, Sky broadcast the Guinness Premiership final in 3D in selected pubs and clubs.
Twenty-five matches of the 2010 FIFA World Cup were broadcast in 3D.
The inauguration of Philippine President Noynoy Aquino on 30 June 2010 was the first presidential inauguration to be telecast in live 3D, by GMA Network. However, the telecast was only available in a small number of localities.
The 2010 Coke Zero 400 was broadcast in 3D on 3 July on NASCAR.com and DirecTV along with Comcast, TWC, and Bright House cable systems.
Astro broadcast the 2010 FIFA World Cup Final on 11 July 2010 in 3-D on their B.yond service.
Satellite-delivered Bell TV in Canada began to offer a full-time 3D pay-TV channel to its subscribers on 27 July 2010.
The 2010 PGA Championship was broadcast in 3D for four hours on 13 August 2010, from 3–7 pm EDT. The broadcast was available on DirecTV, Comcast, Time Warner Cable, Bright House Networks, Cox Communications, and Cablevision.
In September 2010, the Canadian Broadcasting Corporation's first 3D broadcast was a special about the Canadian monarch, Elizabeth II, which included 3D film footage of the Queen's 1953 coronation as well as 3D video of her 2010 tour of Canada. This marked the first time the historical 3D images had been seen anywhere on television, as well as the first broadcast of a Canadian-produced 3D program in Canada.
FiOS and the NFL partnered to broadcast a 2 September 2010 pre-season game between the New England Patriots and the New York Giants in 3D. The game was broadcast in 3D only in the northeast.
The 2010 AFL Grand Final, on 25 September 2010, was broadcast in 3D by the Seven Network.
Rachael Ray aired a 3D Halloween Bash on 29 October 2010.
The first Japanese television series in 3D, Tokyo Control, premiered on 19 January 2011.
In May 2011, 3net released Bullproof, the first docu-reality TV series filmed in native 3D, made by Digital Revolution Studios.
The 2011 3D Creative Arts Awards, "Your World in 3D", was the first award show filmed in native 3D; it was televised on the 3net 3D channel carried on DirecTV. The production was filmed at Grauman's Chinese Theatre in Hollywood.
On 16 July 2011, The Parlotones (a South African rock act) became the first band to broadcast a live rock opera, "Dragonflies & Astronauts", in 3D to terrestrial cinemas, with a live 3D feed to DirecTV in the US and via Facebook pay-per-view.
The semi-finals, bronze final and final matches of the 2011 Rugby World Cup were broadcast in 3D.
Singapore-based Tiny Island Productions produced Dream Defenders, available in both autostereoscopic and stereoscopic 3D formats. 3net, which acquired the series, described it as the first stereoscopic children's series; it began airing on 25 September 2011.
In July 2011, the BBC announced that the grand final of Strictly Come Dancing in December 2011 would air in 3D.
The BBC broadcast the 2011 finals of the Wimbledon Lawn Tennis Championships in 3D.
Avi Arad was developing a 3D Pac-Man TV show.
The Xbox Live broadcasts of the 2012 Miss Universe and Miss USA beauty pageants were available in RealD 3D.
In 2013, in Brazil, NET HD pay-per-view broadcasts of the thirteenth season of Big Brother Brasil were available in 3D.
In July 2013, the BBC announced that they were putting 3D broadcasts on hold due to lack of audience interest, even from those who owned 3D TV displays.
As one of its final 3D broadcasts, on 23 November 2013 the BBC aired a special 3D episode of Doctor Who in celebration of that show's fiftieth anniversary. That episode, The Day of the Doctor, was filmed and produced in 3D, and broadcast in 2D and 3D in the UK, with simultaneous showings in 3D in cinemas around the world. It has since been made available on 3D Blu-ray.
Decline
As early as 2013, 3D televisions were being seen as a fad. DirecTV had stopped broadcasting 3D programs in 2012, while ESPN stopped in 2013. In the UK, Sky moved its content to on-demand, and the BBC ended airing 3D shows in 2013 due to "lack of public appetite".
Fewer and fewer 3D TVs were sold and soon TV manufacturers stopped making them. Vizio stopped production in 2014 and was followed by others. In January 2017, the last two major television manufacturers still producing 3D televisions, Sony and LG, announced they would stop all 3D support.
World record
The 2011 UEFA Champions League Final match between Manchester United and Barcelona was broadcast live in 3D format on a Ukrainian-produced EKTA screen in Gothenburg, Sweden. The screen made it to The Guinness Book of World Records as the world's biggest screen. The live 3D broadcast was provided by the company Viasat.
Health effects
Some viewers have complained of headaches, seizures and eyestrain after watching active shutter 3D TV. Several health warnings have been issued, especially for the elderly. Motion sickness, in addition to other health concerns, is more easily induced by 3D presentations.
There are primarily two effects of 3D TV that are unnatural for human vision: crosstalk between the eyes, caused by imperfect image separation, and the mismatch between convergence and accommodation, caused by the difference between an object's perceived position in front of or behind the screen and the real origin of that light on the screen.
It is believed that approximately 12% of people are unable to properly see 3D images, owing to a variety of medical conditions. According to another experiment, up to 30% of people have very weak stereoscopic vision, preventing depth perception based on stereo disparity. This nullifies or greatly decreases the immersion effects of digital stereo for them.
See also
Autostereoscopy
Stereoscopy
2D-plus-Depth
2D plus Delta
3D display
3D film
List of 3D films
Blu-ray 3D Disc
Crosstalk
Digital 3D
HD TV
LED TV
Nintendo 3DS
SES
References
Further reading
External links
Television technology
Stereoscopy
Television terminology | 3D television | Technology | 7,538 |
63,455,728 | https://en.wikipedia.org/wiki/Modular%20forms%20modulo%20p | In mathematics, modular forms are particular complex analytic functions on the upper half-plane of interest in complex analysis and number theory. When reduced modulo a prime p, there is an analogous theory to the classical theory of complex modular forms and the p-adic theory of modular forms.
Reduction of modular forms modulo 2
Conditions to reduce modulo 2
Modular forms are analytic functions, so they admit a Fourier series. As modular forms also satisfy a certain kind of functional equation with respect to the group action of the modular group, this Fourier series may be expressed in terms of q = e^(2πiz).
So if f is a modular form, then there are coefficients c(n) such that f(z) = Σ_{n≥0} c(n)q^n.
To reduce modulo 2, consider the subspace of modular forms whose q-series have all integer coefficients (since complex numbers, in general, may not be reduced modulo 2).
It is then possible to reduce all coefficients modulo 2, which will give a modular form modulo 2.
Basis for modular forms modulo 2
Modular forms are generated by the Eisenstein series E4 and E6.
It is then possible to normalize E4 and E6 so that the coefficients of their q-series are integers: E4 = 1 + 240 Σ_{n≥1} σ3(n)q^n and E6 = 1 − 504 Σ_{n≥1} σ5(n)q^n.
This gives generators for modular forms, which may be reduced modulo 2.
Note the Miller basis has some interesting properties:
once reduced modulo 2, E4 and E6 become just 1 (since 240 and −504 are even); that is, a trivial reduction.
To get a non-trivial reduction, one must use the modular discriminant Δ.
Thus, modular forms are seen as polynomials of E4, E6 and Δ (over the complex numbers in general, but over the integers for reduction); once reduced modulo 2, they become just polynomials of Δ over F2.
The modular discriminant modulo 2
The modular discriminant is defined by an infinite product,
Δ(z) = q ∏_{n≥1} (1 − q^n)^24 = Σ_{n≥1} τ(n)q^n,
where τ is the Ramanujan tau function.
Results from Kolberg and Jean-Pierre Serre demonstrate that, modulo 2, we have
Δ ≡ Σ_{k≥0} q^((2k+1)²) (mod 2),
i.e., the q-series of Δ modulo 2 consists of q to the powers of odd squares.
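Because this congruence is completely explicit, it can be checked numerically. The following short Python sketch (illustrative only; the truncation order N = 200 is an arbitrary choice) expands q·∏(1 − q^n)^24 modulo 2 and confirms that the nonzero coefficients sit exactly at the odd squares 1, 9, 25, ..., 169:

```python
N = 200                        # truncation order of the q-series
delta = [0] * (N + 1)
delta[1] = 1                   # start from the factor q
for n in range(1, N + 1):      # multiply by (1 - q^n) 24 times
    for _ in range(24):
        for k in range(N, n - 1, -1):
            delta[k] = (delta[k] - delta[k - n]) % 2

odd_squares = {m * m for m in range(1, 15, 2)}   # 1, 9, 25, ..., 169
assert {k for k, c in enumerate(delta) if c} == odd_squares
```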
Hecke operators modulo 2
The action of the Hecke operators is fundamental to understanding the structure of spaces of modular forms. It is therefore justified to try to reduce them modulo 2.
The Hecke operators for a modular form f of weight k are defined as follows:
T_m f(z) = m^(k−1) Σ_{ad = m, a ≥ 1} d^(−k) Σ_{0 ≤ b < d} f((az + b)/d),
with m a positive integer.
Hecke operators may be defined on the q-series as follows:
if f = Σ_{n≥0} c(n)q^n,
then
T_m f = Σ_{n≥0} γ(n)q^n,
with
γ(n) = Σ_{a | gcd(m,n), a ≥ 1} a^(k−1) c(mn/a²).
Since modular forms were reduced using the q-series, it makes sense to use the q-series definition. The sum simplifies considerably for Hecke operators of primes (i.e., when m = p is prime): there are only two summands, γ(n) = c(pn) + p^(k−1)·c(n/p), where c(n/p) = 0 if p does not divide n. This is very convenient for reduction modulo 2, as the formula simplifies a lot.
With more than two summands, there would be many cancellations modulo 2, and the legitimacy of the process would be doubtful. Thus, Hecke operators modulo 2 are usually defined only for prime numbers.
Given a modular form f modulo 2 with q-representation f = Σ c(n)q^n, the Hecke operator T_p on f is defined by T_p f = Σ γ(n)q^n, where γ(n) = c(pn) + c(n/p) (mod 2), with the convention c(n/p) = 0 if p ∤ n; note that p^(k−1) ≡ 1 (mod 2) because p is odd.
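As a small illustration (a Python sketch; the function name and truncation are ad hoc), the operator above can be applied to truncated q-series stored as coefficient lists. Using the description of Δ modulo 2 from the previous section, one can check directly that T3 annihilates Δ, a first glimpse of the nilpotency discussed below:

```python
def hecke_tp_mod2(c, p):
    """Apply T_p (p an odd prime) mod 2 to the q-series c[0..N];
    the result is only trustworthy up to degree N // p."""
    out = []
    for n in range(len(c) // p + 1):
        g = c[p * n] + (c[n // p] if n % p == 0 else 0)
        out.append(g % 2)
    return out

# Delta mod 2 is supported on the odd squares (see the previous snippet):
N = 199
delta = [1 if any(n == m * m for m in range(1, 15, 2)) else 0
         for n in range(N + 1)]
assert all(g == 0 for g in hecke_tp_mod2(delta, 3))   # T_3(Delta) = 0 mod 2
```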
It is important to note that Hecke operators modulo 2 have the interesting property of being nilpotent.
Finding their order of nilpotency is a problem solved by Jean-Pierre Serre and Jean-Louis Nicolas in a paper published in 2012.
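This nilpotency can be explored numerically. The Python sketch below (all helper names and truncation bounds are ad hoc choices, and it demonstrates the phenomenon only for one chosen example, not the general theorem) writes a form modulo 2 as a set of exponents of Δ, applies T3 on q-series, rewrites the result in the basis Δ, Δ², ..., Δ^8, and iterates until the zero form is reached:

```python
NMAX = 8                       # work inside the span of Delta, ..., Delta^8
Q = 3 * NMAX                   # q-precision needed to apply T_3 once

def mul(a, b):                 # product of two q-series mod 2, order <= Q
    out = [0] * (Q + 1)
    for i, ai in enumerate(a[:Q + 1]):
        if ai:
            for j, bj in enumerate(b[:Q + 1 - i]):
                out[i + j] ^= bj
    return out

delta = [0] * (Q + 1); delta[1] = 1     # q * prod (1 - q^n)^24 mod 2
for n in range(1, Q + 1):
    for _ in range(24):
        for k in range(Q, n - 1, -1):
            delta[k] ^= delta[k - n]

powers = [None, delta]                  # powers[k] = Delta^k as a q-series
for k in range(2, NMAX + 1):
    powers.append(mul(powers[-1], delta))

def t3(c):                              # T_3 mod 2 on a q-series
    return [c[3 * n] ^ (c[n // 3] if n % 3 == 0 else 0)
            for n in range(len(c) // 3 + 1)]

def to_exponents(c):                    # rewrite a q-series as a sum of Delta^k
    c = list(c)
    exps = []
    for k in range(1, NMAX + 1):        # Delta^k = q^k + ..., eliminate in order
        if c[k]:
            exps.append(k)
            for i, d in enumerate(powers[k][:len(c)]):
                c[i] ^= d
    return exps

f = [3, 5, 6]                           # f = Delta^3 + Delta^5 + Delta^6
while f:
    series = [0] * (Q + 1)
    for k in f:
        for i, d in enumerate(powers[k]):
            series[i] ^= d
    f = to_exponents(t3(series))
    print(f)                            # successive images under T_3, ending at []
```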
The Hecke algebra modulo 2
The Hecke algebra may also be reduced modulo 2.
It is defined to be the algebra generated by the Hecke operators modulo 2, over F2.
Following Serre and Nicolas's notations, let F denote the space of modular forms modulo 2,
i.e. F = F2[Δ], the polynomials in Δ with coefficients in F2.
Writing F(n) for the subspace of F spanned by Δ, Δ², ..., Δ^n (so that F is the increasing union of the F(n)), define A(F(n)) as the F2-subalgebra of End(F(n)) generated by the identity and the Hecke operators T_p restricted to F(n).
That is, every sub-vector-space F(n) of F gives rise to a finite-dimensional algebra A(F(n)).
Finally, define the Hecke algebra as follows:
Since F(n) ⊂ F(n+1), one can restrict elements of A(F(n+1)) to F(n) to obtain an element of A(F(n)).
When considering the map φ_n : A(F(n+1)) → A(F(n)) given by this restriction to F(n), then φ_n is an algebra homomorphism.
As every element of A(F(1)) is either the identity or zero, A(F(1)) ≃ F2.
Therefore, the following chain is obtained:
… → A(F(n+1)) → A(F(n)) → … → A(F(1)) ≃ F2.
Then, define the Hecke algebra A to be the projective limit of the A(F(n)) as n → ∞.
Explicitly, this means
A = lim← A(F(n)) ⊂ ∏_{n≥1} A(F(n)).
The main property of the Hecke algebra $\mathcal{A}$ is that it is generated by the series of $T_3$ and $T_5$.
That is:
$\mathcal{A} = \mathbb{F}_2[[T_3, T_5]]$,
the ring of formal power series in $T_3$ and $T_5$ over $\mathbb{F}_2$. So for any prime $p \ne 2$, it is possible to find coefficients $a_{i,j}(p) \in \mathbb{F}_2$ such that
$T_p = \sum_{i + j \ge 1} a_{i,j}(p)\, T_3^{\,i} T_5^{\,j}.$
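Under the same assumptions, a short numerical probe (reusing delta_mod2_fast and hecke_mod2 from the sketch above; mul_mod2 is another illustrative helper) is consistent with this structure theorem: Hecke operators commute, and repeated application of $T_3$ annihilates $\Delta^5$, as nilpotency demands:

```python
# Reusing delta_mod2_fast and hecke_mod2 from the sketch above (mul_mod2 is
# another illustrative helper), probe two consequences of the structure
# theorem on truncated series: Hecke operators commute, and T_3 acts
# nilpotently on Delta^5.

def mul_mod2(a, b):
    """Product of two truncated q-series mod 2, truncated at len(a) terms."""
    N = len(a)
    out = [0] * N
    for i, ai in enumerate(a):
        if ai:
            for j in range(min(len(b), N - i)):
                out[i + j] ^= b[j]
    return out

N = 4000
d = delta_mod2_fast(N)
d5 = d
for _ in range(4):                    # Delta^5, truncated at q^N
    d5 = mul_mod2(d5, d)

# Commutativity: T_3 T_5 = T_5 T_3 on the reliable part of the truncation.
assert hecke_mod2(hecke_mod2(d5, 5), 3) == hecke_mod2(hecke_mod2(d5, 3), 5)

# Nilpotency: applying T_3 three times annihilates Delta^5 modulo 2.
t = d5
for _ in range(3):
    t = hecke_mod2(t, 3)
assert all(x == 0 for x in t)
```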
References
Modular forms
Algebraic number theory | Modular forms modulo p | Mathematics | 860 |
2,593,693 | https://en.wikipedia.org/wiki/History%20of%20Apple%20Inc. | Apple Inc., originally Apple Computer, Inc., is a multinational corporation that creates and markets consumer electronics and attendant computer software, and is a digital distributor of media content. Apple's core product lines are the iPhone smartphone, iPad tablet computer, and the Mac personal computer. The company offers its products online and has a chain of retail stores known as Apple Stores. Founders Steve Jobs, Steve Wozniak, and Ronald Wayne created Apple Computer Co. on April 1, 1976, to market Wozniak's Apple I desktop computer, and Jobs and Wozniak incorporated the company on January 3, 1977, in Cupertino, California.
For more than three decades, Apple Computer was predominantly a manufacturer of personal computers, including the Apple II, Macintosh, and Power Mac lines, but it faced rocky sales and low market share during the 1990s. Jobs, who had been ousted from the company in 1985, returned to Apple in 1997 after his company NeXT was bought by Apple. The following year he became the company's interim CEO, which later became permanent. Jobs subsequently instilled a new corporate philosophy of recognizable products and simple design, starting with the original iMac in 1998.
With the introduction of the successful iPod music player in 2001 and iTunes Music Store in 2003, Apple established itself as a leader in the consumer electronics and media sales industries, leading it to drop "Computer" from the company's name in 2007. The company is also known for its iOS range of smartphone, media player, and tablet computer products that began with the iPhone, followed by the iPod Touch and then iPad. As of June 30, 2015, Apple was the largest publicly traded corporation in the world by market capitalization, and its estimated value reached US$1 trillion on August 2, 2018. Apple's worldwide annual revenue in 2010 totaled US$65 billion, growing to US$127.8 billion in 2011 and $156 billion in 2012.
1971–1985: Jobs and Wozniak
Pre-foundation
Steve Jobs and Steve Wozniak, referred to collectively as "the two Steves", first met in mid-1971, when their mutual friend Bill Fernandez introduced then 21-year-old Wozniak to 16-year-old Jobs. Their first business partnership began in the fall of that year when Wozniak, a self-educated electronics engineer, read an article in Esquire magazine that described a device that could place free long-distance phone calls by emitting specific tone chirps. Wozniak started to build his original “blue boxes”, which he tested by calling the Vatican City pretending to be Henry Kissinger wanting to speak to the pope. Jobs managed to sell some two hundred blue boxes for $150 each, and split the profit with Wozniak. Jobs later told his biographer that if it hadn't been for Wozniak's blue boxes, "there wouldn't have been an Apple."
By 1972, Jobs had withdrawn from Reed College and Wozniak from UC Berkeley. Wozniak designed a video terminal that he could use to log on to the minicomputers at Call Computer. Alex Kamradt commissioned the design and sold a small number of them through his firm. Aside from their interest in up-to-date technology, the impetus for the two Steves seems to have had another source. In his essay From Satori to Silicon Valley (published 1986), cultural historian Theodore Roszak made the point that Apple Computer emerged from within the West Coast counterculture and the need to produce print-outs, letter labels, and databases. Roszak offers a bit of background on the development of the two Steves' prototype models.
In 1975, the two Steves started attending meetings of the Homebrew Computer Club. New microcomputers such as the Altair 8800 and the IMSAI 8080 inspired Wozniak to build a microprocessor into his video terminal circuit to make a complete computer. At the time the only microcomputer CPUs generally available were the $179 Intel 8080 and the $170 Motorola 6800. Wozniak preferred the 6800, but both were out of his price range. So he watched, and learned, and designed computers on paper, waiting for the day he could afford a CPU.
When MOS Technology released its $20 6502 chip in 1976, Wozniak wrote a version of BASIC for it, then began to design a computer for it to run on. The 6502 was designed by the same people who designed the 6800, as many in Silicon Valley left employers to form their own companies. Wozniak's earlier 6800 paper-computer needed only minor changes to run on the new chip.
By March 1, 1976, Wozniak completed the machine and took it to a Homebrew Computer Club meeting to show it off. When Jobs saw Wozniak's computer, which later became the Apple I, he was immediately interested in its commercial potential. Initially, Wozniak intended to share schematics of the machine for free, but Jobs insisted that they should instead build and sell bare printed circuit boards for the computer. Wozniak originally offered the design to Hewlett-Packard (HP), where he worked at the time, but was denied by the company on five occasions. Jobs eventually convinced Wozniak to go into business together and start a new company of their own. In order to raise the money they needed to produce the first batch of printed circuit boards, Jobs sold his Volkswagen Type 2 minibus for $1,500, and Wozniak his HP-65 programmable calculator for $500.
Apple I and company formation
On April 1, 1976, Apple Computer Company was founded by Steve Jobs, Steve Wozniak, and Ronald Wayne. The company was registered as a California business partnership.
Wayne, who worked at Atari, Inc. as a chief draftsman, became a co-founder in return for a 10% stake. Wayne was gun-shy due to the failure of his own venture four years earlier. On April 12, less than two weeks after the company's formation, Wayne left Apple, selling his 10% share back to the two Steves for $800.
According to Wozniak, Jobs proposed the name “Apple Computer” when he had just come back from Robert Friedland's All-One Farm in Oregon. Jobs told Walter Isaacson that he was "on one of my fruitarian diets," when he conceived of the name and thought "it sounded fun, spirited and not intimidating ... plus, it would get us ahead of Atari in the phone book."
The two Steves made a last trip to the Homebrew Computer Club and demonstrated the Apple I (AKA: The Apple Computer). Paul Terrell, who operated the computer store chain Byte Shop, was impressed, and gave the two Steves his card, asking them to keep in touch. The next day, Jobs visited Terrell at the Mountain View Byte Shop store, and tried to sell him the bare circuit boards for the Apple I. Terrell said he was only interested in purchasing the machine fully assembled, and that he would order 50 assembled computers and pay US$500 each on delivery. Jobs took the purchase order from the Byte Shop to national electronic parts distributor Cramer Electronics, and ordered the components needed. When asked by the credit manager how he would pay for the parts, Jobs replied, "I have this purchase order from the Byte Shop chain of computer stores for 50 of my computers and the payment terms are COD. If you give me the parts on net 30-day terms I can build and deliver the computers in that time frame, collect my money from Terrell at the Byte Shop and pay you."
To verify the purchase order, the credit manager called Paul Terrell, who assured him if the computers showed up, Jobs would have more than enough money for the parts order. The two Steves and their small crew spent day and night building and testing the computers, and delivered to Terrell on time. Terrell was surprised to receive a batch of assembled circuit boards, as he had expected complete computers with a case, monitor and keyboard. Nonetheless, he kept his word and paid the two Steves the money promised.
The Apple I went on sale in July 1976 as an assembled circuit board with a retail price of $666.66. Wozniak later said he had had no idea about the relation between the number and the mark of the beast, and that he came up with the price because he liked repeating digits. About 200 units of the Apple I were eventually sold.
The Apple I computer had some notable features, including the use of a TV display, whereas many machines had no display at all. This was not like the displays of later machines; the text was displayed at 60 characters per second – still faster than the teleprinters of contemporary machines. The machine had bootstrap code on ROM, making it easier to start up. At the insistence of Paul Terrell, Wozniak designed a cassette interface for loading and saving programs, at the then-rapid pace of 1200 bit/s. The simple machine was a masterpiece of design using far fewer parts than anything in its class, and earned Wozniak his reputation as a designer.
Jobs looked for investments to expand the business, but banks were reluctant to lend him money; the idea of a computer for ordinary people seemed absurd at the time. In August 1976, Jobs approached his former boss at Atari, Nolan Bushnell, who recommended that he meet with Don Valentine, the founder of Sequoia Capital. Valentine was not interested in funding Apple, but in turn introduced Jobs to Mike Markkula, a millionaire who had worked under him at Fairchild Semiconductor. Markkula saw great potential in the two Steves, and became an angel investor of their company. He invested $92,000 of his own money in Apple while securing a $250,000 line of credit from Bank of America. In return, Markkula received a one-third stake in Apple. Apple Computer, Inc. was incorporated on January 3, 1977. The new corporation bought out the partnership the two Steves had formed nine months earlier.
In February 1977, Markkula recruited Michael Scott from National Semiconductor to serve as the first president and CEO of Apple Computer, as the two Steves were both insufficiently experienced and he was not interested in taking that position himself.
That same month, Wozniak resigned from his job at Hewlett-Packard to work full-time for Apple.
Apple II
Almost as soon as Apple had started selling its first computers, Wozniak moved on from the Apple I and began designing a greatly improved computer: the Apple II. Wozniak completed a working prototype of the new machine by August 1976. The two Steves presented the Apple II computer to the public at the first West Coast Computer Faire on April 16 and 17, 1977. On the first day of the exhibition, Jobs introduced the Apple II to a Japanese chemist named Toshio Mizushima, who became the first authorized Apple dealer in Japan. In the May 1977 issue of Byte, Wozniak said of the Apple II design, "To me, a personal computer should be small, reliable, convenient to use, and inexpensive."
The Apple II went on sale on June 10, 1977, with a retail price of $1,298. The computer's main internal difference from its predecessor was a completely redesigned TV interface, which held the display in memory. Now not only useful for simple text display, the Apple II included graphics and, eventually, color. During the development of the Apple II, Jobs pressed for a well-designed plastic case and built-in keyboard, with the idea that the machine should be fully packaged and ready to run out of the box. This was almost the case for the Apple I computers, but one still needed to plug various parts together and type in the code to run BASIC. Jobs wanted the Apple II case to be "simple and elegant", and hired an industrial designer named Jerry Manock to produce such a case design. Apple employee #5 Rod Holt developed the switching power supply. While early Apple II models used ordinary cassette tapes as storage devices, they were superseded in 1978 by the introduction of a 5.25-inch floppy disk drive and interface called the Disk II. The Disk II system was designed by Wozniak and released with a retail price of $495.
In 1979, the Apple II was chosen to be the desktop platform for what became the first killer application of the business world: VisiCalc, a spreadsheet. The application was so important that the Apple II became what John Markoff described as a "VisiCalc accessory"; it created a business market for the computer and gave home users an additional reason to buy it: compatibility with the office. Before VisiCalc, Apple had been a distant third-place competitor to Commodore and Tandy.
The Apple II was one of the three "1977 Trinity" computers generally credited with creating the home computer market (the other two being the Commodore PET and the Tandy Corporation TRS-80). A number of different models of the Apple II were built thereafter, including the Apple IIe and Apple IIGS, which continued in public use for nearly two decades. The Apple II series went on to sell about six million units in total before it was discontinued in 1993.
Apple III
While the Apple II was already established as a successful business-ready platform because of VisiCalc, Apple management was not content. The Apple III was designed to take on the business environment in an attempt to compete with IBM in the business and corporate computing market. Development of the Apple III started in late 1978 under the guidance of Wendell Sander; the design was subsequently refined by a committee headed by Jobs. The Apple III was first announced on May 19, 1980, with a retail price ranging from $4,340 to $7,800, and released in November 1980.
The Apple III was a conservative design for the era; however, Jobs wanted the heat generated by the electronics to be dissipated through the chassis of the machine rather than by the more usual cooling fan. The case was not sufficient to cool the components, and the Apple III was prone to overheating, causing the integrated circuit chips to disconnect from the motherboard. Customers who contacted Apple customer service were told to raise their computers into the air and then let go, to cause the integrated circuits to fall back into place. Thousands of Apple III computers were recalled. A new model was introduced in 1983 to try to rectify the problems, but the damage was already done.
Apple IPO
In the July 1980 issue of Kilobaud Microcomputing, publisher Wayne Green stated that "the best consumer ads I've seen have been those by Apple. They are attention-getting, and they must be prompting sale."
On December 12, 1980, Apple went public on the NASDAQ stock exchange with the ticker symbol "AAPL", selling 4.6 million shares at $22 per share ($.10 per share when adjusted for stock splits), generating over $100 million, which was more capital than any IPO since Ford Motor Company in 1956. Several venture capitalists cashed out, reaping billions in long-term capital gains. By the end of the day, the stock rose to $29 per share and 300 millionaires were created, including the two Steves. Around this time Wozniak offered $10 million of his own stock to early Apple employees, something Jobs refused to do. Apple's market cap was $1.778 billion at the end of its first day of trading.
In January 1981, Apple held its first shareholders meeting as a public company in the Flint Center, a large auditorium at nearby De Anza College (which is often used for symphony concerts) to handle the larger numbers of shareholders post-IPO. The business of the meeting had been planned so that the voting could be staged in 15 minutes or less. In most cases, voting proxies are collected by mail and counted days or months before a meeting. In this case, after the IPO, many shares were in new hands.
Jobs started his prepared speech, but after being interrupted by voting several times, he dropped his prepared speech and delivered a long, emotionally charged talk about betrayal, lack of respect, and related topics. The results of the vote were surprising: a young programmer, Randy Wigginton, received enough votes to be added to Apple's board of directors, through the use of cumulative voting by a few major shareholders.
Competition from the IBM PC
By August 1981 Apple was among the three largest microcomputer companies, perhaps having replaced Radio Shack as the leader; revenue in the first half of the year had already exceeded 1980's $118 million, and InfoWorld reported that lack of production capacity was constraining growth. Because of VisiCalc, businesses purchased 90% of Apple II computers; large customers especially preferred Apple.
IBM entered the personal computer market that month with the IBM PC in part because it did not want products without IBM logos on customers' desks, but Apple had many advantages. While IBM began with one microcomputer, little available hardware or software, and a couple of hundred dealers, Apple had five times as many dealers in the US and an established international distribution network. The Apple II had an installed base of more than 250,000 customers, and hundreds of independent developers offered software and peripherals; at least ten databases and ten word processors were available, while the PC had no databases and one word processor.
The company's customers gained a reputation for devotion and loyalty. BYTE in 1984 noted that this loyalty was not entirely positive for Apple; customers were willing to overlook real flaws in its products, even while holding the company to a higher standard than its competitors. The Apple III exemplified the company's autocratic reputation among dealers, which one described as "Apple arrogance". After examining a PC and finding it unimpressive, Apple confidently purchased a full-page advertisement in The Wall Street Journal with the headline "Welcome, IBM. Seriously". The company prioritized the III for three years, spending what Wozniak estimated as $100 million on marketing and R&D while not improving the Apple II to compete with the PC, as doing so could hurt III sales.
Microsoft head Bill Gates was at Apple headquarters the day of IBM's announcement and later said "They didn't seem to care. It took them a full year to realize what had happened". The PC almost completely ended sales of the III, the company's most comparable product. The II still sold well, with Apple being the leading computer manufacturer in the United States between 1978 and 1982. But by 1983, the PC surpassed the Apple II as the best-selling personal computer. IBM recruited the best Apple dealers while avoiding the discount grey market they disliked. The head of a retail chain said "It appears that IBM had a better understanding of why the Apple II was successful than had Apple". Gene Amdahl predicted that Apple would be another of the many "brash young companies" that IBM had defeated.
By 1984 the press called the two companies archrivals, but IBM had $4 billion in annual PC revenue, more than twice Apple's and as much as the sales of Apple and the next three companies combined. A Fortune survey found that 56% of American companies with personal computers used IBM PCs, compared to 16% for Apple. Small businesses, schools, and some homes became the II's primary market.
Xerox PARC and the Lisa
Apple Computer's business division was focused on the Apple III, another iteration of the text-based computer. Simultaneously the Lisa group worked on a new machine that would feature a completely different interface and introduce the words mouse, icon, and desktop into the lexicon of the computing public. In return for the right to buy US$1,000,000 of pre-IPO stock, Xerox granted Apple Computer three days' access to the PARC facilities. After visiting PARC, they came away with new ideas that would complete the foundation for Apple Computer's first GUI computer, the Apple Lisa. The first iteration of Apple's WIMP interface was a floppy disk where files could be spatially moved around. After months of usability testing, Apple designed the Lisa interface of windows and icons. The Lisa was introduced in 1983 at a cost of US$9,995. Because of the high price, Lisa failed to penetrate the business market.
Macintosh and the "1984" commercial
By 1984 computer dealers saw Apple as the only clear alternative to IBM's influence; some even promoted its products to reduce dependence on the PC. The company announced the Macintosh 128K to the press in October 1983, followed by an 18-page brochure included with magazines in December. Its debut was announced by a single national broadcast of a US$1.5 million television commercial, "1984". Directed by Ridley Scott and aired during the third quarter of Super Bowl XVIII on January 22, 1984, it is considered a "watershed event" and a "masterpiece". The commercial alludes to George Orwell's novel Nineteen Eighty-Four, which describes a dystopian future of enforced conformity. In the commercial a heroine represents the coming of the Macintosh to save humanity, and it ends with the words: "On January 24th, Apple Computer will introduce Macintosh. And you'll see why 1984 won't be like 1984."
On January 24, 1984, the Macintosh went on sale with a retail price of $2,495. It came bundled with two applications designed to show off its interface: MacWrite and MacPaint. On the same day, an emotional Jobs introduced the computer to a wildly enthusiastic audience at Apple's annual shareholders meeting held in the Flint Auditorium; Macintosh engineer Andy Hertzfeld described the scene as "pandemonium". Jobs had directed the development of the Macintosh since 1981, when he took over the project from early Apple employee Jef Raskin, who had conceived the computer. Wozniak, who led the initial design and development with Raskin, was on leave during this time due to an airplane crash earlier that year, making it easier for Jobs to take over the project. The Macintosh was based on the Lisa (and Xerox PARC's mouse-driven graphical user interface), and it was widely acclaimed by the media, with strong initial sales supporting it. However, the slow processing speed and limited software led to a rapid sales decline in the second half of 1984.
The Macintosh was too radical for some, who labeled it a mere "toy". Because the machine was entirely designed around the GUI, existing text-mode and command-driven applications had to be redesigned and the programming code rewritten; this was a challenging undertaking that many software developers shied away from, and it resulted in an initial lack of software for the new system. In April 1984 Microsoft's Multiplan migrated over from MS-DOS, followed by Microsoft Word in January 1985. In 1985, Lotus Software introduced Lotus Jazz after the success of Lotus 1-2-3 for the IBM PC, although it was largely a flop. Apple introduced Macintosh Office the same year with the "Lemmings" ad, infamous for insulting potential customers. It was not successful.
For a special post-election edition of Newsweek in November 1984, Apple spent more than US$2.5 million to buy all 39 of the advertising pages in the issue. Apple also ran a "Test Drive a Macintosh" promotion, in which potential buyers with a credit card could take home a Macintosh for 24 hours and return it to a dealer afterwards. While 200,000 people participated, dealers disliked the promotion, the supply of computers was insufficient for demand, and many were returned in such bad shape that they could no longer be sold. This marketing campaign caused CEO John Sculley to raise the price from US$1,995 to US$2,495.
Jobs and Wozniak leave Apple
By early 1985, the Macintosh's failure to defeat the IBM PC triggered a power struggle between Jobs and CEO John Sculley, who had been hired two years earlier by Jobs using the famous line, "Do you want to sell sugar water for the rest of your life or come with me and change the world?" Sculley and Jobs' visions for the company greatly differed. The former favored open architecture computers like the Apple II, sold to education, small business, and home markets less vulnerable to IBM. Jobs wanted the company to focus on the closed architecture Macintosh as a business alternative to the IBM PC. President and CEO Sculley had little control over chairman of the Board Jobs' Macintosh division; it and the Apple II division operated like separate companies, duplicating services. Although its products provided 85% of Apple's sales in early 1985, the company's January 1985 annual meeting did not mention the Apple II division or employees. This frustrated Wozniak, who left active employment at Apple in the spring of that year to pursue other ventures, stating that the company had "been going in the wrong direction for the last five years" and sold most of his stock. Despite these grievances, Wozniak left the company amicably and as of January 2018 continues to represent Apple at events or in interviews, receiving a stipend over the years for this role estimated in 2006 to be $120,000 per year. Wozniak also remained an Apple shareholder following his departure.
Wozniak's first venture after leaving Apple was founding CL 9 in 1985 and creating the first programmable universal remote control two years later, called the "CORE", stating that "I never felt like I was turning my back on my own company [Apple]." He told Apple's director of engineering Wayne Rosing about his decision to step away from the company, but not his longtime business partner and friend Steve Jobs. Wozniak guessed that Jobs first heard the news from an article in The Wall Street Journal, in which Wozniak said he was not leaving because he was disgruntled with Apple, but because he was excited to build a new remote control. The article nevertheless included some of Wozniak's criticisms of Apple, and Wozniak later said "it was an accident, but it's been picked up by every book and every bit of history [since]."
In April 1985, Sculley decided to remove Jobs as the general manager of the Macintosh division, and gained unanimous support from the Apple board of directors. Rather than submit to Sculley's direction, Jobs attempted to oust him from his leadership role at Apple. Informed by Jean-Louis Gassée, Sculley found out that Jobs had been attempting to organize a coup and called an emergency executive meeting at which Apple's executive staff sided with Sculley and stripped Jobs of all operational duties.
Jobs, while taking the position of chairman of the firm, had no influence over Apple's direction and resigned in September 1985, taking a number of Apple employees with him to found NeXT Inc. In a show of defiance at being set aside by Apple Computer, Jobs sold all but one of his 6.5 million shares in the company for $70 million. Jobs then acquired the visual effects house Pixar for US$5 million. NeXT Inc. built computers with futuristic designs and the UNIX-derived NEXTSTEP operating system. NeXTSTEP eventually developed into Mac OS X. While not a commercial success, due in part to its high price, the NeXT computer introduced important concepts to the history of the personal computer, including serving as the initial platform for Tim Berners-Lee as he was developing the World Wide Web.
Sculley reorganized the company, unifying sales and marketing in one division and product operations and development in another. Despite initial marketing difficulties, the Macintosh brand was eventually a success for Apple, due to its introduction of desktop publishing (and later computer animation) through Apple's partnership with Adobe Systems, which introduced the laser printer and Adobe PageMaker. The Macintosh became the default platform for many arts industries including cinema, music, advertising, and publishing.
1985–1997: Sculley, Spindler, Amelio
Corporate performance
Under the leadership of John Sculley, Apple issued its first corporate stock dividend on May 11, 1987. A month later, on June 16, Apple stock split for the first time in a 2:1 split. Between March 1988 and January 1989, Apple undertook five acquisitions, including software companies Network Innovations, Styleware, Nashoba Systems, and Coral Software, as well as satellite communications company Orion Network Systems.
Apple continued to sell both lines of its computers, the Apple II and the Macintosh. A few months after introducing the Mac, Apple released a compact version of the Apple II called the Apple IIc. And in 1986 Apple introduced the Apple IIGS, an Apple II positioned as something of a hybrid product with a mouse-driven, Mac-like operating environment. Even with the release of the first Macintosh, Apple II computers remained the main source of income for Apple for years.
The Mac family
At the same time, the Mac was becoming a product family of its own. The original model evolved into the Mac Plus in 1986 and spawned the Mac SE and the Mac II in 1987 and the Mac Classic and Mac LC in 1990. Meanwhile, Apple attempted its first portable Macs: the failed Macintosh Portable in 1989 and then the more popular PowerBook in 1991, a landmark product that established the modern form and ergonomic layout of the laptop. Popular products and increasing revenues made this a good time for Apple. MacAddict magazine has called 1989 to 1991 the "first golden age" of the Macintosh. On February 19, 1987, Apple registered the "Apple.com" domain name, making it one of the first hundred companies to register a .com address on the nascent Internet.
Early-mid-1990s
In the late 1980s, Apple's fiercest technological rivals were the Amiga and Atari ST platforms. But computers based on the IBM PC were far more popular than all three, and by the 1990s, they finally had a comparable GUI thanks to Windows 3.0, and were out-competing Apple.
Apple's response to the PC threat was a profusion of new Macintosh lines including Quadra, Centris, and Performa. These new lines were marketed poorly by what was now "arguably one of the worst-managed companies in the industry". There were too many models, differentiated by very minor gradations in technical specifications. The profusion of arbitrary model numbers confused consumers and hurt Apple's reputation for simplicity. Resellers like Sears and CompUSA often failed to sell or even competently display these Macs. Inventory grew as Apple consistently underestimated demand for popular models and overestimated demand for others.
In 1991, Apple partnered with long-time competitor IBM and Motorola to form the AIM alliance, with the ultimate goal of creating a revolutionary new computing platform, known as PReP, using IBM and Motorola hardware and Apple software. As the first step, Apple started the Power Macintosh line in 1994, using PowerPC processors from Motorola and IBM. The RISC architecture of these processors differed substantially from the Motorola 680X0 series used by previous Macs. Parts of Apple's operating system were rewritten to allow some older Mac software to run in emulation on the PowerPC series. Apple refused IBM's offer to purchase the company, but later unsuccessfully sought another offer from IBM, and at one point was "hours away" from an acquisition by Sun Microsystems. In 1993, Apple released the Newton, a failed early personal digital assistant (PDA).
Need for a new OS
In 1994 Apple launched eWorld, an online service providing email, news and a bulletin board system to replace AppleLink. It was shut down in 1996. In 1995, to achieve deeper market penetration and extra revenue, Apple officially began licensing the Mac OS and Macintosh ROMs to third-party manufacturers. The "Clonintoshes" competed with Apple's own Macs and reduced Apple's sales. Apple held a market share of over 10% until Jobs was re-hired in 1997 as interim CEO to replace Gil Amelio and found a loophole to terminate the Macintosh OS licensing program; Macintosh's market share subsequently fell to around 3%.
During the 90's, "project Pink" had Apple and IBM collaborating to develop a new operating system, named Taligent to replace System 7. Infighting resulted in Apple leaving the project and IBM finishing it. Apple started project Copland, another effort to replace System 7, but it was affected by Feature creep then Development hell due to software planned for Taligent being reworked for Copland. Ultimately Copland was scrapped. With the Copland project in disarray, Apple decided it needed to acquire another company's operating system. Candidates considered were Sun's Solaris and Windows NT. Hancock was in favor of Solaris, while Amelio preferred Windows. Amelio called Bill Gates, and Gates promised Microsoft engineers would port QuickDraw to NT.
Acquisition of NeXT
In 1996, the struggling NeXT company beat Be Inc.'s BeOS bid to sell its operating system to Apple. On December 20, 1996, Apple announced it would purchase NeXT, and its NeXTstep operating system, for $429 million and 1.5 million shares of Apple stock. This brought Jobs back to Apple's management for the first time since 1985, and NeXT technology became the foundation of the Mac OS X operating system.
1997–2001: Apple's comeback
Return of Steve Jobs
On July 9, 1997, Gil Amelio was ousted as CEO of Apple by the board of directors. Chief financial officer Fred D. Anderson ran the company in the interim and obtained short-term working capital from the banks in July 1997. In August 1997, Jobs stepped in as the interim CEO to begin a critical restructuring of the company's product line. He eventually became CEO and served in that position from January 2000 to August 2011. On August 24, 2011, Jobs resigned his position as chief executive officer of Apple before his long battle with pancreatic cancer took his life on October 5, 2011. On November 10, 1997, Apple introduced the Apple Store, an online retail store based upon the WebObjects application server the company had acquired in its purchase of NeXT. The new direct sales outlet was tied to a new build-to-order manufacturing strategy.
Microsoft deal
At the 1997 Macworld Expo, Jobs announced that Apple would begin a partnership with Microsoft, with terms including a five-year commitment from Microsoft to release Microsoft Office for Macintosh, and a US$150 million investment in Apple. The long-standing dispute over whether Windows infringed Apple patents was settled, and Internet Explorer would ship as the Macintosh's default browser, though users could still choose another. Microsoft chairman Bill Gates appeared on-screen explaining plans for developing Mac software, and expressing excitement to be helping Apple return to success.
The day before the announcement Apple had a market cap of $2.46 billion, and had ended its previous quarter with quarterly revenues of US$1.7 billion and cash reserves of US$1.2 billion, making the US$150 million amount of the investment largely symbolic. Apple CFO Fred Anderson stated that Apple would use the additional funds to invest in its core markets of education and creative content.
iMac, iBook, and Power Mac G4
While discontinuing Apple's licensing of its operating system to third-party computer manufacturers, one of Jobs's first moves as new acting CEO was to develop the iMac, which bought Apple time to restructure. The original iMac integrated a CRT display and CPU into a streamlined, translucent plastic body. The line became a sales smash, moving about one million units each year. It helped re-introduce Apple to the media and public and announced the company's new emphasis on the design and aesthetics of its products.
In 1999, Apple introduced the Power Mac G4 as its flagship professional line, built around the Motorola-made PowerPC 7400, which contained a 128-bit vector processing unit known as AltiVec. Apple also unveiled the iBook that year, its first consumer-oriented laptop and the first Macintosh to support Wireless LAN via the optional AirPort card. Based on the 802.11b standard, it helped popularize Wireless LAN technology for connecting computers to networks.
Mac OS X
In 2001, Apple introduced Mac OS X, an operating system based on NeXT's NeXTSTEP and incorporating parts of the FreeBSD kernel. Aimed at consumers and professionals alike, Mac OS X married the stability, reliability and security of Unix with the ease of a completely overhauled user interface. To help users transition, the new operating system allowed the use of Mac OS 9 applications through the Classic environment. Apple's Carbon API allowed developers to adapt Mac OS 9 software to use Mac OS X's features.
Retail stores
In May 2001, after much speculation, Apple announced the opening of a line of Apple retail stores, to be located throughout the major U.S. computer buying markets. The stores were designed for two primary purposes: to stem the tide of Apple's declining share of the computer market and to respond to poor marketing of Apple products at third-party retail outlets.
2001–2007: iPods, iTunes Store, Intel transition
iPod
In October 2001, Apple introduced its first iPod portable digital audio player. The iPod started as a 5-gigabyte player capable of storing around 1,000 songs. It then evolved into an array of products including the Mini (discontinued), the iPod Touch (discontinued), the Shuffle (discontinued), the iPod Classic (discontinued), the Nano (discontinued), the iPhone and the iPad. Since March 2011, the largest storage capacity for an iPod has been 160 gigabytes. Speaking to software developers on June 6, 2005, Jobs said the company's share of the entire portable music device market stood at 76%.
The iPod gave an enormous lift to Apple's financial results. In the quarter ending March 26, 2005, Apple earned US$290 million, or 34¢ a share, on sales of US$3.24 billion. The year before in the same quarter, Apple earned just US$46 million, or 6¢ a share, on revenue of US$1.91 billion.
Moving on from colored plastics and the PowerPC G3
In early 2002, Apple unveiled a completely redesigned iMac, using the G4 processor and LCD display. The new iMac G4 design had a white hemispherical base and a flat panel all-digital display supported by a swiveling chrome neck. After several iterations increasing the processing speed and screen sizes from 15" to 17" to 20" the iMac G4 was discontinued and replaced by the iMac G5 in the summer of 2004.
Later in 2002, Apple released the Xserve 1U rack-mounted server. Originally featuring two G4 chips, the Xserve was unusual for Apple in two ways. It represented an earnest effort to enter the enterprise computer market, and it was cheaper than competitors' similar machines. This was largely due to its use of Fast ATA drives as opposed to the SCSI hard drives used in traditional rack-mounted servers. Apple later released the Xserve RAID, a 14-drive RAID array that was again cheaper than competing systems.
In mid-2003, Jobs launched the Power Mac G5, based on IBM's G5 processor. Its all-metal anodized aluminum chassis finished Apple's transition away from colored plastics in their computers. Apple claimed this was the first 64-bit computer sold to the general public. The Power Mac G5 was used by Virginia Tech to build its prototype System X supercomputing cluster, which at the time was considered the third-fastest supercomputer in the world. It cost only US$5.2 million to build, far less than the previous No. 3 and other ranking supercomputers. Apple's Xserves were updated to use the G5 as well. They replaced the Power Mac G5 machines as the main building block of Virginia Tech's System X, which was ranked in November 2004 as the world's seventh-fastest supercomputer.
A new iMac based on the G5 processor was unveiled August 31, 2004, and was made available in mid-September. This model dispensed with the base altogether, placing the CPU and the rest of the computing hardware behind the flat-panel screen, which was suspended from a streamlined aluminum foot. This new iMac, dubbed the iMac G5, was the "world's thinnest desktop computer", measuring in at around two inches (around 5 centimeters).
In 2004, after creating a sizable financial base to work with, the company began experimenting with new parts from new suppliers. Apple could produce new designs quickly, and released the iPod Video, then the iPod Classic, and eventually the iPod touch and iPhone. On April 29, 2005, Apple released Mac OS X v10.4 "Tiger".
Apple's successful PowerBook and iBook relied on the previous-generation G4 architecture produced by Freescale Semiconductor, a spin-off from Motorola. IBM engineers had some success in making their PowerPC G5 processor consume less power and run cooler, but not enough for it to run in the iBook or PowerBook form factors. In October 2005, Apple released the Power Mac G5 Dual, featuring a dual-core processor – two cores in one chip rather than two separate processors. The Power Mac G5 Quad used two dual-core processors. The Power Mac G5 Dual cores ran individually at 2.0 GHz or 2.3 GHz, the Power Mac G5 Quad cores ran individually at 2.5 GHz, and all variations had a graphics processor with 256-bit memory bandwidth.
Retail store expansion
Initially, Apple Stores were only in the United States, but in late 2003, Apple opened its first Apple Store abroad, in Tokyo's Ginza district. It was followed by a store in Osaka, Japan in August 2004. In 2005, Apple opened stores in Nagoya, the Shibuya district of Tokyo, Fukuoka, and Sendai. A store opened in Sapporo in 2006. Apple's first European store opened in London, on Regent Street, in November 2004. A store in the Bullring shopping centre in Birmingham opened in April 2005, and the Bluewater shopping centre in Dartford, Kent opened in July 2005. Apple opened its first store in Canada in the middle of 2005 at the Yorkdale Shopping Centre in North York, Toronto. Later in 2005 Apple opened the Meadowhall Store in Sheffield and the Trafford Centre Store in Manchester, UK. Later additions in the London area include Brent Cross (January 2006), Westfield in Shepherd's Bush (September 2008), and Covent Garden (August 2010), which was, as of 2015, the largest Apple Store in the world.
Apple opened several "mini" stores in October 2004 to capture markets where demand does not necessarily dictate a full-scale store. The first of these stores was opened at Stanford Shopping Center in Palo Alto, California. These stores are only one half the square footage of the smallest normal store.
Apple and "i" Web services
In 2000, Apple introduced iTools, a set of free web-based tools that included an email account, internet greeting cards called iCards, a Web site review service called iReview, and "KidSafe", to prevent children browsing inappropriate websites. The latter two services were canceled because of lack of success. iCards and email were integrated into Apple's .Mac subscription-based service introduced in 2002 and discontinued in mid-2008 to make way for MobileMe, coinciding with the iPhone 3G release. MobileMe, at the same US$99.00 annual subscription as its dotMac predecessor, featured "push" services to instantly and automatically send emails, contacts and calendar updates directly to users' iPhones. Controversy around the release of MobileMe resulted in downtime and a significantly longer release window. Apple extended existing MobileMe subscriptions by 30 days free-of-charge. At the WWDC event in June 2011, Apple announced iCloud, keeping most MobileMe services but dropping iDisk, Gallery, and iWeb. It added Find my Mac, iTunes Match, Photo Stream, Documents & Data Backup, and iCloud backup for iOS devices. The service requires iOS 5 and OS X 10.7 Lion.
iTunes Store
The iTunes Music Store was launched in April 2003, with 2 million downloads in the first 16 days. Music was purchased through the iTunes application, which was initially Macintosh-only; in October 2003, support for Windows was added. Initially, the music store was only available in the United States due to licensing restrictions.
In June 2004 Apple opened its iTunes Music Store in the United Kingdom, France, and Germany. A European Union version opened in October 2004; it was not initially available in the Republic of Ireland due to the intransigence of the Irish Recorded Music Association (IRMA), but opened there a few months later, on January 6, 2005. A version for Canada opened in December 2004. On May 10, 2005, the iTunes Music Store was expanded to Denmark, Norway, Sweden, and Switzerland.
On December 16, 2004, Apple sold its 200 millionth song on the iTunes Music Store to Ryan Alekman from Belchertown, Massachusetts. The download was The Complete U2, by U2. Just under three months later Apple sold its 300 millionth song on March 2, 2005. On July 17, 2005, the iTunes Music Store sold its 500 millionth song. At that point, songs were selling at an accelerating annualized rate of more than 500 million.
On October 25, 2005, the iTunes Store went live in Australia, with songs selling for A$1.69 each, albums at (generally) A$16.99, and music videos and Pixar short films at A$3.39. Before the loophole was closed, people in New Zealand were briefly able to buy music from the Australian store. On February 23, 2006, the iTunes Music Store sold its 1 billionth song.
The iTunes Music Store changed its name to iTunes Store on September 12, 2006, when it began offering video content (TV shows and movies) for sale. Since iTunes' inception, it has sold over 2 billion songs, 1.2 billion of which were sold in 2006. Since downloadable TV and movie content was added, 50 million TV episodes and 1.3 million movies have been downloaded. In early 2010, Apple celebrated the 10 billionth song downloaded from the iTunes Music Store.
Intel transition
In a keynote address on June 6, 2005, Jobs announced that Apple would produce Intel-based Macintosh computers beginning in 2006. Jobs confirmed rumors that the company had been secretly producing versions of Mac OS X for both PowerPC and Intel processors over the past 5 years, and that the transition to Intel processor systems would last until the end of 2007. Rumors of cross-platform compatibility had been spurred by the fact that Mac OS X is based on OPENSTEP, an operating system that was available for many platforms. Apple's own Darwin, the open source underpinnings of Mac OS X, was also available for Intel's x86 architecture.
On January 10, 2006, the Intel-based iMac and MacBook Pro were introduced, based on the Intel Core Duo platform. They came alongside news that Apple would complete the transition to Intel processors on all hardware by the end of 2006, a year ahead of the originally quoted schedule.
2007–2011: Apple Inc., iPhone, iOS, iPad
On January 9, 2007, Apple Computer, Inc. shortened its name to simply Apple Inc. In his Macworld Expo keynote address, Jobs explained that with their current product mix consisting of the iPod and Apple TV as well as their Macintosh brand, Apple really wasn't just a computer company anymore. At the same address, Jobs revealed a product that would revolutionize an industry in which Apple had never previously competed: the Apple iPhone. The iPhone combined Apple's first widescreen iPod with the world's first mobile device boasting visual voicemail, and an internet communicator able to run a fully-functional version of Apple's web browser, Safari, on the then-named iPhone OS (later renamed iOS).
iOS evolution: iPhone and iPad
The first version of the iPhone became publicly available on June 29, 2007, in selected countries/markets. It was another 12 months before the iPhone 3G became available, on July 11, 2008. Apple announced the iPhone 3GS on June 8, 2009, along with plans to release it later in June, July, and August, starting with the U.S., Canada, and major European countries on June 19. This 12-month iteration cycle continued with the iPhone 4 model arriving in similar fashion in 2010; a Verizon model was released in February 2011, and a Sprint model in October 2011, shortly after Jobs' death.
On February 10, 2011, the iPhone 4 was made available on both Verizon Wireless and AT&T. By this point two iPod types, the iPod Nano and the iPod Touch, featured multi-touch screens, a big advance in technology. The second-generation Apple TV was a quarter the size of the original. Apple had also gone wireless, selling a wireless trackpad, keyboard, mouse, and external hard drive, while wired accessories remained available.
The Apple iPad was announced on January 27, 2010, with retail availability commencing in April and systematically growing in markets throughout 2010. The iPad fits into Apple's iOS product line, being twice the screen size of an iPhone without the phone abilities. While there were initial fears of product cannibalization, the FY2010 financial results released in January 2011 included commentary on a reverse "halo" effect, where iPad sales were leading to increased sales of iMacs and MacBooks.
Resurgence compared to Microsoft
Since 2005, Apple's revenues, profits, and stock price have grown significantly. On May 26, 2010, Apple's stock market value overtook Microsoft's, and Apple's revenues surpassed those of Microsoft in the third quarter of 2010. In the first quarter of 2011, Microsoft's net profit of $5.2 billion was lower than that of Apple, which earned $6 billion in net profit for the quarter. The late April announcement of profits by the companies marked the first time in 20 years that Microsoft's profits had been lower than Apple's, a situation described by Ars Technica as "unimaginable a decade ago".
The Guardian reported that one of the reasons for the change was that PC software, where Microsoft dominates, had become less important compared to the tablet and smartphone markets, where Apple had a strong presence. Another factor was a surprise drop in PC sales in the quarter. A further issue for Microsoft was that its online search business was losing money, including a loss of $700 million in the first quarter of 2010.
2011–2020: Restructuring and Apple Watch
On March 2, 2011, Apple unveiled the iPad's second-generation model, the iPad 2. Like the 4th-generation iPod Touch and iPhone, the iPad 2 comes with a front-facing camera as well as a rear-facing camera, along with three new apps that utilize these new features: Camera, FaceTime, and Photo Booth. On August 24, Jobs resigned from his position as CEO, with Tim Cook taking his place.
On October 29, 2012, Apple announced structural changes to increase collaboration between hardware, software, and services. This involved the departure of Scott Forstall, responsible for the launch of iOS (iPhone OS at the time of launch), who was replaced with Craig Federighi as head of the iOS and OS X teams. Jony Ive became head of HI (Human Interface), whilst Eddy Cue was announced as head of online services including Siri and Maps. The most notable short-term difference of this restructuring was the launch of iOS 7, the first version of the operating system to use a drastically different design from its predecessors, headed by Jony Ive, followed by OS X Yosemite a year later with a similar design.
During this time, Apple released the iPhone 5, the first iPhone to have a screen larger than 3.5", the iPod Touch 5 with a 4" screen, the iPhone 5S with fingerprint scanning technology in the form of Touch ID, and iPhone 6 and iPhone 6 Plus, with screens at 4.7" and 5.5". They released the 3rd-generation iPad with Retina Display, followed by the 4th-generation iPad just half a year later. The iPad Mini was announced alongside the iPad 4th gen, and was the first to feature a smaller screen than 9.7". This was followed by the iPad Mini 2 with Retina Display in 2013, alongside the iPad Air, a continuation of the original 9.7" range of iPads, which was subsequently followed by the iPad Air 2 with Touch ID in 2014. Apple released various major Mac updates, including the MacBook Pro with Retina Display, whilst discontinuing the original MacBook range for a short period, before reintroducing it in 2015 with various new features, a Retina Display and a new design that implemented USB-C, while removing all other ports. The Mac Pro and iMac were updated with more power and a drastically smaller and thinner profile. On November 25, 2013, Apple acquired a company called PrimeSense.
On May 28, 2014, Apple acquired Beats Electronics, producers of the popular Beats by Dre headphone and speaker range, as well as streaming service Beats Music. On September 9, 2014, Apple announced the Apple Watch, the first new product range since the departure of Jobs. The product could not function beyond basic features without being within Bluetooth or WiFi range of an iPhone, and contained basic applications (many acting as a remote for other devices, such as a music remote, or a control for an Apple TV) and fitness tracking. The Apple Watch received mixed reviews, with critics suggesting that whilst the device showed promise, it lacked a clear purpose, similar to many of the devices already on the market. The Apple Watch was released on April 24, 2015.
On September 9, 2015, Apple announced the iPhone 6S and iPhone 6S Plus with 3D Touch, the iPad Pro, and the fourth-generation Apple TV, along with the fourth-generation iPad Mini.
On March 21, 2016, Apple announced the first-generation iPhone SE and the smaller iPad Pro.
On September 7, 2016, Apple announced the iPhone 7 and iPhone 7 Plus with improved cameras and a faster processor than the previous generation. The iPhone 7 and iPhone 7 Plus were also offered with higher storage capacities than their predecessors.
On October 27, 2016, Apple announced the new 13-inch and 15-inch MacBook Pro with a Retina display and the Touch Bar.
On March 21, 2017, Apple announced the iPad (2017), the successor to the iPad Air 2, equipped with a faster processor and starting at $329. Apple also announced the (Product)RED iPhone 7 and iPhone 7 Plus.
On June 5, 2017, Apple announced iOS 11 as well as new versions of macOS, watchOS, and tvOS. Updated versions of the iMac, MacBook Pro, and MacBook were released, along with the 10.5-inch and 12.9-inch iPad Pro and the HomePod, a Siri speaker similar to the Amazon Echo. On September 12, at the Steve Jobs Theater, Apple introduced the iPhone 8 and iPhone 8 Plus, with better cameras and improvements in product design, user experience, and performance, and announced the iPhone X with facial recognition technology and wireless charging. Apple also announced the Apple TV 4K, with 4K, HDR, and Dolby Vision support, and the Apple Watch Series 3, supporting a cellular connection and running watchOS 4.
In March 2018, Bloomberg reported that Apple was developing MicroLED screens for the iPhone, iPad, Mac, watch, AR glasses, and electric car. It was linked to a Research and Development facility in Santa Clara, California code named Aria by the Bay Area News Group. On September 12, at the Steve Jobs Theater, Apple introduced the iPhone XS, iPhone XS Max, and iPhone XR, running iOS 12, with improved facial recognition and HDR in the display as well as better cameras for all 3 phones. They also announced the Apple Watch Series 4, running watchOS 5, with an all-new design and larger display as well as many more health-related features.
In October 2018, Bloomberg reported that, as early as 2015, a specialized unit of China's People's Liberation Army began inserting chips into Supermicro servers that allowed for backdoor access to them. Approximately 30 companies reportedly had their servers compromised via the chips, including Apple Inc. On September 20, 2019, the iPhone 11, iPhone 11 Pro, and iPhone 11 Pro Max were introduced. The iPhone 11 Pro was the first iPhone to feature three cameras.
2020–present: 5G and Apple silicon
In 2020, Apple was fined £21 million for intentionally slowing down older models of iPhones to encourage people to buy newer models. The company had somewhat admitted to this practice in 2017, saying the phones were slowed to respond to the decay of iPhones' lithium-ion batteries, which made it harder for the batteries to meet the phones' expected power demands. The COVID-19 pandemic heavily affected China, hurting Apple financially, because the company had invested in China enough to become increasingly dependent on the country. Chinese factories closed and demand for Apple products went down. However, Apple recovered and eventually reached a US$2 trillion market cap later that year. The iPhone 12, 12 Pro, and 12 Pro Max were introduced, the first iPhones to support 5G connectivity. The company also started using its own Apple silicon processors in Macs, instead of chips made by Intel.
In April 2021, the M1-powered iPad Pro was launched, along with a new M1-powered iMac offered in 7 colors, recalling the iMacs offered in 5 colors announced in 1999. Apple also launched a GPS tracking device called AirTag that uses Apple's Find My device network. In 2021 and 2022, Apple repeated its pattern of introducing four new iPhones in September, with 2021's iPhone 13 and iPhone 13 Pro lines and 2022's iPhone 14 and iPhone 14 Pro lines. The iPhone 14 Pro ditched the notch containing the sensors for a "Dynamic Island", which allows for space between the top edge of the screen and the Face ID sensors. 2022 saw Apple announce the first Macs with Apple's M2 chip and a new sub-series of Apple Watch with increased performance for outdoor activities, named the Apple Watch Ultra.
Apple experienced a period of unprecedented employee unrest in 2021 and 2022, with employees speaking out and organizing over labor issues and over the company's treatment of women in its corporate offices and retail stores. Inspired by the #MeToo movement, employees engaged in hashtag activism on social media under the name #AppleToo, and two American stores unionized for the first time.
In 2022, Apple paused all product sales in Russia in response to the country's invasion of Ukraine. Under Tim Cook, the company increasingly focused on privacy features, which lowered App Store developers' advertising revenue; its App Tracking Transparency framework, a set of privacy features, was estimated to have cost Facebook US$12 billion. The company also announced that it would start including ads in the Books, Maps, and TV iOS apps, and that it would eventually move production out of China.
In 2023, Apple launched a buy now, pay later service called 'Apple Pay Later' for Apple Wallet users. The program allows users to apply for loans between $50 and $1,000 for online or in-app purchases, repaid in four installments spread over six weeks without interest or fees. In June, Apple released the Apple Vision Pro, a computer in the form of an augmented reality headset running the visionOS operating system; wearers navigate the user interface with hand gestures. It was initially released at $3,500 in the United States and, while sales figures had not been released as of June 2024, was considered a financial failure in its initial stages. The company also reached a $25 million settlement in a U.S. Department of Justice case alleging that it discriminated against U.S. citizens in hiring: it created jobs that were not listed online and could only be applied for on paper, while advertising those jobs to foreign workers as part of recruitment for the PERM labor certification program.
In April 2024, Apple laid off more than 600 employees working in facilities linked to the electric car project and microLED development. The projects had been reported to have been shut down in the preceding two months.
In June 2024, Apple announced iOS 18 and macOS Sequoia at its WWDC 2024 event. After the rise of powerful AI software such as OpenAI's ChatGPT in the early 2020s, Apple had been accused of falling behind its competitors in AI; alongside the new operating systems, the company announced it would introduce AI features into its products, such as using ChatGPT to improve Siri. The first devices planned to support these features, collectively named Apple Intelligence, were the iPhone 15 Pro models running iOS 18. Criticism was leveled at the plan to incorporate recorded user inputs into ChatGPT's data set. Apple did not pay OpenAI for these features, believing that the exposure its platforms give OpenAI products is itself of financial benefit to OpenAI. The announcement lifted Apple's stock, temporarily making it the world's most valuable company ahead of Microsoft.
In 2024, Apple was sued by two female employees seeking class-action status, who claimed that the company's hiring and performance-review practices are biased against women. Meanwhile, the National Labor Relations Board (NLRB) charged Apple with violations of the National Labor Relations Act of 1935 in the spring and fall of 2024. The NLRB accused Apple of maintaining unlawful employee contracts and rules around social media and Slack usage, of interrogating unionizing employees, and of illegally firing Janneke Parrish, a labor activist who had co-led the #AppleToo movement.
Financial history
As cash reserves increased significantly in 2006, Apple created Braeburn Capital on April 6, 2006, to manage its assets.
Stock
'AAPL' is the stock symbol under which Apple Inc. trades on the NASDAQ stock market. Apple originally went public on December 12, 1980, with an initial public offering at US$22.00 per share. The stock split 2-for-1 three times: on June 15, 1987; June 21, 2000; and February 28, 2005. Apple initially paid dividends from June 15, 1987, to December 15, 1995. On March 19, 2012, Apple announced that it would again start paying a dividend, of $2.65 per quarter beginning in the quarter starting in July 2012, along with a $10 billion share buyback commencing September 30, 2012, the start of its fiscal year 2013. Gene Munster and Michael Olson of Piper Jaffray are the main analysts who track Apple stock, and Piper Jaffray has estimated Apple's future stock price and revenue annually for several years.
Timeline of Apple Inc. products
References
Further reading
Edwards, Jim. These Pictures Of Apple's First Employees Are Absolutely Wonderful – Business Insider, December 26, 2013. Contains vintage photos from the early days of Apple.
External links
Welcome to Macintosh – 2008 documentary film about Apple history and innovation.
25 Years of Mac: From Boxy Beige to Silver Sleek – 2008 Wired on the 25th anniversary of the Macintosh.
The Apple Products That Totally Failed In The Market
History of Apple, timeline: First quarter of 2019
History of Apple, timeline: Second quarter of 2019
Apple Inc.
Steve Jobs
Apple Inc. | History of Apple Inc. | Technology | 13,240 |
24,983,637 | https://en.wikipedia.org/wiki/Functional%20Ecology%20%28journal%29 | Functional Ecology is a monthly peer-reviewed scientific journal covering physiological, behavioural, and evolutionary ecology, as well as ecosystems and community ecology, emphasizing an integrative approach.
The journal was established in 1987 and is published by Wiley-Blackwell on behalf of the British Ecological Society. The editors-in-chief are Lara Ferry (Arizona State University), Charles Fox (University of Kentucky), Katie Field (University of Sheffield), Emma Sayer (University of Lancaster), and Enrico Rezende (Pontifical Catholic University of Chile).
Abstracting and indexing
The journal is abstracted and indexed in Aquatic Sciences and Fisheries Abstracts, BIOSIS Previews, Current Contents/Agriculture, Biology & Environmental Sciences, the Science Citation Index, and Scopus. According to the Journal Citation Reports, the journal has a 2021 impact factor of 6.28.
Types of papers
The journal publishes the following types of papers:
Standard Research Papers - a typical experimental, comparative or theoretical paper
Reviews - syntheses of topics of broad ecological interest
Perspectives - short articles presenting new ideas (without data) intended to stimulate scientific debate
Special Features - a collection of manuscripts, typically Reviews or Perspectives, on a single theme
The journal also produces podcasts on a semi-regular basis, usually focusing on a recent article and has a blog, which includes interviews with authors and articles relating to the ecological academic and research community.
References
External links
Wiley-Blackwell academic journals
Ecology journals
Academic journals established in 1987
British Ecological Society academic journals
Monthly journals
English-language journals | Functional Ecology (journal) | Environmental_science | 305 |
201,749 | https://en.wikipedia.org/wiki/Bus%20mastering | In computing, bus mastering is a feature supported by many bus architectures that enables a device connected to the bus to initiate direct memory access (DMA) transactions. It is also referred to as first-party DMA, in contrast with third-party DMA where a system DMA controller actually does the transfer.
Some types of buses allow only one device (typically the CPU, or its proxy) to initiate transactions. Most modern bus architectures, such as PCI, allow multiple devices to bus master because it significantly improves performance for general-purpose operating systems. Some real-time operating systems prohibit peripherals from becoming bus masters, because the scheduler can no longer arbitrate for the bus and hence cannot provide deterministic latency.
While bus mastering theoretically allows one peripheral device to directly communicate with another, in practice almost all peripherals master the bus exclusively to perform DMA to main memory.
If multiple devices are able to master the bus, there needs to be a bus arbitration scheme to prevent multiple devices attempting to drive the bus simultaneously. A number of different schemes are used for this; for example SCSI has a fixed priority for each SCSI ID. PCI does not specify the algorithm to use, leaving it up to the implementation to set priorities.
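On Linux, whether a given PCI device currently has bus mastering enabled can be read directly from configuration space, where the Bus Master Enable bit is bit 2 of the 16-bit command register at offset 0x04. The following is a minimal Python sketch (assuming the standard Linux sysfs layout, which exposes configuration space as a per-device "config" file):

import glob
import struct

for path in sorted(glob.glob("/sys/bus/pci/devices/*/config")):
    with open(path, "rb") as f:
        f.seek(0x04)                                   # PCI command register offset
        (command,) = struct.unpack("<H", f.read(2))    # config space is little-endian
    enabled = bool(command & 0b100)                    # bit 2 = Bus Master Enable
    print(path.split("/")[-2], "bus mastering:", enabled)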
See also
Master/slave (technology)
SCSI initiator and target
References
How Bus Mastering Works - Tweak3D
What is bus mastering?- Brevard User's Group
Computer buses
Motherboard | Bus mastering | Technology | 300 |
1,180,190 | https://en.wikipedia.org/wiki/Itoh%E2%80%93Tsujii%20inversion%20algorithm | While the algorithm is often called the Itoh-Tsujii algorithm, it was first presented by Feng.
Feng's paper was received on March 13, 1987 and published in October 1989. Itoh and Tsujii's paper was received on July 8, 1987 and published in 1988.
The Feng and Itoh–Tsujii algorithm was first used to invert elements of the finite field GF(2^m) given in a normal basis representation; however, it is generic and can be used for other bases, such as the polynomial basis. It can also be used in any finite field GF(p^m).
The algorithm is as follows:
Input: A ∈ GF(p^m)
Output: A^(−1)
1. r ← (p^m − 1)/(p − 1)
2. compute A^(r−1) in GF(p^m)
3. compute A^r = A^(r−1) · A
4. compute (A^r)^(−1) in GF(p)
5. compute A^(−1) = (A^r)^(−1) · A^(r−1)
6. return A^(−1)
This algorithm is fast because steps 3 and 5 both involve operations in the subfield GF(p). Similarly, if a small value of p is used, a lookup table can be used for inversion in step 4. The majority of time spent in this algorithm is in step 2, the first exponentiation. This is one reason why this algorithm is well suited for the normal basis, since squaring and exponentiation are relatively easy in that basis.
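The structure can be seen concretely in GF(2^8), the field used by AES, where p = 2 and m = 8: then r = (2^8 − 1)/(2 − 1) = 255, A^r = 1 for every nonzero A, the subfield inversion in step 4 is trivial, and the whole computation reduces to the exponentiation A^(r−1) = A^254 of step 2. Below is a minimal Python sketch of this case; it uses a polynomial basis with the AES reduction modulus x^8 + x^4 + x^3 + x + 1 (0x11B) rather than a normal basis, and an addition chain with 4 multiplications and 7 squarings:

def gf_mul(a, b, mod=0x11B):
    # Bit-serial multiplication in GF(2^8) with reduction by the AES modulus.
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= mod
        b >>= 1
    return r

def gf_sq(a):
    return gf_mul(a, a)

def itoh_tsujii_inverse(a):
    # A^(-1) = A^254 in GF(2^8)*, via an addition chain.
    a2 = gf_sq(a)             # A^2    (squaring 1)
    a3 = gf_mul(a2, a)        # A^3    (multiplication 1)
    a12 = gf_sq(gf_sq(a3))    # A^12   (squarings 2, 3)
    a15 = gf_mul(a12, a3)     # A^15   (multiplication 2)
    a60 = gf_sq(gf_sq(a15))   # A^60   (squarings 4, 5)
    a63 = gf_mul(a60, a3)     # A^63   (multiplication 3)
    a252 = gf_sq(gf_sq(a63))  # A^252  (squarings 6, 7)
    return gf_mul(a252, a2)   # A^254  (multiplication 4)

assert all(gf_mul(x, itoh_tsujii_inverse(x)) == 1 for x in range(1, 256))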
This algorithm is based on the fact that GF(p^m)* is a cyclic group of order p^m − 1.
Given a nonzero element A in the finite field GF(p^m), we have A^(p^m − 1) = 1.
The above expression itself is close to that of the multiplicative norm function in the finite field, which is defined as
N(A) = A · A^p · A^(p^2) ⋯ A^(p^(m−1)) = A^((p^m − 1)/(p − 1)).
This viewpoint leads us to consider the additive absolute Trace function
Tr(A) = A + A^p + A^(p^2) + ⋯ + A^(p^(m−1)).
If Tr(A) = 0, then we have
A + A^p + A^(p^2) + ⋯ + A^(p^(m−1)) = 0,
and can express A as A = A^p + A^(p^2) + ⋯ + A^(p^(m−1)) in characteristic 2.
In some GF(2^m)s, for example the GF(2^8) used in the Advanced Encryption Standard (AES), this formula needs 1 less
multiplication operation than the Feng and Itoh–Tsujii algorithm for elements with Trace value 0:
because
Tr(A) = A + A^2 + A^4 + ⋯ + A^128 = 0,
we have
A = A^2 + A^4 + ⋯ + A^128, and dividing both sides by A^2 gives A^(−1) = 1 + A^2 + A^6 + A^14 + A^30 + A^62 + A^126.
This additive formula needs 3 multiplications, 4 additions and 6 squarings.
But the multiplicative formula
A^(−1) = A^254 = ((A^3 · (A^3)^4)^4 · A^3)^4 · A^2, where A^3 = A · A^2,
needs 4 multiplications and 7 squarings.
See also
Finite field arithmetic
References
Finite fields
Computational number theory | Itoh–Tsujii inversion algorithm | Mathematics | 455 |
19,302,585 | https://en.wikipedia.org/wiki/California%20Artificial%20Stone%20Paving%20Co.%20v.%20Molitor | California Artificial Stone Paving Co. v. Molitor, 113 U.S. 609 (1885), involved a bill that was filed by the appellant against the appellee complaining that the latter was infringing on a letters patent granted to one John J. Schillinger, and which had been assigned for the State of California to the complainant.
The patent, for an improvement in concrete pavement, was originally issued July 19, 1870, and reissued May 2, 1871. The improvement, as described in the reissued patent, consisted in laying the pavement in detached blocks separated from each other by strips of tar paper or other suitable material so as to prevent the blocks from adhering to each other. As stated in the specification:
The case of Wilson v. Barnum was especially worthy of note in this connection. The question certified in that case was whether, upon the evidence given, the defendant infringed the complainant's patent. Chief Justice Taney, delivering the opinion of the Court, said:
The case was dismissed, with directions to the circuit court to proceed therein according to law.
See also
List of United States Supreme Court cases, volume 113
References
External links
United States Supreme Court cases
United States Supreme Court cases of the Waite Court
1885 in United States case law
United States patent case law
Concrete | California Artificial Stone Paving Co. v. Molitor | Engineering | 270 |
21,772,174 | https://en.wikipedia.org/wiki/Centrotherm%20Photovoltaics | centrotherm international AG is a supplier of process technology and equipment for the photovoltaics, semiconductor and microelectronics industries.
Its company headquarters are in Blaubeuren, Germany (Baden-Württemberg).
Industry sector
centrotherm international AG develops, manufactures and markets thermal key equipment and process technology for the production of solar cells, power semiconductor devices, logic and memory devices as well as LED and sensor technologies and also provides related services to customers.
Company history
The company was founded in 1976 as centrotherm Elektrische Anlagen GmbH + Co. KG. In the 1990s the company broke into the photovoltaic business and since 2000 has been an important global player in this industry sector. As part of company restructuring in the centrotherm Group, the name of the photovoltaic business unit was changed to centrotherm photovoltaics solutions GmbH & Co. KG in 2004 and in 2006 became centrotherm photovoltaics AG. The company has been listed on the Prime Standard of the Frankfurt Stock Exchange since October 2007 and was also listed on the German TecDAX in December of the same year.
References
External links
3-Month Report, January 1 – March 31, 2011
Companies based in Baden-Württemberg
Photovoltaics manufacturers
Manufacturing companies of Germany
Solar power in Germany | Centrotherm Photovoltaics | Engineering | 288 |
16,085,921 | https://en.wikipedia.org/wiki/National%20Environmental%20Engineering%20Research%20Institute | The National Environmental Engineering Research Institute (NEERI) in Nagpur was originally established in 1958 as the Central Public Health Engineering Research Institute (CPHERI). It has been described as the "premier and oldest institute in India." It is an institution listed on the Integrated Government Online Directory. It operates under the aegis of the Council of Scientific and Industrial Research (CSIR), based in New Delhi. Indira Gandhi, the Prime Minister of India at the time, renamed the Institute NEERI in 1974.
The Institute primarily focused on human health issues related to water supply, sewage disposal, diseases, and industrial pollution.
NEERI operates as a laboratory in the field of environmental science and engineering and is one of the constituent laboratories of the Council of Scientific and Industrial Research (CSIR). The institute has six zonal laboratories located in Chennai, Delhi, Hyderabad, Kolkata, Nagpur, and Mumbai. NEERI operates under the Ministry of Science and Technology of the Indian government. NEERI is a partner organization of India's POP National Implementation Plan (NIP).
History
In 1958, the Central Public Health Engineering Research Institute (CPHERI) was established. It was created by the Council of Scientific and Industrial Research (CSIR). In 1974, after participating in the "United Nations Inter-Governmental Conference on Human Environment" and with its renaming by Prime Minister Indira Gandhi, CPHERI became the National Environmental Engineering Research Institute (NEERI). NEERI has headquarters in Nagpur and five zonal laboratories in Mumbai, Kolkata, Delhi, Chennai, and Hyderabad.
The study for the location of a new municipal solid waste landfill site in Kolkata used the institute's 2005 guidelines.
During the COVID-19 crisis, the institute developed a saline gargling sample method to trace the disease.
Fields
Environmental monitoring
Since 1978, the institute has operated a nationwide air quality monitoring network, sponsored by the Central Pollution Control Board (CPCB) since 1990. Receptor modelling techniques are used. CSIR-NEERI is also involved in the design and development of air pollution control systems.
The institute has also developed a water purification system called 'NEERI ZAR'. In the 1960s and 1970s, the institute developed guidelines for defluoridation techniques, which have sometimes served as a point of departure for the development of other techniques. The institute tests samples for research on defluoridation and on the measurement of particulate matter in air.
The institute has been entrusted by the courts with providing inspections under the current environmental and legal framework.
Skill development
The institute has set up a Centre for Skill Development, offering certificate courses in the areas of environmental impact and water quality assessment. During his time with the institute (1955–65), Prof. V. Rajagopalan (a Vice President of the World Bank in 1993) created a national program for water industry professionals. Graduate programmes in Public Health Engineering were established at Guindy Engineering College, Madras; Roorkee Engineering University; and VJTI in Mumbai.
Assessment of research
Between 1989 and 2013, 1,236 publications of the National Environmental Engineering Research Institute were assessed. The institute's technique for enrichment of ilmenite with titanium dioxide has been evaluated externally.
Patent development
The institute holds national and international patents for a method of manufacturing zeolite-A using fly ash instead of sodium silicate and aluminate.
Selected publications
Kumar, A., et al. "Sustainability in Environmental Engineering and Science." (2021): 253–262.
Sharma, Abhinav. "Effect of ozone pretreatment on biodegradability enhancement and biogas production of biomethane distillery effluent."
Sharma, Asheesh, et al. "NutriL-GIS: A Tool for Assessment of Agricultural Runoff and Nutrient Pollution in a Watershed." National Environmental Engineering Research Institute (NEERI). India (2010).
Sinnarkar, S. N., and Rajesh Kumar Lohiya. "External user in an environmental research library." Annals of library and information studies 55.4 (2008): 275–280.
Greywater Reuse in Rural Schools: Guidance Manual. National Environmental Engineering Research Institute (2007).
Thawale, P. R., Asha A. Juwarkar, and S. K. Singh. "Resource conservation through land treatment of municipal wastewater." Current Science (2006): 704–711.
Rao, Padma S., et al. "Performance evaluation of a green belt in a petroleum refinery: a case study". Ecological engineering 23.2 (2004): 77–84.
Murty, K. S. "Groundwater in India." Studies in Environmental Science. Vol. 17. Elsevier, 1981. 733–736.
References
Research institutes in Nagpur
Council of Scientific and Industrial Research
Environmental engineering
Science and technology in Maharashtra
Ministry of Science and Technology (India)
Research institutes established in 1958
1958 establishments in Bombay State | National Environmental Engineering Research Institute | Chemistry,Engineering | 1,007 |
37,077,022 | https://en.wikipedia.org/wiki/Eta%20Gruis | Eta Gruis, Latinized from η Gruis, is a solitary star in the southern constellation of Grus. It is visible to the naked eye as a faint, orange-hued star with an apparent visual magnitude of 4.85. Based upon an annual parallax shift of as seen from the Earth, the system is located about 460 light years from the Sun. The star is drifting further away with a radial velocity of +28 km/s.
This object is an evolved K-type giant star with a stellar classification of , where the suffix notation indicates this is an intermediate CN star. It is a periodic microvariable with an amplitude of 0.0055 magnitude and a frequency of 0.36118 cycles per day. With the supply of hydrogen exhausted at its core, the star has expanded and cooled, now having 31 times the Sun's girth. It is radiating 338.5 times the luminosity of the Sun from its swollen photosphere at an effective temperature of 4,420 K.
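As a rough consistency check (an illustrative sketch, not from the source), the quoted luminosity follows from the Stefan–Boltzmann law in solar units, L/Lsun = (R/Rsun)^2 × (T/Tsun)^4, taking Tsun ≈ 5772 K and the rounded size factor of 31:

R = 31.0        # size in solar units (rounded value from the text)
T = 4420.0      # effective temperature in kelvin
T_SUN = 5772.0
print(round(R**2 * (T / T_SUN)**4))  # about 330; the quoted 338.5 reflects an unrounded size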
Eta Gruis has a magnitude 11.5 visual companion located at an angular separation of along a position angle of 187°, as of 2012.
References
K-type giants
Grus (constellation)
Gruis, Eta
Durchmusterung objects
215369
112374
8655 | Eta Gruis | Astronomy | 267 |
57,954,250 | https://en.wikipedia.org/wiki/Atomic%20trap%20trace%20analysis | Atom Trap Trace Analysis (ATTA) is an extremely sensitive trace analysis method developed by Argonne National Lab (ANL). ATTA is used on long-lived, stable radioisotopes such as , , and . By using a laser that is locked to an atomic transition, a CCD or PMT will detect the laser induced fluorescence to allow highly selective, parts-per-trillion to parts-per-quadrillion concentration measurement with single atom detection. This method is useful for atomic transport processes, such as in the atmosphere, geological dating, as well as noble gas purification.
ATTA measurements are possible only if the atoms are excited to a metastable state prior to detection. The main difficulty in accomplishing this is the large energy gap (10–20 eV) between the ground and excited states. The current solution is to use an RF discharge, a brute-force technique that is inefficient and leads to complications such as contamination of the walls from ion sputtering and high gas density. A new scheme for generating a metastable beam that achieves a cleaner, slower, and preferably more intense source would provide a substantial advance in ATTA technology. All-optical techniques have been considered, but have not yet been able to compete with the discharge source. A new technique for the generation of metastable krypton involves using a two-photon transition, driven by a pulsed far-UV laser, to populate the excited state, which decays to the metastable state with high probability.
References
Sources
Radiochemistry | Atomic trap trace analysis | Chemistry | 320 |
40,849,226 | https://en.wikipedia.org/wiki/The%20Singularity%20%28film%29 | The Singularity is a 2012 documentary film about the technological singularity, produced and directed by Doug Wolens. The film has been called "a large-scale achievement in its documentation of futurist and counter-futurist ideas”.
Synopsis
Doug Wolens organized his interviews with the commentators (see list below) around the following set of topics related to the singularity. Within each topic or subtopic, several commentators provide their viewpoints, some with suggestions on how to get there, others with skeptical opinions about when it will happen.
Topic I. Artificial intelligence
Subtopic: Intelligence explosion
Subtopic: Machines That Think
Subtopic: Conscious machines
Topic II. Becoming machines
Subtopic: Neuroengineering
Subtopic: Nanotechnology
Topic III. Techno-utopia
Subtopic: Getting Ready
Subtopic: Is The Singularity Near?
Subtopic: Regulating technology
Topic IV. Post-human – Transcend
Commentators
In order of their appearance in the film:
Ray Kurzweil – National Medal of Technology recipient, Inventor
Ralph Merkle – Institute for Molecular Manufacturing / Senior Research Fellow
Brad Templeton – Electronic Frontier Foundation Director
Jonas Lamis – Technology Entrepreneur, Founder and Chief Operating Officer at Rally
Paul Saffo – Distinguished Visiting Scholar at Stanford University
Eliezer Yudkowsky – Singularity Institute for Artificial Intelligence, Co-founder
Peter Voss – Adaptive AI, Inc / Founder and CEO
Ben Goertzel – Novamente LLC, Artificial Intelligence Development, Founder
Chris Phoenix – Center for Responsible Nanotechnology Co-founder
Peter Norvig – Google, Director of Research
Alison Gopnik – Professor of Psychology and Philosophy at UC Berkeley
David Chalmers – Centre for Consciousness, Director and Professor Philosophy
Wolf Singer – Max Planck Institute for Brain Research, Director
Christof Koch – California Institute of Technology, Professor of Cognitive and Behavioral Biology
Christine Peterson – Foresight Nanotech Institute, President and Co-founder
Andy Clark – University of Edinburgh, Professor of Philosophy
Barney Pell – Bing/Microsoft Chief Architect for local search
Cynthia Breazeal – MIT Media Lab's Personal Robots Group, Director
Bill McKibben – Scholar in Residence – Middlebury College
Richard A. Clarke
Matt Francis – UC Berkeley / Professor of Chemistry
David D. Friedman – Economist / Professor of Law at Santa Clara University
Leon Panetta – United States Secretary of Defense
Aubrey de Grey
Glenn Zorpette – Executive Editor of IEEE Spectrum
Music
American composer Christopher ("Chrizzy") Lancaster scored the original soundtrack for the film. The soundtrack was created by processing acoustic cello sound through real-time samplers, audio effects, and filters, layering his recorded cello with feedback.
Release
The Singularity had a limited theatrical release beginning with the 1,400-seat Castro Theatre in San Francisco in September 2013, along with screenings at the Brattle Theatre in Cambridge, Massachusetts; the Smith Rafael Film Center in Marin County, California; and the Santa Fe Center for Contemporary Arts. The film has also been screened at Yale University, the University of Edinburgh, Arizona State University, NASA, BIL, and elsewhere. These screenings featured post-screening discussions with expert panels and/or question-and-answer sessions with director Doug Wolens.
Doug Wolens has pursued an alternative self-distribution strategy for The Singularity, working directly with theatres, museums, and educational institutions, as well as with the national and local press, to promote the screenings and the film's December 2012 iTunes digital release.
Reception
Stephen Cass of the IEEE Spectrum called it "a lively introduction" that does not cover new ground. Geoff Pevere of The Globe and Mail wrote that the film, an "intense, idea-packed account" of the concept, casts McKibben as the most compelling speaker, as his arguments come across as the most human, appealing not only to reason but also to feeling. Alex Knapp of Forbes wrote that it is "well done and provides a good overview", though he said he would have liked to have seen more criticism of the basic tenet of exponential technological growth. The interviewees themselves also attracted commentary; Cass asked why there were no non-white subjects, and Pevere described them as "neo-hippie, unkempt longhairs".
References
External links
2012 films
2012 documentary films
American documentary films
American independent films
Documentary films about computing
Documentary films about robots
Singularitarianism
Singularity theory
Transhumanism
2010s English-language films
2010s American films
2012 independent films
English-language documentary films
English-language independent films | The Singularity (film) | Technology,Engineering,Biology | 905 |
20,373,832 | https://en.wikipedia.org/wiki/Abell%202142 | Abell 2142, or A2142, is a huge, X-ray luminous galaxy cluster in the constellation Corona Borealis. It is the result of a still ongoing merger between two galaxy clusters. The combined cluster is six million light years across, contains hundreds of galaxies and enough gas to make a thousand more. It is "one of the most massive objects in the universe."
X-Ray image
The adjacent image was taken 20 August 1999 with the Chandra X-ray Observatory's 0.3–10.0 keV Advanced CCD Imaging Spectrometer (ACIS), and covers an area of 7.5 × 7.2 arcminutes. It shows a colossal cosmic "weather system" produced by the collision of two giant clusters of galaxies. For the first time, the pressure fronts in the system have been traced in detail, and they show a bright but relatively cool 50-million-degree-Celsius central region (white) embedded in a large elongated cloud of 70-million-degree-Celsius gas (magenta), all of which is roiling in a faint "atmosphere" of 100-million-degree-Celsius gas (faint magenta and dark blue). The bright source in the upper left is an active galaxy in the cluster.
Quick facts
Abell 2142 is part of the Abell catalogue of rich clusters of galaxies originally published by UCLA astronomer George O. Abell (1927–1983) in 1958. It has a heliocentric redshift of 0.0909 (meaning it is moving away from us at 27,250 km/s) and a visual magnitude of 16.0. It is about 1.2 billion light years (380 Mpc) away.
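The quoted recession velocity follows from the low-redshift approximation v ≈ cz, as this short check (an illustration, not from the source) shows:

c = 299792.458       # speed of light in km/s
z = 0.0909           # heliocentric redshift of Abell 2142
print(round(c * z))  # 27251 km/s, matching the quoted ~27,250 km/s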
Merger dynamics
A2142 has attracted attention because of its potential to shed light on the dynamics of mergers between galaxy clusters. Clusters of galaxies grow through the gravitational attraction of smaller groups and clusters. During a merger, the kinetic energy of the colliding objects heats the gas between subclusters, causing marked variations in gas temperature. These variations contain information on the stage, geometry and velocity of the merger. An accurate temperature map can therefore provide a great deal of information on the underlying physical processes. Previous instruments (e.g., ROSAT, ASCA) did not have the capabilities of Chandra and XMM-Newton (two current X-ray observatories) and were unable to map the region in detail.
Chandra has been able to measure variations of temperature, density, and pressure with high resolution. "Now we can begin to understand the physics of these mergers, which are among the most energetic events in the universe," said Maxim Markevitch of the Harvard-Smithsonian Center for Astrophysics, Cambridge, Massachusetts, and leader of the international team involved in the analysis of the observations. "The pressure and density maps of the cluster show a sharp boundary that can only exist in the moving environment of a merger."
A2142's observed X-ray emissions are largely smooth and symmetric, suggesting it is a result of a merger between two galaxy clusters viewed at least 1–2 billion years after the initial core crossing. One would expect to observe uneven X-ray emission and obvious shock fronts if the merger was at an early stage. Markevitch et al. have proposed that the central galaxy (designated G1) of a more massive cluster has merged with the former central galaxy (G2) of the less massive cluster. The relatively cool central area suggests that the heating caused by previous shock fronts missed the central core, interacting instead with the surrounding gas.
See also
List of Abell clusters
X-ray astronomy
References
Corona Borealis
2142
Galaxy clusters
Abell richness class 2 | Abell 2142 | Astronomy | 749 |
293,077 | https://en.wikipedia.org/wiki/Video%20game%20remake | A video game remake is a video game closely adapted from an earlier title, usually for the purpose of modernizing a game with updated graphics for newer hardware and gameplay for contemporary audiences. Typically, a remake of such game software shares essentially the same title, fundamental gameplay concepts, and core story elements of the original game, although some aspects of the original game may have been changed for the remake.
Remakes are often made by the original developer or copyright holder, and sometimes by the fan community. If created by the community, video game remakes are sometimes also called fangames and can be seen as part of the retro gaming phenomenon.
Definition
A remake offers a newer interpretation of an older work, characterized by updated or changed assets. For example, The Legend of Zelda: Ocarina of Time 3D and The Legend of Zelda: Majora's Mask 3D for the Nintendo 3DS are considered remakes of their original versions for the Nintendo 64, and not remasters or ports, since there are new character models and texture packs. The Legend of Zelda: Wind Waker HD for Wii U would be considered a remaster, since it retains the same, albeit upscaled, aesthetics of the original.
A remake typically maintains the same story, genre, and fundamental gameplay ideas of the original work. The intent of a remake is usually to take an older game that has become outdated and update it for a new platform and audience. A remake will not necessarily preserve the original gameplay especially if it is dated, instead remaking the gameplay to conform to the conventions of contemporary games or later titles in the same series in order to make a game marketable to a new audience.
For example, for Sierra's 1991 remake of Space Quest, the developers used the engine, point-and-click interface, and graphical style of Space Quest IV: Roger Wilco and The Time Rippers, replacing the original graphics and text parser interface of the original. However, other elements, like the narrative, puzzles and sets, were largely preserved. Another example is Black Mesa, a remake built entirely from the ground up in the Source Engine that remakes in-game textures, assets, models, and facial animations, while taking place in the events of the original Half-Life game. Resident Evil 2 (2019) is a remake of the 1998 game Resident Evil 2; while the original uses tank controls and fixed camera angles, the remake features "over-the-shoulder" third-person shooter gameplay similar to Resident Evil 4 and more recent games in the series that allows players the option to move while using their weapons similar to Resident Evil 6.
Ports
A port is a conversion of a game to a new platform that relies heavily on existing work and assets. A port may include various enhancements like improved performance, resolution, and sometimes even additional content, but differs from a remake in that it still relies heavily on the original assets and engine of the source game. Sometimes, ports even remove content that was present in the original version. For example, the handheld console ports of Mortal Kombat II had fewer characters than the original arcade game and other console ports due to system storage limitations but otherwise were still faithful to the original in terms of gameplay.
Compared to the intentional video game remake or remaster, which is often done years or decades after the original came out, ports or conversions are typically released during the same generation as the original (the exception being mobile versions of PC games, such as Grand Theft Auto III, since modern mobile gaming platforms did not exist until the 2010s). Home console ports usually came out less than a year after the original arcade game, such as the distribution of Mortal Kombat for home consoles by Acclaim Entertainment. Since the 2000s, as arcade releases are no longer the original launch platform for a video game, publishers tend to release a game simultaneously on several consoles first and then port it to the PC later.
Remaster
A port that contains a great deal of remade assets may sometimes be considered a remaster or a partial remake, although video game publishers are not always clear on the distinction. DuckTales: Remastered, for example, uses the term "Remastered" to distinguish itself from the original NES game it was based on, even though it is a clean-slate remake with a different engine and assets. Compared to a port, which is typically released in the same era as the original, a remaster is done years or decades later in order to take advantage of generational technological improvements (which a port, by contrast, does not attempt). Unlike a remake, which often changes the now-dated gameplay, a remaster is very faithful to the original in that respect (in order to appeal to a nostalgic audience), while permitting only a limited number of gameplay tweaks for the sake of convenience.
Reboots
Games that use an existing brand but are conceptually very different from the original, such as Wolfenstein 3D (1992) and Return to Castle Wolfenstein (2001) or Tomb Raider (1996) and Tomb Raider (2013) are usually regarded as reboots rather than remakes.
History
In the early history of video games, remakes were generally regarded as "conversions" and seldom associated with nostalgia. Due to limited and often highly divergent hardware, games appearing on multiple platforms usually had to be entirely remade. These conversions often included considerable changes to the graphics and gameplay, and could be regarded retroactively as remakes, but are distinguished from later remakes largely by intent. A conversion is created with the primary goal of tailoring a game to a specific piece of hardware, usually contemporaneous or nearly contemporaneous with the original release. An early example was Gun Fight, Midway's 1975 reprogrammed version of Taito's arcade game Western Gun, with the main difference being the use of a microprocessor in the reprogrammed version, which allowed improved graphics and smoother animation than the discrete logic of the original. In 1980, Warren Robinett created Adventure for the Atari 2600, a graphical version of the 1970s text adventure Colossal Cave Adventure. Also in 1980, Atari released the first officially licensed home console game conversion of an arcade title, Taito's 1978 hit Space Invaders, for the Atari 2600. The game became the first "killer app" for a video game console by quadrupling the system's sales. Since then, it became a common trend to port arcade games to home systems since the second console generation, though at the time they were often more limited than the original arcade games due to the technical limitations of home consoles.
In 1985, Sega released a pair of arcade remakes of older home video games. Pitfall II: Lost Caverns (arcade game) was effectively a remake of both the original Pitfall! and its sequel Pitfall II: Lost Caverns with new level layouts and colorful, detailed graphics. That same year, Sega adapted the 1982 computer game Choplifter for the arcades, taking the fundamental gameplay of the original and greatly expanding it, adding new environments, enemies, and gameplay elements. This version was very successful, and later adapted to the Master System and Famicom. Both of these games were distinguished from most earlier conversions in that they took major liberties with the source material, attempting to modernize both the gameplay as well as the graphics.
Some of the earliest remakes to be recognized as such were attempts to modernize games to the standards of later games in the series. Some were even on the same platforms as the original, for example Ultima I: The First Age of Darkness, a 1986 remake of the original that appeared on multiple platforms, including the Apple II, the system the game originated on. Other early remakes of this type include Sierra's early-1990s releases of King's Quest, Space Quest and Leisure Suit Larry. These games used the technology and interface of the most recent games in Sierra's series, and original assets in a dramatically different style. The intent was not simply to bring the game to a new platform, but to modernize older games which had in various ways become dated.
With the birth of the retrogaming phenomenon, remakes became a way for companies to revive nostalgic brands. Galaga '88 and Super Space Invaders '91 were both attempts to revitalize aging arcade franchises with modernized graphics and new gameplay elements, while preserving many signature aspects of the original games. The 16-bit generation of console games was marked by greatly enhanced graphics compared to the previous generation, but often relatively similar gameplay, which led to an increased interest in remakes of games from the previous generation. Super Mario All-Stars remade the entire NES Mario series, and was met with great commercial success. Remake compilations of the Ninja Gaiden and Mega Man series followed. As RPGs increased in popularity, Dragon Quest, Ys and Kyūyaku Megami Tensei were also remade. In the mid-'90s, Atari released a series of remakes with the 2000 brand, including Tempest 2000, Battlezone 2000, and Defender 2000. After Atari's demise, Hasbro continued the tradition, with 3D remakes of Pong, Centipede, and Asteroids.
By 1994 the popularity of CD-ROM led to many remakes with digitized voices and, sometimes, better graphics, although Computer Gaming World noted the "amateur acting" in many new and remade games on CD. Emulation also made perfect ports of older games possible, with compilations becoming a popular way for publishers to capitalize on older properties.
Budget pricing gave publishers the opportunity to match their game's price with the perceived lower value proposition of an older game, opening the door for newer remakes. In 2003, Sega launched the Sega Ages line for PlayStation 2, initially conceived as a series of modernized remakes of classic games, though the series later diversified to include emulated compilations. The series concluded with a release that combined the two approaches, and included a remake of Fantasy Zone II that ran, via emulation, on hardware dating to the time of the original release, one of the few attempts at an enhanced remake to make no attempts at modernization. The advent of downloadable game services like Xbox Live Arcade and PlayStation Network has further fueled the expanded market for remakes, as the platform allows companies to sell their games at a lower price, seen as more appropriate for the smaller size typical of retro games. Some XBLA and PSN remakes include Bionic Commando Rearmed, Jetpac Refuelled, Wipeout HD (a remake not of the original Wipeout but of the two PSP games), Cyber Troopers Virtual-On Oratorio Tangram and Super Street Fighter II Turbo HD Remix.
Some remakes may include the original game as a bonus feature. The 2009 remake of The Secret of Monkey Island took this a step further by allowing players to switch between the original and remade versions on the fly with a single button press. This trend was continued in the sequel, and is also a feature in Halo: Combat Evolved Anniversary and later in Halo 2 Anniversary as part of Halo: The Master Chief Collection.
Remasters and remakes on the Nintendo DS include Super Mario 64 DS, Kirby Super Star Ultra, Diddy Kong Racing DS, Pokémon HeartGold and SoulSilver, Fire Emblem: Shadow Dragon, Final Fantasy III and IV, Dragon Quest IV through VI, and Kingdom Hearts Re:coded. The Nintendo 3DS's lineup also had numerous remasters and remakes, including The Legend of Zelda: Ocarina of Time 3D, Star Fox 64 3D, The Legend of Zelda: Majora's Mask 3D, Pokémon Omega Ruby and Alpha Sapphire, Metroid: Samus Returns, Mario & Luigi: Superstar Saga + Bowser's Minions, Luigi's Mansion, and Mario & Luigi: Bowser's Inside Story + Bowser Jr.'s Journey. Remasters on both the DS and 3DS include Cave Story, Myst and Rayman 2: The Great Escape.
Community-driven remakes
Games unsupported by the rights-holders often spark remakes created by hobbyists and game communities. An example is OpenRA, a modernized remake of the classic Command & Conquer real-time strategy games. Beyond cross-platform support, it adds comfort functions and gameplay functionality inspired by successors of the original games. Other notable examples are Pioneers, a remake of and spiritual sequel to Frontier: Elite II; CSBWin, a remake of the dungeon crawler classic Dungeon Master; and Privateer Gemini Gold, a remake of Wing Commander: Privateer.
Skywind is a fan remake of Morrowind (2002) running on Bethesda's Creation Engine, utilising the source code, assets and gameplay mechanics of Skyrim (2011). The original game developers, Bethesda Softworks, have given project volunteers their approval. The remake team includes over 70 volunteers in artist, composer, designer, developer, and voice-actor roles. In November 2014, the team reported to have finished half of the remake's environment, over 10,000 new dialogue lines, and three hours of series-inspired soundtrack. The same open-development project is also working on Skyblivion, a remake of Oblivion (the game between Morrowind and Skyrim) in the Skyrim engine, and Morroblivion, a remake of Morrowind in the Oblivion engine (which still has a significant userbase on older PCs).
Demakes
The term demake may refer to games created deliberately with an art style inspired by older games of a previous video game generation. The action platformer Mega Man 9 is an example of such a game. Although remakes typically aim to adapt a game from a more limited platform to a more advanced one, a rising interest in older platforms has inspired some to do the opposite, remaking or adapting modern games to the technical standards of older platforms, usually going so far as to implement them on obsolete hardware, hence the term "demake". Such demakes either run on the original hardware or in emulators.
Modern demakes often change the 3D gameplay to a 2D one. Popular demakes include Quest: Brian's Journey, an official Game Boy Color port of Quest 64; Super Smash Land, an unofficial Game Boy-style demake of Super Smash Bros.; D-Pad Hero, a NES-esque demake of Guitar Hero; Rockman 7 FC and Rockman 8 FC, NES-styled demakes of Mega Man 7 and Mega Man 8, respectively; Gang Garrison 2, a pixelated demake of Team Fortress 2; Bloodborne PSX, a PS1 demake of Bloodborne; and Halo 2600, an Atari 2600 demake of Microsoft's Halo series. There are also NES-style demakes of the Touhou Project games Embodiment of Scarlet Devil and Perfect Cherry Blossom. Some demakes are created to showcase and push the abilities of older generation systems such as the Atari 2600. An example of this is the 2013 game Princess Rescue, which is a demake of the NES title Super Mario Bros.
While most demakes are homebrew efforts from passionate fans, some are officially endorsed by the original creators, such as Pac-Man Championship Edition's Famicom/NES demake being printed onto Japanese physical editions of the Namcot Collection as an original bonus game.
For much of the 1990s in China, Hong Kong, and Taiwan, black market developers created unauthorized adaptations of then-modern games such as Street Fighter II, Mortal Kombat, Phantasy Star IV, Final Fantasy VII or Tekken for the NES, which enjoyed considerable popularity in the regions because of the availability of low-cost compatible systems.
See also
List of video game remakes and remastered ports
High-definition remasters for PlayStation consoles
Game engine recreation
Video game remaster
References
Remake
3,528,909 | https://en.wikipedia.org/wiki/John%20G.%20McNutt | John G. McNutt (born 1951) is Professor Emeritus in the Biden School of Public Policy and Administration at the University of Delaware and is a researcher and pioneer in the study of the use of information and communication technologies in the nonprofit sector in the United States. Much of his work focuses on electronic advocacy, especially the application of technology, data and data science to social action and public policymaking and the use of evidence in informing social policy. McNutt has conducted research on child advocacy groups, professional associations, environmental advocacy organizations, political action committees, transnational advocacy organizations and community development corporations. His current work examines four interrelated areas: (1) technology and advocacy for social change, (2) the role of data and data science in public policy, (3) The changing nature of social policy in an information society, (4) Informatics in the public and nonprofit sectors. Prior to the start of his 40-year academic career, McNutt was a social worker and VISTA Volunteer.
He earned a B.A. from Mars Hill College (1974), an M.S.W. from the University of Alabama (1978), and a Ph.D. from the University of Tennessee (1991).
External links
Policy Magic (John G. McNutt's personal web site)
Internet-based activism
American social workers
Non-profit technology
Mars Hill University alumni
University of Alabama alumni
University of Tennessee alumni
University of Delaware faculty
Living people
Year of birth missing (living people) | John G. McNutt | Technology | 306 |
5,122,292 | https://en.wikipedia.org/wiki/Trichloroisocyanuric%20acid | Trichloroisocyanuric acid is an organic compound with the formula (CONCl)3. It is used as an industrial disinfectant, bleaching agent and a reagent in organic synthesis. This white crystalline powder, which has a strong "chlorine odour," is sometimes sold in tablet or granule form for domestic and industrial use.
Synthesis
Trichloroisocyanuric acid is prepared from cyanuric acid via its trisodium salt (trisodium cyanurate), which is treated with chlorine gas.
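Assuming this trisodium-salt route, the overall chlorination step can be summarized by the balanced equation (CONNa)3 + 3 Cl2 → (CONCl)3 + 3 NaCl, in which the three sodium atoms of the cyanurate ring are displaced by chlorine.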
Applications
The compound is a disinfectant, algicide and bactericide, mainly for swimming pools, and is also used as a bleaching agent in the textile and dyestuff industries. It is widely used in civil sanitation for pools and spas, in preventing and curing diseases in animal husbandry and fisheries, in fruit and vegetable preservation, in wastewater treatment, as an algicide for recycled water in industry and air conditioning, in anti-shrink treatment for woolens, in treating seeds, and in organic chemical synthesis. It serves in chemical synthesis as an easy-to-store and easy-to-transport source of chlorine gas: it is not subject to hazardous-gas shipping restrictions, and its reaction with hydrochloric acid produces relatively pure chlorine.
Trichloroisocyanuric acid as used in swimming pools is easier to handle than chlorine gas. It dissolves slowly in water, but as it reacts, the cyanuric acid concentration in the pool will build up. In large bodies of water, TCCA is soluble and breaks down slowly, releasing chlorine into the water to sanitize contaminants. When TCCA instead comes into contact with, or is wetted by, only a small amount of water and does not dissolve, it can undergo a chemical reaction that generates heat and decomposes the chemical, which in turn produces toxic chlorine gas and can produce explosive nitrogen trichloride.
See also
Comet (cleanser)
Dichloroisocyanuric acid (Dichlor)
Sodium dichloroisocyanurate
Chlorine
References
External links
Symclosene data page
MSDS for Trichloroisocyanuric acid
Oxidation of primary alcohol to aldehyde
Antimicrobials
Organochlorides
Bleaches
Isocyanuric acids | Trichloroisocyanuric acid | Biology | 487 |
25,897,909 | https://en.wikipedia.org/wiki/Patch%20dynamics%20%28physics%29 | Patch dynamics is a term used in physics to bridge, using algorithms, the models describing macroscale behavior and to predict large-scale patterns in fluid flow. It uses locally averaged properties of short space-time scales to advance and predict long space-time scale dynamics.
In patch dynamics and finite difference approximations, the macroscale variables are defined at the grid points of a mesh chosen to resolve the solution. The standard PDE adaptive grid methods can be used to resolve gradients in the macroscale solution. Both patch dynamics and finite difference methods generate time derivatives at mesh points; these time derivatives then help advance the solution in time.
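The approach can be illustrated with a toy one-dimensional diffusion problem. The following is a minimal Python sketch (the microscale model, the lifting and restriction choices, and all numerical parameters are hypothetical illustrations, not fixed by the method):

import numpy as np

D = 1.0                        # microscale diffusivity
X = np.linspace(0.0, 1.0, 11)  # macroscale mesh points
H = X[1] - X[0]                # macroscale spacing
h = H / 20.0                   # fine grid spacing inside a patch
n = 7                          # fine grid points per patch
dt_micro = 0.2 * h**2 / D      # stable explicit microscale time step
burst = 10                     # microscale steps per derivative estimate

def macro_derivative(U):
    # Estimate dU/dt at interior mesh points from short microscale bursts.
    dUdt = np.zeros_like(U)
    xi = (np.arange(n) - n // 2) * h       # patch coordinates about X[j]
    for j in range(1, len(U) - 1):
        # Lifting: seed the patch with a quadratic consistent with the
        # local macroscale values U[j-1], U[j], U[j+1].
        slope = (U[j + 1] - U[j - 1]) / (2 * H)
        curv = (U[j + 1] - 2 * U[j] + U[j - 1]) / H**2
        u0 = U[j] + slope * xi + 0.5 * curv * xi**2
        u = u0.copy()
        for _ in range(burst):             # microscale burst, edges held fixed
            u[1:-1] += dt_micro * D * (u[2:] - 2 * u[1:-1] + u[:-2]) / h**2
        # Restriction: the locally averaged rate of change at the patch
        # centre serves as the macroscale time derivative.
        dUdt[j] = (u[n // 2] - u0[n // 2]) / (burst * dt_micro)
    return dUdt

U = np.exp(-50.0 * (X - 0.5) ** 2)         # initial macroscale profile
dt_macro = 0.25 * H**2 / D                 # explicit macroscale time step
for _ in range(40):
    U = U + dt_macro * macro_derivative(U)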
See also
Dynamical system#Rectification
References
Fluid dynamics | Patch dynamics (physics) | Chemistry,Engineering | 143 |
68,403,092 | https://en.wikipedia.org/wiki/Shoe%20dryer | A shoe dryer or boot dryer is a machine used for drying shoes, and usually functions by blowing air on the inside of the shoes. The airflow causes the shoes to dry faster. The air can be heated for even faster drying, and these are the most common types. Shoes dryers can be especially useful for people who often have wet shoes, such as families with small children or people who often hike outdoor in the nature, or for ski boots which often are moist after use. Many shoes dryers have a timer which shuts off the dryer after some time. There are also shoe dryers which instead use a heated grate which the shoes are placed on top of, and which do not blow air.
History
Several patents have been awarded for shoe dryers, with some of the oldest dating back to 1963.
Noise
Many fan-driven shoe dryers emit bothersome noise during use. In a test from 2019, the quietest model was measured at 45 decibels (dB), while the other models were measured at 50 and 57 dB. In 2022, another model was measured at 56 dB in "tornado" mode and 29 dB in "whisper" mode, and in 2023 another variant of the same dryer was measured at 72 dB. It was also commented that the higher pitch of these models' noise could contribute to it being perceived as more intense and bothersome.
Air flow
The volumetric flow rate, i.e. the amount of air that is moved, is an important measure of fan-based shoe dryers. For example, a model tested in 2023 was stated to have a volume flow of 12 cubic meters per hour (m³/h), which corresponds to 12,000 liters per hour, or just over 3 liters of air per second. Larger tubing and fan diameters are beneficial for increased volumetric flow, and also result in lower air speed and thus less noise.
Heated air
Shoe dryers with a fan often emit slightly lukewarm or warm air. In a test from 2019, one of the models was rated at a power of 350 watts, of which about 30 W was used as fan power and the remaining roughly 320 W for heating. Another model in the test had two temperature settings, allowing a choice between 40 °C and 55 °C air temperature. A model tested in 2023 had settings for blowing air at room temperature or heated to 37, 45 or 60 degrees Celsius.
Not all shoes can withstand heated drying. High heat can wear out shoes made of certain materials; for example, the use of a tumble dryer, heating cables or a heating cabinet can cause leather shoes to crack.
Fire hazard
Shoe dryers with heating can be a fire hazard if left on for too long, as with any heating appliance, and should therefore be used under supervision.
See also
Dehumidifier
Drying cabinet
Drying room
References
Home appliances | Shoe dryer | Physics,Technology | 594 |
1,983,545 | https://en.wikipedia.org/wiki/148%20%28number%29 | 148 (one hundred [and] forty-eight) is the natural number following 147 and before 149.
In mathematics
148 is the second number to be both a heptagonal number and a centered heptagonal number (the first is 1). It is the thirteenth member of the Mian–Chowla sequence, the lexicographically smallest sequence of distinct positive integers with distinct pairwise sums.
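This membership can be checked with the greedy construction that defines the sequence; a minimal Python sketch:

def mian_chowla(n):
    # Greedy construction: each term is the smallest positive integer that
    # keeps all pairwise sums a_i + a_j (i <= j) distinct.
    seq, sums = [], set()
    k = 1
    while len(seq) < n:
        new_sums = {k + a for a in seq} | {2 * k}
        if sums.isdisjoint(new_sums):
            seq.append(k)
            sums |= new_sums
        k += 1
    return seq

print(mian_chowla(13))  # [1, 2, 4, 8, 13, 21, 31, 45, 66, 81, 97, 123, 148]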
There are 148 perfect graphs with six vertices, and 148 ways of partitioning four people into subsets, ordering the subsets, and selecting a leader for each subset.
In other fields
Dunbar's number is a theoretical cognitive limit to the number of people with whom one can maintain stable interpersonal relationships. Dunbar predicted a "mean group size" of 148, but this is commonly rounded to 150.
References
Integers | 148 (number) | Mathematics | 165 |
576,855 | https://en.wikipedia.org/wiki/Binary%20decision%20diagram | In computer science, a binary decision diagram (BDD) or branching program is a data structure that is used to represent a Boolean function. On a more abstract level, BDDs can be considered as a compressed representation of sets or relations. Unlike other compressed representations, operations are performed directly on the compressed representation, i.e. without decompression.
Similar data structures include negation normal form (NNF), Zhegalkin polynomials, and propositional directed acyclic graphs (PDAG).
Definition
A Boolean function can be represented as a rooted, directed, acyclic graph, which consists of several (decision) nodes and two terminal nodes. The two terminal nodes are labeled 0 (FALSE) and 1 (TRUE). Each (decision) node is labeled by a Boolean variable and has two child nodes called low child and high child. The edge from node to a low (or high) child represents an assignment of the value FALSE (or TRUE, respectively) to variable . Such a BDD is called 'ordered' if different variables appear in the same order on all paths from the root. A BDD is said to be 'reduced' if the following two rules have been applied to its graph:
Merge any isomorphic subgraphs.
Eliminate any node whose two children are isomorphic.
In popular usage, the term BDD almost always refers to Reduced Ordered Binary Decision Diagram (ROBDD in the literature, used when the ordering and reduction aspects need to be emphasized). The advantage of an ROBDD is that it is canonical (unique up to isomorphism) for a particular function and variable order. This property makes it useful in functional equivalence checking and other operations like functional technology mapping.
A path from the root node to the 1-terminal represents a (possibly partial) variable assignment for which the represented Boolean function is true. When the path descends to the low (or high) child of a node, that node's variable is assigned 0 (respectively 1).
Example
The left figure below shows a binary decision tree (the reduction rules are not applied) and a truth table, each representing the function f(x1, x2, x3). In the tree on the left, the value of the function can be determined for a given variable assignment by following a path down the graph to a terminal. In the figures below, dotted lines represent edges to a low child, while solid lines represent edges to a high child. Therefore, to find f(0, 1, 1), begin at x1, traverse down the dotted line to x2 (since x1 has an assignment to 0), then down two solid lines (since x2 and x3 each have an assignment to one). This leads to the terminal 1, which is the value of f(0, 1, 1).
The binary decision tree of the left figure can be transformed into a binary decision diagram by maximally reducing it according to the two reduction rules. The resulting BDD is shown in the right figure.
Another notation for writing this Boolean function is .
Complemented edges
An ROBDD can be represented even more compactly, using complemented edges, also known as complement links. The resulting BDD is sometimes known as a typed BDD or signed BDD.
Complemented edges are formed by annotating low edges as complemented or not. If an edge is complemented, then it refers to the negation of the Boolean function that corresponds to the node that the edge points to (the Boolean function represented by the BDD with root that node). High edges are not complemented, in order to ensure that the resulting BDD representation is a canonical form. In this representation, BDDs have a single leaf node, for reasons explained below.
Two advantages of using complemented edges when representing BDDs are:
computing the negation of a BDD takes constant time
space usage (i.e., required memory) is reduced (by a factor at most 2)
However, Knuth argues otherwise:
Although such links are used by all the major BDD packages, they are hard to recommend because the computer programs become much more complicated. The memory saving is usually negligible, and never better than a factor of 2; furthermore, the author's experiments show little gain in running time.
A reference to a BDD in this representation is a (possibly complemented) "edge" that points to the root of the BDD. This is in contrast to a reference to a BDD in the representation without use of complemented edges, which is the root node of the BDD. The reason why a reference in this representation needs to be an edge is that for each Boolean function, the function and its negation are represented by an edge to the root of a BDD, and a complemented edge to the root of the same BDD. This is why negation takes constant time. It also explains why a single leaf node suffices: FALSE is represented by a complemented edge that points to the leaf node, and TRUE is represented by an ordinary edge (i.e., not complemented) that points to the leaf node.
For example, assume that a Boolean function is represented with a BDD represented using complemented edges. To find the value of the Boolean function for a given assignment of (Boolean) values to the variables, we start at the reference edge, which points to the BDD's root, and follow the path that is defined by the given variable values (following a low edge if the variable that labels a node equals FALSE, and following the high edge if the variable that labels a node equals TRUE), until we reach the leaf node. While following this path, we count how many complemented edges we have traversed. If when we reach the leaf node we have crossed an odd number of complemented edges, then the value of the Boolean function for the given variable assignment is FALSE, otherwise (if we have crossed an even number of complemented edges), then the value of the Boolean function for the given variable assignment is TRUE.
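The parity-counting procedure described above is short enough to sketch in code (assuming a made-up encoding in which an edge is a pair (complement_bit, node) and the single leaf is a sentinel value; this is not any package's actual representation):

```python
LEAF = "leaf"  # the single terminal node of a complement-edge BDD

def evaluate(edge, assignment):
    """Evaluate a complement-edge BDD; `assignment` maps variable -> bool."""
    comp, node = edge
    parity = comp                          # complement bits crossed so far (mod 2)
    while isinstance(node, tuple):
        var, low_edge, high_edge = node
        comp, node = high_edge if assignment[var] else low_edge
        parity ^= comp
    return not parity                      # even parity -> TRUE, odd -> FALSE

x = ("x", (True, LEAF), (False, LEAF))     # BDD for the function f(x) = x
print(evaluate((False, x), {"x": True}))   # True
print(evaluate((True, x), {"x": True}))    # False: a complemented reference is NOT x
```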
An example diagram of a BDD in this representation is shown on the right, and represents the same Boolean expression as shown in diagrams above, i.e., . Low edges are dashed, high edges solid, and complemented edges are signified by a circle at their source. The node with the @ symbol represents the reference to the BDD, i.e., the reference edge is the edge that starts from this node.
History
The basic idea from which the data structure was created is the Shannon expansion. A switching function is split into two sub-functions (cofactors) by assigning one variable (cf. if-then-else normal form). If such a sub-function is considered as a sub-tree, it can be represented by a binary decision tree. Binary decision diagrams (BDDs) were introduced by C. Y. Lee, and further studied and made known by Sheldon B. Akers and Raymond T. Boute. Independently of these authors, a BDD under the name "canonical bracket form" was realized by Yu. V. Mamrukov in a CAD for analysis of speed-independent circuits. The full potential for efficient algorithms based on the data structure was investigated by Randal Bryant at Carnegie Mellon University: his key extensions were to use a fixed variable ordering (for canonical representation) and shared sub-graphs (for compression). Applying these two concepts results in an efficient data structure and algorithms for the representation of sets and relations. By extending the sharing to several BDDs, i.e. one sub-graph is used by several BDDs, the data structure Shared Reduced Ordered Binary Decision Diagram is defined. The notion of a BDD is now generally used to refer to that particular data structure.
In his video lecture Fun With Binary Decision Diagrams (BDDs), Donald Knuth calls BDDs "one of the only really fundamental data structures that came out in the last twenty-five years" and mentions that Bryant's 1986 paper was for some time one of the most-cited papers in computer science.
Adnan Darwiche and his collaborators have shown that BDDs are one of several normal forms for Boolean functions, each induced by a different combination of requirements. Another important normal form identified by Darwiche is decomposable negation normal form or DNNF.
Applications
BDDs are extensively used in CAD software to synthesize circuits (logic synthesis) and in formal verification. There are several lesser known applications of BDD, including fault tree analysis, Bayesian reasoning, product configuration, and private information retrieval.
Every arbitrary BDD (even if it is not reduced or ordered) can be directly implemented in hardware by replacing each node with a 2-to-1 multiplexer; each multiplexer can be directly implemented by a 4-LUT in an FPGA. It is not so simple to convert from an arbitrary network of logic gates to a BDD (unlike the and-inverter graph).
BDDs have been applied in efficient Datalog interpreters.
Variable ordering
The size of the BDD is determined both by the function being represented and by the chosen ordering of the variables. There exist Boolean functions for which, depending upon the ordering of the variables, the number of nodes in the graph would be linear (in n) at best and exponential at worst (e.g., a ripple carry adder). Consider the Boolean function f(x1, ..., x2n) = x1x2 + x3x4 + ... + x2n−1x2n. Using the variable ordering x1 < x3 < ... < x2n−1 < x2 < x4 < ... < x2n, the BDD needs 2^(n+1) nodes to represent the function. Using the ordering x1 < x2 < x3 < x4 < ... < x2n−1 < x2n, the BDD consists of 2n + 2 nodes.
It is of crucial importance to care about variable ordering when applying this data structure in practice. The problem of finding the best variable ordering is NP-hard. For any constant c > 1 it is even NP-hard to compute a variable ordering resulting in an OBDD with a size that is at most c times larger than an optimal one. However, there exist efficient heuristics to tackle the problem.
There are functions for which the graph size is always exponential—independent of variable ordering. This holds e.g. for the multiplication function. In fact, the function computing the middle bit of the product of two -bit numbers does not have an OBDD smaller than vertices. (If the multiplication function had polynomial-size OBDDs, it would show that integer factorization is in P/poly, which is not known to be true.)
Researchers have suggested refinements on the BDD data structure giving way to a number of related graphs, such as BMD (binary moment diagrams), ZDD (zero-suppressed decision diagrams), FBDD (free binary decision diagrams), FDD (functional decision diagrams), PDD (parity decision diagrams), and MTBDDs (multiple terminal BDDs).
Logical operations on BDDs
Many logical operations on BDDs can be implemented by polynomial-time graph manipulation algorithms:
conjunction
disjunction
negation
However, repeating these operations several times, for example forming the conjunction or disjunction of a set of BDDs, may in the worst case result in an exponentially big BDD. This is because any of the preceding operations for two BDDs may result in a BDD with a size proportional to the product of the BDDs' sizes, and consequently for several BDDs the size may be exponential in the number of operations. Variable ordering needs to be considered afresh; what may be a good ordering for (some of) the set of BDDs may not be a good ordering for the result of the operation. Also, since constructing the BDD of a Boolean function solves the NP-complete Boolean satisfiability problem and the co-NP-complete tautology problem, constructing the BDD can take exponential time in the size of the Boolean formula even when the resulting BDD is small.
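A sketch of the standard "apply" recursion (Bryant's algorithm) shows where the product-of-sizes bound comes from: memoizing on pairs of nodes means each pair is expanded at most once. This builds on the tuple encoding and make_node sketch above, with integer variable indices giving the global order; it is illustrative, not a production implementation:

```python
def apply_op(op, f, g):
    """Combine two ROBDDs with a binary Boolean operator `op`.
    Memoization on node pairs bounds the work by |f| * |g|."""
    memo = {}

    def top_var(node):                     # terminals sort below all variables
        return node[0] if isinstance(node, tuple) else float('inf')

    def rec(f, g):
        if not isinstance(f, tuple) and not isinstance(g, tuple):
            return int(op(bool(f), bool(g)))          # both terminal
        if (f, g) in memo:
            return memo[f, g]
        v = min(top_var(f), top_var(g))               # split on the top variable
        f0, f1 = (f[1], f[2]) if top_var(f) == v else (f, f)
        g0, g1 = (g[1], g[2]) if top_var(g) == v else (g, g)
        memo[f, g] = make_node(v, rec(f0, g0), rec(f1, g1))
        return memo[f, g]

    return rec(f, g)

# Example: conjunction and disjunction of two BDDs f and g
# h_and = apply_op(lambda a, b: a and b, f, g)
# h_or  = apply_op(lambda a, b: a or b, f, g)
```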
Computing existential abstraction over multiple variables of reduced BDDs is NP-complete.
Model-counting, counting the number of satisfying assignments of a Boolean formula, can be done in polynomial time for BDDs. For general propositional formulas the problem is ♯P-complete and the best known algorithms require an exponential time in the worst case.
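The polynomial-time model count follows from a single bottom-up pass over the diagram; a sketch under the same assumed encoding (variables indexed 0..n−1):

```python
def count_sat(node, n):
    """Count satisfying assignments of a BDD over n variables (0..n-1)."""
    def level(u):
        return u[0] if isinstance(u, tuple) else n    # terminals sit at level n
    memo = {}
    def rec(u):
        if not isinstance(u, tuple):
            return int(u)
        if u in memo:
            return memo[u]
        lo, hi = u[1], u[2]
        # Scale each child count by 2^(variables skipped along that edge).
        memo[u] = (rec(lo) * 2 ** (level(lo) - level(u) - 1) +
                   rec(hi) * 2 ** (level(hi) - level(u) - 1))
        return memo[u]
    return rec(node) * 2 ** level(node)               # variables skipped above the root
```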
See also
Boolean satisfiability problem, the canonical NP-complete computational problem
L/poly, a complexity class that strictly contains the set of problems with polynomially sized BDDs
Model checking
Radix tree
Barrington's theorem
Hardware acceleration
Karnaugh map, a method of simplifying Boolean algebra expressions
Zero-suppressed decision diagram
Algebraic decision diagram, a generalization of BDDs from two-element to arbitrary finite sets
Sentential Decision Diagram, a generalization of OBDDs
Influence diagram
References
Further reading
Complete textbook available for download.
External links
Fun With Binary Decision Diagrams (BDDs), lecture by Donald Knuth
List of BDD software libraries for several programming languages.
Diagrams
Graph data structures
Model checking
Boolean algebra
Knowledge compilation | Binary decision diagram | Mathematics | 2,662 |
668,130 | https://en.wikipedia.org/wiki/Thermochemical%20equation | In thermochemistry, a thermochemical equation is a balanced chemical equation that represents the energy changes from a system to its surroundings. One such equation involves the enthalpy change, which is denoted with ΔH. In variable form, a thermochemical equation would appear similar to the following:

A + B → C, ΔH = ±n

A, B, and C are the usual agents of a chemical equation with coefficients, and ±n is a positive or negative numerical value, which generally has units of kJ/mol. Another equation may include an explicit energy term; its position determines whether the reaction is considered endothermic (energy-absorbing) or exothermic (energy-releasing).
Understanding aspects of thermochemical equations
Enthalpy (H) is the transfer of energy in a reaction (for chemical reactions, it is in the form of heat) and ΔH is the change in enthalpy. ΔH is a state function, meaning that it is independent of processes occurring between initial and final states. In other words, it does not matter which steps are taken to get from initial reactants to final products, as ΔH will always be the same. ΔHrxn, or the change in enthalpy of a reaction, has the same value of ΔH as in a thermochemical equation; however, ΔHrxn is measured in units of kJ/mol, meaning that it is the enthalpy change per moles of a particular substance in an equation. Values of ΔHrxn are determined experimentally under standard conditions of 1 atm and 25 °C (298.15 K).
As discussed earlier, ΔH can have a positive or negative sign. If ΔH has a positive sign, the system uses heat and is endothermic; if ΔH is negative, then heat is produced and the system is exothermic.
Since enthalpy is a state function, the ΔH given for a particular reaction is only true for that exact reaction. Physical states of reactants and products matter, as do molar concentrations.
Since ΔH is dependent on the physical states and molar concentrations in reactions, thermochemical equations must be stoichiometrically correct. If one agent of an equation is changed through multiplication, then all agents must be proportionally changed, including ΔH.
The multiplicative property of thermochemical equations is mainly due to the first law of thermodynamics, which says that energy can neither be created nor destroyed; this concept is commonly known as the conservation of energy. It holds true on a physical or molecular scale.
Manipulating thermochemical equations
Coefficient multiplication
Thermochemical equations can be changed, as mentioned above, by multiplying by any numerical coefficient. All agents must be multiplied, including ΔH. Using the thermochemical equation in variable form as above, one gets the following example.
Suppose that an agent of the equation needs to be multiplied by two in order for the thermochemical equation to be used. All the agents in the reaction must then be multiplied by the same coefficient, like so:

2A + 2B → 2C, ΔH = ±2n
This is again considered to be logical when the first law of thermodynamics is considered. Twice as much product is produced, so twice as much heat is removed or given off. The division of coefficients functions in the same way.
Hess's law: Addition of thermochemical equations
Hess's law states that the sum of the energy changes of all thermochemical equations included in an overall reaction is equal to the overall energy change. Since is a state function and is not dependent on how reactants become products as a result, steps (in the form of several thermochemical equations) can be used to find the of the overall reaction. For instance:
Reaction 1: C(graphite, s) + O2(g) → CO2(g)
This reaction is the result of two steps (a reaction sequence):
Step 1: C(graphite, s) + 1/2 O2(g) → CO(g)
Step 2: CO(g) + 1/2 O2(g) → CO2(g)
Adding these two reactions together results in Reaction 1, which allows its ΔH to be found; first one verifies that the agents appearing in the reaction sequence cancel where required. The reaction steps are then added together. In the following sum, CO appears on both sides and cancels out, since it does not occur in Reaction 1:

C(graphite, s) + 1/2 O2(g) + 1/2 O2(g) → CO2(g)

which simplifies to

C(graphite, s) + O2(g) → CO2(g), i.e., Reaction 1

To solve for ΔH of Reaction 1, the ΔH values of the two steps in the reaction sequence are added together:

ΔH(Reaction 1) = ΔH(Step 1) + ΔH(Step 2)
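Numerically, the bookkeeping is simple addition. In the sketch below the step enthalpies are standard textbook values (assumed here for illustration, not quoted from this article):

```python
dH_step1 = -110.5   # C(graphite, s) + 1/2 O2(g) -> CO(g), kJ/mol (textbook value)
dH_step2 = -283.0   # CO(g) + 1/2 O2(g) -> CO2(g), kJ/mol (textbook value)

dH_reaction1 = dH_step1 + dH_step2   # Hess's law: step enthalpies add
print(dH_reaction1)                  # -393.5 kJ/mol for C(graphite) + O2 -> CO2
```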
Another example involving thermochemical equations is that when methane gas is combusted, heat is released, making the reaction exothermic. In the process, 890.4 kJ of heat is released per mole of reactants, so the heat is written as a product of the reaction.
Other notes
If reactions have to be reversed for their products to be equal, the sign of ΔH must also be reversed.
If an agent has to be multiplied for it to equal another agent, all other agents and ΔH must also be multiplied by the same coefficient.
Generally, ΔH values given in tables are under 1 atm and 25 °C (298.15 K), otherwise known as Standard Lab Conditions.
Locations of values of ΔH
Values of ΔH have been experimentally determined and are available in table form. Most general chemistry textbooks have appendixes including common ΔH values. There are several online tables available. The Active Thermochemical Tables (ATcT) software provides more information online.
See also
Chemistry
Thermochemistry
Chemical reaction
Enthalpy
References
Atkins, Peter and Loretta Jones. 2005. Chemical Principles, the Quest for Insight (3rd edition). W. H. Freeman and Co., New York, NY.
External links
General chemistry information index: http://chemistry.about.com/library/blazlist4.htm
Further step by step help on Hess's law: http://members.aol.com/profchm/hess.html
Thermochemistry | Thermochemical equation | Chemistry | 1,305 |
38,568,822 | https://en.wikipedia.org/wiki/Long-term%20impact%20of%20alcohol%20on%20the%20brain | The long-term impact of alcohol on the brain has become a growing area of research focus. While researchers have found that moderate alcohol consumption in older adults is associated with better cognition and well-being than abstinence, excessive alcohol consumption is associated with widespread and significant brain lesions. Other data – including investigated brain-scans of 36,678 UK Biobank participants – suggest that even "light" or "moderate" consumption of alcohol by itself harms the brain, such as by reducing brain grey matter volume. This may imply that seeking alternatives to alcohol, and generally aiming for the lowest possible consumption, is usually the advisable approach.
Despite these physiological effects, occasional moderate consumption may in some cases have ancillary benefits for the brain, owing to its social and psychological benefits, when compared with alcohol abstinence and sobriety.
While the extent of causation is difficult to prove, alcohol intake – even at levels often considered to be low – "is negatively associated with global brain volume measures, regional gray matter volumes, and white matter microstructure" and these associations become stronger as alcohol intake increases.
The effects can manifest much later—mid-life Alcohol Use Disorder has been found to correlate with increased risk of severe cognitive and memory deficits in later life. Alcohol related brain damage is not only due to the direct toxic effects of alcohol; alcohol withdrawal, nutritional deficiency, electrolyte disturbances, and liver damage are also believed to contribute to alcohol-related brain damage.
Adolescent brain development
Consuming large amounts of alcohol over a period of time can impair normal brain development in humans. Deficits in retrieval of verbal and nonverbal information and in visuospatial functioning were evident in youths with histories of heavy drinking during early and middle adolescence.
During adolescence critical stages of neurodevelopment occur, including remodeling and functional changes in synaptic plasticity and neuronal connectivity in different brain regions. These changes may make adolescents especially susceptible to the harmful effects of alcohol. Compared to adults, adolescents exposed to alcohol are more likely to exhibit cognitive deficits (including learning and memory dysfunction). Some of these cognitive effects, such as learning impairments, may persist into adulthood.
Mechanisms of action
Neuroinflammation
Ethanol can trigger the activation of astroglial cells which can produce a proinflammatory response in the brain. Ethanol interacts with the TLR4 and IL-1RI receptors on these cells to activate intracellular signal transduction pathways. Specifically, ethanol induces the phosphorylation of IL-1R-associated kinase (IRAK), ERK1/2, stress-activated protein kinase (SAPK)/JNK, and p38 mitogen-activated protein kinase (p38 MAPK). Activation of the IRAK/MAPK pathway leads to the stimulation of the transcription factors NF-kappaB and AP-1. These transcription factors cause the upregulation of inducible nitric oxide synthase (iNOS) and cyclooxygenase-2 (COX-2) expression. The upregulation of these inflammatory mediators by ethanol is also associated with an increase in caspase 3 activity and a corresponding increase in cell apoptosis. The exact mechanism by which various concentrations of ethanol either activates or inhibits TLR4/IL-1RI signaling is not currently known, though it may involve alterations in lipid raft clustering or cell adhesion complexes and actin cytoskeleton organization.
Changes in dopaminergic and glutamatergic signaling pathways
Intermittent ethanol treatment causes a decrease in expression of the dopamine receptor type 2 (D2R) and a decrease in phosphorylation of the 2B subunit of the NMDA receptor (NMDAR2B) in the prefrontal cortex, hippocampus, and nucleus accumbens, and, for D2R only, in the striatum. It also causes changes in the acetylation of histones H3 and H4 in the prefrontal cortex, nucleus accumbens, and striatum, suggesting chromatin remodeling changes which may mediate long-term alterations. Additionally, adolescent rats pre-exposed to ethanol have higher basal levels of dopamine in the nucleus accumbens, along with a prolonged dopamine response in this area in response to a challenge dose of ethanol. Together, these results suggest that alcohol exposure during adolescence can sensitize the mesolimbic and mesocortical dopamine pathways to cause changes in dopaminergic and glutamatergic signaling, which may affect the remodeling and functions of the adolescent brain. These changes are significant as alcohol’s effect on NMDARs could contribute to learning and memory dysfunction (see Effects of alcohol on memory).
Inhibition of hippocampal neurogenesis
Excessive alcohol intake (binge drinking) causes a decrease in hippocampal neurogenesis, via decreases in neural stem cell proliferation and newborn cell survival. Alcohol decreases the number of cells in S-phase of the cell cycle, and may arrest cells in the G1 phase, thus inhibiting their proliferation. Ethanol has different effects on different types of actively dividing hippocampal progenitors during their initial phases of neuronal development. Chronic alcohol exposure decreases the number of proliferating cells that are radial glia-like, preneuronal, and intermediate types, while not affecting early neuronal type cells; suggesting ethanol treatment alters the precursor cell pool. Furthermore, there is a greater decrease in differentiation and immature neurons than there is in proliferating progenitors, suggesting that the abnormal decrease in the percentage of actively dividing preneuronal progenitors results in a greater reduction in the maturation and survival of postmitotic cells.
Additionally, alcohol exposure increased several markers of cell death. In these studies neural degeneration seems to be mediated by non-apoptotic pathways. One of the proposed mechanisms for alcohol’s neurotoxicity is the production of nitric oxide (NO), yet other studies have found alcohol-induced NO production to lead to apoptosis (see Neuroinflammation section).
Transient versus stable alterations
Many negative physiologic consequences of alcoholism are reversible during abstinence. As an example, long-term chronic alcoholics suffer a variety of cognitive deficiencies. However, multiyear abstinence resolves most neurocognitive deficits, except for some lingering deficits in spatial processing. Nevertheless, there are some frequent long-term consequences that are not reversible during abstinence. Alcohol craving (a compulsive need to consume alcohol) is frequently present long-term among alcoholics. Among 461 individuals who sought help for alcohol problems, follow-up was provided for up to 16 years. By 16 years, 54% of those who tried to remain abstinent without professional help had relapsed, and 39% of those who tried to remain abstinent with help had relapsed.
Alcohol consumption can also substantially impair exercise, which is both neurobiologically beneficial and neurobiologically demanding.
Long-term, stable consequences of chronic hazardous alcohol use are thought to be due to stable alterations of gene expression resulting from epigenetic changes within particular regions of the brain. For example, in rats exposed to alcohol for up to 5 days, there was an increase in histone 3 lysine 9 acetylation in the pronociceptin promoter in the brain amygdala complex. This acetylation is an activating mark for pronociceptin. The nociceptin/nociceptin opioid receptor system is involved in the reinforcing or conditioning effects of alcohol.
References
Brain, long | Long-term impact of alcohol on the brain | Biology | 1,600 |
212,453 | https://en.wikipedia.org/wiki/Mis%C3%A8re | Misère (French for "destitution"), misere, bettel, betl, or (German for "beggar"; equivalent terms in other languages include , and ) is a bid in various card games, and the player who bids misère undertakes to win no tricks or as few as possible, usually at no trump, in the round to be played. This does not allow sufficient variety to constitute a game in its own right, but it is the basis of such trick-avoidance games as Hearts, and provides an optional contract for most games involving an auction. The term or category may also be used for some card game of its own with the same aim, like Black Peter.
A misère bid usually indicates an extremely poor hand, hence the name. An open or lay down misère, or misère ouvert is a 500 bid where the player is so sure of losing every trick that they undertake to do so with their cards placed face-up on the table. Consequently, 'lay down misère' is Australian gambling slang for a predicted easy victory.
In Skat, the bidding can result in a null game, where the bidder wins only if they lose every trick. (Conversely, the opponents win by forcing the bidder to take a trick.) In Swedish Whist, by contrast, a null game is one in which both teams try to take the fewest tricks. This variation is known as ramsch in Skat.
In Spades, bidding for no tricks is known as bidding nil, which if successful gives the bidder a bonus.
The word is first recorded in this sense in the rules for the game "Boston" in the late 18th century. A misère cannot be bid in six-hand 500.
Misère game
A misère game or bettel game is a game that is played according to its conventional rules, except that it is "played to lose"; that is, the winner is the one who loses according to the normal game rules. Or, if the game is for more than two players, the one who wins according to the normal game rules loses. Such games generally have rulesets that normally encourage players to win; for example, most variations of checkers (draughts) require players to make a capture move if it is available; thus, in the misère variation, players can force their opponents to take numerous checkers through intentionally "poor" play.
In combinatorial game theory, a misère game is one played according to the "misère play condition"; that is, a player unable to move wins. (This is in contrast to the "normal play condition" in which a player unable to move loses.) Examples of games that use the misère play condition include Sylver coinage.
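A classic concrete instance (not discussed in this article, included for illustration) is misère Nim, where the normal-play strategy needs only a small correction once every heap has at most one object:

```python
from functools import reduce
from operator import xor

def misere_nim_mover_wins(heaps):
    """True if the player to move wins misere Nim (taking the last object loses).
    Classical result: when every heap has size <= 1, the mover wins iff the
    number of one-object heaps is even; otherwise the winning positions are
    exactly those with nonzero Nim-sum, as in normal play."""
    if all(h <= 1 for h in heaps):
        return sum(heaps) % 2 == 0
    return reduce(xor, heaps, 0) != 0

print(misere_nim_mover_wins([1, 1, 1]))  # False: forced to take the last object
print(misere_nim_mover_wins([2, 3, 5]))  # True: nonzero Nim-sum
```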
See also
Vole - the opposite of a misère
Avoider-Enforcer game
Losing chess
References
Board game terminology
Card game terminology
Combinatorial game theory
Misere | Misère | Mathematics | 599 |
54,150,419 | https://en.wikipedia.org/wiki/Multivariate%20Laplace%20distribution | In the mathematical theory of probability, multivariate Laplace distributions are extensions of the Laplace distribution and the asymmetric Laplace distribution to multiple variables. The marginal distributions of symmetric multivariate Laplace distribution variables are Laplace distributions. The marginal distributions of asymmetric multivariate Laplace distribution variables are asymmetric Laplace distributions.
Symmetric multivariate Laplace distribution
A typical characterization of the symmetric multivariate Laplace distribution has the characteristic function:

φ(t; μ, Σ) = exp(iμ′t) / (1 + ½ t′Σt)

where μ is the vector of means for each variable and Σ is the covariance matrix.
Unlike the multivariate normal distribution, even if the covariance matrix has zero covariance and correlation (all off-diagonal entries are zero), the variables are not independent. The symmetric multivariate Laplace distribution is elliptical.
Probability density function
If μ = 0, the probability density function (pdf) for a k-dimensional multivariate Laplace distribution becomes:

f(x) = 2(2π)^(−k/2) |Σ|^(−1/2) (x′Σ⁻¹x/2)^(v/2) K_v(√(2x′Σ⁻¹x))

where:

v = (2 − k)/2 and K_v is the modified Bessel function of the second kind.
In the correlated bivariate case, i.e., k = 2, with μ = 0, the pdf reduces to:
where:
σ1 and σ2 are the standard deviations of x1 and x2, respectively, and ρ is the correlation coefficient of x1 and x2.
For the uncorrelated bivariate Laplace case, that is k = 2, μ = 0, and ρ = 0, the pdf becomes:
Asymmetric multivariate Laplace distribution
A typical characterization of the asymmetric multivariate Laplace distribution has the characteristic function:

φ(t; μ, Σ) = 1 / (1 + ½ t′Σt − iμ′t)

As with the symmetric multivariate Laplace distribution, the asymmetric multivariate Laplace distribution has mean μ, but the covariance becomes Σ + μμ′. The asymmetric multivariate Laplace distribution is not elliptical unless μ = 0, in which case the distribution reduces to the symmetric multivariate Laplace distribution with mean μ = 0.
The probability density function (pdf) for a k-dimensional asymmetric multivariate Laplace distribution is:
where:
v = (2 − k)/2, and K_v is the modified Bessel function of the second kind.
The asymmetric Laplace distribution, including the special case of μ = 0, is an example of a geometric stable distribution. It represents the limiting distribution for a sum of independent, identically distributed random variables with finite variance and covariance where the number of elements to be summed is itself an independent random variable distributed according to a geometric distribution. Such geometric sums can arise in practical applications within biology, economics and insurance. The distribution may also be applicable in broader situations to model multivariate data with heavier tails than a normal distribution but finite moments.
The relationship between the exponential distribution and the Laplace distribution allows for a simple method for simulating bivariate asymmetric Laplace variables (including for the case of μ = 0). Simulate a bivariate normal random variable vector X from a distribution with mean 0 and covariance matrix Σ. Independently simulate an exponential random variable W from an Exp(1) distribution. Then Y = μW + √W·X will be distributed (asymmetric) bivariate Laplace with mean μ and covariance matrix Σ + μμ′.
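The recipe translates directly into a few lines of NumPy; the parameter values below are assumed purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
m = np.array([0.5, -1.0])                  # mean vector (m = 0 gives the symmetric case)
S = np.array([[1.0, 0.3],
              [0.3, 2.0]])                 # covariance matrix of the normal part

n = 100_000
X = rng.multivariate_normal(np.zeros(2), S, size=n)   # X ~ N(0, S)
W = rng.exponential(1.0, size=n)                      # W ~ Exp(1)
Y = m * W[:, None] + np.sqrt(W)[:, None] * X          # asymmetric bivariate Laplace

print(Y.mean(axis=0))   # ~ m
print(np.cov(Y.T))      # ~ S + m m^T, the covariance given above
```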
References
Probability distributions
Multivariate continuous distributions
Geometric stable distributions | Multivariate Laplace distribution | Mathematics | 600 |
73,372,444 | https://en.wikipedia.org/wiki/Russula%20badia | Russula badia, also known as the burning brittlegill, is a species of mushroom in the genus Russula.
References
External links
badia
Fungi of Europe
Fungi described in 1881
Fungus species | Russula badia | Biology | 40 |
3,699,383 | https://en.wikipedia.org/wiki/X%3AA%20ratio | The X:A ratio is the ratio between the number of X chromosomes and the number of sets of autosomes in an organism. This ratio is used primarily for determining the sex of some species, such as Drosophila flies and the C. elegans nematode. The first use of this ratio for sex determination is ascribed to Victor M. Nigon.
Generally, a 1:1 ratio results in a female and a 1:2 ratio results in a male. When calculating the ratio, Y chromosomes are ignored. For example, for a diploid Drosophila that has XX, the ratio is 1:1 (2 Xs to 2 sets of autosomes, since it is a diploid). For a diploid Drosophila that has XY, the ratio is 1:2 (1 X to 2 sets of autosomes, since it is diploid).
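The arithmetic can be spelled out in a few lines (an illustrative toy, not a biological model):

```python
from fractions import Fraction

def x_to_a_ratio(n_x, n_autosome_sets):
    """X:A ratio; Y chromosomes are ignored."""
    return Fraction(n_x, n_autosome_sets)

for karyotype, (n_x, sets_a) in {"XX diploid": (2, 2), "XY diploid": (1, 2)}.items():
    r = x_to_a_ratio(n_x, sets_a)
    sex = "female" if r == 1 else "male" if r == Fraction(1, 2) else "other"
    print(karyotype, r, sex)   # XX diploid 1 female; XY diploid 1/2 male
```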
In Drosophila, the X:A ratio determines the dose of X-linked signal elements, which enhance the synthesis of the Sxl protein; Sxl in turn activates the female-specific developmental pathway.
See also
Notes
References
Genetics
X | X:A ratio | Biology | 224 |
29,192 | https://en.wikipedia.org/wiki/Space%20elevator | A space elevator, also referred to as a space bridge, star ladder, and orbital lift, is a proposed type of planet-to-space transportation system, often depicted in science fiction. The main component would be a cable (also called a tether) anchored to the surface and extending into space. An Earth-based space elevator would consist of a cable with one end attached to the surface near the equator and the other end attached to a counterweight in space beyond geostationary orbit (35,786 km altitude). The competing forces of gravity, which is stronger at the lower end, and the upward centrifugal pseudo-force (it is actually the inertia of the counterweight that creates the tension on the space side), which is stronger at the upper end, would result in the cable being held up, under tension, and stationary over a single position on Earth. With the tether deployed, climbers (crawlers) could repeatedly climb up and down the tether by mechanical means, releasing their cargo to and from orbit. The design would permit vehicles to travel directly between a planetary surface, such as the Earth's, and orbit, without the use of large rockets.
History
Early concept
The idea of the space elevator appears to have developed independently in different times and places. The earliest models originated with two Russian scientists in the late nineteenth century. In his 1895 collection Dreams of Earth and Sky, Konstantin Tsiolkovsky envisioned a massive sky ladder to reach the stars as a way to overcome gravity. Decades later, in 1960, Yuri Artsutanov independently developed the concept of a "Cosmic Railway", a space elevator tethered from an orbiting satellite to an anchor on the equator, aiming to provide a safer and more efficient alternative to rockets. In 1966, Isaacs and his colleagues introduced the concept of the 'Sky-Hook', proposing a satellite in geostationary orbit with a cable extending to Earth.
Innovations and designs
The space elevator concept reached America in 1975 when Jerome Pearson began researching the idea, inspired by Arthur C. Clarke's 1969 speech before Congress. After working as an engineer for NASA and the Air Force Research Laboratory, he developed a design for an "Orbital Tower", intended to harness Earth's rotational energy to transport supplies into low Earth orbit. In his design, the cable would be thickest at geostationary orbit, where tension is greatest, and narrowest at the tips to minimize weight per unit area. He proposed extending a counterweight to 144,000 kilometers (89,000 miles), because without a large counterweight, the upper cable would need to be longer due to the way gravitational and centrifugal forces change with distance from Earth. His analysis included the Moon's gravity, wind, and moving payloads. Building the elevator would have required thousands of Space Shuttle trips, though material could be transported once a minimum strength strand reached the ground or be manufactured in space from asteroidal or lunar ore. Pearson's findings, published in Acta Astronautica, caught Clarke's attention and led to technical consultations for Clarke's science fiction novel The Fountains of Paradise (1979), which features a space elevator.
The first gathering of multiple experts who wanted to investigate this alternative to space flight took place at the 1999 NASA conference 'Advanced Space Infrastructure Workshop on Geostationary Orbiting Tether Space Elevator Concepts' in Huntsville, Alabama. D.V. Smitherman, Jr., published the findings in August 2000 under the title Space Elevators: An Advanced Earth-Space Infrastructure for the New Millennium, concluding that the space elevator could not be built for at least another 50 years due to concerns about the cable's material, deployment, and upkeep.
Dr. B.C. Edwards suggested that a long, paper-thin ribbon utilizing a carbon nanotube composite material could solve the tether issue, owing to its high tensile strength and low weight. The proposed wide, thin, ribbon-like cross-section, instead of the earlier circular cross-section concepts, would increase survivability against meteoroid impacts. With support from the NASA Institute for Advanced Concepts (NIAC), his work involved more than 20 institutions and 50 participants. The Space Elevator NIAC Phase II Final Report, in combination with the book The Space Elevator: A Revolutionary Earth-to-Space Transportation System (Edwards and Westling, 2003), summarized this effort to design a space elevator, including the deployment scenario, climber design, power delivery system, orbital debris avoidance, anchor system, surviving atomic oxygen, avoiding lightning and hurricanes by locating the anchor in the western equatorial Pacific, construction costs, construction schedule, and environmental hazards. Additionally, he researched the structural integrity and load-bearing capabilities of space elevator cables, emphasizing their need for high tensile strength and resilience. His space elevator concept never reached NIAC's third phase, which he attributed to submitting his final proposal during the week of the Space Shuttle Columbia disaster.
21st century advancements
To speed space elevator development, proponents have organized several competitions, similar to the Ansari X Prize, for relevant technologies. Among them are Elevator:2010, which organized annual competitions for climbers, ribbons and power-beaming systems from 2005 to 2009, the Robogames Space Elevator Ribbon Climbing competition, as well as NASA's Centennial Challenges program, which, in March 2005, announced a partnership with the Spaceward Foundation (the operator of Elevator:2010), raising the total value of prizes to US$400,000.
The first European Space Elevator Challenge (EuSEC) to establish a climber structure took place in August 2011.
In 2005, "the LiftPort Group of space elevator companies announced that it will be building a carbon nanotube manufacturing plant in Millville, New Jersey, to supply various glass, plastic and metal companies with these strong materials. Although LiftPort hopes to eventually use carbon nanotubes in the construction of a space elevator, this move will allow it to make money in the short term and conduct research and development into new production methods." Their announced goal was a space elevator launch in 2010. On 13 February 2006, the LiftPort Group announced that, earlier the same month, they had tested a mile of "space-elevator tether" made of carbon-fiber composite strings and fiberglass tape measuring wide and (approx. 13 sheets of paper) thick, lifted with balloons. In April 2019, Liftport CEO Michael Laine admitted little progress has been made on the company's lofty space elevator ambitions, even after receiving more than $200,000 in seed funding. The carbon nanotube manufacturing facility that Liftport announced in 2005 was never built.
In 2007, Elevator:2010 held the 2007 Space Elevator games, which featured US$500,000 awards for each of the two competitions ($1,000,000 total), as well as an additional $4,000,000 to be awarded over the next five years for space elevator related technologies. No teams won the competition, but a team from MIT entered the first 2-gram (0.07 oz), 100-percent carbon nanotube entry into the competition. Japan held an international conference in November 2008 to draw up a timetable for building the elevator.
In 2012, the Obayashi Corporation announced that it could build a space elevator by 2050 using carbon nanotube technology. The design's passenger climber would be able to reach the level of geosynchronous equatorial orbit (GEO) after an 8-day trip. Further details were published in 2016.
In 2013, the International Academy of Astronautics published a technological feasibility assessment which concluded that the critical capability improvement needed was the tether material, which was projected to achieve the necessary specific strength within 20 years. The four-year long study looked into many facets of space elevator development including missions, development schedules, financial investments, revenue flow, and benefits. It was reported that it would be possible to operationally survive smaller impacts and avoid larger impacts, with meteors and space debris, and that the estimated cost of lifting a kilogram of payload to GEO and beyond would be $500.
In 2014, Google X's Rapid Evaluation R&D team began the design of a space elevator, eventually finding that no one had yet manufactured a perfectly formed carbon nanotube strand longer than a meter. They thus put the project in "deep freeze", while continuing to keep tabs on any advances in the carbon nanotube field.
In 2018, researchers at Japan's Shizuoka University launched STARS-Me, two CubeSats connected by a tether, which a mini-elevator will travel on. The experiment was launched as a test bed for a larger structure.
In 2019, the International Academy of Astronautics published "Road to the Space Elevator Era", a study report summarizing the assessment of the space elevator as of summer 2018. The essence is that a broad group of space professionals gathered and assessed the status of the space elevator development, each contributing their expertise and coming to similar conclusions: (a) Earth Space Elevators seem feasible, reinforcing the IAA 2013 study conclusion (b) Space Elevator development initiation is nearer than most think. This last conclusion is based on a potential process for manufacturing macro-scale single crystal graphene with higher specific strength than carbon nanotubes.
Materials
A significant difficulty with making a space elevator for the Earth is strength of materials. Since the structure must hold up its own weight in addition to the payload it may carry, the strength to weight ratio, or Specific strength, of the material it is made of must be extremely high.
Since 1959, most ideas for space elevators have focused on purely tensile structures, with the weight of the system held up from above by centrifugal forces. In the tensile concepts, a space tether reaches from a large mass (the counterweight) beyond geostationary orbit to the ground. This structure is held in tension between Earth and the counterweight like an upside-down plumb bob. The cable thickness is tapered based on tension; it has its maximum at a geostationary orbit and the minimum on the ground.
The concept is applicable to other planets and celestial bodies. For locations in the Solar System with weaker gravity than Earth's (such as the Moon or Mars), the strength-to-density requirements for tether materials are not as problematic. Currently available materials (such as Kevlar) are strong and light enough that they could be practical as the tether material for elevators there.
Available materials are not strong and light enough to make an Earth space elevator practical. Some sources expect that future advances in carbon nanotubes (CNTs) could lead to a practical design. Other sources believe that CNTs will never be strong enough. Possible future alternatives include boron nitride nanotubes, diamond nanothreads and macro-scale single crystal graphene.
In fiction
In 1979, space elevators were introduced to a broader audience with the simultaneous publication of Arthur C. Clarke's novel, The Fountains of Paradise, in which engineers construct a space elevator on top of a mountain peak in the fictional island country of "Taprobane" (loosely based on Sri Lanka, albeit moved south to the Equator), and Charles Sheffield's first novel, The Web Between the Worlds, also featuring the building of a space elevator. Three years later, in Robert A. Heinlein's 1982 novel Friday, the principal character mentions a disaster at the “Quito Sky Hook” and makes use of the "Nairobi Beanstalk" in the course of her travels. In Kim Stanley Robinson's 1993 novel Red Mars, colonists build a space elevator on Mars that allows both for more colonists to arrive and also for natural resources mined there to be able to leave for Earth. Larry Niven's book Rainbow Mars describes a space elevator built on Mars. In David Gerrold's 2000 novel, Jumping Off The Planet, a family excursion up the Ecuador "beanstalk" is actually a child-custody kidnapping. Gerrold's book also examines some of the industrial applications of a mature elevator technology. The concept of a space elevator, called the Beanstalk, is also depicted in John Scalzi's 2005 novel Old Man's War. In a biological version, Joan Slonczewski's 2011 novel The Highest Frontier depicts a college student ascending a space elevator constructed of self-healing cables of anthrax bacilli. The engineered bacteria can regrow the cables when severed by space debris.
Physics
Apparent gravitational field
An Earth space elevator cable rotates along with the rotation of the Earth. Therefore, the cable, and objects attached to it, would experience upward centrifugal force in the direction opposing the downward gravitational force. The higher up the cable the object is located, the less the gravitational pull of the Earth, and the stronger the upward centrifugal force due to the rotation, so that more centrifugal force opposes less gravity. The centrifugal force and the gravity are balanced at geosynchronous equatorial orbit (GEO). Above GEO, the centrifugal force is stronger than gravity, causing objects attached to the cable there to pull upward on it. Because the counterweight, above GEO, is rotating about the Earth faster than the natural orbital speed for that altitude, it exerts a centrifugal pull on the cable and thus holds the whole system aloft.
The net force for objects attached to the cable is called the apparent gravitational field. The apparent gravitational field for attached objects is the (downward) gravity minus the (upward) centrifugal force. The apparent gravity experienced by an object on the cable is zero at GEO, downward below GEO, and upward above GEO.
The apparent gravitational field can be represented this way:

g = −GM/r² + ω²r

where g is the acceleration along the radial direction (positive upward), G is the gravitational constant, M is the mass of the Earth, ω is the Earth's rotation rate, and r is the distance from the Earth's center.

At some point up the cable, the two terms (downward gravity and upward centrifugal force) are equal and opposite. Objects fixed to the cable at that point put no weight on the cable. This altitude (r1) depends on the mass of the planet and its rotation rate. Setting actual gravity equal to centrifugal acceleration gives:

r1 = (GM/ω²)^(1/3) ≈ 42,164 km from the Earth's center

This is 35,786 km above Earth's surface, the altitude of geostationary orbit.
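The balance point is easy to check numerically; the physical constants below are standard values, assumed here rather than quoted from the article:

```python
GM = 3.986004418e14    # Earth's gravitational parameter, m^3/s^2
omega = 7.2921159e-5   # Earth's sidereal rotation rate, rad/s
R_earth = 6.378137e6   # Earth's equatorial radius, m

r1 = (GM / omega**2) ** (1 / 3)        # radius where gravity balances rotation
print((r1 - R_earth) / 1e3)            # ~35786 km above the surface: GEO altitude
```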
On the cable below geostationary orbit, downward gravity would be greater than the upward centrifugal force, so the apparent gravity would pull objects attached to the cable downward. Any object released from the cable below that level would initially accelerate downward along the cable. Then gradually it would deflect eastward from the cable. On the cable above the level of stationary orbit, upward centrifugal force would be greater than downward gravity, so the apparent gravity would pull objects attached to the cable upward. Any object released from the cable above the geosynchronous level would initially accelerate upward along the cable. Then gradually it would deflect westward from the cable.
Cable section
Historically, the main technical problem has been considered the ability of the cable to hold up, with tension, the weight of itself below any given point. The greatest tension on a space elevator cable is at the point of geostationary orbit, above the Earth's equator. This means that the cable material, combined with its design, must be strong enough to hold up its own weight from the surface up to geostationary altitude, 35,786 km. A cable which is thicker in cross section area at that height than at the surface could better hold up its own weight over a longer length. How the cross section area tapers from the maximum at geostationary altitude to the minimum at the surface is therefore an important design factor for a space elevator cable.
To maximize the usable excess strength for a given amount of cable material, the cable's cross section area would need to be designed for the most part in such a way that the stress (i.e., the tension per unit of cross sectional area) is constant along the length of the cable. The constant-stress criterion is a starting point in the design of the cable cross section area as it changes with altitude. Other factors considered in more detailed designs include thickening at altitudes where more space junk is present, consideration of the point stresses imposed by climbers, and the use of varied materials. To account for these and other factors, modern detailed designs seek to achieve the largest safety margin possible, with as little variation over altitude and time as possible. In simple starting-point designs, that equates to constant-stress.
For a constant-stress cable with no safety margin, the cross-section area as a function of distance from Earth's center is given by the following equation:

A(r) = A_s exp[(ρ/T)(GM(1/R_s − 1/r) − ω²(r² − R_s²)/2)]

where A_s is the cross-section area at the Earth's surface, ρ is the density of the cable material, T is the design stress (tension per unit area), R_s is the Earth's radius, G is the gravitational constant, M is the Earth's mass, and ω is the Earth's rotation rate.

Safety margin can be accounted for by dividing T by the desired safety factor.
Cable materials
Using the above formula, the ratio between the cross-section at geostationary orbit and the cross-section at Earth's surface, known as the taper ratio, can be calculated:

A(r1)/A_s = exp[(ρ/T)(GM(1/R_s − 1/r1) − ω²(r1² − R_s²)/2)] ≈ exp[(4.8 × 10⁷ J/kg) · ρ/T]

The taper ratio becomes very large unless the specific strength of the material used approaches 48 (MPa)/(kg/m3). Materials of low specific strength require very large taper ratios, which equate to a large (or astronomical) total mass of the cable, with associated large or impossible costs.
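Plugging rough, assumed strength-to-density figures into the taper formula shows why the 48 (MPa)/(kg/m3) threshold matters (illustrative numbers, not design values):

```python
import math

GM, omega = 3.986004418e14, 7.2921159e-5
Rs, r1 = 6.378137e6, 4.21641e7          # surface and geostationary radii, m

# Potential difference per unit mass between surface and GEO along the cable
drop = GM * (1 / Rs - 1 / r1) - omega**2 * (r1**2 - Rs**2) / 2   # ~4.8e7 J/kg

for name, specific_strength in [("steel", 0.25e6),               # Pa/(kg/m^3)
                                ("Kevlar", 2.5e6),
                                ("carbon nanotube (theoretical)", 50e6)]:
    print(name, math.exp(drop / specific_strength))               # taper ratio
# Steel gives an astronomically large ratio; only ~50 MPa/(kg/m^3)
# keeps the taper ratio near e.
```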
Structure
There are a variety of space elevator designs proposed for many planetary bodies. Almost every design includes a base station, a cable, climbers, and a counterweight. For an Earth Space Elevator the Earth's rotation creates upward centrifugal force on the counterweight. The counterweight is held down by the cable while the cable is held up and taut by the counterweight. The base station anchors the whole system to the surface of the Earth. Climbers climb up and down the cable with cargo.
Base station
Modern concepts for the base station/anchor are typically mobile stations, large oceangoing vessels or other mobile platforms. Mobile base stations would have the advantage over the earlier stationary concepts (with land-based anchors) by being able to maneuver to avoid high winds, storms, and space debris. Oceanic anchor points are also typically in international waters, simplifying and reducing the cost of negotiating territory use for the base station.
Stationary land-based platforms would have simpler and less costly logistical access to the base. They also would have the advantage of being able to be at high altitudes, such as on top of mountains. In an alternate concept, the base station could be a tower, forming a space elevator which comprises both a compression tower close to the surface, and a tether structure at higher altitudes. Combining a compression structure with a tension structure would reduce loads from the atmosphere at the Earth end of the tether, and reduce the distance into the Earth's gravity field that the cable needs to extend, and thus reduce the critical strength-to-density requirements for the cable material, all other design factors being equal.
Cable
A space elevator cable would need to carry its own weight as well as the additional weight of climbers. The required strength of the cable would vary along its length. This is because at various points it would have to carry the weight of the cable below, or provide a downward force to retain the cable and counterweight above. Maximum tension on a space elevator cable would be at geosynchronous altitude so the cable would have to be thickest there and taper as it approaches Earth. Any potential cable design may be characterized by the taper factor – the ratio between the cable's radius at geosynchronous altitude and at the Earth's surface.
The cable would need to be made of a material with a high tensile strength/density ratio. For example, the Edwards space elevator design assumes a cable material with a tensile strength of at least 100 gigapascals. Since Edwards consistently assumed the density of his carbon nanotube cable to be 1300 kg/m3, that implies a specific strength of 77 megapascal/(kg/m3). This value takes into consideration the entire weight of the space elevator. An untapered space elevator cable would need a material capable of sustaining a length of 4,960 kilometers (3,080 mi) of its own weight at sea level to reach a geostationary altitude of 35,786 km (22,236 mi) without yielding. Therefore, a material with very high strength and lightness is needed.
For comparison, metals like titanium, steel or aluminium alloys have breaking lengths of only 20–30 km (0.2–0.3 MPa/(kg/m3)). Modern fiber materials such as kevlar, fiberglass and carbon/graphite fiber have breaking lengths of 100–400 km (1.0–4.0 MPa/(kg/m3)). Nanoengineered materials such as carbon nanotubes and, more recently discovered, graphene ribbons (perfect two-dimensional sheets of carbon) are expected to have breaking lengths of 5000–6000 km (50–60 MPa/(kg/m3)), and also are able to conduct electrical power.
For a space elevator on Earth, with its comparatively high gravity, the cable material would need to be stronger and lighter than currently available materials. For this reason, there has been a focus on the development of new materials that meet the demanding specific strength requirement. For high specific strength, carbon has advantages because it is only the sixth element in the periodic table. Carbon has comparatively few of the protons and neutrons which contribute most of the dead weight of any material. Most of the interatomic bonding forces of any element are contributed by only the outer few electrons. For carbon, the strength and stability of those bonds is high compared to the mass of the atom. The challenge in using carbon nanotubes remains extending the production of this material to macroscopic sizes while keeping it perfect on the microscopic scale (as microscopic defects are most responsible for material weakness). As of 2014, carbon nanotube technology allowed growing tubes up to a few tenths of meters.
In 2014, diamond nanothreads were first synthesized. Since they have strength properties similar to carbon nanotubes, diamond nanothreads were quickly seen as candidate cable material as well.
Climbers
A space elevator cannot be an elevator in the typical sense (with moving cables) due to the need for the cable to be significantly wider at the center than at the tips. While various designs employing moving cables have been proposed, most cable designs call for the "elevator" to climb up a stationary cable.
Climbers cover a wide range of designs. On elevator designs whose cables are planar ribbons, most propose to use pairs of rollers to hold the cable with friction.
Climbers would need to be paced at optimal timings so as to minimize cable stress and oscillations and to maximize throughput. Lighter climbers could be sent up more often, with several going up at the same time. This would increase throughput somewhat, but would lower the mass of each individual payload.
The horizontal speed, i.e. the speed due to orbital rotation, of each part of the cable increases with altitude, proportional to distance from the center of the Earth, reaching low orbital speed at a point approximately 66 percent of the height between the surface and geostationary orbit, or a height of about 23,400 km. A payload released at this point would go into a highly eccentric elliptical orbit, staying just barely clear of atmospheric reentry, with the periapsis at the same altitude as low earth orbit (LEO) and the apoapsis at the release height. With increasing release height the orbit would become less eccentric as both periapsis and apoapsis increase, becoming circular at geostationary level.
When the payload has reached GEO, the horizontal speed is exactly the speed of a circular orbit at that level, so that if released, it would remain adjacent to that point on the cable. The payload can also continue climbing further up the cable beyond GEO, allowing it to obtain higher speed at jettison. If released from 100,000 km, the payload would have enough speed to reach the asteroid belt.
As a payload is lifted up a space elevator, it would gain not only altitude, but horizontal speed (angular momentum) as well. The angular momentum is taken from the Earth's rotation. As the climber ascends, it is initially moving slower than each successive part of cable it is moving on to. This is the Coriolis force: the climber "drags" (westward) on the cable, as it climbs, and slightly decreases the Earth's rotation speed. The opposite process would occur for descending payloads: the cable is tilted eastward, thus slightly increasing Earth's rotation speed.
The overall effect of the centrifugal force acting on the cable would cause it to constantly try to return to the energetically favorable vertical orientation, so after an object has been lifted on the cable, the counterweight would swing back toward the vertical, a bit like a pendulum. Space elevators and their loads would be designed so that the center of mass is always well-enough above the level of geostationary orbit to hold up the whole system. Lift and descent operations would need to be carefully planned so as to keep the pendulum-like motion of the counterweight around the tether point under control.
Climber speed would be limited by the Coriolis force, available power, and by the need to ensure the climber's accelerating force does not break the cable. Climbers would also need to maintain a minimum average speed in order to move material up and down economically and expeditiously. At the speed of a very fast car or train, 300 km/h (190 mph), it will take about 5 days to climb to geosynchronous orbit.
Powering climbers
Both power and energy are significant issues for climbers – the climbers would need to gain a large amount of potential energy as quickly as possible to clear the cable for the next payload.
Various methods have been proposed to provide energy to the climber:
Transfer the energy to the climber through wireless energy transfer while it is climbing.
Transfer the energy to the climber through some material structure while it is climbing.
Store the energy in the climber before it starts – requires an extremely high specific energy such as nuclear energy.
Solar power – After the first 40 km it is possible to use solar energy to power the climber
Wireless energy transfer such as laser power beaming is currently considered the most likely method, using megawatt-powered free-electron or solid-state lasers in combination with large adaptive mirrors and a photovoltaic array on the climber tuned to the laser frequency for efficiency. For climber designs powered by power beaming, this efficiency is an important design goal. Unused energy would need to be re-radiated away with heat-dissipation systems, which add to weight.
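For a rough sense of why megawatt-class beams are discussed, the mechanical power needed to hoist a climber near the surface is P = mgv. The sketch below assumes a 20-tonne climber at 300 km/h (both illustrative figures) and ignores beam, photovoltaic, and drivetrain losses:

```python
# Order-of-magnitude lifting power near the surface: P = m * g * v.
# The 20 t mass and 300 km/h speed are illustrative assumptions.
g = 9.81                       # m/s^2, surface gravity
mass = 20_000                  # kg
speed = 300 / 3.6              # 300 km/h expressed in m/s
print(f"{mass * g * speed / 1e6:.1f} MW")   # ~16.4 MW before conversion losses
```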
Yoshio Aoki, a professor of precision machinery engineering at Nihon University and director of the Japan Space Elevator Association, suggested including a second cable and using the conductivity of carbon nanotubes to provide power.
Counterweight
Several solutions have been proposed to act as a counterweight:
a heavy, captured asteroid
a space dock, space station or spaceport positioned past geostationary orbit
a further upward extension of the cable itself so that the net upward pull would be the same as an equivalent counterweight
parked spent climbers that had been used to thicken the cable during construction, other junk, and material lifted up the cable for the purpose of increasing the counterweight.
Extending the cable has the advantage of relative simplicity and the fact that a payload that went to the end of the counterweight cable would acquire considerable velocity relative to the Earth, allowing it to be launched into interplanetary space. Its disadvantage is the need to produce greater amounts of cable material, as opposed to using just anything available that has mass.
Applications
Launching into deep space
An object attached to a space elevator at a radius of approximately 53,100 km would be at escape velocity when released. Transfer orbits to the L1 and L2 Lagrangian points could be attained by release at 50,630 and 51,240 km, respectively, and transfer to lunar orbit from 50,960 km.
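The 53,100 km figure follows from setting the cable's rigid-rotation speed equal to the local escape velocity, with μ Earth's gravitational parameter and ω its sidereal rotation rate:

```latex
\omega r = \sqrt{\frac{2\mu}{r}}
\quad\Longrightarrow\quad
r = \left(\frac{2\mu}{\omega^{2}}\right)^{1/3}
  = \left(\frac{2 \cdot 3.986\times10^{14}\ \mathrm{m^{3}/s^{2}}}
               {\left(7.292\times10^{-5}\ \mathrm{s^{-1}}\right)^{2}}\right)^{1/3}
  \approx 5.31\times10^{7}\ \mathrm{m} \approx 53{,}100\ \mathrm{km}
```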
At the end of Pearson's cable, the tangential velocity is 10.93 kilometers per second (6.79 mi/s). That is more than enough to escape Earth's gravitational field and send probes at least as far out as Jupiter. Once at Jupiter, a gravitational assist maneuver could permit solar escape velocity to be reached.
Extraterrestrial elevators
A space elevator could also be constructed on other planets, asteroids and moons.
A Martian tether could be much shorter than one on Earth. Mars' surface gravity is 38 percent of Earth's, while it rotates around its axis in about the same time as Earth. Because of this, Martian stationary orbit is much closer to the surface, and hence the elevator could be much shorter. Current materials are already sufficiently strong to construct such an elevator. Building a Martian elevator would be complicated by the Martian moon Phobos, which is in a low orbit and intersects the Equator regularly (twice every orbital period of 11 h 6 min). Phobos and Deimos may get in the way of an areostationary space elevator; on the other hand, they may contribute useful resources to the project. Phobos is projected to contain high amounts of carbon. If carbon nanotubes become feasible for a tether material, there will be an abundance of carbon near Mars. This could provide readily available resources for future colonization on Mars.
Phobos is tidally locked: one side always faces its primary, Mars. An elevator extending 6,000 km from that inward-facing side would end about 28 kilometers above the Martian surface, just clear of the denser parts of the atmosphere of Mars. A similar cable extending 6,000 km in the opposite direction would counterbalance the first, so the center of mass of the system would remain within Phobos. In total the space elevator would extend over 12,000 km, which is below the areostationary orbit of Mars (17,032 km). A rocket launch would still be needed to get cargo to the beginning of the space elevator 28 km above the surface. The surface of Mars rotates at 0.25 km/s at the equator and the bottom of the space elevator would rotate around Mars at 0.77 km/s, so only 0.52 km/s (1,872 km/h) of delta-v would be needed to reach the space elevator. Phobos orbits at 2.15 km/s and the outermost part of the space elevator would rotate around Mars at 3.52 km/s.
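The quoted tip speeds can be reproduced by treating the whole tethered system as turning at Phobos's orbital angular rate; in the sketch below, the 9,376 km orbital radius is an assumed textbook value not given in the text:

```python
# Tip speeds of a 6,000 km tether on each side of tidally locked Phobos,
# which rotates at its orbital angular rate omega = v / r.
v_phobos = 2.15e3              # m/s, orbital speed (from the text)
r_phobos = 9.376e6             # m, orbital radius (assumed textbook value)
omega = v_phobos / r_phobos    # shared rotation rate, rad/s
for dr in (-6.0e6, +6.0e6):    # inner (Mars-facing) and outer tips
    r = r_phobos + dr
    print(f"tip at radius {r / 1e6:5.2f} Mm: {omega * r:6.0f} m/s")
# prints ~774 m/s (inner) and ~3,526 m/s (outer), matching the quoted figures
```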
The Earth's Moon is a potential location for a lunar space elevator, especially as the specific strength required for the tether is low enough to use currently available materials. The Moon does not rotate fast enough for an elevator to be supported by centrifugal force (the proximity of the Earth means there is no effective lunar-stationary orbit), but differential gravity forces mean that an elevator could be constructed through Lagrangian points. A near-side elevator would extend through the Earth-Moon L1 point from an anchor point near the center of the visible part of Earth's Moon: the length of such an elevator must exceed the maximum L1 altitude of 59,548 km, and would be considerably longer to reduce the mass of the required apex counterweight. A far-side lunar elevator would pass through the L2 Lagrangian point and would need to be longer than on the near side; again, the tether length depends on the chosen apex anchor mass, but it could also be made of existing engineering materials.
Rapidly spinning asteroids or moons could use cables to eject materials to convenient points, such as Earth orbits, or conversely to send a portion of the mass of the asteroid or moon to Earth orbit or a Lagrangian point. Freeman Dyson, a physicist and mathematician, suggested using such smaller systems as power generators at points distant from the Sun where solar power is uneconomical.
A space elevator using presently available engineering materials could be constructed between mutually tidally locked worlds, such as Pluto and Charon or the components of binary asteroid 90 Antiope, with no terminus disconnect, according to Francis Graham of Kent State University. However, spooled variable lengths of cable must be used due to ellipticity of the orbits.
Construction
The construction of a space elevator would require the reduction of some technical risk; some advances in engineering, manufacturing, and physical technology are still needed. Once a first space elevator is built, the second one and all others would have the use of the previous ones to assist in construction, making their costs considerably lower. Such follow-on space elevators would also benefit from the great reduction in technical risk achieved by the construction of the first space elevator.
Prior to the work of Edwards in 2000, most concepts for constructing a space elevator had the cable manufactured in space. That was thought to be necessary for such a large and long object and for such a large counterweight. Manufacturing the cable in space would be done, in principle, by using an asteroid or near-Earth object for source material. These earlier concepts for construction required a large preexisting space-faring infrastructure to maneuver an asteroid into the needed orbit around Earth. They also required the development of technologies for manufacturing large quantities of exacting materials in space.
Since 2001, most work has focused on simpler methods of construction requiring much smaller space infrastructures. They envision the launch of a long cable on a large spool, followed by its deployment in space. The spool would initially be parked in a geostationary orbit above the planned anchor point. A long cable would be dropped "downward" (toward Earth) and balanced by a mass being dropped "upward" (away from Earth) so that the whole system remains in geosynchronous orbit. Earlier designs imagined the balancing mass to be another cable (with counterweight) extending upward, with the main spool remaining at the original geosynchronous orbit level. Most current designs elevate the spool itself as the main cable is paid out, a simpler process. When the lower end of the cable is long enough to reach the surface of the Earth (at the equator), it would be anchored. Once anchored, the center of mass would be elevated further (by adding mass at the upper end or by paying out more cable). This would add more tension to the whole cable, which could then be used as an elevator cable.
One plan for construction uses conventional rockets to place a "minimum size" initial seed cable of only 19,800 kg. This first very small ribbon would be adequate to support the first 619 kg climber. The first 207 climbers would carry up and attach more cable to the original, increasing its cross-sectional area and widening the initial ribbon to about 160 mm at its widest point. The result would be a 750-ton cable with a lift capacity of 20 tons per climber.
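Under the simplifying assumption (ours, for illustration) that each climber enlarges the cable by a constant fraction, the quoted numbers imply a modest growth rate per trip:

```python
# Growing a 19,800 kg seed cable to 750 t over 207 climber trips at a
# constant per-trip growth factor; real schedules would vary climber
# size as the cable's capacity grows.
seed_kg, final_kg, trips = 19_800, 750_000, 207
growth = (final_kg / seed_kg) ** (1 / trips)
print(f"required growth per trip: {(growth - 1) * 100:.1f}%")   # ~1.8%
```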
Safety issues and construction challenges
For early systems, transit times from the surface to the level of geosynchronous orbit would be about five days. On these early systems, the time spent moving through the Van Allen radiation belts would be enough that passengers would need to be protected from radiation by shielding, which would add mass to the climber and decrease payload.
A space elevator would present a navigational hazard, both to aircraft and spacecraft. Aircraft could be diverted by air-traffic control restrictions. All objects in stable orbits that have perigee below the maximum altitude of the cable that are not synchronous with the cable would impact the cable eventually, unless avoiding action is taken. One potential solution proposed by Edwards is to use a movable anchor (a sea anchor) to allow the tether to "dodge" any space debris large enough to track.
Impacts by space objects such as meteoroids, micrometeorites and orbiting man-made debris pose another design constraint on the cable. A cable would need to be designed to maneuver out of the way of debris, or absorb impacts of small debris without breaking.
Economics
With a space elevator, materials might be sent into orbit at a fraction of the current cost. As of 2022, conventional rocket designs cost about US$12,125 per kilogram (US$5,500 per pound) for transfer to geostationary orbit. Current space elevator proposals envision payload prices starting as low as $220 per kilogram ($100 per pound), similar to the $5–$300/kg estimates of the Launch loop, but higher than the $310/ton to 500 km orbit quoted to Dr. Jerry Pournelle for an orbital airship system.
Philip Ragan, co-author of the book Leaving the Planet by Space Elevator, states that "The first country to deploy a space elevator will have a 95 percent cost advantage and could potentially control all space activities."
International Space Elevator Consortium (ISEC)
The International Space Elevator Consortium (ISEC) is a US Non-Profit 501(c)(3) Corporation formed to promote the development, construction, and operation of a space elevator as "a revolutionary and efficient way to space for all humanity". It was formed after the Space Elevator Conference in Redmond, Washington in July 2008 and became an affiliate organization with the National Space Society in August 2013. ISEC hosts an annual Space Elevator conference at the Seattle Museum of Flight.
ISEC coordinates with the two other major societies focusing on space elevators: the Japanese Space Elevator Association and EuroSpaceward. ISEC supports symposia and presentations at the International Academy of Astronautics and the International Astronautical Federation Congress each year.
Related concepts
The conventional current concept of a "Space Elevator" has evolved from a static compressive structure reaching to the level of GEO, to the modern baseline idea of a static tensile structure anchored to the ground and extending to well above the level of GEO. In the current usage by practitioners (and in this article), a "Space Elevator" means the Tsiolkovsky-Artsutanov-Pearson type as considered by the International Space Elevator Consortium. This conventional type is a static structure fixed to the ground and extending into space high enough that cargo can climb the structure up from the ground to a level where simple release will put the cargo into an orbit.
Some concepts related to this modern baseline are not usually termed a "Space Elevator", but are similar in some way and are sometimes termed "Space Elevator" by their proponents. For example, Hans Moravec published an article in 1977 called "A Non-Synchronous Orbital Skyhook" describing a concept using a rotating cable. The rotation speed would exactly match the orbital speed in such a way that the tip velocity at the lowest point was zero compared to the object to be "elevated". It would dynamically grapple and then "elevate" high flying objects to orbit or low orbiting objects to higher orbit.
The original concept envisioned by Tsiolkovsky was a compression structure, a concept similar to an aerial mast. While such structures might reach space (100 km, 62 mi), they are unlikely to reach geostationary orbit. The concept of a Tsiolkovsky tower combined with a classic space elevator cable (reaching above the level of GEO) has been suggested. Other ideas use very tall compressive towers to reduce the demands on launch vehicles. The vehicle is "elevated" up the tower, which may extend as high as above the atmosphere, and is launched from the top. Such a tall tower to access near-space altitudes of has been proposed by various researchers.
The aerovator is a concept invented by a Yahoo Group discussing space elevators, and included in a 2009 book about space elevators. It would consist of a >1000 km long ribbon extending diagonally upwards from a ground-level hub and then levelling out to become horizontal. Aircraft would pull on the ribbon while flying in a circle, causing the ribbon to rotate around the hub once every 13 minutes with its tip travelling at 8 km/s. The ribbon would stay in the air through a mix of aerodynamic lift and centrifugal force. Payloads would climb up the ribbon and then be launched from the fast-moving tip into orbit.
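The quoted figures are mutually consistent: a tip moving at 8 km/s that completes one revolution every 13 minutes sweeps a circle of roughly 1,000 km radius, in line with the stated ribbon length:

```python
import math

v_tip = 8.0e3       # m/s, tip speed from the text
period = 13 * 60    # s, rotation period from the text
radius = v_tip * period / (2 * math.pi)
print(f"tip radius = {radius / 1e3:.0f} km")   # about 993 km
```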
Other concepts for non-rocket spacelaunch related to a space elevator (or parts of a space elevator) include an orbital ring, a space fountain, a launch loop, a skyhook, a space tether, and a buoyant "SpaceShaft".
Notes
See also
Gravity elevator
Orbital ring
References
Further reading
A conference publication based on findings from the Advanced Space Infrastructure Workshop on Geostationary Orbiting Tether "Space Elevator" Concepts (PDF), held in 1999 at the NASA Marshall Space Flight Center, Huntsville, Alabama. Compiled by D.V. Smitherman Jr., published August 2000
"The Political Economy of Very Large Space Projects" HTML PDF, John Hickman, Ph.D. Journal of Evolution and Technology Vol. 4 – November 1999
A Hoist to the Heavens By Bradley Carl Edwards
Ziemelis K. (2001) "Going up". In New Scientist 2289: 24–27. Republished in SpaceRef. Title page: "The great space elevator: the dream machine that will turn us all into astronauts."
The Space Elevator Comes Closer to Reality. An overview by Leonard David of space.com, published 27 March 2002
Krishnaswamy, Sridhar. Stress Analysis – The Orbital Tower (PDF)
LiftPort's Roadmap for Elevator To Space SE Roadmap (PDF)
Alexander Bolonkin, "Non Rocket Space Launch and Flight". Elsevier, 2005. 488 pgs.
External links
The Economist: Waiting For The Space Elevator (8 June 2006 – subscription required)
CBC Radio Quirks and Quarks November 3, 2001 Riding the Space Elevator
Times of London Online: Going up ... and the next floor is outer space
The Space Elevator: 'Thought Experiment', or Key to the Universe? By Sir Arthur C. Clarke. Address to the XXXth International Astronautical Congress, Munich, 20 September 1979
International Space Elevator Consortium Website
Space Elevator entry at The Encyclopedia of Science Fiction
Articles containing video clips
Exploratory engineering
Hypothetical technology
Space access
Space colonization
Spacecraft propulsion
Spaceflight technology
Vertical transport devices | Space elevator | Astronomy,Technology | 8,624 |
1,786,922 | https://en.wikipedia.org/wiki/Stomachic | Stomachic is a historic term for a medicine that serves to tone the stomach, improving its function and increasing appetite. While many herbal remedies claim stomachic effects, modern pharmacology does not have an equivalent term for this type of action.
Herbs with putative stomachic effects include:
Agrimony
Aloe
Anise
Avens (Geum urbanum)
Barberry
Bitterwood (Picrasma excelsa)
Cannabis
Cayenne
Centaurium
Cleome
Colombo (herb) (Frasera carolinensis)
Dandelion
Elecampane
Ginseng
Goldenseal
Grewia asiatica (Phalsa or Falsa)
Hops
Holy thistle
Juniper berry
Mint
Mugwort
Oregano
Peach bark
Rhubarb
White mustard seeds
Rose hips
Rue
Sweet flag (Acorus calamus)
Wormwood (Artemisia absinthium)
The purported stomachic mechanism of action of these substances is to stimulate the appetite by increasing the gastric secretions of the stomach; however, the actual therapeutic value of some of these compounds is dubious. Some other important agents used are:
Bitters: used to stimulate the taste buds, thus producing reflex secretion of gastric juices. Quassia, Aristolochia, gentian, and chirata are commonly used.
Alcohol: increases gastric secretion by direct action and also by the reflex stimulation of taste buds.
Miscellaneous compounds: including insulin, which increases gastric secretion by producing hypoglycemia, and histamine, which produces direct stimulation of the gastric glands.
References
Gastroenterology
Herbalism
Pharmacognosy | Stomachic | Chemistry | 332 |
5,200,853 | https://en.wikipedia.org/wiki/InfoDev | infoDev is a World Bank Group program that supports high-growth entrepreneurs in developing economies. The program is part of the Innovation and Entrepreneurship Unit of the World Bank Group's Trade and Competitiveness Global Practice.
infoDev connects entrepreneurs with knowledge, funding and mentors through a global network of business incubators. The program has launched Climate Innovation Centers, Mobile Application Labs (mLabs), and Agribusiness Entrepreneurship Centers in developing countries around the world, including the Caribbean, Ethiopia, Ghana, Kenya, Morocco, South Africa and Vietnam.
Climate Technology Program
The Climate Technology Program helps developing economies identify profitable solutions to climate change. A 2015 infoDev study, Building Competitive Green Industries, found that $6.4 trillion will be invested in clean technologies in developing countries over the next decade.
infoDev has launched seven Climate Innovation Centers, which offer seed financing, policy interventions, network linkages, and technical and business training to entrepreneurs. In 2015, Climate Innovation Centers supported 270 clean technology startups.
Digital Entrepreneurship Program
The Digital Entrepreneurship Program supports the growth of competitive mobile application industries in emerging and frontier markets. The program has established Mobile Application Labs (mLabs)—incubation facilities and innovation hubs for digital entrepreneurs—in Kenya, South Africa and Senegal.
infoDev published a Business Analytics Toolkit for Tech Hub Managers in 2015.
Agribusiness Entrepreneurship Program
The Agribusiness Entrepreneurship Program supports the growth of competitive agro-processing enterprises by advancing innovation in products, processes and business models. The World Bank Group estimates that Africa’s food market will be worth $1 trillion by 2030.
The program has launched Agribusiness Entrepreneurship Centers in Tanzania and Nepal.
Access to Finance Program
The Access to Finance Program connects entrepreneurs with early-stage capital and networks. The program also publishes research on innovative forms of financing for entrepreneurs in developing economies, including crowdfunding and angel investors.
infoDev has published Crowdfunding in Emerging Markets: Lessons from East African Startups and Creating Your Own Angel Investor Group: A Guide for Emerging and Frontier Markets.
References
External links
Information and communication technologies for development
World Bank
Organizations established in 1996 | InfoDev | Technology | 453 |
6,169,271 | https://en.wikipedia.org/wiki/Resource%20acquisition%20ability | Resource acquisition ability (RAA) is a term in social psychology and the counterpart, in the opposite sex, of reproductive value (RV); it describes an unintentional mechanism used by women when selecting a male partner. The RAA is assessed from factors including:
Genetic information
Wealth
Salary
Social status
Child care
Personal history (e.g., a criminal record is detrimental to RAA)
Numerous other factors
Unlike reproductive value, RAA is not a scale; mainly because many of its factors cannot be indexed, the term is somewhat more complex than RV.
See also
Hypergamy
Interpersonal attraction
References
Stewart, Stinnett and Rosenfeld. "Sex Differences in Desired Characteristics of Short-Term and Long-Term Relationship Partners"
Interpersonal attraction
Interpersonal relationships | Resource acquisition ability | Biology | 155 |
36,367,682 | https://en.wikipedia.org/wiki/REMUS%20%28vehicle%29 | The REMUS (Remote Environmental Monitoring UnitS) series are autonomous underwater vehicles (AUVs) made by the Woods Hole Oceanographic Institution and designed by their Oceanographic Systems Lab (OSL). More recently REMUS vehicles have been manufactured by the spinoff company Hydroid Inc, which was a wholly owned subsidiary of Kongsberg Maritime. Hydroid was acquired by Huntington Ingalls Industries (HHI) in March 2020.
The series is designed to be low cost; the vehicles share control software and electronic subsystems and can be operated from a laptop computer. They are used by civilians for seafloor mapping, underwater surveying, and search and recovery, as well as by several navies for mine countermeasures missions.
Models
There are a number of variants of the REMUS; all are torpedo-shaped vessels with reconfigurable sensors.
REMUS 6000
The largest model is the REMUS 6000, named after its maximum diving depth of 6,000 m. It has an endurance of up to 22 hours. It was developed through cooperation between the Naval Oceanographic Office, the Office of Naval Research, and the Woods Hole Oceanographic Institution (WHOI).
In 2018 the Japan Agency for Marine-Earth Science and Technology (JAMSTEC) received an order of New Generation REMUS 6000 AUVs. The New Generation REMUS 6000 is based on the legacy REMUS 6000 platform with "a modular architecture that allows for the addition of multiple payloads including customer sensor packages, forward fins and additional battery sections.” Hydroid also claims that the New Generation model has increased endurance.
REMUS 620
In November 2022, the development of the REMUS 620 was announced. It is an enhanced version of the REMUS 300, built to the same size as the REMUS 600. It has a battery endurance of up to 110 hours, with range depending on the installed modules. With a synthetic-aperture sonar installed, battery endurance is reduced to 78 hours. Design missions include mine countermeasures, hydrographic surveys, intelligence collection, surveillance, cyber warfare and electronic warfare. It can also launch smaller UUVs or UAVs. It can be launched from submarines, surface ships, small manned or unmanned craft, and helicopters. It can be recovered underwater by submarines, and recovery back into torpedo tubes is being developed at Woods Hole.
REMUS 600
The midsized REMUS 600 was previously known as the REMUS 12.75, so called due to its diameter of 12.75 inches. It was renamed the 600 to correspond to the maximum depth at which it can operate (600 m). It has an endurance of up to 70 hours at its standard cruising speed.
A US Navy derivative of this platform designated Mk 18 Mod 2 Kingfish was manufactured from 2012 to 2023. The Mk 18 Mod 2 is equipped with side-scan sonar, a downward-looking video camera, ADCP, GPS, beam attenuation meter (BAM) to measure turbidity, and a conductivity temperature depth (CTD) sensor.
A total of 175 REMUS 600s were delivered to customers in the United States, United Kingdom, Australia and Japan.
REMUS 300
The small-sized REMUS 300 is a development of the REMUS 100, announced in April 2021. Its modular design permits configurations ranging from a light expeditionary configuration to a heavier long-endurance configuration. It can be fitted with lithium-ion batteries for an endurance of up to 30 hours.
It is designed for mine countermeasures, search and recovery, rapid environmental assessment, hydrographic survey, anti-submarine warfare, and intelligence, surveillance and reconnaissance. It has civil applications in the fields of marine archaeology, renewables, and offshore oil and gas.
In March 2022, the U.S. Navy selected the REMUS 300 as its next generation small UUV (SUUV). As of 2024, the system was also being adopted by the Royal Navy's Mine and Threat Exploitation Group.
REMUS 100
The REMUS 100 takes its name from its maximum operating depth of 100 meters. In addition to the standard REMUS 100, the US Navy operates a derivative designated Mk 18 Mod 1 "Swordfish". It has an endurance of up to 22 hours at its standard cruising speed.
REMUS M3V
The REMUS M3V (Micro 300 Meter Rated Vehicle) is the smallest in the range and is designed to fit the A-type sonobuoy design envelope (91.5 × 12.4 cm). The M3V can travel at 10 knots and dive to 300 meters; apparently uniquely among the REMUS family, the M3V can be airdropped.
Operational history
REMUS units were used successfully in 2003 during Operation Iraqi Freedom to detect mines, and in 2011 during the fourth search for the missing flight recorders ("black boxes") from the crashed Air France flight AF447, which they successfully located. Three REMUS 6000 units were used in the AF447 search. In a video posted by Colombian president Juan Manuel Santos, a REMUS 6000 is seen being used by the Colombian Navy to examine the wreck of the galleon San José, now declared national patrimony, which sank in 1708 off the coast of Cartagena de Indias.
In 2012, the mine detection variant of the REMUS 600 was deployed by the US Navy to the 5th Fleet, operating primarily in the Persian Gulf. REMUS vehicles in Navy service are generally deployed from rigid hull inflatable boats, which can carry two vehicles, although they have been deployed from littoral combat ships and from an MH-60S Seahawk helicopter in exercises. In 2018, a US Navy REMUS 600 named "Smokey" was captured by Houthi combat divers off the coast of Yemen; the Houthi forces published a video of the captured vehicle.
The University of Hawaii at Manoa operates a REMUS 100 equipped to measure salinity, temperature, currents, bathymetry and water quality parameters. These measurements help support research conducted by the university's nearshore/offshore sensor network and water sampling programs.
In 2017 a REMUS 6000 operated from the billionaire Paul Allen's research vessel R/V Petrel helped discover the wreck of the USS Indianapolis at 5,500 m in the Philippine Sea. In 2018 a REMUS 6000 operated from R/V Petrel discovered the wreck of the USS Lexington in the Western Pacific; the USS Lexington was sunk in 1942 during the Battle of the Coral Sea.
In 2019 researchers at the University of Exeter used a Woods Hole Oceanographic Institution-owned, REMUS 100-based SharkCam off the coast of Coll and Tiree to study basking sharks.
On 20 February 2024, a video surfaced on X showing fighters of the Ansar Allah movement in Yemen with a captured REMUS 600 reportedly belonging to the United States Navy.
Operators
Algerian Navy (Forces navales algériennes)
United States Navy
Woods Hole Oceanographic Institution
Naval Oceanographic Office
University of Hawaii at Manoa
Royal Navy
Croatian Navy
Finnish Navy
Royal Netherlands Navy
Royal Canadian Navy
Japan Agency for Marine-Earth Science and Technology
Japan Maritime Self-Defense Force
REMUS 100 used as OZZ-1/3.
REMUS 600 used as OZZ-2/4 aboard Awaji-class minesweepers.
Irish Naval Service
Royal New Zealand Navy
Ukrainian Navy – unmanned underwater vehicles announced as military aid to be sent to Ukraine by the United Kingdom (from industry) in August 2022
See also
Exercise REP(MUS)
References
External links
A picture of the types of REMUS units produced
Autonomous underwater vehicles
Oceanography | REMUS (vehicle) | Physics,Environmental_science | 1,597 |
2,433,483 | https://en.wikipedia.org/wiki/Valrubicin | Valrubicin (N-trifluoroacetyladriamycin-14-valerate, trade name Valstar) is a chemotherapy drug used to treat bladder cancer. Valrubicin is a semisynthetic analog of the anthracycline doxorubicin, and is administered by infusion directly into the bladder.
It was originally launched as Valstar in the U.S. in 1999 for intravesical therapy of Bacille Calmette-Guérin (BCG)-refractory carcinoma in situ of the urinary bladder in patients in whom cystectomy would be associated with unacceptable morbidity or mortality; however, it was voluntarily withdrawn in 2002 due to manufacturing issues. Valstar was relaunched on September 3, 2009.
Side effects
Blood in urine
Incontinence
Painful or difficult urination
Unusually frequent urination
References
Anthracyclines
Topoisomerase inhibitors
Trifluoromethyl compounds
Acetamides
Withdrawn drugs | Valrubicin | Chemistry | 207 |
68,038,337 | https://en.wikipedia.org/wiki/Sofpironium%20bromide | Sofpironium bromide, sold under the brand name Ecclock among others, is a medication used to treat hyperhidrosis (excessive sweating). Sofpironium bromide is an anticholinergic agent that is applied to the skin.
It was approved for medical use in Japan in 2020, and in the United States in June 2024.
Medical uses
Sofpironium bromide is indicated for the treatment of primary axillary hyperhidrosis.
Mechanism of action
The pharmacodynamics of sofpironium bromide are unknown.
Society and culture
Legal status
It was approved for medical use in Japan in November 2020, and in the United States in June 2024.
Brand names
Sofpironium bromide is the international nonproprietary name.
It is sold under the brand name Ecclock in Japan and under the brand name Sofdra in the US.
References
Further reading
External links
Dermatologic drugs
Muscarinic antagonists
Quaternary ammonium compounds
Bromides
Pyrrolidines
Tertiary alcohols
Carboxylate esters
Ethyl esters
Cyclopentyl compounds | Sofpironium bromide | Chemistry | 235 |
43,574,878 | https://en.wikipedia.org/wiki/Cyanopeptolin | Cyanopeptolins (CPs) are a class of oligopeptides produced by strains of the cyanobacteria Microcystis and Planktothrix, and can be neurotoxic. The production of cyanopeptolins occurs through nonribosomal peptide synthetases (NRPS).
Chemistry
CPs are, in general, six-residue peptides formed into a ring by a beta-lactone bridge, making them chemically depsipeptides (peptidolactones). The first position is usually threonine, which links to one or two residues via an ester bond on the beta-hydroxyl group; the third position is conserved as 3-amino-6-hydroxy-2-piperidone (Ahp) or a derivative. All other positions are highly variable.
There is no single, unified nomenclature for CPs. Names such as CP1020 and CP1138 refer to the molar mass. Others, such as aeruginopeptins, micropeptins, microcystilide, nostopeptins, and oscillapeptins, refer to the organism in which the substance was originally found.
Factors affecting production
Increased water temperatures caused by climate change, together with the eutrophication of inland waters, promote blooms of cyanobacteria and thereby increase the risk of water contamination through production of the toxic cyanopeptolin CP1020.
Biological activity
Most CPs are serine protease inhibitors.
Cyanopeptolin CP1020 exposure in zebrafish affected pathways related to DNA damage, the circadian rhythm and response to light.
Evolutionary history
CPs are probably very ancient: the cyanobacterial genera that produce CPs appear to have inherited the key modules vertically and not horizontally.
See also
Cyanotoxin
Microviridin
Microcystin
References
External links
Cyanobacteria, and toxin production (The New York Times)
Peptides
Cyanotoxins | Cyanopeptolin | Chemistry | 413 |
69,274,109 | https://en.wikipedia.org/wiki/Turkesterone | Turkesterone is a phytoecdysteroid found in numerous plant species, including Ajuga turkestanica, various Vitex species, Triticum aestivum, and Rhaponticum acaule.
See also
Ecdysterone
References
Tertiary alcohols | Turkesterone | Chemistry | 62 |
14,439,837 | https://en.wikipedia.org/wiki/AUSTAL2000 | Austal2000 is an atmospheric dispersion model for simulating the dispersion of air pollutants in the ambient atmosphere. It was developed by Ingenieurbüro Janicke in Dunum, Germany under contract to the Federal Ministry for Environment, Nature Conservation and Nuclear Safety.
Although not named in the TA Luft, it is the reference dispersion model accepted as being in compliance with the requirements of Annex 3 of the TA Luft and the pertinent VDI Guidelines.
Description
It simulates the dispersion of air pollutants by utilizing a random-walk process (a Lagrangian particle model) and has capabilities for building effects, complex terrain, pollutant plume depletion by wet or dry deposition, and first-order chemical reactions. It is available for download on the Internet free of charge.
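In a Lagrangian random-walk model of this kind, each notional particle is advected by the mean wind and given a random turbulent displacement at every time step. The sketch below illustrates only the general scheme; it is not Austal2000's actual algorithm, turbulence parameterization, or source code, and all numbers are invented:

```python
import random

def disperse(n_particles=1_000, n_steps=600, dt=1.0,
             u=(3.0, 0.0, 0.0),       # mean wind vector, m/s (assumed)
             sigma=(0.5, 0.5, 0.2)):  # turbulent velocity scales, m/s (assumed)
    """Advect particles with the mean wind, add a Gaussian random
    velocity each step, and reflect particles at the ground."""
    particles = [[0.0, 0.0, 50.0] for _ in range(n_particles)]  # 50 m stack
    for _ in range(n_steps):
        for p in particles:
            for k in range(3):
                p[k] += (u[k] + random.gauss(0.0, sigma[k])) * dt
            p[2] = abs(p[2])          # crude ground reflection
    return particles

cloud = disperse()
mean_x = sum(p[0] for p in cloud) / len(cloud)
print(f"plume centre after 10 minutes: ~{mean_x:.0f} m downwind")
```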
Austal2000G is a similar model for simulating the dispersion of odours and it was also developed by Ingenieurbüro Janicke. The development of Austal 2000G was financed by three German states: Niedersachsen, Nordrhein-Westfalen and Baden-Württemberg.
See also
List of atmospheric dispersion models
UK Dispersion Modelling Bureau
UK Atmospheric Dispersion Modelling Liaison Committee
Further reading
www.crcpress.com
www.air-dispersion.com
References
Atmospheric dispersion modeling | AUSTAL2000 | Chemistry,Engineering,Environmental_science | 285 |
1,764,793 | https://en.wikipedia.org/wiki/Hemiaminal | In organic chemistry, a hemiaminal (also carbinolamine) is a functional group or type of chemical compound that has a hydroxyl group and an amine attached to the same carbon atom: R2C(OH)NR′2. R can be hydrogen or an alkyl group. Hemiaminals are intermediates in imine formation from an amine and a carbonyl by alkylimino-de-oxo-bisubstitution. Hemiaminals can be viewed as a blend of aminals and geminal diols. They are a special case of amino alcohols.
Classification according to amine precursor
Hemiaminals form from the reaction of an amine with a ketone or aldehyde. A hemiaminal is sometimes isolable, but often it spontaneously dehydrates to give an imine.
Addition of ammonia
The adducts formed by the addition of ammonia to aldehydes have long been studied. Compounds containing both a primary amino group and a hydroxyl group bonded to the same carbon atom are rarely stable, as they tend to dehydrate to form imines, which polymerise to hexamethylenetetramine. A rare stable example is the adduct of ammonia and hexafluoroacetone, (CF3)2C(OH)NH2.
The C-substituted derivatives are obtained by reaction of aldehydes and ammonia:
3 RCHO + 3 NH3 -> (RCHNH)3 + 3 H2O
Addition of primary amines
N-substituted derivatives are somewhat stable. They are invoked but rarely observed as intermediates in the Mannich reaction. These N,N',N''-trisubstituted hexahydro-1,3,5-triazines arise from the condensation of the amine and formaldehyde as illustrated by the route to 1,3,5-trimethyl-1,3,5-triazacyclohexane:
3 CH2O + 3 H2NMe -> (CH2NMe)3 + 3 H2O
Although adducts generated from primary amines or ammonia are usually unstable, the hemiaminals have been trapped in a cavity.
Addition of secondary amines: carbinolamines (hemiaminals) and bisaminomethanes
One of the simplest reactions entails the condensation of formaldehyde and dimethylamine. This reaction first produces the carbinolamine (a hemiaminal), which condenses with a second equivalent of amine to give bis(dimethylamino)methane:
Me2NH + CH2O -> Me2NCH2OH
Me2NH + Me2NCH2OH -> Me2NCH2NMe2 + H2O
The reaction of formaldehyde with carbazole, which is weakly basic, proceeds similarly.
Again, this carbinol converts readily to the methylene-linked bis(carbazole).
Hemiaminal ethers
Hemiaminal ethers have the following structure: R‴-C(NR'2)(OR")-R⁗. The glycosylamines are examples of cyclic hemiaminal ethers.
Use in total synthesis
Hemiaminal formation is a key step in an asymmetric total synthesis of saxitoxin:
In this reaction step the alkene group is first oxidized to an intermediate acyloin by action of osmium(III) chloride, oxone (sacrificial catalyst) and sodium carbonate (base).
See also
Aminal
Alkanolamine
Hemiacetal
References
Functional groups | Hemiaminal | Chemistry | 733 |
3,320,853 | https://en.wikipedia.org/wiki/Chemical%20process | In a scientific sense, a chemical process is a method or means of somehow changing one or more chemicals or chemical compounds. Such a chemical process can occur by itself or be caused by an outside force, and involves a chemical reaction of some sort. In an "engineering" sense, a chemical process is a method intended to be used in manufacturing or on an industrial scale (see Industrial process) to change the composition of chemical(s) or material(s), usually using technology similar or related to that used in chemical plants or the chemical industry.
Neither of these definitions is exact, in the sense that one can always tell definitively what is a chemical process and what is not; they are practical definitions. There is also significant overlap between the two variations of the definition. Because of this inexactness, chemists and other scientists use the term "chemical process" only in a general sense or in the engineering sense. However, in the "process (engineering)" sense, the term "chemical process" is used extensively. The rest of the article covers the engineering type of chemical process.
Although this type of chemical process may sometimes involve only one step, often multiple steps, referred to as unit operations, are involved. In a plant, each of the unit operations commonly occur in individual vessels or sections of the plant called units. Often, one or more chemical reactions are involved, but other ways of changing chemical (or material) composition may be used, such as mixing or separation processes. The process steps may be sequential in time or sequential in space along a stream of flowing or moving material; see Chemical plant. For a given amount of a feed (input) material or product (output) material, an expected amount of material can be determined at key steps in the process from empirical data and material balance calculations. These amounts can be scaled up or down to suit the desired capacity or operation of a particular chemical plant built for such a process. More than one chemical plant may use the same chemical process, each plant perhaps at differently scaled capacities.
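As an illustration of such a material balance and capacity scaling, consider the sketch below; the stream names, the 90% conversion, and the 2.5x factor are all invented for this example and describe no particular plant:

```python
def unit_balance(feed, conversion):
    """Steady-state mass balance around one reaction unit (t/day).
    Mass in equals mass out; the product stream is assumed to carry
    exactly the mass of reactant converted."""
    return {
        "product":   feed["reactant"] * conversion,
        "unreacted": feed["reactant"] * (1 - conversion),
        "inert":     feed["inert"],
    }

base_feed = {"reactant": 100.0, "inert": 5.0}     # base-case design, t/day
base_out = unit_balance(base_feed, conversion=0.90)
print(base_out)   # {'product': 90.0, 'unreacted': 10.0, 'inert': 5.0}; 105 in, 105 out

# Because the balance is linear in the feed, a plant built at 2.5x
# capacity simply scales every stream by the same factor.
scaled_out = {k: 2.5 * v for k, v in base_out.items()}
print(scaled_out)
```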
Chemical processes like distillation and crystallization go back to alchemy in Alexandria, Egypt.
Such chemical processes can be illustrated generally as block flow diagrams or in more detail as process flow diagrams. Block flow diagrams show the units as blocks and the streams flowing between them as connecting lines with arrowheads to show direction of flow.
In addition to chemical plants for producing chemicals, chemical processes with similar technology and equipment are also used in oil refining and other refineries, natural gas processing, polymer and pharmaceutical manufacturing, food processing, and water and wastewater treatment.
Unit processing in chemical process
Unit processing is the basic processing step in chemical engineering. Together with unit operations, it forms the foundation of the varied chemical industries. Each type of unit process follows the same chemical laws, much as each type of unit operation follows the same physical laws.
Chemical engineering unit processing consists of the following important processes:
Fractionation
Decontamination
Distillation
Filtration
Oxidation
Reduction
Refining / Refining (metallurgy)
Hydrogenation
Dehydrogenation
Hydrolysis
Hydration
Dehydration
Halogenation
Nitration
Sulfonation
Amination
Alkylation
Dealkylation
Esterification
Polymerization
Polycondensation
Purification
Catalysis
Academic research institutes in process chemistry
Institute of Process Research & Development, University of Leeds
See also
Chemical plant
Chemical reaction
Foam fractionation
Industrial process
Process (engineering)
Separation process
References
Secondary sector of the economy
Industrial processes | Chemical process | Chemistry | 699 |
12,642,304 | https://en.wikipedia.org/wiki/Human%20factors%20integration | Human Factors Integration (HFI) is the process adopted by a number of key industries (notably defence and hazardous industries like oil & gas) in Europe to integrate human factors and ergonomics into the systems engineering process. Although each industry has a slightly different domain, the underlying approach is the same.
Overview
In essence HFI tries to reconcile the top down nature of system engineering with the iterative nature of a user centred design approach (e.g. ISO 6385 or ISO 9241-210). It often does this by creating a Human Factors Integration Plan (HFIP) that sits alongside the system development plan. The purpose of the HFIP is to define how the Human Factors Engineering activities necessary for the successful delivery of a particular system will be conducted.
It establishes the guiding principles to be followed by the project to implement the best-practice Human Factors methods. As well as the principles involved, the Plan normally describes the organisation, processes and controls necessary over the entire life cycle of the system from the concept phase through to decommissioning.
Domains
HFI undertakes this by conducting a formal process that identifies and reconciles human related issues. These issues are split for convenience into domains. The seven domains defined by the US Army under its MANPRINT programme are:
Manpower - The number of military and civilian personnel required and potentially available to operate, maintain, sustain and provide training for systems
Personnel - The cognitive and physical capabilities required to be able to train for, operate, maintain and sustain systems.
Training - The instruction or education, and on-the-job or unit training required to provide personnel their essential job skills, knowledge, values and attributes.
Human Factors Engineering - The integration of human characteristics into system definition, design, development, and evaluation to optimise human-machine performance under operational conditions.
Health Hazard Assessment - Short or long term hazards to health occurring as a result of normal operation of the system.
System safety - Safety risks occurring when the system is functioning in an abnormal manner.
Soldier Survivability - The characteristics of a system that can reduce fratricide, detectability and probability of being attacked and minimize system damage, soldier injury and cognitive and physical fatigue.
The UK Ministry of Defence (MoD) adopted a similar HFI approach to MANPRINT in the early 1990s, but excluded Soldier Survivability. Subsequently the MoD added a seventh 'Social & Organisational' domain. Some industries also include habitability as a separate domain.
HFI Plan
The HFI plan scope defines the relationship between all the activities and the Human Factors domains and provides a systematic approach to ensure that:
The human role in the system is defined to optimise human performance in relation to the core system architecture and ancillary equipment.
Adequate human-equipment analyses and trade-off studies are performed, revisiting the assumptions throughout the system life cycle. The process is iterative. As the programme progresses, the HF activities involve greater depth of analysis.
Biomedical analysis and design support includes the environmental protection necessary to promote health and safety, and the capability for safe operation and maintenance of the core architecture and ancillary equipment.
Training characteristics (materials, environment, evaluation criteria, etc.) for system personnel are identified.
System testing and evaluation is conducted to verify that users can safely and effectively operate, maintain and support equipment in its intended environment.
The design meets agreed operational performance standards and where this is not the case, to modify the design or associated training in such a way that the resultant crewed system meets the required standards.
References
Notes
See also
External links
Human Factors Integration Defence Technology Centre
UK Ministry of Defence Policy, information and guidance on the HFI aspects of UK MOD Defence Acquisition, part of the MOD's Acquisition Operating Framework (AoF).
MANPRINT
Systems engineering
Ergonomics | Human factors integration | Engineering | 768 |
48,992,685 | https://en.wikipedia.org/wiki/Transition%20metal%20dithiophosphate%20complex | Transition metal dithiophosphate complexes are coordination compounds containing dithiophosphate ligands, i.e. ligands of the formula (RO)2PS2−. The homoleptic complexes have formulas M[S2P(OR)2]2 and M[S2P(OR)2]3. These neutral complexes tend to be soluble in organic solvents, especially when R is branched.
Perhaps the most important members are zinc dialkyldithiophosphates, which are oil additives. Such compounds are prepared by the reaction of the dialkoxydithiophosphoric acid with metal oxides, chlorides, and acetates.
References
Phosphorothioates | Transition metal dithiophosphate complex | Chemistry | 147 |
11,552,809 | https://en.wikipedia.org/wiki/Monographella%20albescens | Monographella albescens is a fungal plant pathogen also known as leaf scald which infects rice.
Transmission
Conidia are transferred by water splash.
Host resistance
Lines of rice that are resistant against M. albescens are available. Most resistance breeding has been in field trials in countries where the disease is already widespread. Even in "resistant" strains, however, there is some noticeable lesioning but little to no loss of yield. The mechanism of resistance remains unknown. There is wide variation in pathogen strain-host strain pathogenicity.
Rice plants fed increased silicon showed increased resistance to M. albescens. Surprisingly, this is not (or not entirely) due to silicon's structural role, but also to increased production of various compounds and enzymes.
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Xylariales
Fungi described in 1889
Fungus species | Monographella albescens | Biology | 181 |
1,882,334 | https://en.wikipedia.org/wiki/William%20Fyfe%20%28geochemist%29 | William Sefton Fyfe, (4 June 1927 – 11 November 2013) was a New Zealand geologist and Professor Emeritus in the department of Earth Sciences at the University of Western Ontario. He is widely considered among the world's most eminent geochemists.
Life
Born in Ashburton, New Zealand, he received his BSc degree in 1948, his MSc degree in 1949, and his PhD degree in 1952 all from the University of Otago, where he taught in the Geology department as a lecturer. He performed research at the University of California, Los Angeles and the University of California, Berkeley. He was a professor at Berkeley, Imperial College London and the University of Manchester before arriving at the University of Western Ontario in 1972. From 1986 until 1990 he was dean of science at the University of Western Ontario.
Honours and awards
From 1952 to 1954, Fulbright Scholar (Geology)
In 1962 and 1983, he was a Guggenheim Fellow
In 1969 he was elected a Fellow of the Royal Society
Fellow of the Royal Society of Canada
In 1970 he was elected an Honorary Fellow of the Royal Society of New Zealand.
In 1981 he was awarded the Geological Association of Canada's highest honour, the Logan Medal.
In 1985 he was awarded the Royal Society of Canada's Willet G. Miller Medal.
In 1989 he was made a Companion of the Order of Canada.
In 1989 he was awarded an honorary doctoral degree of science from Memorial University.
In 1990 he was awarded an "honoris causa" doctoral degree from the University of Lisbon.
In 1990 he was awarded the Geological Society of America's Arthur L. Day Medal.
In 1992 he was awarded the Natural Sciences and Engineering Research Council (NSERC) Canada Gold Medal for Science and Engineering.
In 1994 he was awarded an honorary doctoral degree of science from Saint Mary's University, Halifax.
In 1995 he was awarded honorary doctoral degrees of science from the University of Otago and the University of Western Ontario.
In 1995, he was awarded the Roebling Medal of the Mineralogical Society of America.
In 2000 he was awarded the Geological Society's Wollaston Medal.
In 2006 he was awarded an honorary doctoral degree of science from the University of Alberta.
Asteroid 15846 Billfyfe is named in his honour
References
External links
Biosketch
1927 births
Academics of Imperial College London
New Zealand geochemists
Companions of the Order of Canada
New Zealand fellows of the Royal Society
Fellows of the Royal Society of Canada
Foreign members of the Russian Academy of Sciences
Foreign fellows of the Indian National Science Academy
Recipients of the Great Cross of the National Order of Scientific Merit (Brazil)
2013 deaths
New Zealand emigrants to Canada
University of Otago alumni
Logan Medal recipients
Wollaston Medal winners
Academic staff of the University of Western Ontario
Academic staff of the University of Otago
20th-century New Zealand chemists
20th-century Canadian chemists | William Fyfe (geochemist) | Chemistry | 574 |
37,461,490 | https://en.wikipedia.org/wiki/Iron%28II%29%20selenide | Iron(II) selenide refers to a number of inorganic compounds of ferrous iron and selenide (Se2−). The phase diagram of the system Fe–Se reveals the existence of several non-stoichiometric phases between ~49 and ~53 at.% Se, at temperatures up to ~450 °C. The low-temperature stable phases are the tetragonal PbO-structure (P4/nmm) β-Fe1−xSe and α-Fe7Se8. The high-temperature phase is the hexagonal, NiAs-structure (P63/mmc) δ-Fe1−xSe. Iron(II) selenide occurs naturally as the NiAs-structure mineral achavalite.
More selenium rich iron selenide phases are the γ phases (γ and γˈ), assigned the Fe3Se4 stoichiometry, and FeSe2, which occurs as the marcasite-structure natural mineral ferroselite, or the rare pyrite-structure mineral dzharkenite.
It is used in electrical semiconductors.
Superconductivity
β-FeSe is the simplest iron-based superconductor but with diverse properties. It starts to superconduct at 8 K at normal pressure but its critical temperature (Tc) is dramatically increased to 38 K under pressure, by means of intercalation, or after quenching at high pressures. The combination of both intercalation and pressure results in re-emerging superconductivity at 48 K.
In 2013 it was reported that a single atomic layer of FeSe epitaxially grown on SrTiO3 is superconductive, with a then-record transition temperature for iron-based superconductors of 70 K. This discovery attracted significant attention, and in 2014 a superconducting transition temperature of over 100 K was reported for this system.
References
Iron(II) compounds
Selenides
Semiconductor materials
Nickel arsenide structure type
Superconductors | Iron(II) selenide | Chemistry,Materials_science | 423 |
63,513,992 | https://en.wikipedia.org/wiki/NGC%20767 | NGC 767 is a barred spiral galaxy located in the constellation Cetus about 241 million light years from the Milky Way. It was discovered by the American astronomer Francis Leavenworth in 1886.
One supernova has been observed in NGC 767: SN 2019lre (type II, mag. 19.2).
See also
List of NGC objects (1–1000)
References
External links
Barred spiral galaxies
Cetus
0767
007483
Discoveries by Francis Leavenworth
Astronomical objects discovered in 1886 | NGC 767 | Astronomy | 105 |
12,585,208 | https://en.wikipedia.org/wiki/Water%20cluster | In chemistry, a water cluster is a discrete hydrogen bonded assembly or cluster of molecules of water. Many such clusters have been predicted by theoretical models (in silico), and some have been detected experimentally in various contexts such as ice, bulk liquid water, in the gas phase, in dilute mixtures with non-polar solvents, and as water of hydration in crystal lattices. The simplest example is the water dimer (H2O)2.
Water clusters have been proposed as an explanation for some anomalous properties of liquid water, such as its unusual variation of density with temperature. Water clusters are also implicated in the stabilization of certain supramolecular structures. They are expected to play a role also in the hydration of molecules and ions dissolved in water.
Theoretical predictions
Detailed water models predict the occurrence of water clusters, as configurations of water molecules whose total energy is a local minimum.
Of particular interest are the cyclic clusters (H2O)n; these have been predicted to exist for n = 3 to 60. At low temperatures, nearly 50% of water molecules are included in clusters. With increasing cluster size, the oxygen-to-oxygen distance is found to decrease, which is attributed to so-called cooperative many-body interactions: due to a change in charge distribution, the H-acceptor molecule becomes a better H-donor molecule with each expansion of the water assembly. Many isomeric forms seem to exist for the hexamer (H2O)6: from ring, book, bag, cage, to prism shape with nearly identical energy. Two cage-like isomers exist for heptamers (H2O)7, and octamers (H2O)8 are found either cyclic or in the shape of a cube.
Other theoretical studies predict clusters with more complex three-dimensional structures. Examples include the fullerene-like cluster (H2O)28, named the water buckyball, and the 280-water-molecule monster icosahedral network (with each water molecule coordinated to four others). The latter, which is 3 nm in diameter, consists of nested icosahedral shells with 280 and 100 molecules. There is also an augmented version with another shell of 320 molecules. There is increased stability with the addition of each shell. There are theoretical models of water clusters of more than 700 water molecules, but they have not been observed experimentally. One line of research uses graph invariants for generating hydrogen bond topologies and predicting physical properties of water clusters and ice. The utility of graph invariants was shown in a study considering the (H2O)6 cage and (H2O)20 dodecahedron, which are associated with roughly the same oxygen atom arrangements as in the solid and liquid phases of water.
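As a toy illustration of the graph-based view, water molecules can be treated as nodes joined when their O–O separation falls below a hydrogen-bonding cutoff; the 3.5 Å cutoff and the idealized trimer coordinates below are assumptions, and real studies also apply angular criteria:

```python
import math
from itertools import combinations

def hbond_graph(oxygens, cutoff=3.5):
    """Return index pairs of water molecules whose O-O distance is
    within hydrogen-bonding range (distance criterion only)."""
    return [(i, j)
            for (i, a), (j, b) in combinations(enumerate(oxygens), 2)
            if math.dist(a, b) < cutoff]

trimer = [(0.0, 0.0, 0.0), (2.8, 0.0, 0.0), (1.4, 2.4, 0.0)]  # angstroms, near-cyclic
print(hbond_graph(trimer))   # [(0, 1), (0, 2), (1, 2)], a 3-cycle
```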
Experimental observations
Experimental study of any supramolecular structures in bulk water is difficult because of their short lifetime: the hydrogen bonds are continually breaking and reforming at timescales faster than 200 femtoseconds.
Nevertheless, water clusters have been observed in the gas phase and in dilute mixtures of water and non-polar solvents like benzene and liquid helium. The experimental detection and characterization of the clusters has been achieved with the following methods: far-infrared (FIR) spectroscopy, vibration-rotation-tunneling (VRT) spectroscopy, 1H-NMR, and neutron diffraction. The hexamer is found to have planar geometry in liquid helium, a chair conformation in organic solvents, and a cage structure in the gas phase. Experiments combining IR spectroscopy with mass spectrometry reveal cubic configurations for clusters in the range n = 8–10.
When the water is part of a crystal structure as in a hydrate, x-ray diffraction can be used. Conformation of a water heptamer was determined (cyclic twisted nonplanar) using this method. Further, multi-layered water clusters with formulae (H2O)100 trapped inside cavities of several polyoxometalate clusters were also reported by Mueller et al.
Cluster models of bulk liquid water
Several models attempt to account for the bulk properties of water by assuming that they are dominated by cluster formation within the liquid. According to the quantum cluster equilibrium (QCE) theory of liquids, n=8 clusters dominate the liquid water bulk phase, followed by n=5 and n=6 clusters. Near the triple point, the presence of an n=24 cluster is invoked. In another model, bulk water is built up from a mixture of hexamer and pentamer rings containing cavities capable of enclosing small solutes. In yet another model, an equilibrium exists between a cubic water octamer and two cyclic tetramers. However, none of these models has yet reproduced the experimentally observed density maximum of water as a function of temperature.
See also
Hydrogen bond
Mpemba effect
Properties of water
Richard J. Saykally
References
External links
Water clusters at London South Bank University
The Cambridge Cluster Database - Includes water clusters calculated with various water models and the water clusters explored with ab initio methods.
Cluster chemistry
Water chemistry | Water cluster | Chemistry | 1,060 |
51,072,866 | https://en.wikipedia.org/wiki/Department%20for%20Business%2C%20Energy%20and%20Industrial%20Strategy | The Department for Business, Energy, and Industrial Strategy (BEIS) was a ministerial department of the United Kingdom Government, from July 2016 to February 2023.
The department was formed during a machinery of government change on 14 July 2016, following Theresa May's appointment as Prime Minister. It was created by a merger between the Department for Business, Innovation, and Skills and the Department of Energy and Climate Change.
On 7 February 2023, under the Rishi Sunak premiership, the department was dissolved. Its functions were split into three new departments: the Department for Business and Trade, the Department for Energy Security and Net Zero, and the Department for Science, Innovation, and Technology. Grant Shapps, the final secretary of state for the old department, became the first Secretary of State for Energy Security and Net Zero.
Responsibilities
The department had responsibility for:
business
industrial strategy
science, research, and innovation
deregulation
energy and clean growth
climate change
Some functions of the former Department for Business, Innovation, and Skills, in respect of higher and further education policy, apprenticeships, and skills, were transferred to the Department for Education. May explained in a statement: "The Department for Energy and Climate Change and the remaining functions of the Department for Business, Innovation, and Skills have been merged to form a new Department for Business, Energy, and Industrial Strategy, bringing together responsibility for business, industrial strategy, science, and innovation with energy and climate change policy. The new department will be responsible for helping to ensure that the economy grows strongly in all parts of the country, based on a robust industrial strategy. It will ensure that the UK has energy supplies that are reliable, affordable, and clean, and it will make the most of the economic opportunities of new technologies and support the UK's global competitiveness more effectively."
Research and innovation partnerships in low and middle-income countries
BEIS spends part of the overseas aid budget on research and innovation through two major initiatives: The Newton Fund and the Global Challenges Research Fund, or GCRF. Both funds aim to leverage the UK's world-class research and innovation capacity to pioneer new ways to support economic development, social welfare, and long-term sustainable and equitable growth in low- and middle-income countries. The Newton Fund builds research and innovation partnerships with partner countries to support their economic development and social welfare and to develop their research and innovation capacity for long-term sustainable growth. The fund is delivered through seven UK delivery partners.
National Security and Investment Act 2021
In August 2022, BEIS blocked the sale of Pulsic Limited in Bristol to a company owned by China's National Integrated Circuit Industry Investment Fund. Pulsic is a chip design software company which makes tools to design and develop circuit layouts for chips.
In November 2022, BEIS ordered Nexperia to sell at least 86 percent of Newport Wafer Fab, the largest chipmaking facility in the UK, which it had acquired in July 2021. In 2018, a Chinese corporation by the name of Wingtech Technology acquired Nexperia.
Devolution
Some responsibilities extend to England alone due to devolution, while others are reserved or excepted matters that therefore apply to the other countries of the United Kingdom as well.
Reserved and excepted matters are outlined below.
Scotland
Reserved matters:
The Economy Directorates of the Scottish Government handle devolved economic policy.
Northern Ireland
Reserved matters:
Climate change policy
Competition
Consumer protection
Import and export control
Export licensing
Intellectual property
Nuclear energy
Postal services
Product standards, safety and liability
Research councils
Science and research
Telecommunications
Units of measurement
Excepted matter:
Outer space
Nuclear power
The department's main counterpart is:
Department for the Economy (general economic policy)
Ministers
The final roster of ministers in the Department for Business, Energy and Industrial Strategy was:
In October 2016, Archie Norman was appointed as Lead Non-Executive board member for BEIS.
References
Business, Energy and Industrial Strategy
2016 establishments in the United Kingdom
Business in the United Kingdom
Economy ministries
Energy ministries
Innovation ministries
Research ministries
Energy in the United Kingdom
Innovation in the United Kingdom
Ministries established in 2016
2023 disestablishments in the United Kingdom
Government agencies disestablished in the 2020s | Department for Business, Energy and Industrial Strategy | Engineering | 845 |
24,633,146 | https://en.wikipedia.org/wiki/Pedro%20E.%20Zadunaisky | Pedro Elías Zadunaisky (December 10, 1917 – October 7, 2009) was an Argentine astronomer and mathematician who plotted the orbit of Saturn's most-distant moon, Phoebe, as well as several comets including Halley's Comet, and various satellites including Explorer I.
Zadunaisky was born in Rosario, Santa Fe. He was once a senior astronomer and a mathematician at the Smithsonian Astrophysical Observatory and at NASA's Goddard Space Flight Center. 4617 Zadunaisky is an asteroid named in his honor. He died on October 7, 2009, at the age of 91. He wrote the book "A Guide to Celestial Mechanics" in 1961.
References
1917 births
2009 deaths
20th-century Argentine mathematicians
Argentine Jews
20th-century Argentine astronomers
People from Rosario, Santa Fe | Pedro E. Zadunaisky | Astronomy | 161 |
76,023,439 | https://en.wikipedia.org/wiki/Shen%20Qiang%20%28engineer%29 | Professor Qiang Shen is an academic and engineer. He is an expert in the research and development of data modelling and analysis and currently serves as Pro Vice-Chancellor at Aberystwyth University. As of 2023, he has published 450 peer-reviewed papers in electronic engineering and computing journals. His expertise is often applied to critical intelligent decision support systems, with a focus on an increased level of automation, efficiency and reliability.
Career & research
In 2004, Shen published a paper with the Institute of Electrical and Electronics Engineers in which he and co-authors studied methodologies and approaches for semantics-preserving dimensionality reduction. In 2009, he was the recipient of the Computational Intelligence Society Outstanding Paper Award from the Institute of Electrical and Electronics Engineers for his work on fuzzy systems. In 2012, as part of the London 2012 Olympics celebration, the Olympic torch passed through the Welsh town of Aberystwyth. Shen was selected by Aberystwyth University to be one of the two torchbearers of the Olympic torch as it passed through the town. During the same year, Shen was elected as a council member of the Learned Society of Wales.
In 2017, Shen published research in the journal Remote Sensing which showed that using spectral–spatial information can considerably improve the performance of hyperspectral image (HSI) classification. In 2018, Shen took part in informal hearings and meetings on the analysis of data for the Review of Government Funded Research and Innovation in Wales carried out by the Welsh Government. In 2021, he was part of the sub-panel for Computer Science and Informatics on the 2021 Research Excellence Framework. He became a Royal Academy of Engineering fellow in 2022.
The real-life applications of Shen's work include fields such as space exploration, counterterrorism, process monitoring, transportation management and consumer profiling. Shen currently serves as Pro Vice-Chancellor for the Faculty of Business and Physical Sciences at Aberystwyth University.
References
Living people
Electrical engineers
Year of birth missing (living people) | Shen Qiang (engineer) | Engineering | 406 |
12,545,981 | https://en.wikipedia.org/wiki/Topological%20entropy%20in%20physics | The topological entanglement entropy or topological entropy, usually denoted by $\gamma$, is a number characterizing many-body states that possess topological order.
A non-zero topological entanglement entropy reflects the presence of long-range quantum entanglement in a many-body quantum state. So the topological entanglement entropy links topological order with the pattern of long-range quantum entanglement.
Given a topologically ordered state, the topological entropy can be extracted from the asymptotic behavior of the Von Neumann entropy measuring the quantum entanglement between a spatial block and the rest of the system. The entanglement entropy of a simply connected region of boundary length L, within an infinite two-dimensional topologically ordered state, has the following form for large L:
$S_A = \alpha L - \gamma + \mathcal{O}(L^{-\nu}), \qquad \nu > 0,$
where $\gamma$ is the topological entanglement entropy.
The topological entanglement entropy is equal to the logarithm of the total quantum dimension of the quasiparticle excitations of the state.
For example, the simplest fractional quantum Hall states, the Laughlin states at filling fraction 1/m, have γ = ½ log(m). The Z2 fractionalized states, such as topologically ordered states of Z2 spin-liquids, quantum dimer models on non-bipartite lattices, and Kitaev's toric code state, are characterized by γ = log(2).
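These values follow from the relation between γ and the total quantum dimension stated above. As a short worked illustration (standard results for abelian anyon models, sketched here rather than quoted from this article's references):
\gamma = \log \mathcal{D}, \qquad \mathcal{D} = \sqrt{\sum_a d_a^2},
% where the sum runs over the anyon types of the phase and d_a are their
% quantum dimensions. Toric code: four abelian anyons (1, e, m, em), each d_a = 1:
\mathcal{D} = \sqrt{1 + 1 + 1 + 1} = 2 \quad\Rightarrow\quad \gamma = \log 2.
% Laughlin state at filling 1/m: m abelian anyons, each with d_a = 1:
\mathcal{D} = \sqrt{m} \quad\Rightarrow\quad \gamma = \tfrac{1}{2}\log m.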
See also
Quantum topology
Topological defect
Topological order
Topological quantum field theory
Topological quantum number
Topological string theory
References
Calculations for specific topologically ordered states
Condensed matter physics
Statistical mechanics
Entropy | Topological entropy in physics | Physics,Chemistry,Materials_science,Mathematics,Engineering | 311 |
33,397,451 | https://en.wikipedia.org/wiki/SolarPark%20Korea | SolarPark Korea Co., Ltd. is a South Korean crystalline silicon module manufacturer. Founded in 2008 as a German-Korean joint venture, the company combines German and Korean machinery and engineering in its automated module fabrication lines. In June 2011, SolarPark Korea became a 100% subsidiary of the SolarPark Co., Ltd.
History
April 1981
The automated machinery company INMAC is established in Sogong-dong, Joong-gu, Seoul, Korea
July 1990
A new factory is established in Songne-dong, Soda-gu, Bucheon-si, Gyeonggi-do, Korea
The headquarters is relocated to the same location
March 1997
INMAC is selected as a prospective medium & small enterprise of Gyeonggi-do
April 1999
INMAC is selected as a prospective medium & small exporting enterprise
(Small & Medium Business Corporation)
April 2007
Establishment of SolarPark Co., Ltd.
November 2007
SolarPark is awarded the 3 Million Dollar Export Tower Award and presidential citation during the 44th Trade day
April 2008
Establishment of SolarPark Korea Co., Ltd. (previously SolarWorld Korea Co., Ltd.)
Established as a 50/50 JV with SolarWorld AG
September 2008
Construction of Gochang Solar Park (15 MWp) completed
November 2008
SolarPark Korea completes the first stage construction of its module production factory (annual capacity 60 MW)
September 2009
Solarpark Korea completes the second stage construction of its module production factory (annual capacity 90 MW)
February 2010
Construction of the Inline Mechanics Co., Ltd. factory completed
April 2010
SolarPark Korea completes the third stage construction of its module production factory (annual capacity 100 MW)
Total capacity: 250 MW
October 2010
IEC 61215, 61730-1, 61730-2 certifications received
November 2010
SolarPark Korea is awarded the Stone Tower Industrial Medal for achieving 300 million dollars in exports during the 47th Trade Day
June 2011
SolarWorld AG's 50% share of SolarPark Korea is acquired by SolarPark Korea.
SolarPark Korea now owns 100% of shares
September 2011
ISO 9001 & ISO 14001 certifications received
April 2012
Construction of second module production factory (annual capacity : 300 MW + 50 MW) completed
Total capacity: 600 MW, 5th largest production capacity in Asia (excluding Chinese manufacturers)
July 2012
Merger of the three companies, SolarPark Korea Co., Ltd., SolarPark Co., Ltd., and Inline Mechanics Co., Ltd., completed
October 2012
Passed PID test by TUV-SUD
Automation in production
SolarPark Korea espouses automation in production to achieve consistent high-quality, high-volume and cost-competitive output. Its module production lines achieve a capacity-per-employee of 0.83 MW.
SolarPark Korea's production lines comprise machines from equipment makers such as Somont, 3S, Berger Lichttechnik, Pasan and Schleich. Equipment integration is provided by affiliate Inline Mechanics.
References
External links
Photovoltaics manufacturers
Manufacturing companies of South Korea
South Korean companies established in 2008
Solar energy companies
South Korean brands
Manufacturing companies established in 2008 | SolarPark Korea | Engineering | 629 |
2,611,788 | https://en.wikipedia.org/wiki/John%20James%20Abert | John James Abert (17 September 1788 – 27 January 1863) was an American soldier. He headed the Corps of Topographical Engineers for 32 years, during which time he organized the mapping of the American West.
Abert was born in Shepherdstown, Virginia (now West Virginia; also disputed to be Frederick, Maryland) to John Abert and Margarita Meng, his father being said to have emigrated to the States as a soldier with Jean-Baptiste Donatien de Vimeur, comte de Rochambeau in 1780. He graduated from West Point in 1811, but declined a commission to practice law. After leaving West Point, he married Ellen Matlack Stretch in January 1812. He enlisted in the D.C. Militia during the War of 1812, and rejoined the army as a topographical engineer with the rank of brevet Major in October 1814. Abert volunteered as a private in the District of Columbia Militia for the defense of Washington in 1814, and was brevetted Major, Staff Topographical Engineer, for gallantry at the Battle of Bladensburg, Maryland, August 24, 1814.
His son, James William Abert, who also became a member of the corps, was born in 1820. In March 1829, John Abert was appointed to the leadership of the corps, and promoted to colonel in July 1838. Officers working under him were responsible for the exploration and mapping of the lands west of the Mississippi River. He was elected a member of the American Philosophical Society in 1832 and an Associate Fellow of the American Academy of Arts and Sciences in 1845.
In 1818, the US War Department created the Topographic Bureau as part of the Corps of Engineers, under the command of Major Isaac Roberdeau. The Topographic Bureau was assigned six men and was to collect and store maps and topographical reports. When Roberdeau died in 1829, Abert became the head of the Bureau. He wanted to be free from the oversight of the United States Army Corps of Engineers and establish a separate Topographic Corps. In 1831, Abert was able to persuade Congress to remove the topographic engineers from the Corps of Engineers and place them directly under the United States Secretary of War. In 1838, Abert was appointed to the command of the Corps of Topographical Engineers, a position he would hold for 23 years. The Corps of Topographical Engineers had grown by then to thirty-six officers, including six majors, four captains by brevet, six civil engineers and twenty subalterns of the line. Abert recruited the best soldier-scientists he could find. These included John C. Frémont, William H. Emory and Andrew A. Humphreys.
Abert was a member of a number of legal, geographical and scientific societies. He was also a member of the Geographical Society of Paris, the Société de Géographie. He retired from the Army in September 1861. Abert died in Washington, D.C., and was buried in Rock Creek Cemetery.
When he died, the US War Department wrote about Abert's accomplishments with the Corps of Topographical Engineers: "The Army and the country will not need to be reminded of the vast interest and value attached to the operations of this Corps since its organization. The geographical and other information concerning this continent which its officers have collected and published has challenged the admiration of the scientific world, while the practical benefit of their labors has been felt in nearly every State and every Territory; the whole forming a proud monument to him who was its founder. As a citizen and a man, Colonel Abert was remarkable for the steadfastness of his friendships, for his candor and unostentatious hospitality. Equally unostentatious, but no less sincere, was the simple piety which supported his declining years, and left behind an example which the proudest soldier would not be ashamed to follow."
Abert is the namesake of Lake Albert in South Dakota. Captain John C. Frémont named Lake Abert and Abert Rim in his honor when his 1843 expedition passed through southern Oregon. Abert's squirrel was also named after him.
Children
Abert's children include:
James William Abert (1820–1897) soldier, explorer, ornithologist and topographical artist
Silvanus Thayer Abert (1828–1903), civil engineer
William Stretch Abert (1836–1867), soldier
References
West Point biography
External links
Appleton's Cyclopedia of American Biography, edited by James Grant Wilson, John Fiske and Stanley L. Klos. Six volumes, New York: D. Appleton and Company, 1887-1889
Abert Family Papers Missouri History Museum Archives
Career profile
1788 births
1863 deaths
American militiamen in the War of 1812
American topographers
Burials at Rock Creek Cemetery
Engineers from Washington, D.C.
Engineers from West Virginia
Explorers of Oregon
Fellows of the American Academy of Arts and Sciences
Members of the American Philosophical Society
Military personnel from Washington, D.C.
Military personnel from West Virginia
People from Shepherdstown, West Virginia
United States Army Corps of Topographical Engineers
United States Army officers
United States Military Academy alumni | John James Abert | Engineering | 1,026 |
14,120,062 | https://en.wikipedia.org/wiki/ELK1 | ETS Like-1 protein Elk-1 is a protein that in humans is encoded by the ELK1 gene. Elk-1 functions as a transcription activator. It is classified as a ternary complex factor (TCF), a subclass of the ETS family, which is characterized by a common protein domain that regulates DNA binding to target sequences. Elk1 plays important roles in various contexts, including long-term memory formation, drug addiction, Alzheimer's disease, Down syndrome, breast cancer, and depression.
Structure
As depicted in Figure 1, the Elk1 protein is composed of several domains. Localized in the N-terminal region, the A domain is required for the binding of Elk1 to DNA. This region also contains a nuclear localization signal (NLS) and a nuclear export signal (NES), which are responsible for nuclear import and export, respectively. The B domain allows Elk1 to bind to a dimer of its cofactor, serum response factor (SRF). Located adjacent to the B domain, the R domain is involved in suppressing Elk1 transcriptional activity. This domain harbors the lysine residues that are likely to undergo SUMOylation, a post-translational event that strengthens the inhibition function of the R domain. The D domain plays the key role of binding to active Mitogen-activated protein kinases (MAPKs). Located in the C-terminal region of Elk1, the C domain includes the amino acids that actually become phosphorylated by MAPKs. In this region, Serine 383 and 389 are key sites that need to be phosphorylated for Elk1-mediated transcription to occur. Finally, the DEF domain is specific for the interaction of activated extracellular signal-regulated kinase (Erk), a type of MAPK, with Elk1.
Expression
Given its role as a transcription factor, Elk1 is expressed in the nuclei of non-neuronal cells. The protein is present in the cytoplasm as well as in the nucleus of mature neurons. In post-mitotic neurons, a variant of Elk1, sElk1, is expressed solely in the nucleus because it lacks the NES site present in the full-length protein. Moreover, while Elk1 is broadly expressed, actual levels vary among tissues. The rat brain, for example, is extremely rich in Elk1, but the protein is exclusively expressed in neurons.
Splice variants
Aside from the full-length protein, the Elk1 gene can yield two shortened versions of Elk1: ∆Elk1 and sElk1. Alternative splicing produces ∆Elk1. This variant lacks part of the DNA-binding domain that allows interaction with SRF. On the other hand, sElk1 has an intact region that binds to SRF, but it lacks the first 54 amino acids that contain the NES. Found only in neurons, sElk1 is created by employing an internal translation start site. Both ∆Elk1 and sElk1, truncated versions of full-length protein, are capable of binding to DNA and inducing various cellular signaling. In fact, sElk1 counteracts Elk1 in neuronal differentiation and the regulation of nerve growth factor/ERK signaling.
Signaling
The downstream target of Elk1 is the serum response element (SRE) of the c-fos proto-oncogene. To produce c-fos, a protein encoded by the Fos gene, Elk1 needs to be phosphorylated by MAPKs at its C-terminus. MAPKs are the final effectors of signal transduction pathways that begin at the plasma membrane. Phosphorylation by MAPKs results in a conformational change of Elk1. As seen in Figure 2, Raf kinase acts upstream of MAPKs to activate them by phosphorylating and, thereby activating, MEKs, or MAPK or ERK kinases. Raf itself is activated by Ras, which is linked to growth factor receptors with tyrosine kinase activity via Grb2 and Sos. Grb2 and Sos can stimulate Ras only after the binding of growth factors to their corresponding receptors. However, Raf activation does not exclusively depend on Ras. Protein kinase C, which is activated by phorbol esters, can fulfill the same function as Ras. MEK kinase (MEKK) can also activate MEKs, which then activate MAPKs, making Raf unnecessary at times. Various signal transduction pathways, therefore, funnel through MEKs and MAPKs and lead to the activation of Elk1. After stimulation of Elk1, SRF, which allows Elk1 to bind to the c-fos promoter, must be recruited. The binding of Elk1 to SRF happens due to protein-protein interaction between the B domain of Elk1 and SRF and the protein-DNA interaction via the A domain.
The aforementioned proteins are like recipes for a certain signaling output. If one of these ingredients, such as SRF, is missing, then a different output occurs. In this case, lack of SRF leads to Elk1's activation of another gene. Elk1 can, thus, independently interact with an ETS binding site, as in the case of the lck proto-oncogene in Figure 2. Moreover, the spacing and relative orientation of the Elk1 binding site to the SRE is rather flexible, suggesting that the SRE-regulated early genes other than c-fos could be targets of Elk1. egr-1 is an example of an Elk1 target that depends on SRE interaction. Ultimately, phosphorylation of Elk1 can result in the production of many proteins, depending on the other factors involved and their specific interactions with each other.
When studying signaling pathways, mutations can further highlight the importance of each component used to activate the downstream target. For instance, disruption of the C-terminal domain of Elk1 that MAPK phosphorylates triggers inhibition of c-fos activation. Similarly, dysfunctional SRF, which normally tethers Elk1 to the SRE, leads to Fos not being transcribed. At the same time, without Elk1, SRF cannot induce c-fos transcription after MAPK stimulation. For these reasons, Elk1 represents an essential link between signal transduction pathways and the initiation of gene transcription.
Clinical significance
Long-term memory
Formation of long-term memory may be dependent on Elk1. MEK inhibitors block Elk1 phosphorylation and, thus, impair acquired conditioned taste aversion. Moreover, avoidance learning, which involves the subject learning that a particular response leads to prevention of an aversive stimulus, is correlated with a definite increase in activation of Erk, Elk1, and c-fos in the hippocampus. This area of the brain is involved in short-term and long-term information storage. When Elk1 or SRF binding to DNA is blocked in the rat hippocampus, only sequestration of SRF interferes with long-term spatial memory. While the interaction of Elk1 with DNA may not be essential for memory formation, its specific role still needs to be explored. This is because activation of Elk1 can trigger other molecular events that do not require Elk1 to bind DNA. For example, Elk1 is involved in the phosphorylation of histones, increased interaction with SRF, and recruitment of the basal transcriptional machinery, all of which do not require direct binding of Elk1 to DNA.
Drug addiction
Elk1 activation plays a central role in drug addiction. After mice are given cocaine, a strong and momentary hyperphosphorylation of Erk and Elk1 is observed in the striatum. When these mice are then given MEK inhibitors, Elk1 phosphorylation is absent. Without active Elk1, c-fos production and cocaine-induced conditioned place preference are shown to be blocked. Moreover, acute ethanol ingestion leads to excessive phosphorylation of Elk1 in the amygdala. Silencing of Elk1 activity has also been found to decrease cellular responses to withdrawal signals and lingering treatment of opioids, one of the world's oldest known drugs. Altogether, these results highlight that Elk1 is an important component of drug addiction.
Pathophysiology
Buildup of beta amyloid (Aβ) peptides is shown to cause and/or trigger Alzheimer's disease. Aβ interferes with BDNF-induced phosphorylation of Elk1. With Elk1 activation being hindered in this pathway, the SRE-driven gene regulation leads to increased vulnerability of neurons. Elk1 also inhibits transcription of presenilin 1 (PS1), which encodes a protein that is necessary for the last step of the sequential proteolytic processing of amyloid precursor protein (APP). APP makes variants of Aβ (Aβ42/43 polypeptide). Moreover, PS1 is genetically associated with most early-onset cases of familial Alzheimer's disease. These data emphasize the intriguing link between Aβ, Elk1, and PS1.
Another condition associated with Elk1 is Down syndrome. Fetal and aged mice with this pathophysiological condition have shown a decrease in the activity of calcineurin, the major phosphatase for Elk1. These mice also have age-dependent changes in ERK activation. Moreover, expression of SUMO3, which represses Elk1 activity, increases in the adult Down syndrome patient. Therefore, Down syndrome is correlated with changes in ERK, calcineurin, and SUMO pathways, all of which act antagonistically on Elk1 activity.
Elk1 also interacts with BRCA1 splice variants, namely BRCA1a and BRCA1b. This interaction enhances BRCA1-mediated growth suppression in breast cancer cells. Elk1 may be a downstream target of BRCA1 in its growth control pathway. Recent literature reveals that c-fos promoter activity is inhibited, while overexpression of BRCA1a/1b reduces MEK-induced activation of the SRE. These results show that one mechanism of growth and tumor suppression by BRCA1a/1b proteins acts through repression of the expression of Elk1 downstream target genes like Fos.
Depression has been linked with Elk1. Decreased Erk-mediated Elk1 phosphorylation is observed in the hippocampus and prefrontal cortex of post-mortem brains of suicidal individuals. Imbalanced Erk signaling is correlated with depression and suicidal behavior. Future research will reveal the exact role of Elk1 in the pathophysiology of depression.
References
External links
Transcription factors | ELK1 | Chemistry,Biology | 2,210 |
41,972,932 | https://en.wikipedia.org/wiki/Soundwalk | A soundwalk is a walk with a focus on listening to the environment. The term was first used by members of the World Soundscape Project under the leadership of composer R. Murray Schafer in Vancouver in the 1970s. Hildegard Westerkamp, from the same group of artists and founder of the World Forum of Acoustic Ecology, defines soundwalking as "... any excursion whose main purpose is listening to the environment. It is exposing our ears to every sound around us no matter where we are."
Schafer was particularly interested in the implications of the changes in soundscapes in industrial societies in children, and children's relationship to the world through sound. He was a proponent of ear-cleaning (cleaning one's ears cognitively), and he saw soundwalking as an important part of this process of re-engaging our aural senses in finding our place in the world.
Westerkamp used soundwalks to create multiple soundart pieces. "Cricket Voice", "A Walk Through the City", and "Beneath the Forest Floor" are all soundwalk inspired works.
Soundwalking has also been used as artistic medium by visual artists and documentary makers, such as Janet Cardiff.
In 2018 the sound artist Francesco Giomi introduced the term "soundride", a direct derivative of the soundwalk but undertaken by bicycle, which makes it possible to reach more distant points of sonic interest.
Other terms
Other terms closely related to soundwalking and used by Schafer include:
Keynote: typically ambient sounds which are not perceived, not because they are inaudible but because they are filtered out cognitively, such as a highway or air-condition hum
Soundmark: a sonic landmark; a sound which is characteristic of a place
Sound signal: a foreground sound; e.g. a dog, an alarm clock; messages/meaning is usually carried through sound signals.
Sound object: the smallest possible recognizable sonic entity (recognizable by its amplitude envelope)
Acousmatic: a description for sounds whose sources are out of sight or unknown. This also relates to acousmatic music.
See also
Soundscape ecology
Acousmatic music
Sound art
Shinrin-yoku
References
Sound
Acoustics | Soundwalk | Physics | 458 |
77,542,818 | https://en.wikipedia.org/wiki/NGC%206078 | NGC 6078 is an elliptical galaxy in the constellation of Hercules. Its velocity with respect to the cosmic microwave background is 9459 ± 44 km/s, which corresponds to a Hubble distance of 139.52 ± 9.81 Mpc (∼455 million light-years). It was discovered by French astronomer Édouard Stephan on 21 June 1876.
Very close to NGC 6078 are the galaxies PGC 57459 and SDSS J161206.68+141210.3.
One supernova has been observed in NGC 6078: SN 2011dv (type Ia, mag 16.2) was discovered by the Italian Supernovae Search Project on 28 June 2011.
See also
List of NGC objects (6001–7000)
References
External links
6078
057460
+02-41-017
Hercules (constellation)
18760621
Discoveries by Édouard Stephan
Elliptical galaxies | NGC 6078 | Astronomy | 189 |
14,573,421 | https://en.wikipedia.org/wiki/Traditional%20knowledge%20GIS | Traditional knowledge Geographic Information Systems (GIS) is a toolset of systems that uses data, techniques, and technologies designed to document and utilize local knowledge in communities around the world. Traditional knowledge is information that encompasses the experiences of a particular culture or society. Traditional knowledge GIS differ from ordinary cognitive maps in that they express environmental and spiritual relationships among real and conceptual entities. This toolset focuses on cultural preservation, land rights disputes, natural resource management, and economic development.
Technical aspects
Traditional knowledge GIS employs cartographic and database management techniques such as participatory GIS, map biographies, and historical mapping. Participatory GIS aspires to a mutually beneficial relationship between the governing and the governed by fostering public involvement in all aspects of a GIS. It is widely accepted that this technique is necessary for sound environmental and economic planning in developing areas. This method generates a sense of place in scientific analysis that incorporates sacred sites and traditional land use practices. Participatory GIS can be effective for local resource management and planning, but researchers doubt its efficacy as a tool in attaining land tenure or fighting legal battles because of lack of expertise among local individuals and lack of access to technology.
Map biographies track the practices of local communities either for the sake of preservation or to argue for resource protection or land grants. GIS technologies are powerful in their ability to accommodate multimedia and multidimensional data sets, which allows for the recording and playing of oral histories and representations of abstract ecological knowledge.
Historical mapping documents and analyzes events that are meaningful to a particular tradition or locale. Cultural and humanitarian benefits can be derived from including maps in the historical record of an area.
Cultural preservation
Cultural preservation is perhaps the principal application of a traditional knowledge GIS. As adherents to traditional lifestyles decline in population, a degree of urgency has developed around the collection of data and wisdom from aging local elders. A central feature of cultural preservation is language revitalization. Bilingual visual and auditory maps depict oral traditions and historical information in places of cultural significance at various scales and levels of detail.
Researchers encounter significant obstacles to data acquisition due to the sensitive nature of much of the data sought for a traditional knowledge GIS, and locals may distrust the motives of outside consultants.
Land rights and natural resource management
Traditional knowledge GIS can influence debates over land rights and resource management in ecologically sensitive areas. Interests of local residents in these regions often conflict with those of migrant workers, state conservation units, and domestic and foreign mining or logging enterprises. GIS hardware and software are used to identify spatial trends in interpreting these conflicts.
Economic development
Economic development through traditional knowledge GIS is subject to local ownership over the systems and full access to relevant data and training. This situation is rare outside of industrialized nations, so little progress has been made in this field of research.
Current issues and effectiveness
Implementations of traditional knowledge GIS differ markedly across geographies. Though developing nations utilize some forms of participatory GIS, communities there are less likely to gain access to expensive databases and cartographic methods than those in developed nations.
The overall effectiveness of traditional knowledge GIS has not been determined conclusively. Advocates for traditional mapping point to successes in acquiring land titles, managing local databases, and creating new skill sets for local communities worldwide. Detractors cite cost, the need for specialized training, and cultural differences as reasons GIS may be inappropriate for these applications. Traditional knowledge GIS analyze the nature of political and social struggles that lead to competing resource claims. They are powerful tools for mediation and negotiation among coexisting social groups.
No cost or open-source traditional knowledge software
The Nunaliit Atlas Framework was developed by and is maintained by the Geomatics and Cartographic Research Centre at Carleton University. The focus of this software is to create community atlas projects.
Commercial software
The CEDAR tool has a number of modules focused on contact relationship management, consultation for development projects, heritage projects and GIS. This software is provided either as a hosted service or as a computer located in client offices.
The LOUIS toolkit is a suite of tools for recording, managing and using traditional land use and traditional knowledge information. This software is provided as a hosted service with complementary desktop and mobile applications, including a mobile data collection application.
See also
Participatory 3D modelling (P3DM)
Participatory GIS
References
Applications of geographic information systems
Geographical technology
Geographic information systems
GIS | Traditional knowledge GIS | Technology | 902 |
30,985,752 | https://en.wikipedia.org/wiki/Permocalculus | Permocalculus is a genus of red algae known from Permian to Cretaceous strata. Closely aligned to Gymnocodium, it is placed in the Gymnocodiaceae.
References
Fossil algae
Red algae genera
Permian first appearances
Cretaceous extinctions
Enigmatic red algae taxa | Permocalculus | Biology | 57 |
29,866,108 | https://en.wikipedia.org/wiki/B-theorem | In mathematics, the B-theorem is a result in finite group theory formerly known as the B-conjecture.
The theorem states that if C is the centralizer of an involution of a finite group, then every component of C/O(C) is the image of a component of C, where O(C) denotes the largest normal subgroup of C of odd order.
References
Theorems about finite groups
Conjectures that have been proved | B-theorem | Mathematics | 68 |
1,387,081 | https://en.wikipedia.org/wiki/Rubik%27s%20Clock | The Rubik's Clock is a mechanical puzzle invented and patented by Christopher C. Wiggs and Christopher J. Taylor. The Hungarian sculptor and professor of architecture Ernő Rubik bought the patent from them to market the product under his name. It was first marketed in 1988.
The Rubik's Clock is a two-sided puzzle, each side presenting nine clocks to the puzzler. There are four dials, one at each corner of the puzzle, each allowing the corresponding corner clock to be rotated directly. (The corner clocks, unlike the other clocks, rotate on both sides of the puzzle simultaneously and can never be operated independently. Thus, the puzzle contains only 14 independent clocks.)
There are also four pins which span both sides of the puzzle; each pin arranged such that if it is "in" on one side, it is "out" on the other. The state of each pin (in or out) determines whether the adjacent corner clock is mechanically connected to the three other adjacent clocks on the front side or on the back side: thus the configuration of the pins determines which sets of clocks can be turned simultaneously by rotating a suitable dial.
The aim of the puzzle is to set all nine clocks to 12 o'clock (straight up) on both sides of the puzzle simultaneously. A method to do so is to start by constructing a cross on both sides (at 12 o’clock) and then solving the corner clocks individually.
The Rubik's clock is listed as one of the 17 WCA events, with records for fastest time to solve one puzzle, and the fastest average time to solve 5 puzzles (discarding the slowest and fastest times). The puzzle is unique in the WCA in that it is the only puzzle for which viable speedsolving methods have been devised that always solve it in God's number moves (14 for the clock) or less; an example is "7-Simul", which involves performing seven pairs of moves on the front and back of the clock simultaneously and requires mental calculation from the puzzle's initial position to determine some moves.
Combinations
Since there are 14 independent clocks, with 12 settings each, there are a total of 12^14 = 1,283,918,464,548,864 possible combinations for the clock faces. This does not account for the positions of the pins.
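A quick check of this count (a minimal sketch; the constants 14 and 12 come straight from the sentence above):
# 14 independent clocks, each with 12 possible hour-hand positions,
# give 12**14 face configurations (pin states not counted).
positions_per_clock = 12
independent_clocks = 14
print(positions_per_clock ** independent_clocks)  # 1283918464548864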
Notation
The puzzle is oriented with 12 o'clock on top, and either side in front. The following moves can be made:
Pin movements
UR (top-right): Move the top-right pin up.
DR (bottom-right): Move the bottom-right pin up.
DL (bottom-left): Move the bottom-left pin up.
UL (top-left): Move the top-left pin up.
U (both top): Move both top pins up.
R (both right): Move both right pins up.
D (both bottom): Move both bottom pins up.
L (both left): Move both left pins up.
ALL (all): Move all pins up.
Wheel movements
X+ (X clockwise turns): Turn a dial next to an up-position pin clockwise X times, then move all pins down.
X− (X counter-clockwise turns): Turn a dial next to an up-position pin counter-clockwise X times, then move all pins down.
Puzzle rotation
y2: Flip the puzzle, then move all pins down.
Records
The world record for single solve is held by Lachlan Gibson of New Zealand with a time of 1.86 seconds, set at A New Year in Auckland 2025. The world record for Olympic average of five solves is held by Volodymyr Kapustianskyi of Ukraine with an average of 2.39 seconds, set at Grand Forks 2024.
Top 10 solvers by single solve
Top 12 solvers by Olympic average of 5 solves
References
External links
Unofficial Records Speedsolving.com's page of unofficial records for many puzzles including Rubik's Clock
Real Genius Computer game implementation of Rubik's Clock for the Commodore Amiga, released in 1989
https://www.worldcubeassociation.org/results/rankings/clock/average
https://www.worldcubeassociation.org/results/rankings/clock/single?show=100+persons
Mechanical puzzles
Combination puzzles
1988 works
1988 introductions
1980s toys | Rubik's Clock | Mathematics | 899 |
7,930,037 | https://en.wikipedia.org/wiki/Subnormal%20operator | In mathematics, especially operator theory, subnormal operators are bounded operators on a Hilbert space defined by weakening the requirements for normal operators. Some examples of subnormal operators are isometries and Toeplitz operators with analytic symbols.
Definition
Let H be a Hilbert space. A bounded operator A on H is said to be subnormal if A has a normal extension. In other words, A is subnormal if there exists a Hilbert space K such that H can be embedded in K and there exists a normal operator N of the form
$N = \begin{bmatrix} A & B \\ 0 & C \end{bmatrix}$
for some bounded operators B and C.
Normality, quasinormality, and subnormality
Normal operators
Every normal operator is subnormal by definition, but the converse is not true in general. A simple class of examples can be obtained by weakening the properties of unitary operators. A unitary operator is an isometry with dense range. Consider now an isometry A whose range is not necessarily dense. A concrete example is the unilateral shift, which is not normal. But A is subnormal, and this can be shown explicitly. Define an operator U on $H \oplus H$ by
$U = \begin{bmatrix} A & I - AA^* \\ 0 & -A^* \end{bmatrix}.$
Direct calculation shows that U is unitary, therefore a normal extension of A. The operator U is called the unitary dilation of the isometry A.
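The verification can be sketched as follows (using that A*A = I for an isometry, so A*(I − AA*) = 0 and (I − AA*)A = 0, and that AA* is then a projection):
U^*U = \begin{bmatrix} A^* & 0 \\ I - AA^* & -A \end{bmatrix}
       \begin{bmatrix} A & I - AA^* \\ 0 & -A^* \end{bmatrix}
     = \begin{bmatrix} A^*A & A^*(I - AA^*) \\ (I - AA^*)A & (I - AA^*)^2 + AA^* \end{bmatrix}
     = \begin{bmatrix} I & 0 \\ 0 & I \end{bmatrix},
% and the symmetric computation gives UU* = I. Moreover U(h, 0) = (Ah, 0),
% so U is a unitary (hence normal) extension of A.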
Quasinormal operators
An operator A is said to be quasinormal if A commutes with A*A. A normal operator is thus quasinormal; the converse is not true. A counterexample is given, as above, by the unilateral shift. Therefore, the family of normal operators is a proper subset of both the quasinormal and the subnormal operators. A natural question is how the quasinormal and subnormal operators are related.
We will show that a quasinormal operator is necessarily subnormal, but not vice versa. Thus the normal operators form a proper subfamily of the quasinormal operators, which in turn are contained in the subnormal operators. To argue the claim that a quasinormal operator is subnormal, recall the following property of quasinormal operators:
Fact: A bounded operator A is quasinormal if and only if in its polar decomposition A = UP, the partial isometry U and positive operator P commute.
Given a quasinormal A, the idea is to construct dilations for U and P in a sufficiently nice way so everything commutes. Suppose for the moment that U is an isometry. Let V be the unitary dilation of U,
$V = \begin{bmatrix} U & I - UU^* \\ 0 & -U^* \end{bmatrix} = \begin{bmatrix} U & D_{U^*} \\ 0 & -U^* \end{bmatrix}.$
Define
$Q = \begin{bmatrix} P & 0 \\ 0 & P \end{bmatrix}.$
The operator N = VQ is clearly an extension of A. We show it is a normal extension via direct calculation. Unitarity of V means
$N^*N = QV^*VQ = Q^2.$
On the other hand,
$NN^* = VQ^2V^*.$
Because UP = PU and P is self-adjoint, we have U*P = PU* and $D_{U^*}P = PD_{U^*}$, so $Q^2$ commutes with V. Comparing entries then shows N is normal. This proves quasinormality implies subnormality.
For a counterexample that shows the converse is not true, consider again the unilateral shift A. The operator B = A + s, for a nonzero scalar s, remains subnormal. But if B is quasinormal, a straightforward calculation shows that A*A = AA*, which is a contradiction.
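The straightforward calculation can be displayed explicitly (a sketch; s̄ denotes the complex conjugate of s, and A*A = I is used so that constant terms drop out of the commutator):
B^*B = (A^* + \bar{s})(A + s) = A^*A + sA^* + \bar{s}A + |s|^2,
[B, B^*B] = [A + s,\; sA^* + \bar{s}A] = s[A, A^*] = s(AA^* - A^*A).
% Quasinormality of B means [B, B*B] = 0, so for s ≠ 0 this forces
% AA* = A*A, i.e. the unilateral shift would be normal, a contradiction.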
Minimal normal extension
Non-uniqueness of normal extensions
Given a subnormal operator A, its normal extension B is not unique. For example, let A be the unilateral shift on l2(N). One normal extension is the bilateral shift B on l2(Z) defined by
$B(\dots, a_{-1}, \hat{a}_0, a_1, \dots) = (\dots, a_{-2}, \hat{a}_{-1}, a_0, \dots),$
where ˆ denotes the zero-th position. B can be expressed in terms of the operator matrix
$B = \begin{bmatrix} A & I - AA^* \\ 0 & A^* \end{bmatrix},$
where the second copy of l2(N) carries the negatively indexed coordinates.
Another normal extension is given by the unitary dilation B' of A defined above:
$B' = \begin{bmatrix} A & I - AA^* \\ 0 & -A^* \end{bmatrix},$
whose action is described by
$B' e_n = e_{n+1}, \qquad B' f_0 = e_0, \qquad B' f_j = -f_{j-1} \ (j \geq 1),$
where $(e_n)$ and $(f_j)$ denote the standard bases of the two copies of l2(N).
Minimality
Thus one is interested in the normal extension that is, in some sense, smallest. More precisely, a normal operator B acting on a Hilbert space K is said to be a minimal extension of a subnormal A if, whenever K' ⊂ K is a reducing subspace of B and H ⊂ K', then K' = K. (A subspace is a reducing subspace of B if it is invariant under both B and B*.)
One can show that if two operators B1 and B2 are minimal extensions on K1 and K2, respectively, then there exists a unitary operator
$U : K_1 \to K_2.$
Also, the following intertwining relationship holds:
$U B_1 = B_2 U.$
This can be shown constructively. Consider the set S consisting of vectors of the following form:
$\sum_{j=0}^{n} (B_1^*)^j h_j, \qquad h_j \in H, \ n \geq 0.$
Let K' ⊂ K1 be the subspace that is the closure of the linear span of S. By definition, K' is invariant under B1* and contains H. The normality of B1 and the assumption that H is invariant under B1 imply K' is invariant under B1. Therefore, K' = K1. The Hilbert space K2 can be identified in exactly the same way. Now we define the operator U as follows:
$U\Bigl(\sum_{j} (B_1^*)^j h_j\Bigr) = \sum_{j} (B_2^*)^j h_j.$
Because
$\bigl\langle (B_1^*)^j h, (B_1^*)^k g \bigr\rangle = \langle A^k h, A^j g \rangle = \bigl\langle (B_2^*)^j h, (B_2^*)^k g \bigr\rangle, \qquad h, g \in H,$
the operator U is unitary. Direct computation also shows (the assumption that both B1 and B2 are extensions of A is needed here)
$U B_1 = B_2 U \quad \text{on } S.$
When B1 and B2 are not assumed to be minimal, the same calculation shows that the above claim holds verbatim with U being a partial isometry.
References
Operator theory
Linear operators | Subnormal operator | Mathematics | 1,061 |
37,920,642 | https://en.wikipedia.org/wiki/Lie-%2A%20algebra | In mathematics, a Lie-* algebra is a D-module with a Lie* bracket. They were introduced by Alexander Beilinson and Vladimir Drinfeld, and are similar to conformal algebras and to vertex Lie algebras.
References
Lie algebras | Lie-* algebra | Mathematics | 58 |
734,256 | https://en.wikipedia.org/wiki/Molecular%20modelling | Molecular modelling encompasses all methods, theoretical and computational, used to model or mimic the behaviour of molecules. The methods are used in the fields of computational chemistry, drug design, computational biology and materials science to study molecular systems ranging from small chemical systems to large biological molecules and material assemblies. The simplest calculations can be performed by hand, but inevitably computers are required to perform molecular modelling of any reasonably sized system. The common feature of molecular modelling methods is the atomistic level description of the molecular systems. This may include treating atoms as the smallest individual unit (a molecular mechanics approach), or explicitly modelling protons and neutrons with their quarks, anti-quarks and gluons and electrons with their photons (a quantum chemistry approach).
Molecular mechanics
Molecular mechanics is one aspect of molecular modelling, as it involves the use of classical mechanics (Newtonian mechanics) to describe the physical basis behind the models. Molecular models typically describe atoms (nucleus and electrons collectively) as point charges with an associated mass. The interactions between neighbouring atoms are described by spring-like interactions (representing chemical bonds) and Van der Waals forces. The Lennard-Jones potential is commonly used to describe the latter. The electrostatic interactions are computed based on Coulomb's law. Atoms are assigned coordinates in Cartesian space or in internal coordinates, and can also be assigned velocities in dynamical simulations. The atomic velocities are related to the temperature of the system, a macroscopic quantity. The collective mathematical expression is termed a potential function and is related to the system internal energy (U), a thermodynamic quantity equal to the sum of potential and kinetic energies. Methods which minimize the potential energy are termed energy minimization methods (e.g., steepest descent and conjugate gradient), while methods that model the behaviour of the system with propagation of time are termed molecular dynamics.
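A representative functional form of such a potential function is shown below (an AMBER-style sketch; real force fields differ in their exact terms and parameters):
U = \sum_{\mathrm{bonds}} k_b (r - r_0)^2
  + \sum_{\mathrm{angles}} k_\theta (\theta - \theta_0)^2
  + \sum_{\mathrm{torsions}} \tfrac{V_n}{2}\bigl[1 + \cos(n\phi - \gamma)\bigr]
  + \sum_{i<j} \left( 4\varepsilon_{ij}\Bigl[\bigl(\tfrac{\sigma_{ij}}{r_{ij}}\bigr)^{12} - \bigl(\tfrac{\sigma_{ij}}{r_{ij}}\bigr)^{6}\Bigr] + \frac{q_i q_j}{4\pi\varepsilon_0 r_{ij}} \right).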
This function, referred to as a potential function, computes the molecular potential energy as a sum of energy terms that describe the deviation of bond lengths, bond angles and torsion angles away from equilibrium values, plus terms for non-bonded pairs of atoms describing van der Waals and electrostatic interactions. The set of parameters consisting of equilibrium bond lengths, bond angles, partial charge values, force constants and van der Waals parameters is collectively termed a force field. Different implementations of molecular mechanics use different mathematical expressions and different parameters for the potential function. The common force fields in use today have been developed by using chemical theory, experimental reference data, and high level quantum calculations. The method termed energy minimization is used to find positions of zero gradient for all atoms, in other words, a local energy minimum. Lower energy states are more stable and are commonly investigated because of their role in chemical and biological processes. A molecular dynamics simulation, on the other hand, computes the behaviour of a system as a function of time. It involves solving Newton's laws of motion, principally the second law, $\mathbf{F} = m\mathbf{a}$. Integration of Newton's laws of motion, using different integration algorithms, leads to atomic trajectories in space and time. The force on an atom is defined as the negative gradient of the potential energy function. The energy minimization method is useful to obtain a static picture for comparing between states of similar systems, while molecular dynamics provides information about the dynamic processes with the intrinsic inclusion of temperature effects.
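The following minimal sketch makes the molecular dynamics loop concrete: it integrates Newton's second law with the velocity Verlet algorithm for atoms interacting through a Lennard-Jones potential. All names and parameter values are illustrative (reduced units), not those of any particular simulation package:
import numpy as np

# Lennard-Jones parameters in reduced units (illustrative values)
EPS, SIG = 1.0, 1.0

def lj_forces(pos):
    """Pairwise Lennard-Jones forces: F = -grad U, with
    U(r) = 4*EPS*((SIG/r)**12 - (SIG/r)**6)."""
    forces = np.zeros_like(pos)
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            rij = pos[i] - pos[j]
            r2 = rij @ rij
            sr6 = (SIG * SIG / r2) ** 3
            # 24*eps*(2*(sig/r)^12 - (sig/r)^6)/r^2 * rij; positive = repulsive
            fij = 24.0 * EPS * (2.0 * sr6 * sr6 - sr6) / r2 * rij
            forces[i] += fij
            forces[j] -= fij
    return forces

def velocity_verlet(pos, vel, mass, dt, steps):
    """Integrate Newton's second law with the velocity Verlet scheme."""
    f = lj_forces(pos)
    for _ in range(steps):
        vel += 0.5 * dt * f / mass   # first half-kick
        pos += dt * vel              # drift
        f = lj_forces(pos)           # forces at the new positions
        vel += 0.5 * dt * f / mass   # second half-kick
    return pos, vel

# Two atoms started slightly away from the LJ minimum at 2**(1/6)*SIG
pos = np.array([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0]])
vel = np.zeros_like(pos)
pos, vel = velocity_verlet(pos, vel, mass=1.0, dt=0.005, steps=1000)
print(pos)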
Variables
Molecules can be modelled either in vacuum, or in the presence of a solvent such as water. Simulations of systems in vacuum are referred to as gas-phase simulations, while those that include the presence of solvent molecules are referred to as explicit solvent simulations. In another type of simulation, the effect of solvent is estimated using an empirical mathematical expression; these are termed implicit solvation simulations.
Coordinate representations
Most force fields are distance-dependent, making the most convenient expression for these Cartesian coordinates. Yet the comparatively rigid nature of the bonds that occur between specific atoms, which in essence defines what is meant by the designation molecule, makes an internal coordinate system the most logical representation. In some fields the IC representation (bond length, angle between bonds, and twist angle of the bond as shown in the figure) is termed the Z-matrix or torsion angle representation. Unfortunately, continuous motions in Cartesian space often require discontinuous angular branches in internal coordinates, making it relatively hard to work with force fields in the internal coordinate representation; conversely, a simple displacement of an atom in Cartesian space may not be a straight line trajectory due to the prohibitions of the interconnected bonds. Thus, it is very common for computational optimizing programs to flip back and forth between representations during their iterations. This can dominate the calculation time of the potential itself, and in long chain molecules it can introduce cumulative numerical inaccuracy. While all conversion algorithms produce mathematically identical results, they differ in speed and numerical accuracy. Currently, the fastest and most accurate torsion-to-Cartesian conversion is the Natural Extension Reference Frame (NERF) method.
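A minimal sketch of the torsion-to-Cartesian step in the spirit of NERF: given three already-placed atoms and the internal coordinates (bond length r, bond angle theta, torsion phi) of the next atom, it returns that atom's Cartesian position. Function and variable names are our own, and sign conventions for the torsion vary between implementations:
import numpy as np

def place_atom(a, b, c, r, theta, phi):
    """Cartesian position of atom d given three placed atoms a, b, c and
    the internal coordinates of d: bond length r = |c-d|, bond angle
    theta = angle(b, c, d), torsion phi = dihedral(a, b, c, d); radians."""
    bc = c - b
    bc = bc / np.linalg.norm(bc)
    n = np.cross(b - a, bc)            # normal to the a-b-c plane
    n = n / np.linalg.norm(n)
    m = np.cross(n, bc)                # completes the frame (bc, m, n)
    # Coordinates of d relative to c in the local frame
    d_local = np.array([-r * np.cos(theta),
                        r * np.sin(theta) * np.cos(phi),
                        r * np.sin(theta) * np.sin(phi)])
    return c + np.column_stack((bc, m, n)) @ d_local

# Example: place the fourth atom of a butane-like backbone
a = np.array([0.0, 0.0, 0.0])
b = np.array([1.5, 0.0, 0.0])
c = np.array([2.0, 1.4, 0.0])
d = place_atom(a, b, c, r=1.53, theta=np.deg2rad(111.0), phi=np.deg2rad(180.0))
print(d)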
Applications
Molecular modelling methods are used routinely to investigate the structure, dynamics, surface properties, and thermodynamics of inorganic, biological, and polymeric systems. A large number of molecular models of force field are today readily available in databases. The types of biological activity that have been investigated using molecular modelling include protein folding, enzyme catalysis, protein stability, conformational changes associated with biomolecular function, and molecular recognition of proteins, DNA, and membrane complexes.
See also
References
Further reading
Bioinformatics
Molecular biology
Computational chemistry | Molecular modelling | Chemistry,Engineering,Biology | 1,132 |
9,934,611 | https://en.wikipedia.org/wiki/Dihydrofolic%20acid | Dihydrofolic acid (conjugate base dihydrofolate) (DHF) is a folic acid (vitamin B9) derivative which is converted to tetrahydrofolic acid by dihydrofolate reductase. Since tetrahydrofolate is needed to make both purines and pyrimidines, which are building blocks of DNA and RNA, dihydrofolate reductase is targeted by various drugs to prevent nucleic acid synthesis.
Further reading
References
Folates | Dihydrofolic acid | Chemistry | 115 |
39,310,035 | https://en.wikipedia.org/wiki/Dependence%20receptor | In cellular biology, dependence receptors are proteins that mediate programmed cell death by monitoring the absence of certain trophic factors (or, equivalently, the presence of anti-trophic factors) that otherwise serve as ligands (interactors) for the dependence receptors.
A trophic ligand is a molecule whose protein binding stimulates cell growth, differentiation, and/or survival.
Cells depend for their survival on stimulation that is mediated by various receptors and sensors, and integrated via signaling within the cell and between cells.
The withdrawal of such trophic support leads to a form of cellular suicide.
Various dependence receptors are involved in a range of biological events: developmental cell death (naturally occurring cell death), trophic factor withdrawal-induced cell death, the spontaneous regression characteristic of type IV-S neuroblastoma, neurodegenerative cell death, inhibition of new tumor cells (tumorigenesis) and metastasis, and therapeutic antibody-mediated tumor cell death, as well as programmed cell death in other instances.
Since these receptors may support either cell death or cell survival, they represent a new type of tumor suppressor, a conditional tumor suppressor.
In addition, events such as cellular atrophy and process retraction may also be mediated by dependence receptors, although this has not been as well documented as the induction of programmed cell death.
Receptors
The following is the list of known dependence receptors:
Notch3
Kremen1
DCC (Deleted in Colorectal Carcinoma)
UNC5 receptors (UNC5A, UNC5B, UNC5C, UNC5D)
Neogenin
p75NTR
Ptch1
CDON
PLXND1
RET
TrkA
TrkC
EphA4
c-Met
Insulin receptor IR
Insulin-like growth factor 1 receptor
ALK (anaplastic lymphoma kinase)
Androgen receptor
Some integrins
NTRK3
Background
Cells depend for their survival on stimulation that is mediated by various receptors and sensors. For any required stimulus, its withdrawal leads to a form of cellular suicide; that is, the cell plays an active role in its own demise. The term programmed cell death was first suggested by Lockshin & Williams in 1964.
Apoptosis, a form of programmed cell death, was first described by Kerr et al. in 1972,
although the earliest references to the morphological appearance of such cells may date back to the late 19th century.
Cells require different stimuli for survival, depending on their type and state of differentiation.
For example, prostate epithelial cells require testosterone for survival, and the withdrawal of testosterone leads to apoptosis in these cells.
How do cells recognize a lack of stimulus? While positive survival signals are clearly important, a complementary form of signal transduction is pro-apoptotic, and is activated or propagated by stimulus withdrawal or by the addition of an “anti-trophin.”
The dependence receptor notion was based on the observation that the effects of a number of receptors that function in both nervous system development and the production of tumors (especially metastasis) cannot be explained simply by a positive effect of signal transduction induced by ligand binding, but rather must also include cell death signaling in response to trophic withdrawal.
Positive survival signals involve classical signal transduction, initiated by interactions between ligands and receptors. Negative survival signals involve an alternative form of signal transduction that is initiated by the withdrawal of ligands from dependence receptors. This process is seen in developmental cell death, carcinogenesis (especially metastasis), neurodegeneration, and possibly non-lethal (sub-apoptotic) events such as neurite retraction and somal atrophy. Mechanistic studies of dependence receptors suggest that these receptors form complexes that activate and amplify caspase activity. In at least some cases, the caspase activation is via a pathway that is dependent on caspase-9 but not on mitochondria.
Some of the downstream mediators have been identified, such as DAP kinase and the DRAL gene.
Dependence receptors display the common property that they mediate two different intracellular signals: in the presence of ligand, these receptors transduce a positive signal leading to survival, differentiation or migration; conversely, in the absence of ligand, the receptors initiate and/or amplify a signal for programmed cell death. Thus cells that express these proteins at sufficient concentrations manifest a state of dependence on their respective ligands. The signaling that mediates cell death induction upon ligand withdrawal is incompletely defined, but typically includes a required interaction with, and cleavage by, specific caspases.
Mutation of the caspase site(s) in the receptor, of which there are typically one or two, prevents the trophic ligand withdrawal-induced programmed cell death.
Complex formation appears to be a function of ligand-receptor interaction, and dependence receptors appear to exist in at least two conformational states.
Complex formation in the absence of ligand leads to caspase activation by a mechanism that is usually dependent on caspase cleavage of the receptor itself, releasing pro-apoptotic peptides.
Thus these receptors may serve in caspase amplification, and in so doing create cellular states of dependence on their respective ligands.
These states of dependence are not absolute, since they can be blocked downstream in some cases by the expression of anti-apoptotic genes such as Bcl-2 or P35.
However, they result in a shift toward an increased likelihood of a cell's undergoing apoptosis.
Research
Research has highlighted the role of the dependence receptor UNC5D in the phenomenon of spontaneous regression of type IV-S neuroblastoma.
TrkA and TrkC have been shown to function as dependence receptors,
with TrkC mediating both neural cell death and tumorigenesis.
In addition, although dependence receptors have been described as mediating programmed cell death in the absence of binding of trophic ligand, the possibility that a similar effect might be achieved by the binding of a physiological anti-trophin has been raised, and it has been suggested that the Alzheimer's disease-associated peptide, Aβ, may play such a role.
References
Apoptosis
Cell signaling
Molecular neuroscience
Programmed cell death
Receptors
Single-pass transmembrane proteins | Dependence receptor | Chemistry,Biology | 1,300 |
39,519,649 | https://en.wikipedia.org/wiki/Applied%20Logic%20Corporation | Applied Logic Corporation (AL/COM) was a time-sharing company in the 1960s and 70s.
Headquartered in Princeton, New Jersey, AL/COM started in 1962 working on "mathematical techniques and their applications to problem-solving."
Seeing the need for in-house time sharing, the company bought a Digital Equipment Corporation (DEC) PDP-6 and developed its time sharing service, which came on-line in 1966. In 1968 the company began development of "Mathematics Park" in Montgomery Township, New Jersey, "designed to provide tenants with a computer-serviced and mathematically-oriented environment," adjacent to the Princeton Airport. Also in 1968 the company registered AL/COM as a trademark for its service.
The system involved both custom software and custom hardware, and the service was marketed nationally by a network of associates.
In the late 1960s, the company developed a system called SAM (Semi-Automated Mathematics) for proving mathematical theorems without human intervention. A theorem proved by the system, "SAM's lemma", was "widely hailed as the first contribution of automated reasoning systems to mathematics." The SAM series was one of the first interactive theorem provers and had an influence on subsequent theorem provers.
In 1965 Applied Logic acquired a DEC PDP-6 computer system, which became operational in January 1966. By 1969 the company had four DEC PDP-10 dual systems with plans for a fifth, and had expanded nationwide with offices in San Jose, San Diego, and San Francisco. The company also planned to market its time sharing systems in addition to providing services. The company reported sales of $1,200,995, with an operational loss of $63,456.
By 1972 AL/COM had local dial-up facilities in ten cities: Boston, Massachusetts, Buffalo, New York, Chicago, Illinois, Indianapolis, Indiana, Montclair, New Jersey, New York, New York, Philadelphia, Pennsylvania, Princeton, New Jersey, Washington, DC, and Wilmington, Delaware. The computer center was located in Mathematics Park in Princeton.
By late 1969 AL/COM had definite plans for CIT Leasing to lease back $2.73 million USD of their equipment at Mathematics Park and was considering an additional $7.5 million more. By 1970 the company was in financial difficulty and negotiated an agreement to defer $1,300,000 of debt. Applied Logic filed for Chapter XI bankruptcy in 1975.
References
External links
A woman adjusting an Applied Logic Corporation (AL/COM) time sharing AL-10 computer system (photo at Getty Images)
CRT-AIDED SEMI-AUTOMATED MATHEMATICS SAM Final Report
Applied Logic Publications at Bitsavers
Photos of computer room and staff
American companies established in 1962
American companies disestablished in 1975
Companies based in Princeton, New Jersey
Computer companies established in 1962
Computer companies disestablished in 1975
Defunct companies based in New Jersey
Defunct computer companies of the United States
Defunct computer hardware companies
Time-sharing companies | Applied Logic Corporation | Technology | 596 |
31,332 | https://en.wikipedia.org/wiki/International%20Obfuscated%20C%20Code%20Contest | The International Obfuscated C Code Contest (abbreviated IOCCC) is a computer programming contest for the most creatively obfuscated C code. Held irregularly, roughly once a year, it is described as "celebrating [C's] syntactical opaqueness". The winning code for the 27th contest, held in 2020, was released in July 2020. Previous contests were held in the years 1984–1996, 1998, 2000, 2001, 2004–2006, 2011–2015 and 2018–2020.
Entries are evaluated anonymously by a panel of judges. The judging process is documented in the competition guidelines and consists of elimination rounds. By tradition, no information is given about the total number of entries for each competition. Winning entries are awarded with a category, such as "Worst Abuse of the C preprocessor" or "Most Erratic Behavior", and then announced on the official IOCCC website. The contest states that being announced on the IOCCC website is the reward for winning.
History
The IOCCC was started by Landon Curt Noll and Larry Bassel in 1984 while employed at National Semiconductor's Genix porting group. The idea for the contest came after they compared notes with each other about some poorly written code that they had to fix, notably the Bourne shell, which used macros to emulate ALGOL 68 syntax, and a buggy version of finger for BSD. The contest itself was the topic of a quiz question in the 1993 Computer Bowl. After a hiatus of five years starting in 2006, the contest returned in 2011.
Compared with other programming contests, the IOCCC is described as "not all that serious" by Michael Swaine, editor of Dr. Dobb's Journal.
Rules
Each year, the rules of the contest are published on the IOCCC website. All material is published under Creative Commons license BY-SA 3.0 Unported. Rules vary from year to year and are posted with a set of guidelines that attempt to convey the spirit of the rules.
The rules are often deliberately written with loopholes that contestants are encouraged to find and abuse. Entries that take advantage of loopholes can cause the rules for the following year's contest to be adjusted.
Obfuscations employed
Entries often employ strange or unusual tricks, such as using the C preprocessor to do things it was not designed to do (in some cases "spectacularly", according to Dr. Dobb's, with one entry creating an 11-bit ALU in the C preprocessor), or avoiding commonly used constructs in the C programming language in favor of much more obscure ways of achieving the same thing.
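As a tame illustration of this kind of preprocessor trickery (an invented example, not a contest entry), the macros below rewrite C's surface syntax in roughly the ALGOL 68 style that the old Bourne shell source imitated:

#include <stdio.h>
#define IF if(
#define THEN ){
#define ELSE } else {
#define FI }

int main(void)
{
    int x = 2;
    /* expands to an ordinary if/else statement after preprocessing */
    IF x > 1 THEN puts("big"); ELSE puts("small"); FI
    return 0;
}

After preprocessing, the body of main is a plain if/else statement; the obfuscation lives entirely in the source text.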
Contributions have included source code formatted to resemble images, text, etc., after the manner of ASCII art, preprocessor redefinitions to make code harder to read, and self-modifying code. In several years an entry was submitted that required a new definition of some of the rules for the next year, regarded as a high honor. An example is the world's shortest self-reproducing program. The entry was a program designed to output its own source code, and which had zero bytes of source code. When the program ran, it printed out zero bytes, equivalent to its source code.
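For contrast with that zero-byte entry, a conventional self-reproducing C program (a "quine") of nonzero length is sketched below; this is an illustrative example, not an IOCCC winner. It prints a format string into itself, encoding the newline and double-quote characters as the numeric arguments 10 and 34, and it reproduces its source exactly provided the file ends with a single trailing newline:

#include <stdio.h>
char*s="#include <stdio.h>%cchar*s=%c%s%c;%cint main(void){printf(s,10,34,s,34,10,10);return 0;}%c";
int main(void){printf(s,10,34,s,34,10,10);return 0;}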
In the effort to take obfuscation to its extremes, contestants have produced programs which skirt around the edges of C standards, or result in constructs which trigger rarely used code path combinations in compilers. As a result, several of the past entries may not compile directly in a modern compiler, and some may cause crashes.
Examples
Within the code size limit of only a few kilobytes, contestants have managed to do complicated things – a 2004 winner turned out an operating system.
Toledo Nanochess
Toledo Nanochess is a chess engine created by Mexican software developer Oscar Toledo Gutiérrez, a five-time winner of the IOCCC. In accordance with IOCCC rules, it is 1255 characters long. The author claims that it is the world's smallest chess program written in C.
The source code for Toledo Nanochess and other engines is available.
Because Toledo Nanochess is based on Toledo's winning entry from the 18th IOCCC (Best Game), it is heavily obfuscated.
On February 2, 2014, the author published the book Toledo Nanochess: The commented source code, which contains the fully commented source code.
As of February 7, 2010, it appears to be one of only two chess engines written in less than 2 kilobytes of C that are able to play full legal chess moves, along with Micro-Max by Dutch physicist H. G. Muller. In 2014 the 1 kilobyte barrier was broken by Super Micro Chess – a derivative of Micro-Max – totaling 760 characters (spaces and newlines included). There is also a smaller version of Toledo's engine, the Toledo Picochess, consisting of 944 non-blank characters.
Source code excerpt
B,i,y,u,b,I[411],*G=I,x=10,z=15,M=1e4;X(w,c,h,e,S,s){int t,o,L,E,d,O=e,N=-M*M,K
=78-h<<x,p,*g,n,*m,A,q,r,C,J,a=y?-x:x;y^=8;G++;d=w||s&&s>=h&&v 0,0)>M;do{_ o=I[
p=O]){q=o&z^y _ q<7){A=q--&2?8:4;C=o-9&z?q["& .$ "]:42;do{r=I[p+=C[l]-64]_!w|p
==w){g=q|p+a-S?0:I+S _!r&(q|A<3||g)||(r+1&z^y)>9&&q|A>2){_ m=!(r-2&7))P G[1]=O,
K;J=n=o&z;E=I[p-a]&z;t=q|E-7?n:(n+=2,6^y);Z n<=t){L=r?l[r&7]*9-189-h-q:0 _ s)L
+=(1-q?l[p/x+5]-l[O/x+5]+l[p%x+6]*-~!q-l[O%x+6]+o/16*8:!!m*9)+(q?0:!(I[p-1]^n)+
!(I[p+1]^n)+l[n&7]*9-386+!!g*99+(A<2))+!(E^y^9)_ s>h||1<s&s==h&&L>z|d){p[I]=n,O
[I]=m?*g=*m,*m=0:g?*g=0:0;L-=X(s>h|d?0:p,L-N,h+1,G[1],J=q|A>1?0:p,s)_!(h||s-1|B
-O|i-n|p-b|L<-M))P y^=8,u=J;J=q-1|A<7||m||!s|d|r|o<z||v 0,0)>M;O[I]=o;p[I]=r;m?
*m=*g,*g=0:g?*g=9^y:0;}_ L>N){*G=O _ s>1){_ h&&c-L<0)P L _!h)i=n,B=O,b=p;}N=L;}
n+=J||(g=I+p,m=p<O?g-3:g+2,*m<z|m[O-p]||I[p+=p-O]);}}}}Z!r&q>2||(p=O,q|A>2|o>z&
!r&&++C*--A));}}}Z++O>98?O=20:e-O);P N+M*M&&N>-K+1924|d?N:0;}main(){Z++B<121)*G
++=B/x%x<2|B%x<2?7:B/x&4?0:*l++&31;Z B=19){Z B++<99)putchar(B%x?l[B[I]|16]:x)_
x-(B=F)){i=I[B+=(x-F)*x]&z;b=F;b+=(x-F)*x;Z x-(*G=F))i=*G^8^y;}else v u,5);v u,
1);}}
Pi
Below is a 1988 entry which calculates pi by looking at its own area:
#define _ -F<00||--F-OO--;
int F=00,OO=00;main(){F_OO();printf("%1.3f\n",4.*-F/OO/OO);}F_OO()
{
_-_-_-_
_-_-_-_-_-_-_-_-_
_-_-_-_-_-_-_-_-_-_-_-_
_-_-_-_-_-_-_-_-_-_-_-_-_-_
_-_-_-_-_-_-_-_-_-_-_-_-_-_-_
_-_-_-_-_-_-_-_-_-_-_-_-_-_-_
_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_
_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_
_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_
_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_
_-_-_-_-_-_-_-_-_-_-_-_-_-_-_
_-_-_-_-_-_-_-_-_-_-_-_-_-_-_
_-_-_-_-_-_-_-_-_-_-_-_-_-_
_-_-_-_-_-_-_-_-_-_-_-_
_-_-_-_-_-_-_-_
_-_-_-_
}
The macro _ expands so that F is decremented once for every underscore in the body (the area of the drawn circle), while OO is decremented only once per line of underscores (the circle's height, i.e. its diameter); the final expression 4.*-F/OO/OO is therefore four times the area divided by the diameter squared, which approximates π. (This entry was written in K&R C; it does not work correctly in ANSI C without some changes.)
Flight simulator
Another example is the following flight simulator, the winner of the 1998 IOCCC, as listed and described in Calculated Bets: Computers, Gambling, and Mathematical Modeling to Win (2001) and shown below:
#include <math.h>
#include <sys/time.h>
#include <X11/Xlib.h>
#include <X11/keysym.h>
double L ,o ,P
,_=dt,T,Z,D=1,d,
s[999],E,h= 8,I,
J,K,w[999],M,m,O
,n[999],j=33e-3,i=
1E3,r,t, u,v ,W,S=
74.5,l=221,X=7.26,
a,B,A=32.2,c, F,H;
int N,q, C, y,p,U;
Window z; char f[52]
; GC k; main(){ Display*e=
XOpenDisplay( 0); z=RootWindow(e,0); for (XSetForeground(e,k=XCreateGC (e,z,0,0),BlackPixel(e,0))
; scanf("%lf%lf%lf",y +n,w+y, y+s)+1; y ++); XSelectInput(e,z= XCreateSimpleWindow(e,z,0,0,400,400,
0,0,WhitePixel(e,0) ),KeyPressMask); for(XMapWindow(e,z); ; T=sin(O)){ struct timeval G={ 0,dt*1e6}
; K= cos(j); N=1e4; M+= H*_; Z=D*K; F+=_*P; r=E*K; W=cos( O); m=K*W; H=K*T; O+=D*_*F/ K+d/K*E*_; B=
sin(j); a=B*T*D-E*W; XClearWindow(e,z); t=T*E+ D*B*W; j+=d*_*D-_*F*E; P=W*E*B-T*D; for (o+=(I=D*W+E
*T*B,E*d/K *B+v+B/K*F*D)*_; p<y; ){ T=p[s]+i; E=c-p[w]; D=n[p]-L; K=D*m-B*T-H*E; if(p [n]+w[ p]+p[s
]== 0|K <fabs(W=T*r-I*E +D*P) |fabs(D=t *D+Z *T-a *E)> K)N=1e4; else{ q=W/K *4E2+2e2; C= 2E2+4e2/ K
*D; N-1E4&& XDrawLine(e ,z,k,N ,U,q,C); N=q; U=C; } ++p; } L+=_* (X*t +P*M+m*l); T=X*X+ l*l+M *M;
XDrawString(e,z,k ,20,380,f,17); D=v/l*15; i+=(B *l-M*r -X*Z)*_; for(; XPending(e); u *=CS!=N){
XEvent z; XNextEvent(e ,&z);
++*((N=XLookupKeysym
(&z.xkey,0))-IT?
N-LT? UP-N?& E:&
J:& u: &h); --*(
DN -N? N-DT ?N==
RT?&u: & W:&h:&J
); } m=15*F/l;
c+=(I=M/ l,l*H
+I*M+a*X)*_; H
=A*r+v*X-F*l+(
E=.1+X*4.9/l,t
=T*m/32-I*T/24
)/S; K=F*M+(
h* 1e4/l-(T+
E*5*T*E)/3e2
)/S-X*d-B*A;
a=2.63 /l*d;
X+=( d*l-T/S
*(.19*E +a
*.64+J/1e3
)-M* v +A*
Z)*_; l +=
K *_; W=d;
sprintf(f,
"%5d %3d"
"%7d",p =l
/1.7,(C=9E3+
O*57.3)%0550,(int)i); d+=T*(.45-14/l*
X-a*130-J* .14)*_/125e2+F*_*v; P=(T*(47
*I-m* 52+E*94 *D-t*.38+u*.21*E) /1e2+W*
179*v)/2312; select(p=0,0,0,0,&G); v-=(
W*F-T*(.63*m-I*.086+m*E*19-D*25-.11*u
)/107e2)*_; D=cos(o); E=sin(o); } }
On a Linux system, this program is compiled with the following command line:
cc banks.c -o banks -DIT=XK_Page_Up -DDT=XK_Page_Down \
-DUP=XK_Up -DDN=XK_Down -DLT=XK_Left -DRT=XK_Right \
-DCS=XK_Return -Ddt=0.02 -lm -lX11 -L/usr/X11R6/lib
To run the resulting binary (banks), a scenery file has to be supplied on standard input:
cat pittsburgh.sc | ./banks
Akari
Below is a 2011 entry by Don Yang which downsamples PGM and PPM images and ASCII art (of Akari from YuruYuri):
/*
+
+
+
+
[ >i>n[t
*/ #include<stdio.h>
/*2w0,1m2,]_<n+a m+o>r>i>=>(['0n1'0)1;
*/int/**/main(int/**/n,char**m){FILE*p,*q;int A,k,a,r,i/*
#uinndcelfu_dset<rsitcdti_oa.nhs>i/_*/;char*d="P%" "d\n%d\40%d"/**/
"\n%d\n\00wb+",b[1024],y[]="yuriyurarararayuruyuri*daijiken**akkari~n**"
"/y*u*k/riin<ty(uyr)g,aur,arr[a1r2a82*y2*/u*r{uyu}riOcyurhiyua**rrar+*arayra*="
"yuruyurwiyuriyurara'rariayuruyuriyuriyu>rarararayuruy9uriyu3riyurar_aBrMaPrOaWy^?"
"*]/f]`;hvroai<dp/f*i*s/<ii(f)a{tpguat<cahfaurh(+uf)a;f}vivn+tf/g*`*w/jmaa+i`ni("/**
*/"i+k[>+b+i>++b++>l[rb";int/**/u;for(i=0;i<101;i++)y[i*2]^="~hktrvg~dmG*eoa+%squ#l2"
":(wn\"1l))v?wM353{/Y;lgcGp`vedllwudvOK`cct~[|ju {stkjalor(stwvne\"gt\"yogYURUYURI"[
i]^y[i*2+1]^4;/*!*/p=(n>1&&(m[1][0]-'-'||m[1][1] !='\0'))?fopen(m[1],y+298):stdin;
/*y/riynrt~(^w^)],]c+h+a+r+*+*[n>)+{>f+o<r<(-m] =<2<5<64;}-]-(m+;yry[rm*])/[*
*/q=(n<3||!(m[2][0]-'-'||m[2][1]))?stdout /*]{ }[*/:fopen(m[2],d+14);if(!p||/*
"]<<*-]>y++>u>>+r >+u+++y>--u---r>++i+++" <)< ;[>-m-.>a-.-i.++n.>[(w)*/!q/**/)
return+printf("Can " "not\x20open\40%s\40" "" "for\40%sing\n",m[!p?1:2],!p?/*
o=82]5<<+(+3+1+&.(+ m +-+1.)<)<|<|.6>4>-+(> m- &-1.9-2-)-|-|.28>-w-?-m.:>([28+
*/"read":"writ");for ( a=k=u= 0;y[u]; u=2 +u){y[k++ ]=y[u];}if((a=fread(b,1,1024/*
,mY/R*Y"R*/,p/*U*/)/* R*/ )>/*U{ */ 2&& b/*Y*/[0]/*U*/=='P' &&4==/*"y*r/y)r\}
*/sscanf(b,d,&k,& A,& i, &r)&& ! (k-6&&k -5)&&r==255){u=A;if(n>3){/*
]&<1<6<?<m.-+1>3> +:+ .1>3+++ . -m-) -;.u+=++.1<0< <; f<o<r<(.;<([m(=)/8*/
u++;i++;}fprintf (q, d,k, u >>1,i>>1,r);u = k-5?8:4;k=3;}else
/*]>*/{(u)=/*{ p> >u >t>-]s >++(.yryr*/+( n+14>17)?8/4:8*5/
4;}for(r=i=0 ; ;){u*=6;u+= (n>3?1:0);if (y[u]&01)fputc(/*
<g-e<t.c>h.a r -(-).)8+<1. >;+i.(<)< <)+{+i.f>([180*/1*
(r),q);if(y[u ]&16)k=A;if (y[u]&2)k--;if(i/*
("^w^NAMORI; { I*/==a/*" )*/){/**/i=a=(u)*11
&255;if(1&&0>= (a= fread(b,1,1024,p))&&
")]i>(w)-;} { /i-f-(-m--M1-0.)<{"
[ 8]==59/* */ )break;i=0;}r=b[i++]
;u+=(/**>> *..</<<<)<[[;]**/+8&*
(y+u))?(10- r?4:2):(y[u] &4)?(k?2:4):2;u=y[u/*
49;7i\(w)/;} y}ru\=*ri[ ,mc]o;n}trientuu ren (
*/]-(int)'`';} fclose( p);k= +fclose( q);
/*] <*.na/m*o{ri{ d;^w^;} }^_^}}
" */ return k- -1+ /*\' '-`*/
( -/*}/ */0x01 ); {;{ }}
; /*^w^*/ ;}
If the program is run using its own source as the input, the result is:
[root@host ~]# ./akari akari.c
int
*w,m,_namori=('n');
#include<stdio.h>/*;hrd"% dnd4%"*/
/**/int(y),u,r[128*2/*{y}icuhya*rr*rya=
*/];void/**/i(){putchar(u);}int/**/main(/*
"(n"l)?M5{YlcpvdluvKct[j skao(tve"t"oYRYR"
*/int(w),char**n){for(m =256;--m;r[m]/*
"<*]y+u>r>u+y-u-r+i+" ) ;>m.a.i+n>()/q*/
=25<(31&( m -1))||64-( m &192)||2>w?m:(2+
m/*"*,/U// R/)/U * & /Y/0/U/=P &=/"*/)\
&16?m-13 : 13+ m) ;u=+10 ;for(;(m=/*
*>/()/{ p u t-s +(yy*+ n1>7?/:*/
getchar ())+1 ;i() ){if(10/*
"wNMR;{ I/=/" )/{*/==u*1
)i(); if(m-10){
u=/*> *./<)[;*/8*
4;i(); }u=r[ m];}return(
* *n/*{i ;w; }_}
( -*/ *00 ) ; }
[root@host ~]# ./akari akari.c > ./akari.small
[root@host ~]# ./akari ./akari.small
wm_aoi(n)
/*ity,,[2*/{}char*y=
(")M{lpduKtjsa(v""YY"
"*yuruyuri") ;main(/*
/",U/ R)U* Y0U= ="/\
*/){puts (y+ 17/*
"NR{I=" ){/=*
=* */);/*
**/{ ;;}}
[root@host ~]#
[root@host ~]# ./akari ./akari.small > ./akari.smaller
[root@host ~]# ./akari ./akari.smaller
main
(){puts("Y"
"U RU YU "\
"RI" )/*
*/ ;}
[root@host ~]#
See also
Obfuscated Perl Contest
Underhanded C Contest
Esoteric programming language
Notes and references
External links
C (programming language) contests
Computer humour
Software obfuscation
Ironic and humorous awards
Recurring events established in 1984 | International Obfuscated C Code Contest | Technology,Engineering | 6,591 |
4,713,458 | https://en.wikipedia.org/wiki/Canadian%20Institute%20for%20Theoretical%20Astrophysics | The Canadian Institute for Theoretical Astrophysics (CITA) is a national research institute funded by the Natural Sciences and Engineering Research Council, located at the University of Toronto in Toronto, Ontario, Canada. CITA's mission is "to foster interaction within the Canadian theoretical Astrophysics community and to serve as an international center of excellence for theoretical studies in astrophysics." CITA was incorporated in 1984.
CITA has close administrative and academic relations with the Canadian Institute for Advanced Research (CIFAR); several CITA faculty also serve as members of CIFAR.
History
The concept of a nationally supported institute for theoretical astrophysics dates back to discussions within the Canadian Astronomical Society in the early 1980s. A series of committees advocated a model of a university-based institute governed by a council of Canadian astrophysicists. Proposals were solicited from universities across the country to host this institute, which by now had been named the Canadian Institute for Theoretical Astrophysics/Institut Canadien d'astrophysique theorique (CITA/ICAT). The University of Toronto won the resulting spirited competition, and CITA (University of Toronto) was established as an institute within the School of Graduate Studies in June 1984, with a staff consisting of a single professor (Peter G. Martin) as acting director, a visiting professor from Queen's University (Richard Henriksen), and a temporary administrative assistant. Today there are 9 faculty members, two of whom are Canada Research Chairs, along with two administrative staff, a systems manager, and technical computing staff.
At the same time, Professor Richard Henriksen worked on establishing CITA, Inc. (a separate entity from CITA the institute at the University of Toronto) as an incorporated national institute and charity governed by an elected Council of Canadian astrophysics/relativity professors to promote research in theoretical astrophysics across the country. CITA Council is selected from the members of CITA, Inc. There are presently 55 members of CITA, Inc.
CITA's research activities are supported by the University of Toronto, NSERC, multiple grants by the Ontario and Federal governments, as well as private sponsors including the Simons Foundation and CIFAR.
Membership
CITA has a small number of long-term faculty members, and a larger number of short term (3- or 5-year) postdoctoral positions, as well as an active visitor program; the purpose of the relatively high influx of new researchers or visitors is to ensure that timely topics are well represented at CITA. There are currently approximately 20 postdoctoral researchers at CITA, and 4 full-time administrative and computer staff. Several graduate students in the University of Toronto Department of Astronomy and Astrophysics or Department of Physics work with CITA researchers throughout their graduate work, and typically ten undergraduates come to CITA to work over the summer.
In 1985, Scott D. Tremaine came to CITA as its first director; Margaret Fukunaga was hired as the permanent Business Officer and Dick Bond arrived as the second faculty member. Dick Bond became the director in 1996 and Norman Murray was director 2006–2016.
Notable past and present faculty members of CITA also include:
Peter G. Martin, 1984–present
Scott Tremaine, 1985–1997, 2020–present
Dick Bond, 1985–present
Nick Kaiser, 1988–1997
Norm Murray, 1993–present
Lev Kofman, 1998–2009
Ue-Li Pen, 1998–present
Chris Thompson, 2000–present
Roman Rafikov, 2005–2007
Harald Pfeiffer, 2009–2018
Daniel Green, 2014–2015
Juna Kollmeier, 2021-present
Maya Fishbach, 2022-present
Reed Essick, 2022-present
Bart Ripperda, 2022-present
Research
CITA has active research programs in cosmology (particularly in studies of the cosmic microwave background and intensity mapping), early universe studies and cosmic inflation, neutron stars (especially scintillometry and magnetars), fast radio bursts, active galaxies, star formation, planet formation, gravitational waves and plasma astrophysics.
See also
Algonquin 46m radio telescope
Algonquin Radio Observatory
Dominion Astrophysical Observatory
Dominion Radio Astrophysical Observatory
Herzberg Institute of Astrophysics
References
External links
The Canadian Institute for Theoretical Astrophysics
An incomplete listing of recent CITA publications
University of Toronto
Astronomy institutes and departments | Canadian Institute for Theoretical Astrophysics | Astronomy | 890 |
20,989,214 | https://en.wikipedia.org/wiki/Susanne%20Albers | Susanne Albers is a German theoretical computer scientist and professor of computer science at the Department of Informatics of the Technical University of Munich. She is a recipient of the Otto Hahn Medal and the Leibniz Prize.
Education and career
Albers studied mathematics, computer science, and business administration in Osnabrück and received her PhD (Dr. rer. nat.) in 1993 at Saarland University under the supervision of Kurt Mehlhorn. Until 1999, she was associated with the Max Planck Institute for Computer Science and held visiting and postdoctoral positions at the International Computer Science Institute in Berkeley, Free University of Berlin, and University of Paderborn. In 1999, she received her habilitation and accepted a position at Dortmund University. From 2001 to 2009, she was professor of computer science at University of Freiburg. From 2009 to 2013, she was at Humboldt University of Berlin.
Since 2013, Albers has held the Chair for Efficient Algorithms at the Department of Informatics of the Technical University of Munich.
Research
Albers' research is in the design and analysis of algorithms, especially online algorithms, approximation algorithms, algorithmic game theory and algorithm engineering.
Awards and honors
In 1993, she received the Otto Hahn Medal from the Max Planck Society, and in 2008 the Leibniz Prize from the German Research Foundation, considered the most important German research prize that includes a grant of €2.5 million. In 2011, she was elected as a fellow of the German Informatics Society. In 2014, she became one of ten inaugural fellows of the European Association for Theoretical Computer Science.
References
External links
German computer scientists
Theoretical computer scientists
Academic staff of the University of Freiburg
Academic staff of the Humboldt University of Berlin
Academic staff of the Technical University of Munich
Gottfried Wilhelm Leibniz Prize winners
1965 births
Living people
People from Georgsmarienhütte
German women computer scientists
German women academics
Game theorists | Susanne Albers | Mathematics | 379 |
46,822,454 | https://en.wikipedia.org/wiki/Hobby%20ZR-84 | Hobby ZR-84 was an educational and home computer developed by MICROSYS Beočin in SFRY (now Serbia) in 1984.
Specifications
Source:
CPU: Z80A running at 4 MHz
ROM: 12 KB BASIC
Primary memory: 16 KB (expandable up to 48 KB)
Secondary storage: cassette tape, floppy drive
Display: text mode 16 lines with 64 characters each; low-res graphics mode 128x48
Sound: separate board
I/O ports: composite and RF video, cassette tape storage, and expansion connector
References
External links
http://forum.benchmark.rs/showthread.php?323083-Hobby-ZR-84-intervju
Home computers
Z80-based home computers
Computer-related introductions in 1984 | Hobby ZR-84 | Technology | 159 |
1,782,310 | https://en.wikipedia.org/wiki/Abu%20Said%20Gorgani | Abu Sa'id Dharir Gorgani (also transliterated Gurgani) was a 9th-century Persian mathematician and astronomer from Gorgan, Iran. He wrote a treatise on geometrical problems and another on the drawing of the meridian. George Sarton considers him a pupil of Ibn al-A'rabi, but Carl Brockelmann rejects this opinion.
Works
Two of his works are extant:
Masa'il Hindisia (a manuscript is available in Cairo)
Istikhraj khat nisf al-nahar min kitab analima wa al-borhan alayh (available in Cairo, translated by Carl Schoy)
See also
List of Iranian scientists
Sources
H. Suter. Mathematiker (12, 1900).
845 deaths
9th-century Iranian mathematicians
Year of birth unknown
9th-century Iranian astronomers
Medieval Iranian astronomers
People from Gorgan
Medieval Iranian physicists | Abu Said Gorgani | Astronomy | 186 |
46,810,026 | https://en.wikipedia.org/wiki/Penicillium%20mononematosum | Penicillium mononematosum is an anamorph species of the genus Penicillium which produces viriditoxin.
Further reading
References
mononematosum
Fungi described in 1989
Fungus species | Penicillium mononematosum | Biology | 44 |
7,271,098 | https://en.wikipedia.org/wiki/Edge%20recombination%20operator | The edge recombination operator (ERO) is an operator that creates a path that is similar to a set of existing paths (parents) by looking at the edges rather than the vertices. The main application of this is for crossover in genetic algorithms when a genotype with non-repeating gene sequences is needed such as for the travelling salesman problem. It was described by Darrell Whitley and others in 1989.
Algorithm
ERO is based on an adjacency matrix, which lists the neighbors of each node in any parent.
For example, in a travelling salesman problem, the node map for the parents CABDEF and ABCEFD is generated by taking the first parent, say 'ABCEFD', and recording its immediate neighbors, including those that roll around the end of the string.
Therefore:
... -> [A] <-> [B] <-> [C] <-> [E] <-> [F] <-> [D] <- ...
...is converted into the following adjacency matrix by taking each node in turn and listing its connected neighbors:
A: B D
B: A C
C: B E
D: F A
E: C F
F: E D
With the same operation performed on the second parent (CABDEF), the following is produced:
A: C B
B: A D
C: F A
D: B E
E: D F
F: E C
Next, the union of these two lists is taken, ignoring any duplicates. This is as simple as taking the elements of each list and appending them to generate a list of unique link end points. In our example, this generates the following:
A: B C D = {B,D} ∪ {C,B}
B: A C D = {A,C} ∪ {A,D}
C: A B E F = {B,E} ∪ {F,A}
D: A B E F = {F,A} ∪ {B,E}
E: C D F = {C,F} ∪ {D,F}
F: C D E = {E,D} ∪ {E,C}
The result is another adjacency matrix, which stores the links for a network described by all the links in the parents. Note that more than two parents can be employed here to give more diverse links. However, this approach may result in sub-optimal paths.
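A minimal sketch of this union step in C is given below (not part of the original description); it maps the city names A–F of the example above to the indices 0–5 and marks each inherited neighbor relation in a Boolean matrix:

#include <stdio.h>

#define N 6

int main(void)
{
    const char *parents[2] = { "ABCEFD", "CABDEF" };
    int adj[N][N] = { 0 };                   /* adj[c][d] = 1 if d neighbors c */

    for (int p = 0; p < 2; p++)
        for (int i = 0; i < N; i++) {
            int c = parents[p][i] - 'A';
            adj[c][parents[p][(i + 1) % N] - 'A'] = 1;     /* right neighbor */
            adj[c][parents[p][(i + N - 1) % N] - 'A'] = 1; /* left neighbor (wraps) */
        }

    for (int c = 0; c < N; c++) {            /* print the union neighbor lists */
        printf("%c:", 'A' + c);
        for (int d = 0; d < N; d++)
            if (adj[c][d])
                printf(" %c", 'A' + d);
        printf("\n");
    }
    return 0;
}

Running it prints exactly the union lists shown above, from A: B C D down to F: C D E.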
Then, to create a path K, the following algorithm is employed:
algorithm ero is
let K be the empty list
let N be the first node of a random parent.
while length(K) < length(Parent) do
K := K, N (append N to K)
Remove N from all neighbor lists
if N's neighbor list is non-empty then let N* be the neighbor of N with the fewest neighbors in its list (or a random one, should there be multiple)
else
let N* be a randomly chosen node that is not in K
N := N*
To step through the example, we randomly select a node from the parent starting points, {A, C}.
() -> A. We remove A from all the neighbor sets, and find that the smallest of B, C and D is B={C,D}.
AB. The smallest sets of C and D are C={E,F} and D={E,F}. We randomly select D.
ABD. Smallest are E={C,F}, F={C,E}. We pick F.
ABDF. C={E}, E={C}. We pick C.
ABDFC. The smallest set is E={}.
ABDFCE. The length of the child is now the same as the parent, so we are done.
Note that the only edge introduced in ABDFCE is AE.
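The full construction fits in a short program. The sketch below (an illustration using the same six-city example, not a reference implementation) stores neighbor sets as bitmasks; unlike the trace above it breaks ties deterministically by taking the first candidate instead of a random one, so it yields the valid child ABCEDF rather than ABDFCE:

#include <stdio.h>

#define N 6

static int popcount(unsigned x)
{
    int c = 0;
    for (; x; x >>= 1)
        c += x & 1;
    return c;
}

int main(void)
{
    const char *parents[2] = { "ABCEFD", "CABDEF" };
    unsigned nb[N] = { 0 };              /* union neighbor sets as bitmasks */
    unsigned used = 0;

    for (int p = 0; p < 2; p++)          /* build the union adjacency lists */
        for (int i = 0; i < N; i++) {
            int c = parents[p][i] - 'A';
            nb[c] |= 1u << (parents[p][(i + 1) % N] - 'A');
            nb[c] |= 1u << (parents[p][(i + N - 1) % N] - 'A');
        }

    int n = parents[0][0] - 'A';         /* start at first node of a parent */
    for (int step = 0; step < N; step++) {
        putchar('A' + n);
        used |= 1u << n;
        for (int i = 0; i < N; i++)      /* remove n from all neighbor sets */
            nb[i] &= ~(1u << n);

        int next = -1, best = N + 1;     /* neighbor of n with fewest links */
        for (int i = 0; i < N; i++)
            if ((nb[n] >> i & 1) && popcount(nb[i]) < best) {
                best = popcount(nb[i]);
                next = i;
            }
        if (next < 0)                    /* dead end: take any unused node */
            for (int i = 0; i < N; i++)
                if (!(used >> i & 1)) {
                    next = i;
                    break;
                }
        n = next;
    }
    putchar('\n');                       /* prints the child tour: ABCEDF */
    return 0;
}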
Comparison with other operators
Edge recombination is generally considered a good option for problems like the travelling salesman problem. In a 1999 study at the University of the Basque Country, edge recombination provided better results than all the other crossover operators including partially mapped crossover and cycle crossover.
References
Genetic algorithms | Edge recombination operator | Biology | 879 |
55,958,870 | https://en.wikipedia.org/wiki/Caldanaerobius%20fijiensis | Caldanaerobius fijiensis is a thermophilic, obligately anaerobic and spore-forming bacterium from the genus of Caldanaerobius which has been isolated from a hot spring in Fiji.
References
Thermoanaerobacterales
Bacteria described in 2008
Thermophiles
Anaerobes | Caldanaerobius fijiensis | Biology | 69 |
8,854,508 | https://en.wikipedia.org/wiki/Ostriker%E2%80%93Peebles%20criterion | In astronomy, the Ostriker–Peebles criterion, named after its discoverers Jeremiah Ostriker and Jim Peebles, describes the formation of barred galaxies.
The rotating disc of a spiral galaxy, consisting of stars and solar systems, may become unstable in such a way that stars in the outer parts of the "arms" are released from the galaxy system, and the remaining stars collapse into a bar-shaped galaxy. This occurs in approximately one third of the known spiral galaxies.
Based on the rotational kinetic energy T and the total gravitational energy W, a galaxy will become barred when T/|W| ≳ 0.14.
References
External links
About barred galaxies
Extragalactic astronomy | Ostriker–Peebles criterion | Astronomy | 135 |
292,239 | https://en.wikipedia.org/wiki/Transitive%20closure | In mathematics, the transitive closure of a homogeneous binary relation R on a set X is the smallest relation on X that contains R and is transitive. For finite sets, "smallest" can be taken in its usual sense, of having the fewest related pairs; for infinite sets R+ is the unique minimal transitive superset of R.
For example, if X is a set of airports and x R y means "there is a direct flight from airport x to airport y" (for x and y in X), then the transitive closure of R on X is the relation R+ such that x R+ y means "it is possible to fly from x to y in one or more flights".
More formally, the transitive closure of a binary relation R on a set X is the smallest (w.r.t. ⊆) transitive relation R+ on X such that R ⊆ R+. We have R+ = R if, and only if, R itself is transitive.
Conversely, transitive reduction adduces a minimal relation D from a given relation R such that they have the same closure, that is, D+ = R+; however, many different D with this property may exist.
Both transitive closure and transitive reduction are also used in the closely related area of graph theory.
Transitive relations and examples
A relation R on a set X is transitive if, for all x, y, z in X, whenever x R y and y R z then x R z. Examples of transitive relations include the equality relation on any set, the "less than or equal" relation on any linearly ordered set, and the relation "x was born before y" on the set of all people. Symbolically, this can be denoted as: if x < y and y < z then x < z.
One example of a non-transitive relation is "city x can be reached via a direct flight from city y" on the set of all cities. Simply because there is a direct flight from one city to a second city, and a direct flight from the second city to the third, does not imply there is a direct flight from the first city to the third. The transitive closure of this relation is a different relation, namely "there is a sequence of direct flights that begins at city x and ends at city y". Every relation can be extended in a similar way to a transitive relation.
An example of a non-transitive relation with a less meaningful transitive closure is "x is the day of the week after y". The transitive closure of this relation is "some day x comes after a day y on the calendar", which is trivially true for all days of the week x and y (and thus equivalent to the Cartesian square, which is "x and y are both days of the week").
Existence and description
For any relation R, the transitive closure of R always exists. To see this, note that the intersection of any family of transitive relations is again transitive. Furthermore, there exists at least one transitive relation containing R, namely the trivial one: X × X. The transitive closure of R is then given by the intersection of all transitive relations containing R.
For finite sets, we can construct the transitive closure step by step, starting from R and adding transitive edges.
This gives the intuition for a general construction. For any set X, we can prove that the transitive closure is given by the following expression:
R+ = R^1 ∪ R^2 ∪ R^3 ∪ ⋯ = ⋃_{i ≥ 1} R^i,
where R^i is the i-th power of R, defined inductively by
R^1 = R and, for i > 0, R^(i+1) = R^i ∘ R,
where ∘ denotes composition of relations.
To show that the above definition of R+ is the least transitive relation containing R, we show that it contains R, that it is transitive, and that it is the smallest set with both of those characteristics.
R ⊆ R+: R+ contains all of the R^i, so in particular R+ contains R.
R+ is transitive: If (x, y) ∈ R+ and (y, z) ∈ R+, then (x, y) ∈ R^i and (y, z) ∈ R^j for some i, j by definition of R+. Since composition is associative, R^(i+j) = R^i ∘ R^j; hence (x, z) ∈ R^(i+j) ⊆ R+ by definition of R+ and of the powers of R.
R+ is minimal, that is, if T is any transitive relation containing R, then R+ ⊆ T: Given any such T, induction on i can be used to show R^i ⊆ T for all i as follows: Base: R^1 = R ⊆ T by assumption. Step: If R^i ⊆ T holds, and (x, z) ∈ R^(i+1) = R^i ∘ R, then (x, y) ∈ R^i and (y, z) ∈ R for some y, by definition of composition. Hence (y, z) ∈ T by assumption and (x, y) ∈ T by the induction hypothesis. Hence (x, z) ∈ T by transitivity of T; this completes the induction. Finally, R^i ⊆ T for all i implies R+ ⊆ T by definition of R+.
Properties
The intersection of two transitive relations is transitive.
The union of two transitive relations need not be transitive. To preserve transitivity, one must take the transitive closure. This occurs, for example, when taking the union of two equivalence relations or two preorders. To obtain a new equivalence relation or preorder one must take the transitive closure (reflexivity and symmetry—in the case of equivalence relations—are automatic).
In graph theory
In computer science, the concept of transitive closure can be thought of as constructing a data structure that makes it possible to answer reachability questions. That is, can one get from node a to node d in one or more hops? A binary relation tells you only that node a is connected to node b, and that node b is connected to node c, etc. After the transitive closure is constructed, as depicted in the following figure, in an O(1) operation one may determine that node d is reachable from node a. The data structure is typically stored as a Boolean matrix, so if matrix[1][4] = true, then it is the case that node 1 can reach node 4 through one or more hops.
The transitive closure of the adjacency relation of a directed acyclic graph (DAG) is the reachability relation of the DAG and a strict partial order.
The transitive closure of an undirected graph produces a cluster graph, a disjoint union of cliques. Constructing the transitive closure is an equivalent formulation of the problem of finding the components of the graph.
In logic and computational complexity
The transitive closure of a binary relation cannot, in general, be expressed in first-order logic (FO).
This means that one cannot write a formula using predicate symbols R and T that will be satisfied in
any model if and only if T is the transitive closure of R.
In finite model theory, first-order logic (FO) extended with a transitive closure operator is usually called transitive closure logic, and abbreviated FO(TC) or just TC. TC is a sub-type of fixpoint logics. The fact that FO(TC) is strictly more expressive than FO was discovered by Ronald Fagin in 1974; the result was then rediscovered by Alfred Aho and Jeffrey Ullman in 1979, who proposed to use fixpoint logic as a database query language. With more recent concepts of finite model theory, proof that FO(TC) is strictly more expressive than FO follows immediately from the fact that FO(TC) is not Gaifman-local.
In computational complexity theory, the complexity class NL corresponds precisely to the set of logical sentences expressible in TC. This is because the transitive closure property has a close relationship with the NL-complete problem STCON for finding directed paths in a graph. Similarly, the class L is first-order logic with the commutative, transitive closure. When transitive closure is added to second-order logic instead, we obtain PSPACE.
In database query languages
Since the 1980s Oracle Database has implemented a proprietary SQL extension CONNECT BY... START WITH that allows the computation of a transitive closure as part of a declarative query. The SQL 3 (1999) standard added a more general WITH RECURSIVE construct also allowing transitive closures to be computed inside the query processor; as of 2011 the latter is implemented in IBM Db2, Microsoft SQL Server, Oracle, PostgreSQL, and MySQL (v8.0+). SQLite released support for this in 2014.
Datalog also implements transitive closure computations.
MariaDB implements Recursive Common Table Expressions, which can be used to compute transitive closures. This feature was introduced in release 10.2.2 of April 2016.
Algorithms
Efficient algorithms for computing the transitive closure of the adjacency relation of a graph can be found in Nuutila (1995). Reducing the problem to multiplications of adjacency matrices achieves the time complexity of fast matrix multiplication, about O(n^2.37). However, this approach is not practical since both the constant factors and the memory consumption for sparse graphs are high. The problem can also be solved by the Floyd–Warshall algorithm in O(n^3), or by repeated breadth-first search or depth-first search starting from each node of the graph.
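As an illustration, the sketch below (an invented four-node example) computes the transitive closure of a Boolean adjacency matrix with the Floyd–Warshall-style triple loop (Warshall's algorithm) in O(n^3) time:

#include <stdio.h>
#include <stdbool.h>

#define N 4

int main(void)
{
    /* edges: 0 -> 1, 1 -> 2, 2 -> 3 */
    bool t[N][N] = {
        { 0, 1, 0, 0 },
        { 0, 0, 1, 0 },
        { 0, 0, 0, 1 },
        { 0, 0, 0, 0 },
    };

    for (int k = 0; k < N; k++)          /* allow k as an intermediate node */
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                t[i][j] = t[i][j] || (t[i][k] && t[k][j]);

    for (int i = 0; i < N; i++) {        /* print the reachability matrix */
        for (int j = 0; j < N; j++)
            printf("%d ", t[i][j]);
        printf("\n");
    }
    return 0;
}

After the loops, t[i][j] is true exactly when j is reachable from i in one or more hops; here node 3 becomes reachable from node 0.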
For directed graphs, Purdom's algorithm solves the problem by first computing its condensation DAG and its transitive closure, then lifting it to the original graph. Its runtime is O(m + μn), where μ is the number of edges between its strongly connected components.
More recent research has explored efficient ways of computing transitive closure on distributed systems based on the MapReduce paradigm.
See also
Ancestral relation
Deductive closure
Reflexive closure
Symmetric closure
Transitive reduction (a smallest relation having the transitive closure of R as its transitive closure)
References
Foto N. Afrati, Vinayak Borkar, Michael Carey, Neoklis Polyzotis, Jeffrey D. Ullman, "Map-Reduce Extensions and Recursive Queries", EDBT 2011, March 22–24, 2011, Uppsala, Sweden.
Keller, U., 2004, Some Remarks on the Definability of Transitive Closure in First-order Logic and Datalog (unpublished manuscript)*
External links
"Transitive closure and reduction", The Stony Brook Algorithm Repository, Steven Skiena.
Binary relations
Closure operators
Graph algorithms | Transitive closure | Mathematics | 1,974 |
5,625,361 | https://en.wikipedia.org/wiki/Phosphoenolpyruvate%20carboxylase | Phosphoenolpyruvate carboxylase (also known as PEP carboxylase, PEPCase, or PEPC; , PDB ID: 3ZGE) is an enzyme in the family of carboxy-lyases found in plants and some bacteria that catalyzes the addition of bicarbonate (HCO3−) to phosphoenolpyruvate (PEP) to form the four-carbon compound oxaloacetate and inorganic phosphate:
PEP + HCO3− → oxaloacetate + Pi
This reaction is used for carbon fixation in CAM (crassulacean acid metabolism) and C4 organisms, as well as to regulate flux through the citric acid cycle (also known as the Krebs or TCA cycle) in bacteria and plants. The enzyme structure and its two-step catalytic, irreversible mechanism have been well studied. PEP carboxylase is highly regulated, both by phosphorylation and by allostery.
Enzyme structure
The PEP carboxylase enzyme is present in plants and some types of bacteria, but not in fungi or animals (including humans). The genes vary between organisms, but are strictly conserved around the active and allosteric sites discussed in the mechanism and regulation sections. Tertiary structure of the enzyme is also conserved.
The crystal structure of PEP carboxylase in multiple organisms, including Zea mays (maize), and Escherichia coli has been determined. The overall enzyme exists as a dimer-of-dimers: two identical subunits closely interact to form a dimer through salt bridges between arginine (R438 - exact positions may vary depending on the origin of the gene) and glutamic acid (E433) residues. This dimer assembles (more loosely) with another of its kind to form the four subunit complex. The monomer subunits are mainly composed of alpha helices (65%), and have a mass of 106kDa each. The sequence length is about 966 amino acids.
The enzyme active site is not completely characterized. It includes a conserved aspartic acid (D564) and a glutamic acid (E566) residue that non-covalently bind a divalent metal cofactor ion through the carboxylic acid functional groups. This metal ion can be magnesium, manganese or cobalt depending on the organism, and its role is to coordinate the phosphoenolpyruvate molecule as well as the reaction intermediates. A histidine (H138) residue at the active site is believed to facilitate proton transfer during the catalytic mechanism.
Enzyme mechanism
The mechanism of PEP carboxylase has been well studied. The enzymatic mechanism of forming oxaloacetate is very exothermic and thereby irreversible; the biological Gibbs free energy change (ΔG°′) is −30 kJ mol−1. The substrates and cofactor bind in the following order: metal cofactor (either Co2+, Mg2+, or Mn2+), PEP, bicarbonate (HCO3−). The mechanism proceeds in two major steps, as described below and shown in figure 2:
The bicarbonate acts as a nucleophile to attack the phosphate group in PEP. This results in the splitting of PEP into a carboxyphosphate and the (very reactive) enolate form of pyruvate.
Proton transfer takes place at the carboxyphosphate. This is most likely modulated by a histidine (H138) residue that first deprotonates the carboxy side, and then, as an acid, protonates the phosphate part. The carboxyphosphate then exothermically decomposes into carbon dioxide and inorganic phosphate, at this point making this an irreversible reaction. Finally, after the decomposition, the carbon dioxide is attacked by the enolate to form oxaloacetate.
The metal cofactor is necessary to coordinate the enolate and carbon dioxide intermediates; the CO2 molecule is only lost 3% of the time. The active site is hydrophobic to exclude water, since the carboxyphosphate intermediate is susceptible to hydrolysis.
Function
The three most important roles that PEP carboxylase plays in plant and bacterial metabolism are in the C4 cycle, the CAM cycle, and biosynthetic flux through the citric acid cycle.
The primary mechanism of carbon dioxide assimilation in plants is through the enzyme ribulose-1,5-bisphosphate carboxylase/oxygenase (also known as RuBisCO), which adds CO2 to ribulose-1,5-bisphosphate (a 5-carbon sugar) to form two molecules of 3-phosphoglycerate (two 3-carbon sugars). However, at higher temperatures and lower CO2 concentrations, RuBisCO adds oxygen instead of carbon dioxide, to form the unusable product glycolate in a process called photorespiration. To prevent this wasteful process, C4 plants increase the local CO2 concentration in a process called the C4 cycle. PEP carboxylase plays the key role of binding CO2 in the form of bicarbonate with PEP to create oxaloacetate in the mesophyll tissue. This is then converted back to pyruvate (through a malate intermediate), to release the CO2 in the deeper layer of bundle sheath cells for carbon fixation by RuBisCO and the Calvin cycle. Pyruvate is converted back to PEP in the mesophyll cells, and the cycle begins again, thus actively pumping CO2.
The second important and very similar biological significance of PEP carboxylase is in the CAM cycle. This cycle is common in organisms living in arid habitats. Plants cannot afford to open stomata during the day to take in CO2, as they would lose too much water by transpiration. Instead, stomata open at night, when water evaporation is minimal, and take in CO2 by fixing with PEP to form oxaloacetate though PEP carboxylase. Oxaloacetate is converted to malate by malate dehydrogenase, and stored for use during the day when the light dependent reaction generates energy (mainly in the form of ATP) and reducing equivalents such as NADPH to run the Calvin cycle.
Third, PEP carboxylase is significant in non-photosynthetic metabolic pathways. Figure 3 shows this metabolic flow (and its regulation). Similar to pyruvate carboxylase, PEP carboxylase replenishes oxaloacetate in the citric acid cycle. At the end of glycolysis, PEP is converted to pyruvate, which is converted to acetyl-coenzyme-A (acetyl-CoA), which enters the citric acid cycle by reacting with oxaloacetate to form citrate. To increase flux through the cycle, some of the PEP is converted to oxaloacetate by PEP carboxylase. Since the citric acid cycle intermediates provide a hub for metabolism, increasing flux is important for the biosynthesis of many molecules, such as for example amino acids.
Regulation
PEP carboxylase is mainly subject to two levels of regulation: phosphorylation and allostery. Figure 3 shows a schematic of the regulatory mechanism.
Phosphorylation by phosphoenolpyruvate carboxylase kinase turns the enzyme on, whereas phosphoenolpyruvate carboxylase phosphatase turns it back off. Both kinase and phosphatase are regulated by transcription. It is further believed that malate acts as a feedback inhibitor of kinase expression levels, and as an activator for phosphatase expression (transcription). Since oxaloacetate is converted to malate in CAM and C4 organisms, high concentrations of malate activate phosphatase expression - the phosphatase subsequently de-phosphorylates and thus deactivates PEP carboxylase, leading to no further accumulation of oxaloacetate and thus no further conversion of oxaloacetate to malate. Hence malate production is down-regulated.
The main allosteric inhibitors of PEP carboxylase are the carboxylic acids malate (weak) and aspartate (strong). Since malate is formed in the next step of the CAM and C4 cycles after PEP carboxylase catalyses the condensation of CO2 and PEP to oxaloacetate, this works as a feedback inhibition pathway. Oxaloacetate and aspartate are easily inter-convertible through a transaminase mechanism; thus high concentrations of aspartate are also a pathway of feedback inhibition of PEP carboxylase.
The main allosteric activators of PEP carboxylase are acetyl-CoA and fructose-1,6-bisphosphate (F-1,6-BP). Both molecules are indicators of increased glycolysis levels, and thus positive feed-forward effectors of PEP carboxylase. They signal the need to produce oxaloacetate to allow more flux through the citric acid cycle. Additionally, increased glycolysis means a higher supply of PEP is available, and thus more storage capacity for binding CO2 in transport to the Calvin cycle. It is also noteworthy that the negative effector aspartate competes with the positive effector acetyl-CoA, suggesting that they share an allosteric binding site.
Studies have shown that energy equivalents such as AMP, ADP and ATP have no significant effect on PEP carboxylase.
The magnitudes of the allosteric effects of these different molecules on PEP carboxylase activity depend on individual organisms.
References
EC 4.1.1
Photosynthesis | Phosphoenolpyruvate carboxylase | Chemistry,Biology | 2,066 |
1,301,687 | https://en.wikipedia.org/wiki/Wallis%20product | The Wallis product is the infinite product representation of π:
π/2 = ∏_{n=1}^∞ 4n²/(4n² − 1) = ∏_{n=1}^∞ (2n/(2n − 1))·(2n/(2n + 1)) = (2/1 · 2/3)·(4/3 · 4/5)·(6/5 · 6/7)·(8/7 · 8/9)·⋯
It was published in 1656 by John Wallis.
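A quick numerical check (an invented illustration, not part of the 1656 derivation) makes the convergence concrete: doubling the running partial product approaches π:

#include <stdio.h>

int main(void)
{
    double p = 1.0;                      /* running partial Wallis product */
    for (int n = 1; n <= 1000000; n++) {
        p *= (2.0 * n) / (2.0 * n - 1.0);
        p *= (2.0 * n) / (2.0 * n + 1.0);
        if (n == 10 || n == 1000 || n == 1000000)
            printf("n = %7d:  2*p = %.8f\n", n, 2.0 * p);
    }
    return 0;                            /* 2*p tends to pi = 3.14159265... */
}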
Proof using integration
Wallis derived this infinite product using interpolation, though his method is not regarded as rigorous. A modern derivation can be found by examining I(n) = ∫₀^π sinⁿ x dx for even and odd values of n, and noting that for large n, increasing n by 1 results in a change that becomes ever smaller as n increases. Let
I(n) = ∫₀^π sinⁿ x dx.
(This is a form of Wallis' integrals.) Integrate by parts with u = sin^(n−1) x and dv = sin x dx:
I(n) = (n − 1) ∫₀^π sin^(n−2) x cos² x dx = (n − 1)·(I(n − 2) − I(n)),
so that
I(n) = ((n − 1)/n)·I(n − 2).
Now, we make two variable substitutions, n → 2n and n → 2n + 1, for convenience to obtain:
I(2n) = ((2n − 1)/(2n))·I(2n − 2) and I(2n + 1) = ((2n)/(2n + 1))·I(2n − 1).
We obtain the values I(0) = π and I(1) = 2 for later use.
Now, we calculate I(2n) for even values by repeatedly applying the recurrence relation result from the integration by parts. Eventually, we get down to I(0), which we have calculated:
I(2n) = ((2n − 1)/(2n))·((2n − 3)/(2n − 2))·⋯·(1/2)·π.
Repeating the process for odd values I(2n + 1):
I(2n + 1) = ((2n)/(2n + 1))·((2n − 2)/(2n − 1))·⋯·(2/3)·2.
We make the following observation, based on the fact that 0 ≤ sin x ≤ 1 on [0, π]:
sin^(2n+1) x ≤ sin^(2n) x ≤ sin^(2n−1) x, and hence I(2n + 1) ≤ I(2n) ≤ I(2n − 1).
Dividing by I(2n + 1):
1 ≤ I(2n)/I(2n + 1) ≤ I(2n − 1)/I(2n + 1) = (2n + 1)/(2n), where the equality comes from our recurrence relation.
By the squeeze theorem, I(2n)/I(2n + 1) → 1 as n → ∞. Writing this ratio out using the two products above and rearranging yields the Wallis product.
Proof using Laplace's method
See the main page on Gaussian integral.
Proof using Euler's infinite product for the sine function
While the proof above is typically featured in modern calculus textbooks, the Wallis product is, in retrospect, an easy corollary of the later Euler infinite product for the sine function.
Let x = π/2 in Euler's infinite product for the sine function, sin x / x = ∏_{n=1}^∞ (1 − x²/(n²π²)). This gives 2/π = ∏_{n=1}^∞ (1 − 1/(4n²)), which rearranges to π/2 = ∏_{n=1}^∞ 4n²/(4n² − 1).
Relation to Stirling's approximation
Stirling's approximation for the factorial function asserts that
n! = √(2πn)·(n/e)ⁿ·(1 + O(1/n)).
Consider now the finite approximations to the Wallis product, obtained by taking the first k terms in the product:
p_k = ∏_{n=1}^k (2n/(2n − 1))·(2n/(2n + 1)),
where p_k can be written as
p_k = 2^(4k)·(k!)⁴/((2k)!²·(2k + 1)).
Substituting Stirling's approximation in this expression (both for k! and (2k)!) one can deduce (after a short calculation) that p_k converges to π/2 as k → ∞.
Derivative of the Riemann zeta function at zero
The Riemann zeta function and the Dirichlet eta function can be defined:
ζ(s) = ∑_{n=1}^∞ 1/nˢ (for Re(s) > 1), η(s) = (1 − 2^(1−s))·ζ(s) = ∑_{n=1}^∞ (−1)^(n−1)/nˢ (for Re(s) > 0).
Applying an Euler transform to the latter series, the following is obtained:
η′(0) = (1/2)·ln(π/2),
the logarithm of the Wallis product; combining this with η′(0) = 2·ln 2·ζ(0) − ζ′(0) and ζ(0) = −1/2 yields ζ′(0) = −(1/2)·ln(2π).
See also
John Wallis, English mathematician who is given partial credit for the development of infinitesimal calculus and pi.
Viète's formula, a different infinite product formula for π.
Leibniz formula for π, an infinite sum that can be converted into an infinite Euler product for π.
Wallis sieve
The Pippenger product formula obtains e by taking roots of terms in the Wallis product.
Notes
External links
Articles containing proofs
Pi algorithms
Infinite products | Wallis product | Mathematics | 465 |
47,105,636 | https://en.wikipedia.org/wiki/Thomas%20Maddock%27s%20Sons%20Company | Thomas Maddock's Sons Company was founded by Thomas Maddock.
History
The firm was originally named Millington & Astbury, before Maddock joined it in 1872. It was subsequently renamed Millington, Astbury & Maddock the next year. When Millington left, it became Astbury & Maddock, before assuming the name Thomas Maddock & Sons upon the departure of Astbury. The plant is in Hamilton Township, New Jersey. It was built in 1924–25 and manufactured sanitary ware.
Later it was purchased by American Standard in 1929 and production continued until 2002. The site lies adjacent to the Hamilton Train Station on the Northeast Corridor Line. It has been redeveloped as offices and is the centerpiece of transit-oriented development around the station.
The building's original address was 240 Princeton Avenue but now lies on American Metro Boulevard.
See also
Thomas Maddock
National Register of Historic Places listings in Mercer County, New Jersey
References
Bibliography
Industrial buildings and structures on the National Register of Historic Places in New Jersey
Hamilton Township, Mercer County, New Jersey
National Register of Historic Places in Mercer County, New Jersey
New Jersey Register of Historic Places
Toilets | Thomas Maddock's Sons Company | Biology | 236 |
1,804,365 | https://en.wikipedia.org/wiki/Retrograde%20inversion | In music theory, retrograde inversion is a musical term that literally means "backwards and upside down": "The inverse of the series is sounded in reverse order." Retrograde reverses the order of the motif's pitches: what was the first pitch becomes the last, and vice versa. This is a technique used in music, specifically in twelve-tone technique, where the inversion and retrograde techniques are performed on the same tone row successively, "[t]he inversion of the prime series in reverse order from last pitch to first."
Conventionally, inversion is carried out first, and the inverted form is then taken backward to form the retrograde inversion, so that the untransposed retrograde inversion ends with the pitch that began the prime form of the series. In his late twelve-tone works, however, Igor Stravinsky preferred the opposite order, so that his row charts use inverse retrograde (IR) forms for his source sets, instead of retrograde inversions (RI), although he sometimes labeled them RI in his sketches.
For example, the forms of the row from Requiem Canticles are as follows:
P0:
R0:
I0:
RI0:
IR0:
Note that IR is a transposition of RI, the pitch class between the last pitches of P and I above RI.
Other compositions that include retrograde inversions in its rows include works by Tadeusz Baird and Karel Goeyvaerts. One work in particular by the latter composer, Nummer 2, employs retrograde of the recurring twelve-tone row B–F–F–E–G–A–E–D–A–B–D–C in the piano part. It is performed in both styles, particularly in the outer sections of the piece. The final movement of Paul Hindemith's Ludus Tonalis, the Postludium, is an exact retrograde inversion of the work's opening Praeludium.
Sources
Musical symmetry
Serialism | Retrograde inversion | Physics | 408 |
27,242,038 | https://en.wikipedia.org/wiki/Pilar%20Ruiz-Lapuente | Pilar Ruiz-Lapuente (born 1964, Barcelona) is an astrophysicist working as a professor at the University of Barcelona. Her work has included research on type Ia supernovae. In 2004, she led the team that searched for the companion star to the white dwarf that became supernova SN 1572, observed by Tycho Brahe, among others. Ruiz-Lapuente's research on supernovae contributed to the discovery of the accelerating expansion of the universe.
Career overview
Ruiz-Lapuente completed her degree in Physics at the University of Barcelona, then did her doctoral studies at the University of Barcelona, the Max Planck Institute for Astrophysics, and the European Southern Observatory. She then went on to become a research fellow at the Center for Astrophysics Harvard & Smithsonian. As of 2012, she was a professor with the Department of Astronomy and Meteorology at the University of Barcelona.
Research on accelerating universe
Ruiz-Lapuente was one of the members of the Supernova Cosmology Project, one of two research teams which made the unexpected co-discovery, in 1998, that the universe was expanding at an accelerating rate. The teams discovered this by studying Type Ia supernovae and posited dark energy as an explanation for this accelerating expansion.
As a result of this discovery, Ruiz-Lapuente, along with her colleagues on the Supernova Cosmology Project and the co-discoverers on the High-z Supernova Search Team, received the 2007 Gruber Prize in Cosmology and the 2015 Breakthrough Prize in Fundamental Physics. The research she contributed to also resulted in the awarding of a Nobel Prize to her team's lead researcher, Saul Perlmutter, which he shared with the High-z Supernova Search Team's directors.
Notable publications
As of 2012, Ruiz-Lapuente had authored more than 130 journal articles. These include works published in Nature and in Science.
Some articles include:
Nebular spectra of type IA supernovae as probes for extragalactic distances, reddening, and nucleosynthesis
A possible low-mass type Ia supernova
Tycho Brahe's supernova: light from centuries past
Dark energy, gravitation and supernovae
She has also written a book titled "El enigma de la realidad. Las entidades de la física de Aristóteles a Einstein."
References
1964 births
Spanish astrophysicists
Living people
Academic staff of the University of Barcelona
20th-century Spanish scientists
Women astronomers
Women astrophysicists
University of Barcelona alumni
Spanish women scientists
20th-century Spanish women | Pilar Ruiz-Lapuente | Astronomy | 541 |
209,627 | https://en.wikipedia.org/wiki/Klein%E2%80%93Gordon%20equation | The Klein–Gordon equation (Klein–Fock–Gordon equation or sometimes Klein–Gordon–Fock equation) is a relativistic wave equation, related to the Schrödinger equation. It is second-order in space and time and manifestly Lorentz-covariant. It is a differential equation version of the relativistic energy–momentum relation $E^2 = (pc)^2 + (mc^2)^2$.
Statement
The Klein–Gordon equation can be written in different ways. The equation itself usually refers to the position space form, where it can be written in terms of separated space and time components or by combining them into a four-vector. By Fourier transforming the field into momentum space, the solution is usually written in terms of a superposition of plane waves whose energy and momentum obey the energy–momentum dispersion relation from special relativity. Here, the Klein–Gordon equation is given for both of the two common metric signature conventions. In the convention $\eta_{\mu\nu} = \operatorname{diag}(+1,-1,-1,-1)$ it reads
\[
\left(\Box + \frac{m^2 c^2}{\hbar^2}\right)\psi(t,\mathbf{x}) = 0, \qquad \Box = \frac{1}{c^2}\frac{\partial^2}{\partial t^2} - \nabla^2,
\]
while in the convention $\eta_{\mu\nu} = \operatorname{diag}(-1,+1,+1,+1)$ the same equation is written $\left(\Box - \frac{m^2 c^2}{\hbar^2}\right)\psi = 0$ with $\Box = -\frac{1}{c^2}\frac{\partial^2}{\partial t^2} + \nabla^2$; in either case the explicit form is
\[
\frac{1}{c^2}\frac{\partial^2 \psi}{\partial t^2} - \nabla^2\psi + \frac{m^2 c^2}{\hbar^2}\psi = 0 .
\]
Here, $\Box$ is the wave operator and $\nabla^2$ is the Laplace operator. The speed of light $c$ and reduced Planck constant $\hbar$ are often seen to clutter the equations, so they are therefore often expressed in natural units, where $c = \hbar = 1$.
Unlike the Schrödinger equation, the Klein–Gordon equation admits two values of $\omega$ for each $\mathbf{k}$: one positive and one negative. Only by separating out the positive- and negative-frequency parts does one obtain an equation describing a relativistic wavefunction. For the time-independent case, the Klein–Gordon equation becomes
\[
\left(\nabla^2 - \frac{m^2 c^2}{\hbar^2}\right)\psi(\mathbf{x}) = 0,
\]
which is formally the same as the homogeneous screened Poisson equation. In addition, the Klein–Gordon equation can also be represented as
\[
\hat{p}^\mu \hat{p}_\mu\,\psi = m^2 c^2\,\psi,
\]
where the four-momentum operator is given as
\[
\hat{p}^\mu = i\hbar\,\partial^\mu = \left(\frac{i\hbar}{c}\frac{\partial}{\partial t},\; -i\hbar\nabla\right).
\]
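As a worked illustration (a standard result, not reproduced from the article text; the constant $A$ is an arbitrary amplitude), the screened-Poisson form admits the spherically symmetric Yukawa-type solution
\[
\psi(r) = \frac{A}{r}\, e^{-\mu r}, \qquad \mu = \frac{mc}{\hbar},
\]
for $r > 0$, since $\nabla^2\!\left(e^{-\mu r}/r\right) = \mu^2\, e^{-\mu r}/r$ away from the origin. The Compton wavelength $1/\mu$ thus sets the range of the static field, the observation underlying Yukawa's theory of massive force carriers.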
Relevance
The equation is to be understood first as a classical continuous scalar field equation that can be quantized. The quantization process then introduces a quantum field whose quanta are spinless particles. Its theoretical relevance is similar to that of the Dirac equation.
The solutions of the equation include a scalar or pseudoscalar field. In the realm of particle physics, electromagnetic interactions can be incorporated, forming the topic of scalar electrodynamics, but the practical utility for particles like pions is limited. There is a second version of the equation for a complex scalar field that is theoretically important, being the equation of the Higgs boson. In the realm of condensed matter, it can be used for many approximations of quasi-particles without spin.
The equation can be put into the form of a Schrödinger equation. In this form it is expressed as two coupled differential equations, each of first order in time. The solutions have two components, reflecting the charge degree of freedom in relativity. It admits a conserved quantity, but this is not positive definite. The wave function cannot therefore be interpreted as a probability amplitude. The conserved quantity is instead interpreted as electric charge, and the norm squared of the wave function is interpreted as a charge density. The equation describes all spinless particles with positive, negative, and zero charge.
Any solution of the free Dirac equation is, for each of its four components, a solution of the free Klein–Gordon equation. Although it was historically invented as a single-particle equation, the Klein–Gordon equation cannot form the basis of a consistent quantum relativistic one-particle theory; any relativistic theory implies creation and annihilation of particles beyond a certain energy threshold.
Solution for free particle
Here, the Klein–Gordon equation in natural units, $(\Box + m^2)\psi(x) = 0$, with the metric signature $\eta_{\mu\nu} = \operatorname{diag}(+1,-1,-1,-1)$, is solved by Fourier transformation. Inserting the Fourier transformation
\[
\psi(x) = \int \frac{\mathrm{d}^4 p}{(2\pi)^4}\, e^{-i p\cdot x}\, \tilde{\psi}(p)
\]
and using orthogonality of the complex exponentials gives the dispersion relation
\[
p^2 = (p^0)^2 - \mathbf{p}^2 = m^2 .
\]
This restricts the momenta to those that lie on shell, giving positive and negative energy solutions
\[
p^0 = \pm E(\mathbf{p}), \qquad E(\mathbf{p}) = \sqrt{\mathbf{p}^2 + m^2} .
\]
For a new set of constants $C(p)$, the solution then becomes
\[
\psi(x) = \int \frac{\mathrm{d}^4 p}{(2\pi)^4}\, e^{-i p\cdot x}\, C(p)\,\delta\!\left((p^0)^2 - E(\mathbf{p})^2\right).
\]
It is common to handle the positive and negative energy solutions by separating out the negative energies and working only with positive $p^0$:
\[
\psi(x) = \int \frac{\mathrm{d}^4 p}{(2\pi)^4}\,\delta\!\left((p^0)^2 - E^2\right)\left(A(p)\, e^{-i p\cdot x} + B(p)\, e^{+i p\cdot x}\right)\theta(p^0) .
\]
In the last step, $B(p) \to B(-p)$ was renamed. Now we can perform the $p^0$-integration, picking up the positive frequency part from the delta function only:
\[
\psi(x) = \int \frac{\mathrm{d}^3 p}{(2\pi)^3\, 2E(\mathbf{p})}\left(A(\mathbf{p})\, e^{-i p\cdot x} + B(\mathbf{p})\, e^{+i p\cdot x}\right)\Big|_{p^0 = E(\mathbf{p})} .
\]
This is commonly taken as a general solution to the free Klein–Gordon equation. Note that because the initial Fourier transformation contained only Lorentz-invariant quantities like $p \cdot x = p_\mu x^\mu$, the last expression is also a Lorentz-invariant solution to the Klein–Gordon equation. If one does not require Lorentz invariance, one can absorb the $\frac{1}{2E(\mathbf{p})}$ factor into the coefficients $A(\mathbf{p})$ and $B(\mathbf{p})$.
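A short numerical sketch (illustrative only; natural units and the Gaussian momentum profile are assumptions, and only the positive-frequency branch is kept) confirms that a superposition built on the on-shell dispersion relation solves the free equation:

```python
import numpy as np

# Free Klein-Gordon wave packet via FFT (natural units: hbar = c = 1).
m, L, N = 1.0, 100.0, 1024
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
E = np.sqrt(k**2 + m**2)                 # on-shell dispersion relation

A = np.exp(-((k - 1.0) ** 2))            # assumed smooth profile A(k)

def psi(t):
    # Positive-frequency solution: sum over k of A(k) exp(-i(E t - k x)).
    return np.fft.ifft(A * np.exp(-1j * E * t))

# Residual of (d^2/dt^2 - d^2/dx^2 + m^2) psi at t = 0, using a centered
# difference in time and an exact spectral derivative in space.
dt = 1e-3
d2t = (psi(dt) - 2 * psi(0.0) + psi(-dt)) / dt**2
d2x = np.fft.ifft(-(k**2) * A)
residual = d2t - d2x + m**2 * psi(0.0)
print(np.abs(residual).max())            # small, set by the O(dt^2) error
```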
History
The equation was named after the physicists Oskar Klein and Walter Gordon, who in 1926 proposed that it describes relativistic electrons. Vladimir Fock also discovered the equation independently in 1926, slightly after Klein's work: Klein's paper was received on 28 April 1926, Fock's on 30 July 1926, and Gordon's on 29 September 1926. Other authors making similar claims in that same year include Johann Kudar, Théophile de Donder and Frans-H. van den Dungen, and Louis de Broglie. Although it turned out that modeling the electron's spin required the Dirac equation, the Klein–Gordon equation correctly describes spinless relativistic composite particles, such as the pion. On 4 July 2012, the European Organization for Nuclear Research (CERN) announced the discovery of the Higgs boson. Since the Higgs boson is a spin-zero particle, it is the first observed ostensibly elementary particle to be described by the Klein–Gordon equation. Further experimentation and analysis is required to discern whether the Higgs boson observed is that of the Standard Model or a more exotic, possibly composite, form.
The Klein–Gordon equation was first considered as a quantum wave equation by Erwin Schrödinger in his search for an equation describing de Broglie waves. The equation is found in his notebooks from late 1925, and he appears to have prepared a manuscript applying it to the hydrogen atom. Yet, because it fails to take into account the electron's spin, the equation predicts the hydrogen atom's fine structure incorrectly, including overestimating the overall magnitude of the splitting pattern by a factor of $4n/(2n-1)$ for the $n$-th energy level. The Dirac-equation relativistic spectrum is, however, easily recovered if the orbital-momentum quantum number $\ell$ is replaced by the total angular-momentum quantum number $j$. In January 1926, Schrödinger submitted for publication instead his equation, a non-relativistic approximation that predicts the Bohr energy levels of hydrogen without fine structure.
In 1926, soon after the Schrödinger equation was introduced, Vladimir Fock wrote an article about its generalization for the case of magnetic fields, where forces were dependent on velocity, and independently derived this equation. Both Klein and Fock used Kaluza and Klein's method. Fock also determined the gauge theory for the wave equation. The Klein–Gordon equation for a free particle has a simple plane-wave solution.
Derivation
The non-relativistic equation for the energy of a free particle is
\[
E = \frac{\mathbf{p}^2}{2m} .
\]
By quantizing this, we get the non-relativistic Schrödinger equation for a free particle:
\[
i\hbar\frac{\partial}{\partial t}\psi = -\frac{\hbar^2}{2m}\nabla^2\psi,
\]
where
\[
\hat{\mathbf{p}} = -i\hbar\nabla
\]
is the momentum operator ($\nabla$ being the del operator), and
\[
\hat{E} = i\hbar\frac{\partial}{\partial t}
\]
is the energy operator.
The Schrödinger equation suffers from not being relativistically invariant, meaning that it is inconsistent with special relativity.
It is natural to try to use the identity from special relativity describing the energy:
\[
E = \sqrt{\mathbf{p}^2 c^2 + m^2 c^4} .
\]
Then, just inserting the quantum-mechanical operators for momentum and energy yields the equation
\[
i\hbar\frac{\partial}{\partial t}\psi = \sqrt{(-i\hbar\nabla)^2 c^2 + m^2 c^4}\;\psi = \sqrt{-\hbar^2 c^2\nabla^2 + m^2 c^4}\;\psi .
\]
The square root of a differential operator can be defined with the help of Fourier transformations, but due to the asymmetry of space and time derivatives, Dirac found it impossible to include external electromagnetic fields in a relativistically invariant way. So he looked for another equation that can be modified in order to describe the action of electromagnetic forces. In addition, this equation, as it stands, is nonlocal (see also Introduction to nonlocal equations).
Klein and Gordon instead began with the square of the above identity, i.e.
\[
E^2 = \mathbf{p}^2 c^2 + m^2 c^4,
\]
which, when quantized, gives
\[
-\hbar^2\frac{\partial^2}{\partial t^2}\psi = \left(-\hbar^2 c^2\nabla^2 + m^2 c^4\right)\psi,
\]
which simplifies to
\[
\frac{\partial^2}{\partial t^2}\psi = c^2\nabla^2\psi - \frac{m^2 c^4}{\hbar^2}\psi .
\]
Rearranging terms yields
\[
\frac{1}{c^2}\frac{\partial^2}{\partial t^2}\psi - \nabla^2\psi + \frac{m^2 c^2}{\hbar^2}\psi = 0 .
\]
Since all reference to imaginary numbers has been eliminated from this equation, it can be applied to fields that are real-valued, as well as those that have complex values.
Rewriting the first two terms using the inverse of the Minkowski metric, $\eta^{\mu\nu} = \operatorname{diag}(+1,-1,-1,-1)$ with $x^0 = ct$, and writing the Einstein summation convention explicitly, we get
\[
-\eta^{\mu\nu}\partial_\mu\partial_\nu\psi = \sum_{\mu,\nu=0}^{3}\left(-\eta^{\mu\nu}\partial_\mu\partial_\nu\psi\right) = \nabla^2\psi - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}\psi .
\]
Thus the Klein–Gordon equation can be written in a covariant notation. This often means an abbreviation in the form of
\[
\left(\Box + \mu^2\right)\psi = 0,
\]
where
\[
\mu = \frac{mc}{\hbar}
\]
and
\[
\Box = \frac{1}{c^2}\frac{\partial^2}{\partial t^2} - \nabla^2 .
\]
This operator is called the wave operator.
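A quick symbolic check (illustrative; natural units $\hbar = c = 1$ and a generic on-shell momentum are assumed) that a plane wave satisfies the covariant equation above:

```python
import sympy as sp

# A plane wave on the mass shell solves the Klein-Gordon equation.
t, x, y, z = sp.symbols("t x y z", real=True)
m, px, py, pz = sp.symbols("m p_x p_y p_z", positive=True)
E = sp.sqrt(px**2 + py**2 + pz**2 + m**2)   # on-shell energy

psi = sp.exp(-sp.I * (E * t - px * x - py * y - pz * z))
box_psi = (sp.diff(psi, t, 2) - sp.diff(psi, x, 2)
           - sp.diff(psi, y, 2) - sp.diff(psi, z, 2))
print(sp.simplify(box_psi + m**2 * psi))     # -> 0
```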
Today this form is interpreted as the relativistic field equation for spin-0 particles. Furthermore, any component of any solution to the free Dirac equation (for a spin-1/2 particle) is automatically a solution to the free Klein–Gordon equation. This generalizes to particles of any spin due to the Bargmann–Wigner equations. Furthermore, in quantum field theory, every component of every quantum field must satisfy the free Klein–Gordon equation, making the equation a generic expression of quantum fields.
Klein–Gordon equation in a potential
The Klein–Gordon equation can be generalized to describe a field in some potential $V(\psi)$ as
\[
\Box\psi + \frac{\partial V}{\partial\bar\psi} = 0 .
\]
Then the Klein–Gordon equation is the case $V(\psi) = m^2\,\bar\psi\psi$.
Another common choice of potential which arises in interacting theories is the $\phi^4$ potential for a real scalar field,
\[
V(\phi) = \tfrac{1}{2}\,m^2\phi^2 + \tfrac{\lambda}{4}\,\phi^4
\]
(conventions for the normalization of $\lambda$ vary).
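With the normalization written above (an assumption; textbooks place different factors on $\lambda$), varying the action with this potential gives the nonlinear field equation
\[
\Box\phi + m^2\phi + \lambda\phi^3 = 0,
\]
so the interaction shows up as a cubic self-coupling term added to the free Klein–Gordon equation.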
Higgs sector
The pure Higgs boson sector of the Standard Model is modelled by a Klein–Gordon field with a potential, denoted $H$ for this section. The Standard Model is a gauge theory and so, while the field transforms trivially under the Lorentz group, it transforms as a $\mathbb{C}^2$-valued vector under the action of the $\mathrm{SU}(2)$ part of the gauge group. Therefore, while it is a vector field $H : \mathbb{R}^{1,3} \to \mathbb{C}^2$, it is still referred to as a scalar field, as scalar describes its transformation (formally, representation) under the Lorentz group. This is also discussed below in the scalar chromodynamics section.
The Higgs field is modelled by a potential
\[
V(H) = -\mu^2\, H^\dagger H + \lambda\left(H^\dagger H\right)^2,
\]
which can be viewed as a generalization of the $\phi^4$ potential, but has an important difference: it has a circle of minima. This observation is an important one in the theory of spontaneous symmetry breaking in the Standard Model.
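As a brief check (a standard result, using the sign conventions assumed above with $\mu^2, \lambda > 0$), extremizing the potential in the invariant $H^\dagger H$ gives
\[
\frac{\partial V}{\partial\left(H^\dagger H\right)} = -\mu^2 + 2\lambda\, H^\dagger H = 0 \quad\Longrightarrow\quad H^\dagger H = \frac{\mu^2}{2\lambda} \equiv \frac{v^2}{2},
\]
so the minima form a continuous family $|H| = v/\sqrt{2}$ rather than a single point, which is what permits spontaneous symmetry breaking.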
Conserved U(1) current
The Klein–Gordon equation (and action) for a complex field admits a $\mathrm{U}(1)$ symmetry. That is, under the transformations
\[
\psi \to e^{i\theta}\psi, \qquad \bar\psi \to e^{-i\theta}\bar\psi
\]
for a constant $\theta$, the Klein–Gordon equation is invariant, as is the action (see below). By Noether's theorem for fields, corresponding to this symmetry there is a current $J^\mu$, defined (up to an overall normalization convention) as
\[
J^\mu = i\left(\bar\psi\,\partial^\mu\psi - \psi\,\partial^\mu\bar\psi\right),
\]
which satisfies the conservation equation
\[
\partial_\mu J^\mu = 0 .
\]
The form of the conserved current can be derived systematically by applying Noether's theorem to the symmetry. We will not do so here, but simply verify that this current is conserved.
From the Klein–Gordon equation for a complex field $\psi$ of mass $m$, written in covariant notation and mostly plus signature,
\[
\left(\Box - m^2\right)\psi = 0,
\]
and its complex conjugate
\[
\left(\Box - m^2\right)\bar\psi = 0 .
\]
Multiplying on the left respectively by $\bar\psi$ and by $\psi$ (and omitting for brevity the explicit $x$ dependence),
\[
\bar\psi\left(\Box - m^2\right)\psi = 0, \qquad \psi\left(\Box - m^2\right)\bar\psi = 0 .
\]
Subtracting the former from the latter, we obtain
\[
\bar\psi\,\Box\psi - \psi\,\Box\bar\psi = 0,
\]
or in index notation,
\[
\bar\psi\,\partial_\mu\partial^\mu\psi - \psi\,\partial_\mu\partial^\mu\bar\psi = 0 .
\]
Applying this to the derivative of the current, one finds
\[
\partial_\mu J^\mu = i\left(\partial_\mu\bar\psi\,\partial^\mu\psi + \bar\psi\,\Box\psi - \partial_\mu\psi\,\partial^\mu\bar\psi - \psi\,\Box\bar\psi\right) = i\left(\bar\psi\,\Box\psi - \psi\,\Box\bar\psi\right) = 0 .
\]
This $\mathrm{U}(1)$ symmetry is a global symmetry, but it can also be gauged to create a local or gauge symmetry: see the scalar QED section below. The name gauge symmetry is somewhat misleading: it is really a redundancy of description, while the global symmetry is a genuine symmetry.
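A symbolic spot-check of the conservation law (illustrative; 1+1 dimensions, natural units, the current normalization above, and an arbitrary two-wave superposition are all assumptions):

```python
import sympy as sp

# U(1) current conservation for a superposition of on-shell plane waves.
t, x = sp.symbols("t x", real=True)
m, p1, p2 = sp.symbols("m p1 p2", positive=True)
E1, E2 = sp.sqrt(p1**2 + m**2), sp.sqrt(p2**2 + m**2)

psi = sp.exp(-sp.I * (E1 * t - p1 * x)) + 2 * sp.exp(-sp.I * (E2 * t - p2 * x))
psib = sp.conjugate(psi)

# Components of J^mu (up to normalization), mostly-plus signature:
# J^t = i conj(psi) d^t psi - c.c., with d^t = -d_t in this signature.
Jt = -sp.I * (psib * sp.diff(psi, t) - psi * sp.diff(psib, t))
Jx = sp.I * (psib * sp.diff(psi, x) - psi * sp.diff(psib, x))
print(sp.simplify(sp.diff(Jt, t) + sp.diff(Jx, x)))   # -> 0
```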
Lagrangian formulation
The Klein–Gordon equation can also be derived by a variational method, arising as the Euler–Lagrange equation of the action below. In natural units, with signature mostly minus, the actions take the simple form
\[
S = \int \mathrm{d}^4x \left(\frac{1}{2}\,\partial^\mu\phi\,\partial_\mu\phi - \frac{1}{2}\,m^2\phi^2\right)
\]
for a real scalar field $\phi$ of mass $m$, and
\[
S = \int \mathrm{d}^4x \left(\partial^\mu\bar\psi\,\partial_\mu\psi - m^2\,\bar\psi\psi\right)
\]
for a complex scalar field $\psi$ of mass $m$.
Applying the formula for the stress–energy tensor to the Lagrangian density (the quantity inside the integral), we can derive the stress–energy tensor of the scalar field. In natural units, for the complex field, it is
\[
T^{\mu\nu} = \partial^\mu\bar\psi\,\partial^\nu\psi + \partial^\nu\bar\psi\,\partial^\mu\psi - \eta^{\mu\nu}\left(\partial^\alpha\bar\psi\,\partial_\alpha\psi - m^2\,\bar\psi\psi\right).
\]
By integration of the time–time component over all space, one may show that both the positive- and negative-frequency plane-wave solutions can be physically associated with particles with positive energy. This is not the case for the Dirac equation and its energy–momentum tensor.
The stress–energy tensor is the set of conserved currents corresponding to the invariance of the Klein–Gordon equation under space-time translations $x^\mu \to x^\mu + c^\mu$. Therefore, each component is conserved, that is, $\partial_\mu T^{\mu\nu} = 0$ (this holds only on-shell, that is, when the Klein–Gordon equations are satisfied). It follows that the integral of $T^{0\nu}$ over space is a conserved quantity for each $\nu$. These have the physical interpretation of total energy for $\nu = 0$ and total momentum for $\nu = i$, with $i \in \{1, 2, 3\}$.
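For instance (a standard consequence of the natural-units tensor above for the complex field, not an addition from the source), the energy density is manifestly non-negative:
\[
T^{00} = \left|\partial_t\psi\right|^2 + \left|\nabla\psi\right|^2 + m^2\left|\psi\right|^2 \;\geq\; 0,
\]
which makes concrete the statement that both frequency signs can be associated with positive-energy particles.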
Non-relativistic limit
Classical field
Taking the non-relativistic limit ($v \ll c$) of a classical Klein–Gordon field $\psi$ begins with the ansatz factoring out the oscillatory rest mass energy term,
\[
\psi(\mathbf{x}, t) = \phi(\mathbf{x}, t)\, e^{-\frac{i}{\hbar} m c^2 t} .
\]
Defining the kinetic energy $E' = E - mc^2$, in the non-relativistic limit $E' \ll mc^2$, and hence
\[
\left|\, i\hbar\frac{\partial\phi}{\partial t}\,\right| \approx E'\,|\phi| \;\ll\; mc^2\,|\phi| .
\]
Applying this yields the non-relativistic limit of the second time derivative of $\psi$,
\[
\frac{\partial^2\psi}{\partial t^2} \approx -\left(\frac{m^2 c^4}{\hbar^2}\,\phi + \frac{2 i m c^2}{\hbar}\,\frac{\partial\phi}{\partial t}\right) e^{-\frac{i}{\hbar} m c^2 t} .
\]
Substituting into the free Klein–Gordon equation, $c^{-2}\,\partial_t^2\psi = \nabla^2\psi - \frac{m^2 c^2}{\hbar^2}\psi$, yields
\[
-\frac{1}{c^2}\left(\frac{m^2 c^4}{\hbar^2}\,\phi + \frac{2 i m c^2}{\hbar}\,\frac{\partial\phi}{\partial t}\right) \approx \left(\nabla^2 - \frac{m^2 c^2}{\hbar^2}\right)\phi,
\]
which (by dividing out the exponential and subtracting the mass term) simplifies to
\[
i\hbar\,\frac{\partial\phi}{\partial t} = -\frac{\hbar^2}{2m}\,\nabla^2\phi .
\]
This is a classical Schrödinger field.
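A one-line symbolic check of the kinetic-energy expansion invoked above (illustrative; symbols are generic):

```python
import sympy as sp

# The relativistic energy reduces to rest energy plus the Newtonian
# kinetic term for small momentum.
p, m, c = sp.symbols("p m c", positive=True)
E = sp.sqrt(m**2 * c**4 + p**2 * c**2)
print(sp.series(E, p, 0, 4))   # c**2*m + p**2/(2*m) + O(p**4)
```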
Quantum field
The analogous limit of a quantum Klein–Gordon field is complicated by the non-commutativity of the field operator. In the limit $c \to \infty$, the creation and annihilation operators decouple and behave as independent quantum Schrödinger fields.
Scalar electrodynamics
There is a way to make the complex Klein–Gordon field interact with electromagnetism in a gauge-invariant way. We can replace the (partial) derivative with the gauge-covariant derivative. Under a local $\mathrm{U}(1)$ gauge transformation, the fields transform as
\[
\psi \to \psi' = e^{i\theta(x)}\psi, \qquad \bar\psi \to \bar\psi' = e^{-i\theta(x)}\bar\psi,
\]
where $\theta(x) = \theta(t, \mathbf{x})$ is a function of spacetime, thus making it a local transformation, as opposed to a constant over all of spacetime, which would be a global $\mathrm{U}(1)$ transformation. A subtle point is that global transformations can arise as local ones, when the function $\theta$ is taken to be a constant function.
A well-formulated theory should be invariant under such transformations. Precisely, this means that the equations of motion and action (see below) are invariant. To achieve this, ordinary derivatives $\partial_\mu$ must be replaced by gauge-covariant derivatives $D_\mu$, defined as
\[
D_\mu\psi = \left(\partial_\mu - i e A_\mu\right)\psi,
\]
where the 4-potential or gauge field $A_\mu$ transforms under a gauge transformation as
\[
A_\mu \to A'_\mu = A_\mu + \frac{1}{e}\,\partial_\mu\theta .
\]
With these definitions, the covariant derivative transforms covariantly,
\[
D_\mu\psi \to e^{i\theta(x)}\, D_\mu\psi .
\]
In natural units, the Klein–Gordon equation therefore becomes
\[
D_\mu D^\mu\psi + m^2\psi = 0 .
\]
Since an ungauged $\mathrm{U}(1)$ symmetry is only present in complex Klein–Gordon theory, this coupling and promotion to a gauged symmetry is compatible only with complex Klein–Gordon theory and not real Klein–Gordon theory.
In natural units and mostly minus signature, the scalar QED action is
\[
S = \int \mathrm{d}^4x \left( D^\mu\bar\psi\, D_\mu\psi - m^2\,\bar\psi\psi - \frac{1}{4}\,F^{\mu\nu}F_{\mu\nu}\right),
\]
where $F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu$ is known as the Maxwell tensor, field strength or curvature, depending on viewpoint.
This theory is often known as scalar quantum electrodynamics or scalar QED, although all aspects we've discussed here are classical.
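A symbolic sanity check of the covariance property (illustrative; one spatial dimension and the coupling name $e$ are assumptions):

```python
import sympy as sp

# Under psi -> exp(I*theta(x))*psi and A -> A + d(theta)/e, the covariant
# derivative picks up only the overall phase factor.
x, e = sp.symbols("x e", real=True, nonzero=True)
theta = sp.Function("theta")(x)
psi = sp.Function("psi")(x)
A = sp.Function("A")(x)

D = lambda f, a: sp.diff(f, x) - sp.I * e * a * f   # D_x f = (d_x - i e A_x) f

lhs = D(sp.exp(sp.I * theta) * psi, A + sp.diff(theta, x) / e)  # transformed
rhs = sp.exp(sp.I * theta) * D(psi, A)                          # phase * original
print(sp.simplify(lhs - rhs))   # -> 0
```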
Scalar chromodynamics
It is possible to extend this to a non-abelian gauge theory with a gauge group $G$, where we couple the scalar Klein–Gordon action to a Yang–Mills Lagrangian. Here, the field is actually vector-valued, but is still described as a scalar field: the scalar describes its transformation under space-time transformations, but not its transformation under the action of the gauge group.
For concreteness we fix the group $G$ to be $\mathrm{SU}(N)$, the special unitary group for some $N \geq 2$. Under a gauge transformation $U(x)$, which can be described as a function $U : \mathbb{R}^{1,3} \to \mathrm{SU}(N)$, the scalar field $\psi$ transforms as a vector:
\[
\psi(x) \to U(x)\,\psi(x), \qquad \bar\psi(x) \to \bar\psi(x)\,U^\dagger(x) .
\]
The covariant derivative is
\[
D_\mu\psi = \left(\partial_\mu - i g A_\mu\right)\psi,
\]
where the gauge field or connection $A_\mu$ transforms as
\[
A_\mu \to U A_\mu U^\dagger + \frac{i}{g}\, U\,\partial_\mu U^\dagger
\]
(sign conventions vary across texts). This field can be seen as a matrix-valued field which acts on the vector space $\mathbb{C}^N$.
Finally defining the chromomagnetic field strength or curvature,
\[
F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu - i g\left[A_\mu, A_\nu\right],
\]
we can define the action
\[
S = \int \mathrm{d}^4x \left( D^\mu\bar\psi\, D_\mu\psi - m^2\,\bar\psi\psi - \frac{1}{2}\operatorname{tr}\left(F^{\mu\nu}F_{\mu\nu}\right)\right).
\]
Klein–Gordon on curved spacetime
In general relativity, we include the effect of gravity by replacing partial derivatives with covariant derivatives, and the Klein–Gordon equation becomes (in the mostly pluses signature)
\[
g^{\mu\nu}\nabla_\mu\nabla_\nu\psi = \frac{m^2 c^2}{\hbar^2}\,\psi, \qquad\text{i.e.}\qquad g^{\mu\nu}\left(\partial_\mu\partial_\nu - \Gamma^{\sigma}{}_{\mu\nu}\,\partial_\sigma\right)\psi = \frac{m^2 c^2}{\hbar^2}\,\psi,
\]
or equivalently,
\[
\frac{1}{\sqrt{-g}}\,\partial_\mu\!\left(\sqrt{-g}\; g^{\mu\nu}\,\partial_\nu\psi\right) = \frac{m^2 c^2}{\hbar^2}\,\psi,
\]
where $g^{\mu\nu}$ is the inverse of the metric tensor that is the gravitational potential field, $g$ is the determinant of the metric tensor, $\nabla_\mu$ is the covariant derivative, and $\Gamma^{\sigma}{}_{\mu\nu}$ is the Christoffel symbol that is the gravitational force field.
With natural units this becomes
\[
\nabla^\mu\nabla_\mu\psi = m^2\psi .
\]
This also admits an action formulation on a spacetime (Lorentzian) manifold $M$. Using abstract index notation and in mostly plus signature this is
\[
S = \int_M \left(-\frac{1}{2}\,g^{ab}\,\nabla_a\psi\,\nabla_b\psi - \frac{1}{2}\,m^2\psi^2\right)\sqrt{-g}\;\mathrm{d}^4x,
\]
or, more compactly,
\[
S = \int_M \left(-\frac{1}{2}\,(\nabla\psi)^2 - \frac{1}{2}\,m^2\psi^2\right)\sqrt{-g}\;\mathrm{d}^4x .
\]
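As a hedged illustration (a standard special case, not taken from the article; $a(t)$ is the scale factor and $H = \dot a/a$): in a spatially flat Friedmann–Robertson–Walker background $ds^2 = -dt^2 + a(t)^2\, d\mathbf{x}^2$ (natural units), $\sqrt{-g} = a^3$ and the curved-space equation above reduces to
\[
\ddot\psi + 3\,\frac{\dot a}{a}\,\dot\psi - \frac{1}{a^2}\,\nabla^2\psi + m^2\psi = 0,
\]
the familiar damped-oscillator equation for a scalar field in cosmology; the $3H\dot\psi$ Hubble-friction term comes entirely from the $\sqrt{-g}$ factor.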
See also
Quantum field theory
Quartic interaction
Relativistic wave equations
Dirac equation (spin 1/2)
Proca action (spin 1)
Rarita–Schwinger equation (spin 3/2)
Scalar field theory
Sine–Gordon equation
Remarks
Notes
References
External links
Linear Klein–Gordon Equation at EqWorld: The World of Mathematical Equations.
Nonlinear Klein–Gordon Equation at EqWorld: The World of Mathematical Equations.
Introduction to nonlocal equations.
Partial differential equations
Special relativity
Waves
Quantum field theory
Equations of physics
Mathematical physics | Klein–Gordon equation | Physics,Mathematics | 3,441 |