Neutron stimulated emission computed tomography (NSECT) uses induced gamma emission through neutron inelastic scattering to generate images of the spatial distribution of elements in a sample.
== Clinical Applications ==
NSECT has been shown to be effective in detecting liver iron overload disorders and breast cancer. Due to its sensitivity in measuring elemental concentrations, NSECT is currently being developed for cancer staging, among other medical applications.
== NSECT mechanism ==
A given atomic nucleus, defined by its proton and neutron numbers, is a quantized system with a set of characteristic higher energy levels that it can occupy as a nuclear isomer. When the nucleus in its ground state is struck by a fast neutron with kinetic energy greater than that of its first excited state, it can be excited to one of those higher states by receiving the necessary energy from the fast neutron through inelastic scattering. Promptly (on the order of picoseconds, on average) after excitation, the excited nuclear isomer de-excites (either directly or through a series of cascades) to the ground state, emitting a characteristic gamma ray for each decay transition with energy equal to the difference in the energy levels involved (see induced gamma emission). After irradiating the sample with neutrons, the measured number of emitted gamma rays of energy characteristic to the nucleus of interest is directly proportional to the number of such nuclei along the incident neutron beam trajectory. After repeating the measurement for neutron beam incidence at positions around the sample, an image of the distribution of the nuclei in the sample can be reconstructed as is done in tomography.
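Because the measured gamma counts are line integrals of the element's concentration along each beam path, NSECT image formation fits the standard tomographic (Radon-transform) model. The following minimal sketch is illustrative rather than taken from the cited work; it assumes numpy and scikit-image are available, simulates projections of a hypothetical concentration map, and reconstructs it with filtered back projection.

```python
# Minimal sketch: treat the gamma count per beam position/angle as a line
# integral of element concentration, then invert with filtered back projection.
import numpy as np
from skimage.transform import radon, iradon

# Hypothetical 2D map of an element's concentration (e.g. iron in a phantom).
phantom = np.zeros((128, 128))
phantom[40:70, 50:90] = 1.0          # a block of elevated concentration

angles = np.linspace(0.0, 180.0, 60, endpoint=False)
sinogram = radon(phantom, theta=angles)   # simulated "gamma counts" per beam

# In a real NSECT scan the sinogram would be measured counts of the
# characteristic gamma line; here we simply invert the simulated projections.
reconstruction = iradon(sinogram, theta=angles)
print(reconstruction.shape)
```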
== References ==
== Further reading ==
NSECT at Ravin Advanced Imaging Laboratories, Duke University
[1] Floyd CE, Bender JE, Sharma AC, Kapadia A, Xia J, Harrawood B, Tourassi GD, Lo JY, Crowell A, and Howell C. "Introduction to neutron stimulated emission computed tomography," Physics in Medicine and Biology. 51:3375. 2006.
[2] Sharma AC, Harrawood BP, Bender JE, Tourassi GD, and Kapadia AJ. "Neutron stimulated emission computed tomography: a Monte Carlo simulation approach," Physics in Medicine and Biology. 52:6117. 2007.
[3] Floyd CE, Kapadia AJ, et al. "Neutron-stimulated emission computed tomography of a multi-element phantom," Physics in Medicine and Biology. 53:2313. 2008. | Wikipedia/Neutron_stimulated_emission_computed_tomography
Hydraulic tomography (HT) is a sequential cross-hole hydraulic test followed by inversion of all the data to map the spatial distribution of aquifer hydraulic properties. Specifically, HT involves installation of multiple wells in an aquifer, which are partitioned into several intervals along the depth using packers. A sequential aquifer test at selected intervals is then conducted. During the test, water is injected or withdrawn (i.e. a pressure excitation) at a selected interval in a given well. Pressure responses of the subsurface are then monitored at other intervals at this well and also in other wells. This test produces a set of pressure excitation/response data of the subsurface.
Once a given test has been completed, the pump is moved to another interval and the test is repeated to collect another set of data. The same procedure is then applied to the intervals at other wells. Afterward, the data sets from all tests are processed by a mathematical model to estimate the spatial distribution of hydraulic properties of the aquifer. These pairs of pumping and drawdown data sets at different locations make the inverse problem better posed, because each pair cross-validates the others, reducing the non-uniqueness of the estimates. In other words, predictions of ground water flow based on the HT estimates will be more accurate and less uncertain than those based on estimates from traditional site-characterization approaches and model calibrations.
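The inversion step can be illustrated, in a heavily simplified linearized form, as a regularized least-squares problem that maps the stacked excitation/response residuals from all tests to updates of the hydraulic property field. The sketch below is only a schematic of that idea; the sensitivity matrix and residuals are placeholders (random values), not any specific HT code.

```python
# Illustrative linearized inversion: solve J * dK = residual for updates dK to
# the (log) hydraulic conductivity field, where J is a sensitivity (Jacobian)
# matrix assembled from all pumping/observation pairs. All values are placeholders.
import numpy as np

n_obs, n_cells = 200, 50               # stacked drawdown observations, model cells
rng = np.random.default_rng(0)
J = rng.normal(size=(n_obs, n_cells))  # placeholder sensitivity matrix
residual = rng.normal(size=n_obs)      # observed minus simulated drawdowns

# Tikhonov-regularized least squares to stabilize the ill-posed problem.
lam = 0.1
A = np.vstack([J, lam * np.eye(n_cells)])
b = np.concatenate([residual, np.zeros(n_cells)])
dK, *_ = np.linalg.lstsq(A, b, rcond=None)
print(dK.shape)
```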
== References ==
https://web.archive.org/web/20071201142040/http://tian.hwr.arizona.edu/yeh/index.html
http://tian.hwr.arizona.edu/research/HT/examples | Wikipedia/Hydraulic_tomography |
Geometric tomography is a mathematical field that focuses on problems of reconstructing homogeneous (often convex) objects from tomographic data (this might be X-rays, projections, sections, brightness functions, or covariograms). More precisely, according to R.J. Gardner (who introduced the term), "Geometric tomography deals with the retrieval of information about a geometric object from data concerning its projections (shadows) on planes or cross-sections by planes."
== Theory ==
A key theorem in this area states that any convex body in E^n can be determined by parallel, coplanar X-rays in a set of four directions whose slopes have a transcendental cross ratio.
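For reference, the cross ratio of the four direction slopes referred to in the theorem is the standard projective cross ratio; a LaTeX rendering of the usual definition is given below.

```latex
% Cross ratio of the four direction slopes \lambda_1, \lambda_2, \lambda_3, \lambda_4;
% the theorem requires this value to be a transcendental number.
(\lambda_1,\lambda_2;\lambda_3,\lambda_4)
  = \frac{(\lambda_1-\lambda_3)\,(\lambda_2-\lambda_4)}
         {(\lambda_1-\lambda_4)\,(\lambda_2-\lambda_3)}
```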
== Examples ==
Radon transform
Funk transform (a.k.a. spherical Radon transform)
== See also ==
Tomography
Tomographic reconstruction
Discrete tomography
Generalized conic
== References ==
== External links ==
Website summarizing geometric tomography – Describes its history, theory, relation to computerized and discrete tomography, and includes interactive demonstrations of reconstruction algorithms.
Geometric tomography applet I
Geometric tomography applet II | Wikipedia/Geometric_tomography |
Ultrasound transmission tomography (UTT) is a form of tomography involving ultrasound.
Like X-ray tomography, the attenuation of the ultrasound as it passes through the object can be measured; but because the speed of sound is so much lower than the speed of light, the delay as the sound passes through the object can also be measured, allowing estimation of both the attenuation coefficient and the index of refraction. This contrasts with traditional ultrasound imaging, which primarily detects boundaries between different media. Also unlike X-rays, the paths through the object are not necessarily straight lines, as the sound is deflected at each boundary. Tumors typically have a higher speed of sound than the surrounding tissue.
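As a simple illustration of how the two measurements are used (the numbers below are assumptions chosen only for the example, not data from the article): for a straight ray, the arrival-time difference relative to a water path gives the path-averaged sound speed and hence an acoustic refractive index, while the amplitude loss gives an average attenuation coefficient.

```python
# Minimal, illustrative calculation for a single straight ray through the object.
import math

L = 0.10                 # path length through the object in metres (assumed)
c_water = 1480.0         # reference speed of sound in water, m/s
t_water = L / c_water    # reference time of flight
delay = -4.5e-6          # measured arrival 4.5 microseconds earlier than in water

t_obj = t_water + delay
c_obj = L / t_obj                       # average speed of sound along the ray
n = c_water / c_obj                     # acoustic refractive index (relative to water)

A_in, A_out = 1.0, 0.35                 # transmitted vs received amplitude (assumed)
alpha = -math.log(A_out / A_in) / L     # average attenuation coefficient, Np/m

print(c_obj, n, alpha)
```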
== See also ==
Ultrasound computer tomography
== References == | Wikipedia/Ultrasound_transmission_tomography |
Electron tomography (ET) is a tomography technique for obtaining detailed 3D structures of sub-cellular, macro-molecular, or materials specimens. Electron tomography is an extension of traditional transmission electron microscopy and uses a transmission electron microscope to collect the data. In the process, a beam of electrons is passed through the sample at incremental degrees of rotation around the center of the target sample. This information is collected and used to assemble a three-dimensional image of the target. For biological applications, the typical resolution of ET systems is in the 5–20 nm range, suitable for examining supra-molecular multi-protein structures, although not the secondary and tertiary structure of an individual protein or polypeptide. Recently, atomic resolution in 3D electron tomography reconstructions has been demonstrated.
== BF-TEM and ADF-STEM tomography ==
In the field of biology, bright-field transmission electron microscopy (BF-TEM) and high-resolution TEM (HRTEM) are the primary imaging methods for tomography tilt series acquisition. However, there are two issues associated with BF-TEM and HRTEM. First, acquiring an interpretable 3-D tomogram requires that the projected image intensities vary monotonically with material thickness. This condition is difficult to guarantee in BF/HRTEM, where image intensities are dominated by phase-contrast with the potential for multiple contrast reversals with thickness, making it difficult to distinguish voids from high-density inclusions. Second, the contrast transfer function of BF-TEM is essentially a high-pass filter – information at low spatial frequencies is significantly suppressed – resulting in an exaggeration of sharp features. However, the technique of annular dark-field scanning transmission electron microscopy (ADF-STEM), which is typically used on material specimens, more effectively suppresses phase and diffraction contrast, providing image intensities that vary with the projected mass-thickness of samples up to micrometres thick for materials with low atomic number. ADF-STEM also acts as a low-pass filter, eliminating the edge-enhancing artifacts common in BF/HRTEM. Thus, provided that the features can be resolved, ADF-STEM tomography can yield a reliable reconstruction of the underlying specimen, which is extremely important for its application in materials science. For 3D imaging, the resolution is traditionally described by the Crowther criterion. In 2010, a 3D resolution of 0.5±0.1×0.5±0.1×0.7±0.2 nm was achieved with single-axis ADF-STEM tomography.
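For reference, the Crowther criterion relates the attainable resolution d of a single-axis tilt reconstruction to the object diameter D and the number of equally spaced projections N; in its usual form:

```latex
% Crowther criterion for single-axis tilting with N equally spaced projections
% of an object of diameter D:
d = \frac{\pi D}{N}
\qquad\Longleftrightarrow\qquad
N \ge \frac{\pi D}{d}
```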
== Atomic Electron Tomography (AET) ==
Atomic level resolution in 3D electron tomography reconstructions has been demonstrated. With the aid of computational ptychography, the identity and precise 3D coordinates of every single atom in small objects have been determined, clearly depicting structures at both large and small scales. Reconstructions of crystal defects such as stacking faults, grain boundaries, dislocations, and twinning in structures have been achieved. This method is relevant to the physical sciences, where cryo-EM techniques cannot always be used to locate the coordinates of individual atoms in disordered materials. AET reconstructions are achieved using the combination of an ADF-STEM tomographic tilt series and iterative algorithms for reconstruction. Currently, algorithms such as the real-space algebraic reconstruction technique (ART) and the Fourier-based equally sloped tomography (EST) are used to address issues such as image noise, sample drift, and limited data. ADF-STEM tomography has recently been used to directly visualize the atomic structure of screw dislocations in nanoparticles.
AET has also been used to find the 3D coordinates of 3,769 atoms in a tungsten needle with 19 pm precision and 20,000 atoms in a multiply twinned palladium nanoparticle. The combination of AET with electron energy loss spectroscopy (EELS) allows for investigation of electronic states in addition to 3D reconstruction. Challenges to atomic level resolution from electron tomography include the need for better reconstruction algorithms and increased precision of tilt angle required to image defects in non-crystalline samples.
=== Different tilting methods ===
The most popular tilting methods are the single-axis and dual-axis tilting methods. The geometry of most specimen holders and electron microscopes normally precludes tilting the specimen through a full 180° range, which can lead to artifacts in the 3D reconstruction of the target. Standard single-tilt sample holders have a limited rotation of ±80°, leading to a missing wedge in the reconstruction. A solution is to use needle-shaped samples to allow for full rotation. By using dual-axis tilting, the reconstruction artifacts are reduced by a factor of √2 compared to single-axis tilting. However, twice as many images need to be taken. Another method of obtaining a tilt series is the so-called conical tomography method, in which the sample is tilted and then rotated through a complete turn.
== See also ==
Tomography
Tomographic reconstruction
3D reconstruction
Cryo-electron tomography
Positron emission tomography
Crowther criterion
X-ray computed tomography
tomviz tomography software
imod tomography software
X-ray diffraction computed tomography
== References == | Wikipedia/Electron_tomography |
RGBA stands for red green blue alpha. While it is sometimes described as a color space, it is actually a three-channel RGB color model supplemented with a fourth alpha channel. Alpha indicates how opaque each pixel is and allows an image to be combined over others using alpha compositing, with transparent areas and anti-aliasing of the edges of opaque regions. Each pixel is a 4D vector.
The term does not define what RGB color space is being used. It also does not state whether or not the colors are premultiplied by the alpha value, and if they are it does not state what color space that premultiplication was done in. This means more information than just "RGBA" is needed to determine how to handle an image.
In some contexts the abbreviation "RGBA" means a specific memory layout (called RGBA8888 below), with other terms such as "BGRA" used for alternatives. In other contexts "RGBA" means any layout.
== Representation ==
In computer graphics, pixels encoding RGBA information must be stored in computer memory (or in files on disk). In most cases four equal-sized pieces of adjacent memory are used, one for each channel; a value of 0 in a channel indicates black (for a color channel) or fully transparent (for alpha), while an all-1-bits value indicates white or fully opaque. By far the most common format is to store 8 bits (one byte) for each channel, which is 32 bits for each pixel.
The order of these four bytes in memory can differ, which can lead to confusion when image data is exchanged. These encodings are often denoted by the four letters in some order (most commonly RGBA). The interpretation of these 4-letter mnemonics is not well established. There are two typical ways to understand the mnemonic "RGBA":
In the byte-order scheme, "RGBA" is understood to mean a byte R, followed by a byte G, followed by a byte B, and followed by a byte A. This scheme is commonly used for describing file formats or network protocols, which are both byte-oriented.
In the word-order scheme, "RGBA" is understood to represent a complete 32-bit word, where R is more significant than G, which is more significant than B, which is more significant than A.
In a big-endian system, the two schemes are equivalent. This is not the case for a little-endian system, where the two mnemonics are reverses of each other. Therefore, to be unambiguous, it is important to state which ordering is used when referring to the encoding. This article uses a fairly common convention: the suffix "8888" indicates that four 8-bit units are being discussed, while "32" indicates a single 32-bit unit.
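A small illustration of the difference (the channel values here are arbitrary): packing the same nominal "RGBA" value as one 32-bit word versus as four bytes shows how endianness makes the two readings of the mnemonic diverge.

```python
# The same 32-bit "RGBA" word produces different byte sequences in memory
# depending on endianness; the byte-order form is endianness-independent.
import struct

r, g, b, a = 0x11, 0x22, 0x33, 0x44
word = (r << 24) | (g << 16) | (b << 8) | a  # word-order "RGBA32"

print(struct.pack('>I', word).hex())         # big-endian bytes:    '11223344' -> R G B A
print(struct.pack('<I', word).hex())         # little-endian bytes: '44332211' -> A B G R
print(struct.pack('4B', r, g, b, a).hex())   # byte-order "RGBA8888": '11223344'
```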
=== RGBA8888 ===
In OpenGL and Portable Network Graphics (PNG), the RGBA byte order is used, where the colors are stored in memory such that R is at the lowest address, G after it, B after that, and A last. On a little endian architecture this is equivalent to ABGR32.
In many systems when there are more than 8 bits per channel (such as 16 bits or floating-point), the channels are stored in RGBA order, even if 8-bit channels are stored in some other order.
=== ARGB32 ===
The channels are arranged in memory in such manner that a single 32-bit unsigned integer has the alpha sample in the highest 8 bits, followed by the red sample, green sample and finally the blue sample in the lowest 8 bits:
ARGB values are typically expressed using 8 hexadecimal digits, with each pair of hexadecimal digits representing the value of the Alpha, Red, Green and Blue channel, respectively. For example, 80FFFF00 represents 50.2% opaque (non-premultiplied) yellow. The 80 hex value, which is 128 in decimal, represents a 50.2% alpha value because 128 is approximately 50.2% of the maximum value of 255 (FF hex); to continue to decipher the 80FFFF00 value, the first FF represents the maximum value red can have; the second FF is like the previous but for green; the final 00 represents the minimum value blue can have (effectively – no blue). Consequently, red + green yields yellow. In cases where the alpha is not used, this can be shortened to the 6 digits RRGGBB, which is why the alpha was placed in the top bits. Depending on the context, a 0x or a number sign (#) is put before the hex digits.
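Decoding such a value is a matter of shifting and masking; the short example below unpacks the 80FFFF00 value from the text.

```python
# Extract the ARGB32 channels of the example value by shifting and masking.
value = 0x80FFFF00

a = (value >> 24) & 0xFF   # 0x80 = 128
r = (value >> 16) & 0xFF   # 0xFF = 255
g = (value >> 8)  & 0xFF   # 0xFF = 255
b =  value        & 0xFF   # 0x00 = 0

print(a, r, g, b)                 # 128 255 255 0
print(round(a / 255 * 100, 1))    # 50.2 (percent opacity)
```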
This layout became popular when 24-bit color (and 32-bit RGBA) was introduced on personal computers. At the time it was much faster and easier for programs to manipulate one 32-bit unit than four 8-bit units.
On little-endian systems, this is equivalent to BGRA byte order. On big-endian systems, this is equivalent to ARGB byte order.
=== RGBA32 ===
In some software originating on big-endian machines such as Silicon Graphics, colors were stored in 32 bits similar to ARGB32, but with the alpha in the bottom 8 bits rather than the top. For example, 808000FF would be Red and Green:50.2%, Blue:0% and Alpha:100%, a brown. This is what you would get if RGBA8888 data was read as words on these machines. It is used in Portable Arbitrary Map and in FLTK, but in general it is rare.
The bytes are stored in memory on a little-endian machine in the order ABGR.
== See also ==
Portable Network Graphics
== References ==
== External links ==
Alpha transparency on W3C PNG specification
RGBA Colors – Preview page with implementation info on CSS3.info | Wikipedia/RGBA_color_model |
Robotics is the interdisciplinary study and practice of the design, construction, operation, and use of robots.
Within mechanical engineering, robotics is the design and construction of the physical structures of robots, while in computer science, robotics focuses on robotic automation algorithms. Other disciplines contributing to robotics include electrical, control, software, information, electronic, telecommunication, computer, mechatronic, and materials engineering.
The goal of most robotics is to design machines that can help and assist humans. Many robots are built to do jobs that are hazardous to people, such as finding survivors in unstable ruins, and exploring space, mines and shipwrecks. Others replace people in jobs that are boring, repetitive, or unpleasant, such as cleaning, monitoring, transporting, and assembling. Today, robotics is a rapidly growing field: as technological advances continue, new robots are researched, designed, and built for a variety of practical purposes.
== Robotics aspects ==
Robotics usually combines three aspects of design work to create robot systems:
Mechanical construction: a frame, form or shape designed to achieve a particular task. For example, a robot designed to travel across heavy dirt or mud might use caterpillar tracks. Origami inspired robots can sense and analyze in extreme environments. The mechanical aspect of the robot is mostly the creator's solution to completing the assigned task and dealing with the physics of the environment around it. Form follows function.
Electrical components that power and control the machinery. For example, the robot with caterpillar tracks would need some kind of power to move the track treads. That power comes in the form of electricity, which will have to travel through a wire and originate from a battery, a basic electrical circuit. Even petrol-powered machines that get their power mainly from petrol still require an electric current to start the combustion process, which is why most petrol-powered machines, like cars, have batteries. The electrical aspect of robots is used for movement (through motors), sensing (where electrical signals are used to measure things like heat, sound, position, and energy status), and operation (robots need some level of electrical energy supplied to their motors and sensors in order to activate and perform basic operations).
Software. A program is how a robot decides when or how to do something. In the caterpillar track example, a robot that needs to move across a muddy road may have the correct mechanical construction and receive the correct amount of power from its battery, but would not be able to go anywhere without a program telling it to move. Programs are the core essence of a robot: it could have excellent mechanical and electrical construction, but if its program is poorly structured, its performance will be very poor (or it may not perform at all). There are three different types of robotic programs: remote control, artificial intelligence, and hybrid (sketched below). A robot with remote control programming has a preexisting set of commands that it will only perform if and when it receives a signal from a control source, typically a human being with a remote control. It is perhaps more appropriate to view devices controlled primarily by human commands as falling in the discipline of automation rather than robotics. Robots that use artificial intelligence interact with their environment on their own without a control source, and can determine reactions to objects and problems they encounter using their preexisting programming. A hybrid is a form of programming that incorporates both AI and RC functions.
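A toy sketch of the three program types (all function and field names are illustrative assumptions, not an established API): a remote-control program acts only on operator commands, an AI program decides on its own from sensing, and a hybrid defers to the operator when a command is present and otherwise falls back to autonomy.

```python
# Illustrative-only decision functions for the three program types named above.
def remote_control_step(operator_cmd):
    return operator_cmd                      # act only when commanded

def ai_step(sensor_reading):
    return "turn_left" if sensor_reading["obstacle_ahead"] else "go_forward"

def hybrid_step(operator_cmd, sensor_reading):
    # Operator command takes priority; otherwise fall back to autonomy.
    return operator_cmd if operator_cmd is not None else ai_step(sensor_reading)

print(hybrid_step(None, {"obstacle_ahead": True}))    # autonomy: 'turn_left'
print(hybrid_step("stop", {"obstacle_ahead": True}))  # operator override: 'stop'
```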
== Applied robotics ==
As many robots are designed for specific tasks, this method of classification becomes more relevant. For example, many robots are designed for assembly work, which may not be readily adaptable for other applications. They are termed "assembly robots". For seam welding, some suppliers provide complete welding systems with the robot, i.e. the welding equipment along with other material handling facilities like turntables, etc., as an integrated unit. Such an integrated robotic system is called a "welding robot" even though its discrete manipulator unit could be adapted to a variety of tasks. Some robots are specifically designed for heavy load manipulation, and are labeled as "heavy-duty robots".
Current and potential applications include:
Manufacturing. Robots have been increasingly used in manufacturing since the 1960s. According to the Robotic Industries Association US data, in 2016 the automotive industry was the main customer of industrial robots with 52% of total sales. In the auto industry, they can account for more than half of the "labor". There are even "lights-out" factories, such as an IBM keyboard manufacturing factory in Texas that was fully automated as early as 2003.
Autonomous transport including airplane autopilot and self-driving cars
Domestic robots including robotic vacuum cleaners, robotic lawn mowers, dishwasher loading and flatbread baking.
Construction robots. Construction robots can be separated into three types: traditional robots, robotic arm, and robotic exoskeleton.
Automated mining.
Space exploration, including Mars rovers.
Energy applications including cleanup of nuclear contaminated areas; and cleaning solar panel arrays.
Medical robots and Robot-assisted surgery designed and used in clinics.
Agricultural robots. The use of robots in agriculture is closely linked to the concept of AI-assisted precision agriculture and drone usage.
Food processing. Commercial examples of kitchen automation are Flippy (burgers), Zume Pizza (pizza), Cafe X (coffee), Makr Shakr (cocktails), Frobot (frozen yogurts), Sally (salads), salad or food bowl robots manufactured by Dexai (a Draper Laboratory spinoff, operating on military bases), and integrated food bowl assembly systems manufactured by Spyce Kitchen (acquired by Sweetgreen) and Silicon Valley startup Hyphen. Other examples may include manufacturing technologies based on 3D Food Printing.
Military robots.
Robot sports for entertainment and education, including Robot combat, Autonomous racing, drone racing, and FIRST Robotics.
== Mechanical robotics areas ==
=== Power source ===
At present, lead–acid batteries are mostly used as a power source. Many different types of batteries can be used as a power source for robots. They range from lead–acid batteries, which are safe and have relatively long shelf lives but are rather heavy, to silver–cadmium batteries, which are much smaller in volume but are currently much more expensive. Designing a battery-powered robot needs to take into account factors such as safety, cycle lifetime, and weight. Generators, often some type of internal combustion engine, can also be used. However, such designs are often mechanically complex and need fuel, require heat dissipation, and are relatively heavy. A tether connecting the robot to a power supply would remove the power supply from the robot entirely. This has the advantage of saving weight and space by moving all power generation and storage components elsewhere. However, this design does come with the drawback of constantly having a cable connected to the robot, which can be difficult to manage.
Potential power sources could be:
pneumatic (compressed gases)
Solar power (using the sun's energy and converting it into electrical power)
hydraulics (liquids)
flywheel energy storage
organic garbage (through anaerobic digestion)
nuclear
=== Actuation ===
Actuators are the "muscles" of a robot, the parts which convert stored energy into movement. By far the most popular actuators are electric motors that rotate a wheel or gear, and linear actuators that control industrial robots in factories. There are some recent advances in alternative types of actuators, powered by electricity, chemicals, or compressed air.
==== Electric motors ====
The vast majority of robots use electric motors, often brushed and brushless DC motors in portable robots or AC motors in industrial robots and CNC machines. These motors are often preferred in systems with lighter loads, and where the predominant form of motion is rotational.
==== Linear actuators ====
Various types of linear actuators move in and out instead of by spinning, and often have quicker direction changes, particularly when very large forces are needed, such as with industrial robotics. They are typically powered by compressed air (pneumatic actuators) or oil (hydraulic actuators). Linear actuators can also be powered by electricity, in which case they usually consist of a motor and a leadscrew. Another common type is a mechanical linear actuator, such as a rack and pinion on a car.
==== Series elastic actuators ====
Series elastic actuation (SEA) relies on the idea of introducing intentional elasticity between the motor actuator and the load for robust force control. Due to the resultant lower reflected inertia, series elastic actuation improves safety when a robot interacts with the environment (e.g., humans or workpieces) or during collisions. Furthermore, it also provides energy efficiency and shock absorption (mechanical filtering) while reducing excessive wear on the transmission and other mechanical components. This approach has successfully been employed in various robots, particularly advanced manufacturing robots and walking humanoid robots.
The controller design of a series elastic actuator is most often performed within the passivity framework as it ensures the safety of interaction with unstructured environments. Despite its remarkable stability and robustness, this framework suffers from the stringent limitations imposed on the controller, which may trade off performance. The reader is referred to the following survey, which summarizes the common controller architectures for SEA along with the corresponding sufficient passivity conditions. One recent study has derived the necessary and sufficient passivity conditions for one of the most common impedance control architectures, namely velocity-sourced SEA. This work is of particular importance as it derives the non-conservative passivity bounds in an SEA scheme for the first time, which allows a larger selection of control gains.
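As a minimal sketch of the series-elastic idea itself (the stiffness, gains, and names below are assumptions for illustration, not a published controller): the interaction torque is estimated from the deflection of the deliberately added spring, F = k_s (q_motor - q_load), and a simple force loop then drives the motor command.

```python
# Illustrative series-elastic force control: torque is inferred from spring
# deflection and regulated with a PI loop that outputs a motor velocity command.
class SeriesElasticForceController:
    def __init__(self, k_spring=300.0, kp=2.0, ki=10.0, dt=0.001):
        self.k_spring = k_spring   # spring stiffness, N*m/rad (assumed)
        self.kp, self.ki = kp, ki  # force-loop gains (assumed)
        self.dt = dt
        self.integral = 0.0

    def step(self, q_motor, q_load, torque_desired):
        torque_measured = self.k_spring * (q_motor - q_load)  # deflection -> torque
        error = torque_desired - torque_measured
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral      # motor velocity command

ctrl = SeriesElasticForceController()
print(ctrl.step(q_motor=0.12, q_load=0.10, torque_desired=5.0))
```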
==== Air muscles ====
Pneumatic artificial muscles, also known as air muscles, are special tubes that expand (typically up to 42%) when air is forced inside them. They are used in some robot applications.
==== Wire muscles ====
Muscle wire, also known as shape memory alloy, is a material that contracts (under 5%) when electricity is applied. It has been used for some small robot applications.
==== Electroactive polymers ====
EAPs or EPAMs are plastic materials that can contract substantially (up to 380% activation strain) when stimulated with electricity, and have been used in the facial muscles and arms of humanoid robots, and to enable new robots to float, fly, swim or walk.
==== Piezo motors ====
Recent alternatives to DC motors are piezo motors or ultrasonic motors. These work on a fundamentally different principle, whereby tiny piezoceramic elements, vibrating many thousands of times per second, cause linear or rotary motion. There are different mechanisms of operation; one type uses the vibration of the piezo elements to step the motor in a circle or a straight line. Another type uses the piezo elements to cause a nut to vibrate or to drive a screw. The advantages of these motors are nanometer resolution, speed, and available force for their size. These motors are already available commercially and being used on some robots.
==== Elastic nanotubes ====
Elastic nanotubes are a promising artificial muscle technology in early-stage experimental development. The absence of defects in carbon nanotubes enables these filaments to deform elastically by several percent, with energy storage levels of perhaps 10 J/cm3 for metal nanotubes. Human biceps could be replaced with an 8 mm diameter wire of this material. Such compact "muscle" might allow future robots to outrun and outjump humans.
=== Sensing ===
Sensors allow robots to receive information about a certain measurement of the environment, or internal components. This is essential for robots to perform their tasks, and act upon any changes in the environment to calculate the appropriate response. They are used for various forms of measurements, to give the robots warnings about safety or malfunctions, and to provide real-time information about the task it is performing.
==== Touch ====
Current robotic and prosthetic hands receive far less tactile information than the human hand. Recent research has developed a tactile sensor array that mimics the mechanical properties and touch receptors of human fingertips. The sensor array is constructed as a rigid core surrounded by conductive fluid contained by an elastomeric skin. Electrodes are mounted on the surface of the rigid core and are connected to an impedance-measuring device within the core. When the artificial skin touches an object the fluid path around the electrodes is deformed, producing impedance changes that map the forces received from the object. The researchers expect that an important function of such artificial fingertips will be adjusting the robotic grip on held objects.
Scientists from several European countries and Israel developed a prosthetic hand in 2009, called SmartHand, which functions like a real one, allowing patients to write with it, type on a keyboard, play piano, and perform other fine movements. The prosthesis has sensors which enable the patient to sense real feelings in its fingertips.
==== Other ====
Other common forms of sensing in robotics use lidar, radar, and sonar. Lidar measures the distance to a target by illuminating the target with laser light and measuring the reflected light with a sensor. Radar uses radio waves to determine the range, angle, or velocity of objects. Sonar uses sound propagation to navigate, communicate with or detect objects on or under the surface of the water.
==== Mechanical grippers ====
One of the most common types of end-effector is the "gripper". In its simplest manifestation, it consists of just two fingers that can open and close to pick up and let go of a range of small objects. Fingers can, for example, be made of a chain with a metal wire running through it. Hands that resemble and work more like a human hand include the Shadow Hand and the Robonaut hand. Hands that are of a mid-level complexity include the Delft hand. Mechanical grippers can come in various types, including friction and encompassing jaws. Friction jaws use all the force of the gripper to hold the object in place using friction. Encompassing jaws cradle the object in place, using less friction.
==== Suction end-effectors ====
Suction end-effectors, powered by vacuum generators, are very simple astrictive devices that can hold very large loads provided the prehension surface is smooth enough to ensure suction.
Pick and place robots for electronic components and for large objects like car windscreens, often use very simple vacuum end-effectors.
Suction is a widely used type of end-effector in industry, in part because the natural compliance of soft suction end-effectors can enable a robot to be more robust in the presence of imperfect robotic perception. As an example: consider the case of a robot vision system that estimates the position of a water bottle but has 1 centimeter of error. While this may cause a rigid mechanical gripper to puncture the water bottle, the soft suction end-effector may just bend slightly and conform to the shape of the water bottle surface.
==== General purpose effectors ====
Some advanced robots are beginning to use fully humanoid hands, like the Shadow Hand, MANUS, and the Schunk hand. They have powerful robot dexterity intelligence (RDI), with as many as 20 degrees of freedom and hundreds of tactile sensors.
== Control robotics areas ==
The mechanical structure of a robot must be controlled to perform tasks. The control of a robot involves three distinct phases – perception, processing, and action (robotic paradigms). Sensors give information about the environment or the robot itself (e.g. the position of its joints or its end effector). This information is then processed to be stored or transmitted and to calculate the appropriate signals to the actuators (motors), which move the mechanical structure to achieve the required co-ordinated motion or force actions.
The processing phase can range in complexity. At a reactive level, it may translate raw sensor information directly into actuator commands (e.g. firing motor power electronic gates based directly upon encoder feedback signals to achieve the required torque/velocity of the shaft). Sensor fusion and internal models may first be used to estimate parameters of interest (e.g. the position of the robot's gripper) from noisy sensor data. An immediate task (such as moving the gripper in a certain direction until an object is detected with a proximity sensor) is sometimes inferred from these estimates. Techniques from control theory are generally used to convert the higher-level tasks into individual commands that drive the actuators, most often using kinematic and dynamic models of the mechanical structure.
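A small illustration of the last step, converting a higher-level motion goal into actuator commands with a kinematic model (the two-link planar arm and its link lengths are hypothetical examples, not a specific robot): resolved-rate control maps a desired end-effector velocity to joint velocity commands through the arm's Jacobian.

```python
# Resolved-rate control sketch for a hypothetical 2-link planar arm.
import numpy as np

l1, l2 = 0.4, 0.3                      # link lengths in metres (assumed)

def jacobian(q1, q2):
    """Jacobian of the end-effector position with respect to the joint angles."""
    s1, c1 = np.sin(q1), np.cos(q1)
    s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

q = np.array([0.5, 0.8])               # current joint angles, rad
xdot_desired = np.array([0.05, 0.0])   # move the gripper 5 cm/s along +x

qdot_cmd = np.linalg.pinv(jacobian(*q)) @ xdot_desired  # joint velocity commands
print(qdot_cmd)
```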
At longer time scales or with more sophisticated tasks, the robot may need to build and reason with a "cognitive" model. Cognitive models try to represent the robot, the world, and how the two interact. Pattern recognition and computer vision can be used to track objects. Mapping techniques can be used to build maps of the world. Finally, motion planning and other artificial intelligence techniques may be used to figure out how to act. For example, a planner may figure out how to achieve a task without hitting obstacles, falling over, etc.
Modern commercial robotic control systems are highly complex, integrate multiple sensors and effectors, have many interacting degrees-of-freedom (DOF) and require operator interfaces, programming tools and real-time capabilities. They are often interconnected to wider communication networks and in many cases are now both IoT-enabled and mobile. Progress towards open-architecture, layered, user-friendly and 'intelligent' sensor-based interconnected robots has emerged from earlier concepts related to Flexible Manufacturing Systems (FMS), and several 'open' or 'hybrid' reference architectures have been proposed which assist developers of robot control software and hardware to move beyond traditional, earlier notions of 'closed' robot control systems. Open architecture controllers are said to be better able to meet the growing requirements of a wide range of robot users, including system developers, end users and research scientists, and are better positioned to deliver the advanced robotic concepts related to Industry 4.0. In addition to utilizing many established features of robot controllers, such as position, velocity and force control of end effectors, they also enable IoT interconnection and the implementation of more advanced sensor fusion and control techniques, including adaptive control, fuzzy control and artificial neural network (ANN)-based control. When implemented in real-time, such techniques can potentially improve the stability and performance of robots operating in unknown or uncertain environments by enabling the control systems to learn and adapt to environmental changes. There are several examples of reference architectures for robot controllers, and also examples of successful implementations of actual robot controllers developed from them. One example of a generic reference architecture and associated interconnected, open-architecture robot and controller implementation was used in a number of research and development studies, including prototype implementation of novel advanced and intelligent control and environment mapping methods in real-time.
=== Manipulation ===
A definition of robotic manipulation has been provided by Matt Mason as: "manipulation refers to an agent's control of its environment through selective contact".
Robots need to manipulate objects: pick up, modify, destroy, move or otherwise have an effect. Thus the functional end of a robot arm intended to make the effect (whether a hand, or tool) is often referred to as the end effector, while the "arm" is referred to as a manipulator. Most robot arms have replaceable end-effectors, each allowing them to perform some small range of tasks. Some have a fixed manipulator that cannot be replaced, while a few have one very general-purpose manipulator, for example, a humanoid hand.
=== Locomotion ===
==== Rolling robots ====
For simplicity, most mobile robots have four wheels or a number of continuous tracks. Some researchers have tried to create more complex wheeled robots with only one or two wheels. These can have certain advantages such as greater efficiency and reduced parts, as well as allowing a robot to navigate in confined places that a four-wheeled robot would not be able to.
===== Two-wheeled balancing robots =====
Balancing robots generally use a gyroscope to detect how much a robot is falling and then drive the wheels proportionally in the same direction, to counterbalance the fall at hundreds of times per second, based on the dynamics of an inverted pendulum. Many different balancing robots have been designed. While the Segway is not commonly thought of as a robot, it can be thought of as a component of a robot; when used as such, Segway refers to them as RMPs (Robotic Mobility Platforms). An example of this use has been NASA's Robonaut, which has been mounted on a Segway.
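A minimal sketch of such a balance loop (the gains and readings below are assumptions for illustration only): the measured tilt angle and tilt rate feed a proportional-derivative law whose output is the wheel torque command, following the inverted-pendulum picture described above.

```python
# Illustrative PD balance loop: command wheel torque in the direction of the fall.
import math

kp, kd = 35.0, 2.5        # proportional and derivative gains (assumed)

def balance_step(tilt_angle, tilt_rate):
    """Return a wheel torque command from the measured tilt (rad, rad/s)."""
    return kp * tilt_angle + kd * tilt_rate

# Example: robot leaning 2 degrees forward and still falling forward.
print(balance_step(math.radians(2.0), 0.3))
```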
===== One-wheeled balancing robots =====
A one-wheeled balancing robot is an extension of a two-wheeled balancing robot so that it can move in any 2D direction using a round ball as its only wheel. Several one-wheeled balancing robots have been designed recently, such as Carnegie Mellon University's "Ballbot" which is the approximate height and width of a person, and Tohoku Gakuin University's "BallIP". Because of the long, thin shape and ability to maneuver in tight spaces, they have the potential to function better than other robots in environments with people.
===== Spherical orb robots =====
Several attempts have been made to build robots that are completely enclosed inside a spherical ball, either by spinning a weight inside the ball, or by rotating the outer shells of the sphere. These have also been referred to as an orb bot or a ball bot.
===== Six-wheeled robots =====
Using six wheels instead of four wheels can give better traction or grip in outdoor terrain such as on rocky dirt or grass.
===== Tracked robots =====
Tracks provide even more traction than a six-wheeled robot. Tracked wheels behave as if they were made of hundreds of wheels, and are therefore very common for outdoor off-road robots, where the robot must drive on very rough terrain. However, they are difficult to use indoors, such as on carpets and smooth floors. Examples include NASA's Urban Robot "Urbie".
==== Walking robots ====
Walking is a difficult and dynamic problem to solve. Several robots have been made which can walk reliably on two legs; however, none have yet been made which are as robust as a human. There has been much study on human-inspired walking, such as at the AMBER lab, which was established in 2008 by the Mechanical Engineering Department at Texas A&M University. Many other robots have been built that walk on more than two legs, due to these robots being significantly easier to construct. Walking robots can be used for uneven terrains, which would provide better mobility and energy efficiency than other locomotion methods. Typically, robots on two legs can walk well on flat floors and can occasionally walk up stairs. None can walk over rocky, uneven terrain. Some of the methods which have been tried are:
===== ZMP technique =====
Zero moment point (ZMP) control is the algorithm used by robots such as Honda's ASIMO. The robot's onboard computer tries to keep the total inertial forces (the combination of Earth's gravity and the acceleration and deceleration of walking) exactly opposed by the floor reaction force (the force of the floor pushing back on the robot's foot). In this way, the two forces cancel out, leaving no moment (force causing the robot to rotate and fall over). However, this is not exactly how a human walks, and the difference is obvious to human observers, some of whom have pointed out that ASIMO walks as if it needs the lavatory. ASIMO's walking algorithm is not static, and some dynamic balancing is used (see below). However, it still requires a smooth surface to walk on.
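Under the common linear-inverted-pendulum simplification (constant center-of-mass height and negligible angular momentum about the center of mass, both assumptions of this sketch rather than statements about any particular robot), the zero moment point along the walking direction can be written as:

```latex
% x_c, z_c: horizontal position and height of the centre of mass;
% \ddot{x}_c: its horizontal acceleration; g: gravitational acceleration.
% Keeping x_{ZMP} inside the foot's support polygon keeps the floor reaction
% force able to balance gravity and the inertial forces.
x_{\mathrm{ZMP}} = x_c - \frac{z_c}{g}\,\ddot{x}_c
```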
===== Hopping =====
Several robots, built in the 1980s by Marc Raibert at the MIT Leg Laboratory, successfully demonstrated very dynamic walking. Initially, a robot with only one leg, and a very small foot could stay upright simply by hopping. The movement is the same as that of a person on a pogo stick. As the robot falls to one side, it would jump slightly in that direction, in order to catch itself. Soon, the algorithm was generalised to two and four legs. A bipedal robot was demonstrated running and even performing somersaults. A quadruped was also demonstrated which could trot, run, pace, and bound. For a full list of these robots, see the MIT Leg Lab Robots page.
===== Dynamic balancing (controlled falling) =====
A more advanced way for a robot to walk is by using a dynamic balancing algorithm, which is potentially more robust than the Zero Moment Point technique, as it constantly monitors the robot's motion, and places the feet in order to maintain stability. This technique was recently demonstrated by Anybots' Dexter Robot, which is so stable, it can even jump. Another example is the TU Delft Flame.
===== Passive dynamics =====
Perhaps the most promising approach uses passive dynamics where the momentum of swinging limbs is used for greater efficiency. It has been shown that totally unpowered humanoid mechanisms can walk down a gentle slope, using only gravity to propel themselves. Using this technique, a robot need only supply a small amount of motor power to walk along a flat surface or a little more to walk up a hill. This technique promises to make walking robots at least ten times more efficient than ZMP walkers, like ASIMO.
==== Flying ====
A modern passenger airliner is essentially a flying robot, with two humans to manage it. The autopilot can control the plane for each stage of the journey, including takeoff, normal flight, and even landing. Other flying robots are uninhabited and are known as unmanned aerial vehicles (UAVs). They can be smaller and lighter without a human pilot on board, and fly into dangerous territory for military surveillance missions. Some can even fire on targets under command. UAVs are also being developed which can fire on targets automatically, without the need for a command from a human. Other flying robots include cruise missiles, the Entomopter, and the Epson micro helicopter robot. Robots such as the Air Penguin, Air Ray, and Air Jelly have lighter-than-air bodies, are propelled by paddles, and are guided by sonar.
===== Biomimetic flying robots (BFRs) =====
BFRs take inspiration from flying mammals, birds, or insects. BFRs can have flapping wings, which generate the lift and thrust, or they can be propeller actuated. BFRs with flapping wings have increased stroke efficiencies, increased maneuverability, and reduced energy consumption in comparison to propeller actuated BFRs. Mammal and bird inspired BFRs share similar flight characteristics and design considerations. For instance, both mammal and bird inspired BFRs minimize edge fluttering and pressure-induced wingtip curl by increasing the rigidity of the wing edge and wingtips. Mammal and insect inspired BFRs can be impact resistant, making them useful in cluttered environments.
Mammal inspired BFRs typically take inspiration from bats, but the flying squirrel has also inspired a prototype. Examples of bat inspired BFRs include Bat Bot and the DALER. Mammal inspired BFRs can be designed to be multi-modal; therefore, they're capable of both flight and terrestrial movement. To reduce the impact of landing, shock absorbers can be implemented along the wings. Alternatively, the BFR can pitch up and increase the amount of drag it experiences. By increasing the drag force, the BFR will decelerate and minimize the impact upon grounding. Different land gait patterns can also be implemented.
Bird inspired BFRs can take inspiration from raptors, gulls, and everything in-between. Bird inspired BFRs can be feathered to increase the angle of attack range over which the prototype can operate before stalling. The wings of bird inspired BFRs allow for in-plane deformation, and the in-plane wing deformation can be adjusted to maximize flight efficiency depending on the flight gait. An example of a raptor inspired BFR is the prototype by Savastano et al. The prototype has fully deformable flapping wings and is capable of carrying a payload of up to 0.8 kg while performing a parabolic climb, steep descent, and rapid recovery. The gull inspired prototype by Grant et al. accurately mimics the elbow and wrist rotation of gulls, and they find that lift generation is maximized when the elbow and wrist deformations are opposite but equal.
Insect inspired BFRs typically take inspiration from beetles or dragonflies. An example of a beetle inspired BFR is the prototype by Phan and Park, and a dragonfly inspired BFR is the prototype by Hu et al. The flapping frequency of insect inspired BFRs are much higher than those of other BFRs; this is because of the aerodynamics of insect flight. Insect inspired BFRs are much smaller than those inspired by mammals or birds, so they are more suitable for dense environments.
===== Biologically-inspired flying robots =====
A class of robots that are biologically inspired, but which do not attempt to mimic biology, are creations such as the Entomopter. Funded by DARPA, NASA, the United States Air Force, and the Georgia Tech Research Institute and patented by Prof. Robert C. Michelson for covert terrestrial missions as well as flight in the lower Mars atmosphere, the Entomopter flight propulsion system uses low Reynolds number wings similar to those of the hawk moth (Manduca sexta), but flaps them in a non-traditional "opposed x-wing fashion" while "blowing" the surface to enhance lift based on the Coandă effect as well as to control vehicle attitude and direction. Waste gas from the propulsion system not only facilitates the blown wing aerodynamics, but also serves to create ultrasonic emissions like that of a Bat for obstacle avoidance. The Entomopter and other biologically-inspired robots leverage features of biological systems, but do not attempt to create mechanical analogs.
===== Snaking =====
Several snake robots have been successfully developed. Mimicking the way real snakes move, these robots can navigate very confined spaces, meaning they may one day be used to search for people trapped in collapsed buildings. The Japanese ACM-R5 snake robot can even navigate both on land and in water.
===== Skating =====
A small number of skating robots have been developed, one of which is a multi-mode walking and skating device. It has four legs, with unpowered wheels, which can either step or roll. Another robot, Plen, can use a miniature skateboard or roller-skates, and skate across a desktop.
===== Climbing =====
Several different approaches have been used to develop robots that have the ability to climb vertical surfaces. One approach mimics the movements of a human climber on a wall with protrusions; adjusting the center of mass and moving each limb in turn to gain leverage. An example of this is Capuchin, built by Ruixiang Zhang at Stanford University, California. Another approach uses the specialized toe pad method of wall-climbing geckoes, which can run on smooth surfaces such as vertical glass. Examples of this approach include Wallbot and Stickybot.
China's Technology Daily reported on 15 November 2008, that Li Hiu Yeung and his research group of New Concept Aircraft (Zhuhai) Co., Ltd. had successfully developed a bionic gecko robot named "Speedy Freelander". According to Yeung, the gecko robot could rapidly climb up and down a variety of building walls, navigate through ground and wall fissures, and walk upside-down on the ceiling. It was also able to adapt to the surfaces of smooth glass, rough, sticky or dusty walls as well as various types of metallic materials. It could also identify and circumvent obstacles automatically. Its flexibility and speed were comparable to a natural gecko. A third approach is to mimic the motion of a snake climbing a pole.
===== Swimming (Piscine) =====
It is calculated that when swimming some fish can achieve a propulsive efficiency greater than 90%. Furthermore, they can accelerate and maneuver far better than any man-made boat or submarine, and produce less noise and water disturbance. Therefore, many researchers studying underwater robots would like to copy this type of locomotion. Notable examples are the Robotic Fish G9 and Robot Tuna, built to analyze and mathematically model thunniform motion. The Aqua Penguin copies the streamlined shape and propulsion by front "flippers" of penguins. The Aqua Ray and Aqua Jelly emulate the locomotion of manta rays and jellyfish, respectively.
In 2014, iSplash-II was developed as the first robotic fish capable of outperforming real carangiform fish in terms of average maximum velocity (measured in body lengths per second) and endurance, the duration that top speed is maintained. This build attained swimming speeds of 11.6 BL/s (i.e. 3.7 m/s). The first build, iSplash-I (2014), was the first robotic platform to apply a full-body length carangiform swimming motion, which was found to increase swimming speed by 27% over the traditional approach of a posterior confined waveform.
===== Sailing =====
Sailboat robots have also been developed in order to make measurements at the surface of the ocean. A typical sailboat robot is Vaimos. Since the propulsion of sailboat robots uses the wind, the energy of the batteries is only used for the computer, for the communication and for the actuators (to tune the rudder and the sail). If the robot is equipped with solar panels, the robot could theoretically navigate forever. The two main competitions of sailboat robots are WRSC, which takes place every year in Europe, and Sailbot.
== Computational robotics areas ==
Control systems may also have varying levels of autonomy.
Direct interaction is used for haptic or teleoperated devices, and the human has nearly complete control over the robot's motion.
Operator-assist modes have the operator commanding medium-to-high-level tasks, with the robot automatically figuring out how to achieve them.
An autonomous robot may go without human interaction for extended periods of time. Higher levels of autonomy do not necessarily require more complex cognitive capabilities. For example, robots in assembly plants are completely autonomous but operate in a fixed pattern.
Another classification takes into account the interaction between human control and the machine motions.
Teleoperation. A human controls each movement, each machine actuator change is specified by the operator.
Supervisory. A human specifies general moves or position changes and the machine decides specific movements of its actuators.
Task-level autonomy. The operator specifies only the task and the robot manages itself to complete it.
Full autonomy. The machine will create and complete all its tasks without human interaction.
=== Vision ===
Computer vision is the science and technology of machines that see. As a scientific discipline, computer vision is concerned with the theory behind artificial systems that extract information from images. The image data can take many forms, such as video sequences and views from cameras.
In most practical computer vision applications, the computers are pre-programmed to solve a particular task, but methods based on learning are now becoming increasingly common.
Computer vision systems rely on image sensors that detect electromagnetic radiation which is typically in the form of either visible light or infra-red light. The sensors are designed using solid-state physics. The process by which light propagates and reflects off surfaces is explained using optics. Sophisticated image sensors even require quantum mechanics to provide a complete understanding of the image formation process. Robots can also be equipped with multiple vision sensors to be better able to compute the sense of depth in the environment. Like human eyes, robots' "eyes" must also be able to focus on a particular area of interest, and also adjust to variations in light intensities.
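As a concrete illustration of depth from multiple vision sensors (the numbers are assumptions chosen for the example): for a rectified stereo pair, the depth of a feature follows from the focal length, the camera baseline, and the disparity between the two images.

```python
# Depth from stereo disparity: Z = f * B / d (all values assumed for illustration).
focal_px = 700.0        # focal length in pixels
baseline_m = 0.12       # distance between the two cameras, metres
disparity_px = 14.0     # pixel shift of the same feature between the two images

depth_m = focal_px * baseline_m / disparity_px
print(depth_m)          # 6.0 metres
```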
There is a subfield within computer vision where artificial systems are designed to mimic the processing and behavior of biological systems, at different levels of complexity. Also, some of the learning-based methods developed within computer vision have a background in biology.
=== Environmental interaction and navigation ===
Though a significant percentage of robots in commission today are either human controlled or operate in a static environment, there is an increasing interest in robots that can operate autonomously in a dynamic environment. These robots require some combination of navigation hardware and software in order to traverse their environment. In particular, unforeseen events (e.g. people and other obstacles that are not stationary) can cause problems or collisions. Some highly advanced robots, such as ASIMO and the Meinü robot, have particularly good robot navigation hardware and software. Also, self-controlled cars, Ernst Dickmanns' driverless car, and the entries in the DARPA Grand Challenge are capable of sensing the environment well and subsequently making navigational decisions based on this information, including by a swarm of autonomous robots. Most of these robots employ a GPS navigation device with waypoints, along with radar, sometimes combined with other sensory data such as lidar, video cameras, and inertial guidance systems for better navigation between waypoints.
=== Human-robot interaction ===
The state of the art in sensory intelligence for robots will have to progress through several orders of magnitude if we want the robots working in our homes to go beyond vacuum-cleaning the floors. If robots are to work effectively in homes and other non-industrial environments, the way they are instructed to perform their jobs, and especially how they will be told to stop, will be of critical importance. The people who interact with them may have little or no training in robotics, and so any interface will need to be extremely intuitive. Science fiction authors also typically assume that robots will eventually be capable of communicating with humans through speech, gestures, and facial expressions, rather than a command-line interface. Although speech would be the most natural way for the human to communicate, it is unnatural for the robot. It will probably be a long time before robots interact as naturally as the fictional C-3PO, or Data of Star Trek: The Next Generation. Even though the current state of robotics cannot meet the standards of these robots from science fiction, robotic media characters (e.g., Wall-E, R2-D2) can elicit audience sympathies that increase people's willingness to accept actual robots in the future. Acceptance of social robots is also likely to increase if people can meet a social robot under appropriate conditions. Studies have shown that interacting with a robot by looking at, touching, or even imagining interacting with the robot can reduce negative feelings that some people have about robots before interacting with them. However, if pre-existing negative sentiments are especially strong, interacting with a robot can increase those negative feelings towards robots.
==== Speech recognition ====
Interpreting the continuous flow of sounds coming from a human, in real time, is a difficult task for a computer, mostly because of the great variability of speech. The same word, spoken by the same person, may sound different depending on local acoustics, volume, the previous word, whether or not the speaker has a cold, and so on. It becomes even harder when the speaker has a different accent. Nevertheless, great strides have been made in the field since Davis, Biddulph, and Balashek designed the first "voice input system" which recognized "ten digits spoken by a single user with 100% accuracy" in 1952. Currently, the best systems can recognize continuous, natural speech, up to 160 words per minute, with an accuracy of 95%. With the help of artificial intelligence, machines can now use a person's voice to identify their emotions, such as satisfaction or anger.
==== Robotic voice ====
Other hurdles exist when allowing the robot to use voice for interacting with humans. For social reasons, synthetic voice proves suboptimal as a communication medium, making it necessary to develop the emotional component of robotic voice through various techniques. An advantage of diphonic branching is that the emotion the robot is programmed to project can be carried on the voice tape, or phoneme, already pre-programmed onto the voice media. One of the earliest examples is a teaching robot named Leachim, developed in 1974 by Michael J. Freeman. Leachim was able to convert digital memory to rudimentary verbal speech on pre-recorded computer discs. It was programmed to teach students in The Bronx, New York.
==== Facial expression ====
Facial expressions can provide rapid feedback on the progress of a dialog between two humans, and soon may be able to do the same for humans and robots. Robotic faces have been constructed by Hanson Robotics using their elastic polymer called Frubber, allowing a large number of facial expressions due to the elasticity of the rubber facial coating and embedded subsurface motors (servos). The coating and servos are built on a metal skull. A robot should know how to approach a human, judging by their facial expression and body language. Whether the person is happy, frightened, or crazy-looking affects the type of interaction expected of the robot. Likewise, robots like Kismet and the more recent addition, Nexi, can produce a range of facial expressions, allowing them to have meaningful social exchanges with humans.
==== Gestures ====
One can imagine, in the future, explaining to a robot chef how to make a pastry, or asking directions from a robot police officer. In both of these cases, making hand gestures would aid the verbal descriptions. In the first case, the robot would be recognizing gestures made by the human, and perhaps repeating them for confirmation. In the second case, the robot police officer would gesture to indicate "down the road, then turn right". It is likely that gestures will make up a part of the interaction between humans and robots. A great many systems have been developed to recognize human hand gestures.
==== Proxemics ====
Proxemics is the study of personal space, and HRI systems may try to model and work with its concepts for human interactions.
==== Artificial emotions ====
Artificial emotions can also be generated, composed of a sequence of facial expressions or gestures. As can be seen from the movie Final Fantasy: The Spirits Within, the programming of these artificial emotions is complex and requires a large amount of human observation. To simplify this programming in the movie, presets were created together with a special software program. This decreased the amount of time needed to make the film. These presets could possibly be transferred for use in real-life robots. An example of a robot with artificial emotions is Robin the Robot, developed by the Armenian IT company Expper Technologies, which uses AI-based peer-to-peer interaction. Its main task is achieving emotional well-being, i.e. overcoming stress and anxiety. Robin was trained to analyze facial expressions and use its own face to display emotions appropriate to the context. The robot has been tested by children in US clinics, and observations show that Robin increased children's appetite and cheerfulness after meeting and talking with them.
==== Personality ====
Many of the robots of science fiction have a personality, something which may or may not be desirable in the commercial robots of the future. Nevertheless, researchers are trying to create robots which appear to have a personality: i.e. they use sounds, facial expressions, and body language to try to convey an internal state, which may be joy, sadness, or fear. One commercial example is Pleo, a toy robot dinosaur, which can exhibit several apparent emotions.
== Research robotics ==
Much of the research in robotics focuses not on specific industrial tasks, but on investigations into new types of robots, alternative ways to think about or design robots, and new ways to manufacture them. Other investigations, such as MIT's cyberflora project, are almost wholly academic.
To describe the level of advancement of a robot, the term "Generation Robots" can be used. The term was coined by Professor Hans Moravec, Principal Research Scientist at the Carnegie Mellon University Robotics Institute, in describing the near-future evolution of robot technology. First-generation robots, Moravec predicted in 1997, should have an intellectual capacity comparable to perhaps a lizard and should become available by 2010. Because the first-generation robot would be incapable of learning, however, Moravec predicted that the second-generation robot would be an improvement over the first and become available by 2020, with intelligence perhaps comparable to that of a mouse. The third-generation robot should have intelligence comparable to that of a monkey. Though fourth-generation robots, robots with human intelligence, could become possible, Professor Moravec does not predict this happening before around 2040 or 2050.
=== Dynamics and kinematics ===
The study of motion can be divided into kinematics and dynamics. Direct kinematics or forward kinematics refers to the calculation of end effector position, orientation, velocity, and acceleration when the corresponding joint values are known. Inverse kinematics refers to the opposite case in which required joint values are calculated for given end effector values, as done in path planning. Some special aspects of kinematics include handling of redundancy (different possibilities of performing the same movement), collision avoidance, and singularity avoidance. Once all relevant positions, velocities, and accelerations have been calculated using kinematics, methods from the field of dynamics are used to study the effect of forces upon these movements. Direct dynamics refers to the calculation of accelerations in the robot once the applied forces are known. Direct dynamics is used in computer simulations of the robot. Inverse dynamics refers to the calculation of the actuator forces necessary to create a prescribed end-effector acceleration. This information can be used to improve the control algorithms of a robot.
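As a minimal sketch (not from the source) of the two kinematic problems just described, the following Python snippet computes the forward kinematics of a planar two-link arm and solves one inverse-kinematics branch geometrically; the link lengths, function names, and test pose are illustrative assumptions.

```python
import math

# Hypothetical planar two-link arm; the link lengths are illustrative values.
L1, L2 = 0.5, 0.3  # meters

def forward_kinematics(theta1, theta2):
    """End-effector (x, y) for given joint angles (radians)."""
    x = L1 * math.cos(theta1) + L2 * math.cos(theta1 + theta2)
    y = L1 * math.sin(theta1) + L2 * math.sin(theta1 + theta2)
    return x, y

def inverse_kinematics(x, y):
    """One joint-angle solution (elbow-down) reaching (x, y), if reachable."""
    c2 = (x**2 + y**2 - L1**2 - L2**2) / (2 * L1 * L2)
    if abs(c2) > 1:
        raise ValueError("target out of reach")
    theta2 = math.acos(c2)                                  # elbow angle
    k1, k2 = L1 + L2 * math.cos(theta2), L2 * math.sin(theta2)
    theta1 = math.atan2(y, x) - math.atan2(k2, k1)
    return theta1, theta2

# Round trip: joint values -> end-effector pose -> joint values
pose = forward_kinematics(0.4, 0.8)
print(pose, inverse_kinematics(*pose))
```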
In each area mentioned above, researchers strive to develop new concepts and strategies, improve existing ones, and improve the interaction between these areas. To do this, criteria for "optimal" performance and ways to optimize design, structure, and control of robots must be developed and implemented.
=== Open source robotics ===
Open source robotics research seeks standards for defining, and methods for designing and building, robots so that they can easily be reproduced by anyone. Research includes legal and technical definitions; seeking out alternative tools and materials to reduce costs and simplify builds; and creating interfaces and standards for designs to work together. Human usability research also investigates how to best document builds through visual, text or video instructions.
=== Evolutionary robotics ===
Evolutionary robotics is a methodology that uses evolutionary computation to help design robots, especially the body form, or motion and behavior controllers. In a similar way to natural evolution, a large population of robots is allowed to compete in some way, or their ability to perform a task is measured using a fitness function. Those that perform worst are removed from the population and replaced by a new set, which have new behaviors based on those of the winners. Over time the population improves, and eventually a satisfactory robot may appear. This happens without any direct programming of the robots by the researchers. Researchers use this method both to create better robots and to explore the nature of evolution. Because the process often requires many generations of robots to be simulated, this technique may be run entirely or mostly in simulation, using a robot simulator software package, then tested on real robots once the evolved algorithms are good enough. Currently, there are about 10 million industrial robots in operation around the world, and Japan has the highest density of industrial robots in its manufacturing industry.
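A minimal sketch of the selection-and-replacement loop described above, assuming an abstract `fitness` function and a mutation-based reproduction step; all names, population sizes, and parameters here are illustrative, not taken from any particular robot simulator.

```python
import random

def fitness(genome):
    # Placeholder task score: in practice this would run the robot
    # (or its simulation) and measure task performance.
    return -sum((g - 0.5) ** 2 for g in genome)

def mutate(genome, sigma=0.1):
    return [g + random.gauss(0, sigma) for g in genome]

# Initial random population of controller "genomes"
population = [[random.random() for _ in range(8)] for _ in range(20)]

for generation in range(100):
    scored = sorted(population, key=fitness, reverse=True)
    survivors = scored[:10]                   # keep the best half
    children = [mutate(random.choice(survivors)) for _ in range(10)]
    population = survivors + children         # worst half replaced

best = max(population, key=fitness)
print(fitness(best))
```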
=== Bionics and biomimetics ===
Bionics and biomimetics apply the physiology and methods of locomotion of animals to the design of robots. For example, the design of BionicKangaroo was based on the way kangaroos jump.
=== Swarm robotics ===
Swarm robotics is an approach to the coordination of multiple robots as a system consisting of large numbers of mostly simple physical robots. "In a robot swarm, the collective behavior of the robots results from local interactions between the robots and between the robots and the environment in which they act."
=== Quantum computing ===
There has been some research into whether robotics algorithms can be run more quickly on quantum computers than they can be run on digital computers. This area has been referred to as quantum robotics.
=== Other research areas ===
Nanorobots.
Cobots (collaborative robots).
Autonomous drones.
High temperature crucibles allow robotic systems to automate sample analysis.
The main venues for robotics research are the international conferences ICRA and IROS.
== Human factors ==
=== Education and training ===
Robotics engineers design robots, maintain them, develop new applications for them, and conduct research to expand the potential of robotics. Robots have become a popular educational tool in some middle and high schools, particularly in parts of the USA, as well as in numerous youth summer camps, raising interest in programming, artificial intelligence, and robotics among students.
=== Employment ===
Robotics is an essential component in many modern manufacturing environments. As factories increase their use of robots, the number of robotics-related jobs grows and has been observed to be steadily rising. The employment of robots in industry has increased productivity and efficiency savings and is typically seen as a long-term investment for the firms that deploy them. A study found that 47 percent of US jobs are at risk of automation "over some unspecified number of years". These claims have been criticized on the grounds that social policy, not AI, causes unemployment. In a 2016 article in The Guardian, Stephen Hawking stated, "The automation of factories has already decimated jobs in traditional manufacturing, and the rise of artificial intelligence is likely to extend this job destruction deep into the middle classes, with only the most caring, creative or supervisory roles remaining". The rise of robotics is thus often used as an argument for universal basic income.
According to a GlobalData September 2021 report, the robotics industry was worth $45bn in 2020, and by 2030, it will have grown at a compound annual growth rate (CAGR) of 29% to $568bn, driving jobs in robotics and related industries.
=== Occupational safety and health implications ===
A discussion paper drawn up by EU-OSHA highlights how the spread of robotics presents both opportunities and challenges for occupational safety and health (OSH).
The greatest OSH benefits stemming from the wider use of robotics should be the substitution of robots for people working in unhealthy or dangerous environments. In space, defense, security, or the nuclear industry, but also in logistics, maintenance, and inspection, autonomous robots are particularly useful in replacing human workers performing dirty, dull or unsafe tasks, thus avoiding workers' exposures to hazardous agents and conditions and reducing physical, ergonomic and psychosocial risks. For example, robots are already used to perform repetitive and monotonous tasks, to handle radioactive material or to work in explosive atmospheres. In the future, many other highly repetitive, risky or unpleasant tasks will be performed by robots in a variety of sectors like agriculture, construction, transport, healthcare, firefighting or cleaning services.
Moreover, there are certain skills to which humans will be better suited than machines for some time to come and the question is how to achieve the best combination of human and robot skills. The advantages of robotics include heavy-duty jobs with precision and repeatability, whereas the advantages of humans include creativity, decision-making, flexibility, and adaptability. This need to combine optimal skills has resulted in collaborative robots and humans sharing a common workspace more closely and led to the development of new approaches and standards to guarantee the safety of the "man-robot merger". Some European countries are including robotics in their national programs and trying to promote a safe and flexible cooperation between robots and operators to achieve better productivity. For example, the German Federal Institute for Occupational Safety and Health (BAuA) organises annual workshops on the topic "human-robot collaboration".
In the future, cooperation between robots and humans will be diversified, with robots increasing their autonomy and human-robot collaboration reaching completely new forms. Current approaches and technical standards aiming to protect employees from the risk of working with collaborative robots will have to be revised.
=== User experience ===
Great user experience predicts the needs, experiences, behaviors, language and cognitive abilities, and other factors of each user group. It then uses these insights to produce a product or solution that is ultimately useful and usable. For robots, user experience begins with an understanding of the robot's intended task and environment, while considering any possible social impact the robot may have on human operations and interactions with it.
One account defines communication as the transmission of information through signals, which are elements perceived through touch, sound, smell and sight. On this account, the signal connects the sender to the receiver and consists of three parts: the signal itself, what it refers to, and the interpreter. Body postures and gestures, facial expressions, and hand and head movements are all part of nonverbal behavior and communication. Robots are no exception when it comes to human-robot interaction. Therefore, humans use their verbal and nonverbal behaviors to communicate their defining characteristics. Similarly, social robots need this coordination to perform human-like behaviors.
== Careers ==
Robotics is an interdisciplinary field, combining primarily mechanical engineering and computer science but also drawing on electronic engineering and other subjects. The usual way to build a career in robotics is to complete an undergraduate degree in one of these established subjects, followed by a graduate (master's) degree in robotics. Graduate degrees are typically joined by students coming from all of the contributing disciplines, and include familiarization with relevant undergraduate-level subject matter from each of them, followed by specialist study in pure robotics topics that builds upon them. As an interdisciplinary subject, robotics graduate programmes tend to be especially reliant on students working and learning together and sharing their knowledge and skills from their home-discipline first degrees.
Robotics industry careers then follow the same pattern, with most roboticists working as part of interdisciplinary teams of specialists from these home disciplines, together with the robotics graduate training that enables them to work together. Workers typically continue to identify as members of their home disciplines who work in robotics, rather than as 'roboticists'. This structure is reinforced by the nature of some engineering professions, which grant chartered engineer status to members of home disciplines rather than to robotics as a whole.
Robotics careers are widely predicted to grow in the 21st century, as robots replace more manual and intellectual human work. Some workers who lose their jobs to robotics may be well-placed to retrain to build and maintain these robots, using their domain-specific knowledge and skills.
== History ==
== See also ==
== Notes ==
== References ==
== Further reading ==
R. Andrew Russell (1990). Robot Tactile Sensing. New York: Prentice Hall. ISBN 978-0-13-781592-0.
McGaughey, Ewan (16 October 2019). "Will robots automate your job away? Full employment, basic income, and economic democracy". LawArXiv Papers. doi:10.31228/osf.io/udbj8. S2CID 243172487. SSRN 3044448.
Autor, David H. (1 August 2015). "Why Are There Still So Many Jobs? The History and Future of Workplace Automation". Journal of Economic Perspectives. 29 (3): 3–30. doi:10.1257/jep.29.3.3. hdl:1721.1/109476.
Tooze, Adam (6 June 2019). "Democracy and Its Discontents". The New York Review of Books. Vol. 66, no. 10.
== External links ==
IEEE Robotics and Automation Society
Investigation of social robots – Robots that mimic human behaviors and gestures.
Wired's guide to the '50 best robots ever', a mix of robots in fiction (Hal, R2D2, K9) to real robots (Roomba, Mobot, Aibo). | Wikipedia/robotics |
A flexible manufacturing system (FMS) is a manufacturing system in which there is some amount of flexibility that allows the system to react in case of changes, whether predicted or unpredicted.
This flexibility is generally considered to fall into two categories, which both contain numerous subcategories.
The first category is called routing flexibility, which covers the system's ability to be changed to produce new product types, and the ability to change the order of operations executed on a part.
The second category is called machine flexibility, which consists of the ability to use multiple machines to perform the same operation on a part, as well as the system's ability to absorb large-scale changes, such as in volume, capacity, or capability.
Most flexible manufacturing systems consist of three main systems:
Work machines, often automated CNC machines, which are connected by
a material handling system to optimize the flow of parts, and
a central control computer, which controls material movements and machine flow.
The main advantage of a flexible manufacturing system is its high flexibility in managing manufacturing resources like time and effort to manufacture a new product.
The best application of a flexible manufacturing system is found in the production of small sets of products like those produced by mass production.
== Advantages ==
Reduced manufacturing cost
Lower cost per unit produced,
Greater labor productivity,
Greater machine efficiency,
Improved quality,
Increased system reliability,
Reduced parts inventories,
Adaptability to CAD/CAM operations.
Shorter lead times
Improved efficiency
Increase production rate
== Disadvantages ==
Initial set-up cost is high,
Substantial pre-planning
Requirement of skilled labor
Complicated system
Maintenance is complicated
== Flexibility ==
Flexibility in manufacturing means the ability to deal with slightly or greatly mixed parts, to allow variation in parts assembly and variations in process sequence, to change the production volume, and to change the design of a certain product being manufactured.
== Industrial FMS communication ==
An industrial flexible manufacturing system consists of robots, computer-controlled machines, computer numerical controlled (CNC) machines, instrumentation devices, computers, sensors, and other stand-alone systems such as inspection machines. The use of robots in the production segment of manufacturing industries promises a variety of benefits ranging from high utilization to high productivity. Each robotic cell or node is located along a material handling system such as a conveyor or automatic guided vehicle. The production of each part or work-piece requires a different combination of manufacturing nodes. The movement of parts from one node to another is done through the material handling system. At the end of part processing, the finished parts are routed to an automatic inspection node and subsequently unloaded from the flexible manufacturing system.
The FMS data traffic consists of large files and short messages, mostly coming from nodes, devices, and instruments. Message sizes range from a few bytes to several hundred bytes. Executive software and other data, for example, are large files, while messages for machining data, instrument-to-instrument communications, status monitoring, and data reporting are small.
There is also some variation in response time. Large program files from a main computer usually take about 60 seconds to be downloaded into each instrument or node at the beginning of FMS operation. Messages carrying instrument data need to be sent periodically with a deterministic time delay. Other types of messages, used for emergency reporting, are quite short and must be transmitted and received with an almost instantaneous response.
The demand for a reliable FMS protocol that supports all these FMS data characteristics is now urgent. The existing IEEE standard protocols do not fully satisfy the real-time communication requirements in this environment. The delay of CSMA/CD is unbounded as the number of nodes increases, due to message collisions. Token Bus has a deterministic message delay, but it does not support the prioritized access scheme needed in FMS communications. Token Ring provides prioritized access and has a low message delay; however, its data transmission is unreliable. A single node failure, which may occur quite often in an FMS, causes transmission errors for messages passing through that node. In addition, the topology of Token Ring results in high wiring installation costs.
A design of FMS communication that supports real-time communication with bounded message delay and reacts promptly to any emergency signal is needed. Because machine failures and malfunctions due to heat, dust, and electromagnetic interference are common, a prioritized mechanism and immediate transmission of emergency messages are needed so that a suitable recovery procedure can be applied. A modification of the standard Token Bus to implement a prioritized access scheme has been proposed to allow transmission of short and periodic messages with a lower delay than that for long messages.
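A sketch of the prioritization idea, in which short emergency messages are transmitted ahead of long file transfers whenever a node holds the token; the priority levels, class names, and token budget are illustrative assumptions, not part of any IEEE standard.

```python
import heapq

EMERGENCY, PERIODIC, FILE_TRANSFER = 0, 1, 2   # lower value = higher priority

class StationQueue:
    """Outgoing queue of one FMS node; messages are sent in priority order
    whenever the node holds the token."""
    def __init__(self):
        self._heap, self._seq = [], 0

    def submit(self, priority, payload):
        heapq.heappush(self._heap, (priority, self._seq, payload))
        self._seq += 1                          # FIFO within a priority class

    def on_token(self, budget=2):
        """Transmit up to `budget` messages, highest priority first."""
        sent = []
        while self._heap and budget:
            _, _, payload = heapq.heappop(self._heap)
            sent.append(payload)
            budget -= 1
        return sent

q = StationQueue()
q.submit(FILE_TRANSFER, "NC program block 1")
q.submit(PERIODIC, "spindle temperature reading")
q.submit(EMERGENCY, "tool breakage alarm")
print(q.on_token())   # the alarm goes out before the routine traffic
```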
== Further reading ==
Manufacturing Flexibility: a literature review. By A. de Toni and S. Tonchia. International Journal of Production Research, 1998, vol. 36, no. 6, 1587-617.
Computer Control of Manufacturing Systems. By Y. Koren. McGraw Hill, Inc. 1983, 287 pp, ISBN 0-07-035341-7
Manufacturing Systems – Theory and Practice. By G. Chryssolouris. New York, NY: Springer Verlag, 2005. 2nd edition.
Design of Flexible Production Systems – Methodologies and Tools. By T. Tolio. Berlin: Springer, 2009. ISBN 978-3-540-85413-5
== See also ==
Agile management
Lean manufacturing
== References ==
== External links ==
FMS video 1
FMS video 2 | Wikipedia/Flexible_Manufacturing_Systems |
Impedance control is an approach to dynamic control relating force and position. It is often used in applications where a manipulator interacts with its environment and the relation between force and position is of concern. Examples of such applications include humans interacting with robots, where the force produced by the human relates to how fast the robot should move or stop. Simpler control methods, such as position control or torque control, perform poorly when the manipulator experiences contact. Thus impedance control is commonly used in these settings.
Mechanical impedance is the ratio of force output to velocity input. This is analogous to electrical impedance, which is the ratio of voltage output to current input (e.g. resistance is voltage divided by current). A "spring constant" defines the force output for a displacement (extension or compression) of the spring. A "damping constant" defines the force output for a velocity input. If we control the impedance of a mechanism, we are controlling the force of resistance to external motions that are imposed by the environment.
Mechanical admittance is the inverse of impedance - it defines the motions that result from a force input. If a mechanism applies a force to the environment, the environment will move, or not move, depending on its properties and the force applied. For example, a marble sitting on a table will react very differently to a given force than will a log floating in a lake.
The key theory behind the method is to treat the environment as an admittance and the manipulator as an impedance. It assumes the postulate that "no controller can make the manipulator appear to the environment as anything other than a physical system."
This rule of thumb can also be stated as: "in the most common case in which the environment is an admittance (e.g. a mass, possibly kinematically constrained) that relation should be an impedance, a function, possibly nonlinear, dynamic, or even discontinuous, specifying the force produced in response to a motion imposed by the environment."
== Principle ==
Impedance control does not simply regulate the force or position of a mechanism. Instead it regulates the relationship between force on the one hand, and position, velocity, and acceleration on the other, i.e. the impedance of the mechanism. It takes a position (or velocity or acceleration) as input and produces a force as output. The inverse of impedance is admittance, which takes a force as input and imposes a position (motion) as output.
So actually the controller imposes a spring-mass-damper behavior on the mechanism by maintaining a dynamic relationship between the force $\boldsymbol{F}$ and the position, velocity, and acceleration $(\boldsymbol{x}, \boldsymbol{v}, \boldsymbol{a})$:

$\boldsymbol{F} = M\boldsymbol{a} + C\boldsymbol{v} + K\boldsymbol{x} + \boldsymbol{f} + \boldsymbol{s}$,

with $\boldsymbol{f}$ being friction and $\boldsymbol{s}$ being static force.
Masses ($M$) and springs (with stiffness $K$) are energy-storing elements, whereas a damper (with damping $C$) is an energy-dissipating device. If we can control impedance, we are able to control the energy exchange during interaction, i.e. the work being done. So impedance control is interaction control.
Note that mechanical systems are inherently multi-dimensional: a typical robot arm can place an object in three dimensions ($(x, y, z)$ coordinates) and in three orientations (e.g. roll, pitch, yaw). In theory, an impedance controller can cause the mechanism to exhibit a multi-dimensional mechanical impedance. For example, the mechanism might act very stiff along one axis and very compliant along another. By compensating for the kinematics and inertias of the mechanism, we can orient those axes arbitrarily and in various coordinate systems. For example, we might cause a robotic part holder to be very stiff tangentially to a grinding wheel, while being very compliant (controlling force with little concern for position) in the radial axis of the wheel.
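As a numeric sketch of the spring-mass-damper relation above, restricted to a single axis and with friction and static terms omitted; the gains and motion values are arbitrary illustrative numbers.

```python
# F = M*a + C*v + K*x (friction and static terms omitted), one axis only.
M, C, K = 2.0, 8.0, 400.0        # kg, N·s/m, N/m — illustrative values
x, v, a = 0.01, 0.05, 0.2        # imposed displacement, velocity, acceleration

F = M * a + C * v + K * x
print(F)   # 0.4 + 0.4 + 4.0 = 4.8 N resisting the imposed motion
```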
== Mathematical basics ==
=== Joint space ===
An uncontrolled robot can be expressed in Lagrangian formulation as

$\boldsymbol{\tau} = \boldsymbol{M}(\boldsymbol{q})\ddot{\boldsymbol{q}} + \boldsymbol{c}(\boldsymbol{q}, \dot{\boldsymbol{q}}) + \boldsymbol{g}(\boldsymbol{q}) + \boldsymbol{h}(\boldsymbol{q}, \dot{\boldsymbol{q}}) + \boldsymbol{\tau}_{\mathrm{ext}}, \qquad (1)$

where $\boldsymbol{q}$ denotes the joint angular position, $\boldsymbol{M}$ is the symmetric and positive-definite inertia matrix, $\boldsymbol{c}$ the Coriolis and centrifugal torque, $\boldsymbol{g}$ the gravitational torque, $\boldsymbol{h}$ includes further torques from, e.g., inherent stiffness, friction, etc., and $\boldsymbol{\tau}_{\mathrm{ext}}$ summarizes all the external forces from the environment. The actuation torque $\boldsymbol{\tau}$ on the left side is the input variable to the robot.
One may propose a control law of the following form:

$\boldsymbol{\tau} = \boldsymbol{K}(\boldsymbol{q}_{\mathrm{d}} - \boldsymbol{q}) + \boldsymbol{D}(\dot{\boldsymbol{q}}_{\mathrm{d}} - \dot{\boldsymbol{q}}) + \hat{\boldsymbol{M}}(\boldsymbol{q})\ddot{\boldsymbol{q}}_{\mathrm{d}} + \hat{\boldsymbol{c}}(\boldsymbol{q}, \dot{\boldsymbol{q}}) + \hat{\boldsymbol{g}}(\boldsymbol{q}) + \hat{\boldsymbol{h}}(\boldsymbol{q}, \dot{\boldsymbol{q}}), \qquad (2)$

where $\boldsymbol{q}_{\mathrm{d}}$ denotes the desired joint angular position, $\boldsymbol{K}$ and $\boldsymbol{D}$ are the control parameters, and $\hat{\boldsymbol{M}}$, $\hat{\boldsymbol{c}}$, $\hat{\boldsymbol{g}}$, and $\hat{\boldsymbol{h}}$ are the internal model of the corresponding mechanical terms.
Inserting (2) into (1) gives an equation of the closed-loop system (controlled robot):

$\boldsymbol{K}(\boldsymbol{q}_{\mathrm{d}} - \boldsymbol{q}) + \boldsymbol{D}(\dot{\boldsymbol{q}}_{\mathrm{d}} - \dot{\boldsymbol{q}}) + \boldsymbol{M}(\boldsymbol{q})(\ddot{\boldsymbol{q}}_{\mathrm{d}} - \ddot{\boldsymbol{q}}) = \boldsymbol{\tau}_{\mathrm{ext}}.$
Letting $\boldsymbol{e} = \boldsymbol{q}_{\mathrm{d}} - \boldsymbol{q}$, one obtains

$\boldsymbol{K}\boldsymbol{e} + \boldsymbol{D}\dot{\boldsymbol{e}} + \boldsymbol{M}\ddot{\boldsymbol{e}} = \boldsymbol{\tau}_{\mathrm{ext}}.$
Since the matrices $\boldsymbol{K}$ and $\boldsymbol{D}$ have the units of stiffness and damping, they are commonly referred to as the stiffness and damping matrix, respectively. Clearly, the controlled robot is essentially a multi-dimensional mechanical impedance (mass-spring-damper) to the environment, which is addressed by $\boldsymbol{\tau}_{\mathrm{ext}}$.
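A sketch of how the joint-space control law (2) might be evaluated inside a control loop; the dynamics-model methods (`inertia`, `coriolis`, `gravity`, `friction`) are placeholders standing in for the robot's internal model, not a real library API, and the gains and states are illustrative values.

```python
import numpy as np

def impedance_torque(q, dq, q_d, dq_d, ddq_d, K, D, model):
    """tau = K(q_d - q) + D(dq_d - dq) + M_hat(q) ddq_d
             + c_hat(q, dq) + g_hat(q) + h_hat(q, dq)"""
    return (K @ (q_d - q)
            + D @ (dq_d - dq)
            + model.inertia(q) @ ddq_d
            + model.coriolis(q, dq)
            + model.gravity(q)
            + model.friction(q, dq))

# Example with a trivial 2-joint "model" (all terms constant or zero).
class ToyModel:
    def inertia(self, q):        return np.eye(2)
    def coriolis(self, q, dq):   return np.zeros(2)
    def gravity(self, q):        return np.zeros(2)
    def friction(self, q, dq):   return np.zeros(2)

K = np.diag([100.0, 50.0])   # joint stiffness
D = np.diag([10.0, 5.0])     # joint damping
q    = np.array([0.10, 0.20]); dq   = np.zeros(2)
q_d  = np.array([0.00, 0.00]); dq_d = np.zeros(2); ddq_d = np.zeros(2)

print(impedance_torque(q, dq, q_d, dq_d, ddq_d, K, D, ToyModel()))
```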
=== Task space ===
The same principle also applies to task space. An uncontrolled robot has the following task-space representation in Lagrangian formulation:

$\boldsymbol{\mathcal{F}} = \boldsymbol{\Lambda}(\boldsymbol{q})\ddot{\boldsymbol{x}} + \boldsymbol{\mu}(\boldsymbol{x}, \dot{\boldsymbol{x}}) + \boldsymbol{\gamma}(\boldsymbol{q}) + \boldsymbol{\eta}(\boldsymbol{q}, \dot{\boldsymbol{q}}) + \boldsymbol{\mathcal{F}}_{\mathrm{ext}},$

where $\boldsymbol{q}$ denotes the joint angular position, $\boldsymbol{x}$ the task-space position, and $\boldsymbol{\Lambda}$ the symmetric and positive-definite task-space inertia matrix. The terms $\boldsymbol{\mu}$, $\boldsymbol{\gamma}$, $\boldsymbol{\eta}$, and $\boldsymbol{\mathcal{F}}_{\mathrm{ext}}$ are the generalized force of the Coriolis and centrifugal term, the gravitation, further nonlinear terms, and environmental contacts. Note that this representation only applies to robots with redundant kinematics. The generalized force $\boldsymbol{\mathcal{F}}$ on the left side corresponds to the input torque of the robot.
Analogously, one may propose the following control law:

$\boldsymbol{\mathcal{F}} = \boldsymbol{K}_{\mathrm{x}}(\boldsymbol{x}_{\mathrm{d}} - \boldsymbol{x}) + \boldsymbol{D}_{\mathrm{x}}(\dot{\boldsymbol{x}}_{\mathrm{d}} - \dot{\boldsymbol{x}}) + \hat{\boldsymbol{\Lambda}}(\boldsymbol{q})\ddot{\boldsymbol{x}}_{\mathrm{d}} + \hat{\boldsymbol{\mu}}(\boldsymbol{q}, \dot{\boldsymbol{q}}) + \hat{\boldsymbol{\gamma}}(\boldsymbol{q}) + \hat{\boldsymbol{\eta}}(\boldsymbol{q}, \dot{\boldsymbol{q}}),$

where $\boldsymbol{x}_{\mathrm{d}}$ denotes the desired task-space position, $\boldsymbol{K}_{\mathrm{x}}$ and $\boldsymbol{D}_{\mathrm{x}}$ are the task-space stiffness and damping matrices, and $\hat{\boldsymbol{\Lambda}}$, $\hat{\boldsymbol{\mu}}$, $\hat{\boldsymbol{\gamma}}$, and $\hat{\boldsymbol{\eta}}$ are the internal model of the corresponding mechanical terms.
Similarly, letting $\boldsymbol{e}_{\mathrm{x}} = \boldsymbol{x}_{\mathrm{d}} - \boldsymbol{x}$, one obtains the closed-loop system

$\boldsymbol{K}_{\mathrm{x}}\boldsymbol{e}_{\mathrm{x}} + \boldsymbol{D}_{\mathrm{x}}\dot{\boldsymbol{e}}_{\mathrm{x}} + \boldsymbol{\Lambda}(\boldsymbol{q})\ddot{\boldsymbol{e}}_{\mathrm{x}} = \boldsymbol{\mathcal{F}}_{\mathrm{ext}}, \qquad (3)$

which is essentially a multi-dimensional mechanical impedance to the environment ($\boldsymbol{\mathcal{F}}_{\mathrm{ext}}$) as well. Thus, one can choose the desired impedance (mainly stiffness) in the task space. For example, one may want to make the controlled robot act very stiff along one direction while relatively compliant along others by setting

$\boldsymbol{K}_{\mathrm{x}} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1000 \end{pmatrix} \ \mathrm{N/m},$

assuming the task space is a three-dimensional Euclidean space. The damping matrix $\boldsymbol{D}_{\mathrm{x}}$ is usually chosen such that the closed-loop system (3) is stable.
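A small numeric sketch of how the stiffness matrix above shapes the static response: the same 1 cm deflection produces a much larger restoring force along the stiff z axis than along x or y. The values are the illustrative ones from the matrix above, and the error vector is an arbitrary example.

```python
import numpy as np

K_x = np.diag([1.0, 1.0, 1000.0])     # N/m, stiff only along z
e_x = np.array([0.01, 0.01, 0.01])    # 1 cm position error on each axis

print(K_x @ e_x)   # [0.01, 0.01, 10.0] N — compliant in x and y, stiff in z
```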
== Applications ==
Impedance control is used in applications such as robotics as a general strategy for sending commands to a robot arm and end effector that takes into account the non-linear kinematics and dynamics of the object being manipulated.
== References == | Wikipedia/Impedance_control |
David Hanson Jr. is an American roboticist who is the founder and chief executive officer (CEO) of Hanson Robotics, a Hong Kong–based robotics company founded in 2013.
The designer and researcher creates human-looking robots that have realistic facial expressions, including Sophia and other robots designed to mimic human behavior. Sophia has received widespread media attention and was the first robot to be granted citizenship.
== Early life and education ==
Hanson was born on December 20, 1969, in Dallas, Texas, United States. He studied at Highland Park High School for his senior year to focus on math and science. As a teenager, Hanson's hobbies included drawing and reading science fiction works by writers like Isaac Asimov and Philip K. Dick—the latter of whom he would later replicate in android form.
Hanson has a Bachelor of Fine Arts from the Rhode Island School of Design in Film, Animation, Video (FAV) and a Ph.D. from the University of Texas at Dallas in interactive arts and engineering. In 1995 as part of an independent-study project on out-of-body experiences, he built a humanoid head in his own likeness, operated by a remote operator.
== Career ==
Hanson's career has focused on creating humanlike robots. His best-known creation is Sophia, the world's first robot citizen.
In 2004, at a Denver American Association for the Advancement of Science (AAAS) conference, Hanson presented K-Bot, a robotic head created with polymer skin, finely sculpted features, and big blue eyes. Named after his lab assistant Kristen Nelson, the robot head had 24 servomotors for realistic movement and cameras in its eyes. At the time he was 33 years old and a graduate student at the University of Texas at Dallas.
After he graduated from university, Hanson worked as an artist and went on to work for Disney, where he was a sculptor and material researcher in the Disney Imagineering Lab. He has worked as a designer, sculptor, and robotics developer for Universal Studios and MTV. In 2004, Hanson built the humanoid robot Hertz, a female-presenting animated robot head that took about nine months to build.
Hanson is the founder and CEO of Hong Kong-based Hanson Robotics, which was founded in 2013.
Hanson has been published in materials science, artificial intelligence, cognitive science, and robotics journals.
Hanson argues that precise human looks are essential if people are to communicate effectively with robots. Hanson believes social humanoid robots have the potential to serve humanity in a variety of functions and helping roles, like tutor, companion, or security guard. He argues the realism of his work has the potential to pose "an identity challenge to the human being," and that realistic robots may polarize the market between those who love realistic robots and those who find them disturbing. Many of Hanson's creations currently serve at research or non-profit institutions around the world, including at the University of Cambridge, University of Geneva, University of Pisa and in laboratories for cognitive science and AI research.
Hanson's creation Zeno, a two-foot tall robot designed in the style of a cartoon boy, provides treatment sessions to children with autism in Texas as a result of a collaboration between the University of Texas at Arlington, Dallas Autism Treatment Center, Texas Instruments and National Instruments, and Hanson.
Other robots include Albert Einstein HUBO, a robotic head designed to look like Albert Einstein's, mounted on top of the "HUBO" bipedal robotic frame, and Professor Einstein, a 14.5-inch personal robot that engages in conversation and acts as a companion and tutor.
Hanson collaborated with musician David Byrne on Song for Julio, which appeared at the Reina Sofia Museum in Madrid in 2008 as part of the Máquinas&Almas (Souls&Machines) exhibit, and his creations have appeared in other museums around the world.
== Educational institutions ==
From 2011 to 2013 Hanson was an adjunct professor of Computer Science and Engineering Teaching at the University of Texas at Arlington. He also taught in 2010 at the University of North Texas as an adjunct professor in fine arts, kinetic/interactive sculpture, and at the University of Texas at Dallas as an instructor of independent study in interactive sculpture.
== Public and media appearances ==
Hanson has given keynote speeches at leading international technology conferences such as the Consumer Electronics Show and IBC.
== Selected publications ==
=== Books ===
Bar-Cohen, Yoseph; Hanson, David (2009). Marom, Ari (ed.). The Coming Robot Revolution: Expectations and Fears About Emerging Intelligent, Humanlike Machines. New York: Springer. ISBN 978-0387853482.
=== Papers ===
Hanson, D. (2002). Bio-inspired Facial Expression Interface for Emotive Robots. AAAI National Conference. Edmonton, Canada.
Hanson, D.; White, V. (2004). Converging the Capabilities of ElectroActive Polymer Artificial Muscles and the Requirements of Bio-inspired Robotics. Proc. SPIE‘s Electroactive Polymer Actuators and Devices Conf., 10th Smart Structures and Materials Symposium. San Diego, US.
Hanson, D. (2005). "Bioinspired Robotics" (PDF). In Bar-Cohen, Yoseph (ed.). Biomimetics. CRC Press. doi:10.1109/ROMAN.2009.5326148. S2CID 8768746. Archived from the original (PDF) on 2017-12-22.
Hanson, D. (December 2005). Expanding the Aesthetics Possibilities for Humanlike Robots. Proc. IEEE Humanoid Robotics Conference, special session on the Uncanny Valley. Journal of Research in Personality. Vol. 68. Tskuba, Japan. pp. 96–113. doi:10.1016/j.jrp.2017.02.001.
Hanson, D.; Bergs, R.; Tadesse, Y.; White, V.; Priya, S. (2006). Enhancement of EAP Actuated Facial Expressions by Designed Chamber Geometry in Elastomers. Proc. SPIE‘s Electroactive Polymer Actuators and Devices Conf., 10th Smart Structures and Materials Symposium. San Diego, CA.
Tadesse, Y.; Priya, S.; Stephanou, H.; Popa, D.; Hanson, D. (2006). "Piezoelectric Actuation and Sensing for Facial Robotics". Ferroelectrics. 345 (1): 13–25. Bibcode:2006Fer...345...13T. doi:10.1080/00150190601018010. S2CID 122300723.
Hanson, David (2017) [2007]. Humanizing Interfaces — an Integrative Analysis of HumanLike Robots (PhD dissertation). University of Texas at Dallas. ASIN B072MFGVBR.
Hanson, D.; Baurmann, S.; Riccio, T.; Margolin, R.; Dockins, T.; Tavares, M.; Carpenter, K. (2008). Zeno: a Cognitive Character (PDF). AAAI Conference on Artificial Intelligence. pp. 9–11.
Hanson, D.; Mazzei, D.; Garver, C.; De Rossi, D.; Stevenson, M. (2012). "Realistic Humanlike Robots for Treatment of ASD, Social Training, and Research; Shown to Appeal to Youths with ASD, Cause Physiological Arousal, and Increase Human-to-Human Social Engagement". PETRA.
Mazzei, D.; Lazzeri, N.; Hanson, D.; De Rossi, D. (2012). HEFES: An Hybrid Engine for Facial Expressions Synthesis to Control Human-Like Androids and Avatars. The Fourth IEEE RAS/EMBS International Conference on Biomedical Robotics and Biomechatronics. doi:10.1109/BioRob.2012.6290687.
Bergman, M.; Zhuang, Z.; Palmiero, A.; Wander, J.; Heimbuch, B.; McDonald, M.; Hanson, D. (2014). "Development of an Advanced Respirator Fit Test Headform". J Occup Environ Hyg. 11 (2): 117–25. Bibcode:2014JOEH...11..117B. doi:10.1080/15459624.2013.816434. PMC 4470376. PMID 24369934.
== References ==
== External links ==
Official website
David Hanson at TED | Wikipedia/David_Hanson_(robotics_designer) |
In machine learning, a neural network (also artificial neural network or neural net, abbreviated ANN or NN) is a computational model inspired by the structure and functions of biological neural networks.
A neural network consists of connected units or nodes called artificial neurons, which loosely model the neurons in the brain. Artificial neuron models that mimic biological neurons more closely have also been recently investigated and shown to significantly improve performance. These are connected by edges, which model the synapses in the brain. Each artificial neuron receives signals from connected neurons, then processes them and sends a signal to other connected neurons. The "signal" is a real number, and the output of each neuron is computed by some non-linear function of the sum of its inputs, called the activation function. The strength of the signal at each connection is determined by a weight, which adjusts during the learning process.
Typically, neurons are aggregated into layers. Different layers may perform different transformations on their inputs. Signals travel from the first layer (the input layer) to the last layer (the output layer), possibly passing through multiple intermediate layers (hidden layers). A network is typically called a deep neural network if it has at least two hidden layers.
Artificial neural networks are used for various tasks, including predictive modeling, adaptive control, and solving problems in artificial intelligence. They can learn from experience, and can derive conclusions from a complex and seemingly unrelated set of information.
== Training ==
Neural networks are typically trained through empirical risk minimization. This method is based on the idea of optimizing the network's parameters to minimize the difference, or empirical risk, between the predicted output and the actual target values in a given dataset. Gradient-based methods such as backpropagation are usually used to estimate the parameters of the network. During the training phase, ANNs learn from labeled training data by iteratively updating their parameters to minimize a defined loss function. This method allows the network to generalize to unseen data.
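A minimal sketch of empirical risk minimization by gradient descent for a single linear unit with a squared-error loss; the synthetic data, learning rate, and epoch count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                # 100 samples, 3 features
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=100)  # noisy targets

w, b, lr = np.zeros(3), 0.0, 0.1
for epoch in range(200):
    y_hat = X @ w + b
    err = y_hat - y
    loss = np.mean(err ** 2)                 # empirical risk on the dataset
    grad_w = 2 * X.T @ err / len(y)          # gradient of the loss w.r.t. weights
    grad_b = 2 * err.mean()
    w -= lr * grad_w                         # parameter update
    b -= lr * grad_b

print(w, loss)   # w approaches true_w as the loss is minimized
```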
== History ==
=== Early work ===
Today's deep neural networks are based on early work in statistics over 200 years ago. The simplest kind of feedforward neural network (FNN) is a linear network, which consists of a single layer of output nodes with linear activation functions; the inputs are fed directly to the outputs via a series of weights. The sum of the products of the weights and the inputs is calculated at each node. The mean squared errors between these calculated outputs and the given target values are minimized by creating an adjustment to the weights. This technique has been known for over two centuries as the method of least squares or linear regression. It was used as a means of finding a good rough linear fit to a set of points by Legendre (1805) and Gauss (1795) for the prediction of planetary movement.
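A small worked example of the least-squares fit the paragraph refers to, solved in closed form rather than by iterative weight adjustment; the data points are arbitrary illustrative values.

```python
import numpy as np

# Fit y ≈ w*x + b to a few points by minimizing the mean squared error.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.1, 2.9, 5.2, 6.8])

A = np.column_stack([x, np.ones_like(x)])    # design matrix [x, 1]
w, b = np.linalg.lstsq(A, y, rcond=None)[0]  # least-squares solution
print(w, b)                                  # roughly slope 1.94, intercept 1.09
```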
Historically, digital computers such as the von Neumann model operate via the execution of explicit instructions with access to memory by a number of processors. Some neural networks, on the other hand, originated from efforts to model information processing in biological systems through the framework of connectionism. Unlike the von Neumann model, connectionist computing does not separate memory and processing.
Warren McCulloch and Walter Pitts (1943) considered a non-learning computational model for neural networks. This model paved the way for research to split into two approaches. One approach focused on biological processes while the other focused on the application of neural networks to artificial intelligence.
In the late 1940s, D. O. Hebb proposed a learning hypothesis based on the mechanism of neural plasticity that became known as Hebbian learning. It was used in many early neural networks, such as Rosenblatt's perceptron and the Hopfield network. Farley and Clark (1954) used computational machines to simulate a Hebbian network. Other neural network computational machines were created by Rochester, Holland, Habit and Duda (1956).
In 1958, psychologist Frank Rosenblatt described the perceptron, one of the first implemented artificial neural networks, funded by the United States Office of Naval Research.
R. D. Joseph (1960) mentions an even earlier perceptron-like device by Farley and Clark: "Farley and Clark of MIT Lincoln Laboratory actually preceded Rosenblatt in the development of a perceptron-like device." However, "they dropped the subject."
The perceptron raised public excitement for research in Artificial Neural Networks, causing the US government to drastically increase funding. This contributed to "the Golden Age of AI" fueled by the optimistic claims made by computer scientists regarding the ability of perceptrons to emulate human intelligence.
The first perceptrons did not have adaptive hidden units. However, Joseph (1960) also discussed multilayer perceptrons with an adaptive hidden layer. Rosenblatt (1962): section 16 cited and adopted these ideas, also crediting work by H. D. Block and B. W. Knight. Unfortunately, these early efforts did not lead to a working learning algorithm for hidden units, i.e., deep learning.
=== Deep learning breakthroughs in the 1960s and 1970s ===
Fundamental research was conducted on ANNs in the 1960s and 1970s. The first working deep learning algorithm was the Group method of data handling, a method to train arbitrarily deep neural networks, published by Alexey Ivakhnenko and Lapa in the Soviet Union (1965). They regarded it as a form of polynomial regression, or a generalization of Rosenblatt's perceptron. A 1971 paper described a deep network with eight layers trained by this method, which is based on layer by layer training through regression analysis. Superfluous hidden units are pruned using a separate validation set. Since the activation functions of the nodes are Kolmogorov-Gabor polynomials, these were also the first deep networks with multiplicative units or "gates."
The first deep learning multilayer perceptron trained by stochastic gradient descent was published in 1967 by Shun'ichi Amari. In computer experiments conducted by Amari's student Saito, a five-layer MLP with two modifiable layers learned internal representations to classify non-linearly separable pattern classes. Subsequent developments in hardware and hyperparameter tuning have made end-to-end stochastic gradient descent the currently dominant training technique.
In 1969, Kunihiko Fukushima introduced the ReLU (rectified linear unit) activation function. The rectifier has become the most popular activation function for deep learning.
Nevertheless, research stagnated in the United States following the work of Minsky and Papert (1969), who emphasized that basic perceptrons were incapable of processing the exclusive-or circuit. This insight was irrelevant for the deep networks of Ivakhnenko (1965) and Amari (1967).
In 1976, transfer learning was introduced in neural network learning.
Deep learning architectures for convolutional neural networks (CNNs) with convolutional layers and downsampling layers and weight replication began with the Neocognitron introduced by Kunihiko Fukushima in 1979, though not trained by backpropagation.
=== Backpropagation ===
Backpropagation is an efficient application of the chain rule derived by Gottfried Wilhelm Leibniz in 1673 to networks of differentiable nodes. The terminology "back-propagating errors" was actually introduced in 1962 by Rosenblatt, but he did not know how to implement this, although Henry J. Kelley had a continuous precursor of backpropagation in 1960 in the context of control theory. In 1970, Seppo Linnainmaa published the modern form of backpropagation in his Master's thesis (1970). G.M. Ostrovski et al. republished it in 1971. Paul Werbos applied backpropagation to neural networks in 1982 (his 1974 PhD thesis, reprinted in a 1994 book, did not yet describe the algorithm). In 1986, David E. Rumelhart et al. popularised backpropagation but did not cite the original work.
=== Convolutional neural networks ===
Kunihiko Fukushima's convolutional neural network (CNN) architecture of 1979 also introduced max pooling, a popular downsampling procedure for CNNs. CNNs have become an essential tool for computer vision.
The time delay neural network (TDNN) was introduced in 1987 by Alex Waibel to apply CNN to phoneme recognition. It used convolutions, weight sharing, and backpropagation. In 1988, Wei Zhang applied a backpropagation-trained CNN to alphabet recognition.
In 1989, Yann LeCun et al. created a CNN called LeNet for recognizing handwritten ZIP codes on mail. Training required 3 days. In 1990, Wei Zhang implemented a CNN on optical computing hardware. In 1991, a CNN was applied to medical image object segmentation and breast cancer detection in mammograms. LeNet-5 (1998), a 7-level CNN by Yann LeCun et al., that classifies digits, was applied by several banks to recognize hand-written numbers on checks digitized in 32×32 pixel images.
From 1988 onward, the use of neural networks transformed the field of protein structure prediction, in particular when the first cascading networks were trained on profiles (matrices) produced by multiple sequence alignments.
=== Recurrent neural networks ===
One origin of RNN was statistical mechanics. In 1972, Shun'ichi Amari proposed to modify the weights of an Ising model by Hebbian learning rule as a model of associative memory, adding in the component of learning. This was popularized as the Hopfield network by John Hopfield (1982). Another origin of RNN was neuroscience. The word "recurrent" is used to describe loop-like structures in anatomy. In 1901, Cajal observed "recurrent semicircles" in the cerebellar cortex. Hebb considered "reverberating circuit" as an explanation for short-term memory. The McCulloch and Pitts paper (1943) considered neural networks that contain cycles, and noted that the current activity of such networks can be affected by activity indefinitely far in the past.
In 1982 a recurrent neural network with an array architecture (rather than a multilayer perceptron architecture), namely a Crossbar Adaptive Array, used direct recurrent connections from the output to the supervisor (teaching) inputs. In addition to computing actions (decisions), it computed internal state evaluations (emotions) of the consequence situations. By eliminating the external supervisor, it introduced the self-learning method in neural networks.
In cognitive psychology, the journal American Psychologist carried a debate in the early 1980s on the relation between cognition and emotion. Zajonc in 1980 stated that emotion is computed first and is independent of cognition, while Lazarus in 1982 stated that cognition is computed first and is inseparable from emotion. In 1982 the Crossbar Adaptive Array gave a neural network model of the cognition-emotion relation. It was an example of a debate in which an AI system, a recurrent neural network, contributed to an issue that was being addressed at the same time by cognitive psychology.
Two early influential works were the Jordan network (1986) and the Elman network (1990), which applied RNN to study cognitive psychology.
In the 1980s, backpropagation did not work well for deep RNNs. To overcome this problem, in 1991, Jürgen Schmidhuber proposed the "neural sequence chunker" or "neural history compressor" which introduced the important concepts of self-supervised pre-training (the "P" in ChatGPT) and neural knowledge distillation. In 1993, a neural history compressor system solved a "Very Deep Learning" task that required more than 1000 subsequent layers in an RNN unfolded in time.
In 1991, Sepp Hochreiter's diploma thesis identified and analyzed the vanishing gradient problem and proposed recurrent residual connections to solve it. He and Schmidhuber introduced long short-term memory (LSTM), which set accuracy records in multiple application domains. This was not yet the modern version of LSTM, which required the forget gate, introduced in 1999. LSTM became the default choice for RNN architecture.
During 1985–1995, inspired by statistical mechanics, several architectures and methods were developed by Terry Sejnowski, Peter Dayan, Geoffrey Hinton, etc., including the Boltzmann machine, restricted Boltzmann machine, Helmholtz machine, and the wake-sleep algorithm. These were designed for unsupervised learning of deep generative models.
=== Deep learning ===
Between 2009 and 2012, ANNs began winning prizes in image recognition contests, approaching human level performance on various tasks, initially in pattern recognition and handwriting recognition. In 2011, a CNN named DanNet by Dan Ciresan, Ueli Meier, Jonathan Masci, Luca Maria Gambardella, and Jürgen Schmidhuber achieved for the first time superhuman performance in a visual pattern recognition contest, outperforming traditional methods by a factor of 3. It then won more contests. They also showed how max-pooling CNNs on GPU improved performance significantly.
In October 2012, AlexNet by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton won the large-scale ImageNet competition by a significant margin over shallow machine learning methods. Further incremental improvements included the VGG-16 network by Karen Simonyan and Andrew Zisserman and Google's Inceptionv3.
In 2012, Ng and Dean created a network that learned to recognize higher-level concepts, such as cats, only from watching unlabeled images. Unsupervised pre-training and increased computing power from GPUs and distributed computing allowed the use of larger networks, particularly in image and visual recognition problems, which became known as "deep learning".
Radial basis function and wavelet networks were introduced in 2013. These can be shown to offer best approximation properties and have been applied in nonlinear system identification and classification applications.
Generative adversarial networks (GAN) (Ian Goodfellow et al., 2014) became state of the art in generative modeling during the 2014–2018 period. The GAN principle was originally published in 1991 by Jürgen Schmidhuber, who called it "artificial curiosity": two neural networks contest with each other in the form of a zero-sum game, where one network's gain is the other network's loss. The first network is a generative model that models a probability distribution over output patterns. The second network learns by gradient descent to predict the reactions of the environment to these patterns. Excellent image quality is achieved by Nvidia's StyleGAN (2018), based on the Progressive GAN by Tero Karras et al., in which the GAN generator is grown from small to large scale in a pyramidal fashion. Image generation by GANs reached popular success and provoked discussions concerning deepfakes. Diffusion models (2015) have since eclipsed GANs in generative modeling, with systems such as DALL·E 2 (2022) and Stable Diffusion (2022).
In 2014, the state of the art was training "very deep neural network" with 20 to 30 layers. Stacking too many layers led to a steep reduction in training accuracy, known as the "degradation" problem. In 2015, two techniques were developed to train very deep networks: the highway network was published in May 2015, and the residual neural network (ResNet) in December 2015. ResNet behaves like an open-gated Highway Net.
During the 2010s, the seq2seq model was developed, and attention mechanisms were added. It led to the modern Transformer architecture in 2017 in Attention Is All You Need.
The Transformer requires computation time that is quadratic in the size of the context window. Jürgen Schmidhuber's fast weight controller (1992) scales linearly and was later shown to be equivalent to the unnormalized linear Transformer.
Transformers have increasingly become the model of choice for natural language processing. Many modern large language models such as ChatGPT, GPT-4, and BERT use this architecture.
== Models ==
ANNs began as an attempt to exploit the architecture of the human brain to perform tasks that conventional algorithms had little success with. They soon reoriented towards improving empirical results, abandoning attempts to remain true to their biological precursors. ANNs have the ability to learn and model non-linearities and complex relationships. This is achieved by neurons being connected in various patterns, allowing the output of some neurons to become the input of others. The network forms a directed, weighted graph.
An artificial neural network consists of simulated neurons. Each neuron is connected to other nodes via links like a biological axon-synapse-dendrite connection. Each node receives data over its incoming links and uses it to perform specific operations on the data. Each link has a weight that determines the strength of one node's influence on another, modulating the signal passed between neurons.
=== Artificial neurons ===
ANNs are composed of artificial neurons which are conceptually derived from biological neurons. Each artificial neuron has inputs and produces a single output which can be sent to multiple other neurons. The inputs can be the feature values of a sample of external data, such as images or documents, or they can be the outputs of other neurons. The outputs of the final output neurons of the neural net accomplish the task, such as recognizing an object in an image.
To find the output of the neuron we take the weighted sum of all the inputs, weighted by the weights of the connections from the inputs to the neuron. We add a bias term to this sum. This weighted sum is sometimes called the activation. This weighted sum is then passed through a (usually nonlinear) activation function to produce the output. The initial inputs are external data, such as images and documents. The ultimate outputs accomplish the task, such as recognizing an object in an image.
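As a concrete illustration, the following short Python sketch (using NumPy) computes the output of a single artificial neuron with a sigmoid activation; the input values, weights, and bias are arbitrary and chosen only for the example.

```python
import numpy as np

def neuron_output(inputs, weights, bias):
    """Compute one artificial neuron's output: activation(weighted sum + bias)."""
    pre_activation = np.dot(weights, inputs) + bias   # the weighted sum plus bias
    return 1.0 / (1.0 + np.exp(-pre_activation))      # sigmoid activation function

# Example: three inputs feeding one neuron (values are illustrative).
x = np.array([0.5, -1.2, 3.0])   # feature values of a sample
w = np.array([0.4, 0.1, -0.6])   # connection weights
b = 0.2                          # bias term
print(neuron_output(x, w, b))    # a single scalar output in (0, 1)
```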
=== Organization ===
The neurons are typically organized into multiple layers, especially in deep learning. Neurons of one layer connect only to neurons of the immediately preceding and immediately following layers. The layer that receives external data is the input layer. The layer that produces the ultimate result is the output layer. In between them are zero or more hidden layers. Single layer and unlayered networks are also used. Between two layers, multiple connection patterns are possible. They can be 'fully connected', with every neuron in one layer connecting to every neuron in the next layer. They can be pooling, where a group of neurons in one layer connects to a single neuron in the next layer, thereby reducing the number of neurons in that layer. Neurons with only such connections form a directed acyclic graph and are known as feedforward networks. Alternatively, networks that allow connections between neurons in the same or previous layers are known as recurrent networks.
=== Hyperparameter ===
A hyperparameter is a constant parameter whose value is set before the learning process begins. The values of parameters are derived via learning. Examples of hyperparameters include learning rate, the number of hidden layers and batch size. The values of some hyperparameters can be dependent on those of other hyperparameters. For example, the size of some layers can depend on the overall number of layers.
=== Learning ===
Learning is the adaptation of the network to better handle a task by considering sample observations. Learning involves adjusting the weights (and optional thresholds) of the network to improve the accuracy of the result. This is done by minimizing the observed errors. Learning is complete when examining additional observations does not usefully reduce the error rate. Even after learning, the error rate typically does not reach 0. If after learning the error rate is too high, the network typically must be redesigned. Practically, this is done by defining a cost function that is evaluated periodically during learning; as long as its output continues to decline, learning continues. The cost is frequently defined as a statistic whose value can only be approximated. The outputs are numerical, so when the error is low, the difference between the output (for example, almost certainly a cat) and the correct answer (cat) is small. Learning attempts to reduce the total of the differences across the observations. Most learning models can be viewed as a straightforward application of optimization theory and statistical estimation.
==== Learning rate ====
The learning rate defines the size of the corrective steps that the model takes to adjust for errors in each observation. A high learning rate shortens the training time, but with lower ultimate accuracy, while a lower learning rate takes longer, but with the potential for greater accuracy. Optimizations such as Quickprop are primarily aimed at speeding up error minimization, while other improvements mainly try to increase reliability. In order to avoid oscillation inside the network such as alternating connection weights, and to improve the rate of convergence, refinements use an adaptive learning rate that increases or decreases as appropriate. The concept of momentum allows the balance between the gradient and the previous change to be weighted such that the weight adjustment depends to some degree on the previous change. A momentum close to 0 emphasizes the gradient, while a value close to 1 emphasizes the last change.
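A minimal sketch of such an update rule in Python, combining a learning rate with momentum; the toy cost function, hyperparameter values, and variable names are illustrative only and not tied to any particular library.

```python
def momentum_update(weight, gradient, velocity, learning_rate=0.01, momentum=0.9):
    """One weight update combining the current gradient with the previous change."""
    # momentum close to 0 emphasizes the gradient; close to 1 emphasizes the last change
    velocity = momentum * velocity - learning_rate * gradient
    weight = weight + velocity
    return weight, velocity

# Example: repeatedly stepping a single weight toward the minimum of f(w) = (w - 3)^2
w, v = 0.0, 0.0
for _ in range(300):
    grad = 2.0 * (w - 3.0)            # derivative of the toy cost
    w, v = momentum_update(w, grad, v)
print(w)                              # converges toward 3.0
```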
==== Cost function ====
While it is possible to define a cost function ad hoc, frequently the choice is determined by the function's desirable properties (such as convexity) because it arises from the model (e.g. in a probabilistic model, the model's posterior probability can be used as an inverse cost).
==== Backpropagation ====
Backpropagation is a method used to adjust the connection weights to compensate for each error found during learning. The error amount is effectively divided among the connections. Technically, backpropagation calculates the gradient (the derivative) of the cost function associated with a given state with respect to the weights. The weight updates can be done via stochastic gradient descent or other methods, such as extreme learning machines, "no-prop" networks, training without backtracking, "weightless" networks, and non-connectionist neural networks.
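The chain-rule computation at the heart of backpropagation can be sketched for the simplest possible case, a single sigmoid neuron with a squared-error cost; this is an illustrative example rather than a general-purpose implementation.

```python
import numpy as np

def backprop_single_neuron(x, y_true, w, b, lr=0.1):
    """One stochastic-gradient step for a sigmoid neuron with squared-error cost."""
    z = np.dot(w, x) + b                  # weighted sum
    y = 1.0 / (1.0 + np.exp(-z))          # sigmoid output
    # Chain rule: dC/dw = dC/dy * dy/dz * dz/dw, with C = (y - y_true)^2
    dC_dy = 2.0 * (y - y_true)
    dy_dz = y * (1.0 - y)
    grad_w = dC_dy * dy_dz * x
    grad_b = dC_dy * dy_dz
    return w - lr * grad_w, b - lr * grad_b

w, b = np.zeros(2), 0.0
for _ in range(1000):
    w, b = backprop_single_neuron(np.array([1.0, 0.5]), 1.0, w, b)
z = np.dot(w, np.array([1.0, 0.5])) + b
print(1.0 / (1.0 + np.exp(-z)))           # output approaches the target 1.0
```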
=== Learning paradigms ===
Machine learning is commonly separated into three main learning paradigms: supervised learning, unsupervised learning, and reinforcement learning. Each corresponds to a particular learning task.
==== Supervised learning ====
Supervised learning uses a set of paired inputs and desired outputs. The learning task is to produce the desired output for each input. In this case, the cost function is related to eliminating incorrect deductions. A commonly used cost is the mean-squared error, which tries to minimize the average squared error between the network's output and the desired output. Tasks suited for supervised learning are pattern recognition (also known as classification) and regression (also known as function approximation). Supervised learning is also applicable to sequential data (e.g., for handwriting, speech and gesture recognition). This can be thought of as learning with a "teacher", in the form of a function that provides continuous feedback on the quality of solutions obtained thus far.
==== Unsupervised learning ====
In unsupervised learning, input data is given along with the cost function, some function of the data x and the network's output. The cost function is dependent on the task (the model domain) and any a priori assumptions (the implicit properties of the model, its parameters and the observed variables). As a trivial example, consider the model f(x) = a, where a is a constant and the cost C = E[(x − f(x))²]. Minimizing this cost produces a value of a that is equal to the mean of the data. The cost function can be much more complicated. Its form depends on the application: for example, in compression it could be related to the mutual information between x and f(x), whereas in statistical modeling, it could be related to the posterior probability of the model given the data (note that in both of those examples, those quantities would be maximized rather than minimized). Tasks that fall within the paradigm of unsupervised learning are in general estimation problems; the applications include clustering, the estimation of statistical distributions, compression and filtering.
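The trivial example above can be checked numerically: the sketch below (with arbitrary synthetic data) searches over candidate constants a and confirms that the one minimizing the empirical cost C = E[(x − a)²] is approximately the mean of the data.

```python
import numpy as np

x = np.random.default_rng(0).normal(loc=2.0, scale=1.0, size=1000)  # arbitrary data

def cost(a):
    # empirical version of C = E[(x - f(x))^2] with the constant model f(x) = a
    return np.mean((x - a) ** 2)

candidates = np.linspace(-1.0, 5.0, 601)
best_a = candidates[np.argmin([cost(a) for a in candidates])]
print(best_a, x.mean())   # the minimizing constant is (approximately) the data mean
```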
==== Reinforcement learning ====
In applications such as playing video games, an actor takes a string of actions, receiving a generally unpredictable response from the environment after each one. The goal is to win the game, i.e., generate the most positive (lowest cost) responses. In reinforcement learning, the aim is to weight the network (devise a policy) to perform actions that minimize long-term (expected cumulative) cost. At each point in time the agent performs an action and the environment generates an observation and an instantaneous cost, according to some (usually unknown) rules. The rules and the long-term cost usually only can be estimated. At any juncture, the agent decides whether to explore new actions to uncover their costs or to exploit prior learning to proceed more quickly.
Formally, the environment is modeled as a Markov decision process (MDP) with states s_1, ..., s_n ∈ S and actions a_1, ..., a_m ∈ A. Because the state transitions are not known, probability distributions are used instead: the instantaneous cost distribution P(c_t | s_t), the observation distribution P(x_t | s_t) and the transition distribution P(s_{t+1} | s_t, a_t), while a policy is defined as the conditional distribution over actions given the observations. Taken together, the two define a Markov chain (MC). The aim is to discover the lowest-cost MC.
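The following toy sketch illustrates the cost-minimization objective on a tiny tabular MDP; the transition probabilities, costs, and discount factor are invented for the example, and in practice an ANN would approximate these quantities rather than a lookup table.

```python
import numpy as np

# Toy MDP with 3 states and 2 actions; all numbers are illustrative.
P = np.array([  # P[a, s, s_next]: transition probabilities
    [[0.8, 0.2, 0.0], [0.1, 0.8, 0.1], [0.0, 0.2, 0.8]],   # action 0
    [[0.5, 0.5, 0.0], [0.0, 0.5, 0.5], [0.0, 0.0, 1.0]],   # action 1
])
C = np.array([[1.0, 2.0], [1.5, 0.5], [0.0, 0.2]])          # C[s, a]: expected instantaneous cost
gamma = 0.9                                                 # discount factor

V = np.zeros(3)
for _ in range(200):                                        # value iteration on expected cumulative cost
    Q = C + gamma * np.einsum("ast,t->sa", P, V)            # Q[s, a]
    V = Q.min(axis=1)                                       # pick the cheapest action in each state
policy = Q.argmin(axis=1)
print(V, policy)
```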
ANNs serve as the learning component in such applications. Dynamic programming coupled with ANNs (giving neurodynamic programming) has been applied to problems such as those involved in vehicle routing, video games, natural resource management and medicine because of ANNs ability to mitigate losses of accuracy even when reducing the discretization grid density for numerically approximating the solution of control problems. Tasks that fall within the paradigm of reinforcement learning are control problems, games and other sequential decision making tasks.
==== Self-learning ====
Self-learning in neural networks was introduced in 1982 along with a neural network capable of self-learning named crossbar adaptive array (CAA). It is a system with only one input, situation s, and only one output, action (or behavior) a. It has neither external advice input nor external reinforcement input from the environment. The CAA computes, in a crossbar fashion, both decisions about actions and emotions (feelings) about encountered situations. The system is driven by the interaction between cognition and emotion. Given the memory matrix, W =||w(a,s)||, the crossbar self-learning algorithm in each iteration performs the following computation:
In situation s perform action a;
Receive consequence situation s';
Compute emotion of being in consequence situation v(s');
Update crossbar memory w'(a,s) = w(a,s) + v(s').
The backpropagated value (secondary reinforcement) is the emotion toward the consequence situation. The CAA exists in two environments, one is behavioral environment where it behaves, and the other is genetic environment, where from it receives initial emotions (only once) about to be encountered situations in the behavioral environment. Having received the genome vector (species vector) from the genetic environment, the CAA will learn a goal-seeking behavior, in the behavioral environment that contains both desirable and undesirable situations.
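A schematic sketch of this loop is shown below; the environment, the emotion estimate v(s′), and the matrix sizes are invented placeholders, so it illustrates the crossbar update w′(a,s) = w(a,s) + v(s′) rather than reproducing the original CAA system.

```python
import numpy as np

rng = np.random.default_rng(0)
n_actions, n_situations = 3, 4
W = np.zeros((n_actions, n_situations))      # crossbar memory w(a, s)
genome = np.array([0.0, 0.0, 0.0, -1.0])     # initial emotions toward situations (invented)

def environment(a, s):
    """Hypothetical behavioral environment: returns the consequence situation s'."""
    return (s + a + 1) % n_situations

s = 0
for _ in range(100):
    a = int(np.argmax(W[:, s] + rng.normal(0, 0.1, n_actions)))  # perform action a in situation s
    s_next = environment(a, s)                                    # receive consequence situation s'
    v = genome[s_next] + W[:, s_next].max()                       # illustrative emotion toward s'
    W[a, s] += v                                                  # crossbar update w'(a,s) = w(a,s) + v(s')
    s = s_next
```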
==== Neuroevolution ====
Neuroevolution can create neural network topologies and weights using evolutionary computation. It is competitive with sophisticated gradient descent approaches. One advantage of neuroevolution is that it may be less prone to get caught in "dead ends".
=== Stochastic neural network ===
Stochastic neural networks originating from Sherrington–Kirkpatrick models are a type of artificial neural network built by introducing random variations into the network, either by giving the network's artificial neurons stochastic transfer functions, or by giving them stochastic weights. This makes them useful tools for optimization problems, since the random fluctuations help the network escape from local minima. Stochastic neural networks trained using a Bayesian approach are known as Bayesian neural networks.
=== Topological deep learning ===
Topological deep learning, first introduced in 2017, is an emerging approach in machine learning that integrates topology with deep neural networks to address highly intricate and high-order data. Initially rooted in algebraic topology, TDL has since evolved into a versatile framework incorporating tools from other mathematical disciplines, such as differential topology and geometric topology. As a successful example of mathematical deep learning, TDL continues to inspire advancements in mathematical artificial intelligence, fostering a mutually beneficial relationship between AI and mathematics.
=== Other ===
In a Bayesian framework, a distribution over the set of allowed models is chosen to minimize the cost. Evolutionary methods, gene expression programming, simulated annealing, expectation–maximization, non-parametric methods and particle swarm optimization are other learning algorithms. Convergent recursion is a learning algorithm for cerebellar model articulation controller (CMAC) neural networks.
==== Modes ====
Two modes of learning are available: stochastic and batch. In stochastic learning, each input creates a weight adjustment. In batch learning, weights are adjusted based on a batch of inputs, accumulating errors over the batch. Stochastic learning introduces "noise" into the process, using the local gradient calculated from one data point; this reduces the chance of the network getting stuck in local minima. However, batch learning typically yields a faster, more stable descent to a local minimum, since each update is performed in the direction of the batch's average error. A common compromise is to use "mini-batches", small batches with samples in each batch selected stochastically from the entire data set.
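The difference between the modes is essentially how many samples contribute to each weight update. A minimal sketch of mini-batch iteration, with placeholder data and a simple linear model standing in for the network, might look like this:

```python
import numpy as np

def minibatches(X, y, batch_size, rng):
    """Yield small batches with samples selected stochastically from the whole data set."""
    idx = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        chunk = idx[start:start + batch_size]
        yield X[chunk], y[chunk]

rng = np.random.default_rng(0)
X, y = rng.normal(size=(100, 3)), rng.normal(size=100)   # placeholder data
w = np.zeros(3)
for X_b, y_b in minibatches(X, y, batch_size=16, rng=rng):
    grad = 2 * X_b.T @ (X_b @ w - y_b) / len(X_b)        # average gradient over the mini-batch
    w -= 0.01 * grad                                     # one update per mini-batch
```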
== Types ==
ANNs have evolved into a broad family of techniques that have advanced the state of the art across multiple domains. The simplest types have one or more static components, including number of units, number of layers, unit weights and topology. Dynamic types allow one or more of these to evolve via learning. The latter is much more complicated but can shorten learning periods and produce better results. Some types allow/require learning to be "supervised" by the operator, while others operate independently. Some types operate purely in hardware, while others are purely software and run on general purpose computers.
Some of the main breakthroughs include:
Convolutional neural networks, which have proven particularly successful in processing visual and other two-dimensional data;
Long short-term memory networks, which avoid the vanishing gradient problem and can handle signals that have a mix of low- and high-frequency components, aiding large-vocabulary speech recognition, text-to-speech synthesis, and photo-real talking heads;
Competitive networks such as generative adversarial networks in which multiple networks (of varying structure) compete with each other, on tasks such as winning a game or on deceiving the opponent about the authenticity of an input.
== Network design ==
Using artificial neural networks requires an understanding of their characteristics.
Choice of model: This depends on the data representation and the application. Model parameters include the number, type, and connectedness of network layers, as well as the size of each and the connection type (full, pooling, etc.). Overly complex models learn slowly.
Learning algorithm: Numerous trade-offs exist between learning algorithms. Almost any algorithm will work well with the correct hyperparameters for training on a particular data set. However, selecting and tuning an algorithm for training on unseen data requires significant experimentation.
Robustness: If the model, cost function and learning algorithm are selected appropriately, the resulting ANN can become robust.
Neural architecture search (NAS) uses machine learning to automate ANN design. Various approaches to NAS have designed networks that compare well with hand-designed systems. The basic search algorithm is to propose a candidate model, evaluate it against a dataset, and use the results as feedback to teach the NAS network. Available systems include AutoML and AutoKeras. The scikit-learn library provides simple multilayer perceptron implementations, while deep networks are commonly implemented with frameworks such as TensorFlow or Keras.
Hyperparameters must also be defined as part of the design (they are not learned), governing matters such as how many neurons are in each layer, learning rate, step, stride, depth, receptive field and padding (for CNNs), etc. A Python code snippet such as the following outlines a training function that takes the training dataset, number of hidden layer units, learning rate, and number of iterations as parameters:
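The sketch below is illustrative only: it assumes a one-hidden-layer network with a sigmoid hidden layer, a linear output, and a mean-squared-error cost, trained by full-batch gradient descent with NumPy; the initialization and other details are assumptions rather than a canonical implementation.

```python
import numpy as np

def train(X, y, n_hidden, learning_rate, n_iterations, seed=0):
    """Train a one-hidden-layer network with full-batch gradient descent (illustrative)."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    W1 = rng.normal(0, 0.1, (n_features, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0, 0.1, (n_hidden, 1));          b2 = np.zeros(1)
    for _ in range(n_iterations):
        h = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))     # hidden layer (sigmoid)
        y_hat = (h @ W2 + b2).ravel()                # linear output layer
        err = y_hat - y                              # derivative of 0.5 * mean squared error
        grad_W2 = h.T @ err[:, None] / len(X)
        grad_b2 = err.mean(keepdims=True)
        d_h = (err[:, None] @ W2.T) * h * (1 - h)    # backpropagated error at the hidden layer
        grad_W1 = X.T @ d_h / len(X)
        grad_b1 = d_h.mean(axis=0)
        W1 -= learning_rate * grad_W1; b1 -= learning_rate * grad_b1
        W2 -= learning_rate * grad_W2; b2 -= learning_rate * grad_b2
    return W1, b1, W2, b2

# Example usage with random placeholder data:
# X = np.random.rand(200, 5); y = X.sum(axis=1)
# params = train(X, y, n_hidden=10, learning_rate=0.1, n_iterations=2000)
```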
== Applications ==
Because of their ability to reproduce and model nonlinear processes, artificial neural networks have found applications in many disciplines. These include:
Function approximation, or regression analysis, (including time series prediction, fitness approximation, and modeling)
Data processing (including filtering, clustering, blind source separation, and compression)
Nonlinear system identification and control (including vehicle control, trajectory prediction, adaptive control, process control, and natural resource management)
Pattern recognition (including radar systems, face identification, signal classification, novelty detection, 3D reconstruction, object recognition, and sequential decision making)
Sequence recognition (including gesture, speech, and handwritten and printed text recognition)
Sensor data analysis (including image analysis)
Robotics (including directing manipulators and prostheses)
Data mining (including knowledge discovery in databases)
Finance (such as ex-ante models for specific financial long-run forecasts and artificial financial markets)
Quantum chemistry
General game playing
Generative AI
Data visualization
Machine translation
Social network filtering
E-mail spam filtering
Medical diagnosis
ANNs have been used to diagnose several types of cancers and to distinguish highly invasive cancer cell lines from less invasive lines using only cell shape information.
ANNs have been used to accelerate reliability analysis of infrastructures subject to natural disasters and to predict foundation settlements. ANNs have also been used to model rainfall-runoff for flood mitigation, and for building black-box models in geoscience: hydrology, ocean modelling and coastal engineering, and geomorphology. ANNs have been employed in cybersecurity, with the objective of discriminating between legitimate and malicious activities. For example, machine learning has been used for classifying Android malware, for identifying domains belonging to threat actors and for detecting URLs posing a security risk. Research is underway on ANN systems designed for penetration testing, and for detecting botnets, credit card fraud and network intrusions.
ANNs have been proposed as a tool to solve partial differential equations in physics and to simulate the properties of many-body open quantum systems. In brain research, ANNs have been used to study the short-term behavior of individual neurons, how the dynamics of neural circuitry arise from interactions between individual neurons, and how behavior can arise from abstract neural modules that represent complete subsystems. Studies have considered long- and short-term plasticity of neural systems and their relation to learning and memory, from the individual neuron to the system level.
It is possible to create a profile of a user's interests from pictures, using artificial neural networks trained for object recognition.
Beyond their traditional applications, artificial neural networks are increasingly being utilized in interdisciplinary research, such as materials science. For instance, graph neural networks (GNNs) have demonstrated their capability in scaling deep learning for the discovery of new stable materials by efficiently predicting the total energy of crystals. This application underscores the adaptability and potential of ANNs in tackling complex problems beyond the realms of predictive modeling and artificial intelligence, opening new pathways for scientific discovery and innovation.
== Theoretical properties ==
=== Computational power ===
The multilayer perceptron is a universal function approximator, as proven by the universal approximation theorem. However, the proof is not constructive regarding the number of neurons required, the network topology, the weights and the learning parameters.
A specific recurrent architecture with rational-valued weights (as opposed to full precision real number-valued weights) has the power of a universal Turing machine, using a finite number of neurons and standard linear connections. Further, the use of irrational values for weights results in a machine with super-Turing power.
=== Capacity ===
A model's "capacity" property corresponds to its ability to model any given function. It is related to the amount of information that can be stored in the network and to the notion of complexity.
Two notions of capacity are known to the community: the information capacity and the VC dimension. The information capacity of a perceptron is intensively discussed in Sir David MacKay's book, which summarizes work by Thomas Cover. The capacity of a network of standard neurons (not convolutional) can be derived by four rules that follow from understanding a neuron as an electrical element. The information capacity captures the functions modelable by the network given any data as input. The second notion is the VC dimension, which uses the principles of measure theory to find the maximum capacity under the best possible circumstances, that is, given input data in a specific form. The VC dimension for arbitrary inputs has been noted to be half the information capacity of a perceptron. The VC dimension for arbitrary points is sometimes referred to as the memory capacity.
=== Convergence ===
Models may not consistently converge on a single solution, firstly because local minima may exist, depending on the cost function and the model. Secondly, the optimization method used might not guarantee to converge when it begins far from any local minimum. Thirdly, for sufficiently large data or parameters, some methods become impractical.
Another issue worth mentioning is that training may cross a saddle point, which may lead the convergence in the wrong direction.
The convergence behavior of certain types of ANN architectures is better understood than that of others. When the width of the network approaches infinity, the ANN is well described by its first-order Taylor expansion throughout training, and so inherits the convergence behavior of affine models. Another example is that when parameters are small, ANNs are observed to fit target functions from low to high frequencies. This behavior is referred to as the spectral bias, or frequency principle, of neural networks. This phenomenon is the opposite of the behavior of some well-studied iterative numerical schemes such as the Jacobi method. Deeper neural networks have been observed to be more biased towards low-frequency functions.
=== Generalization and statistics ===
Applications whose goal is to create a system that generalizes well to unseen examples, face the possibility of over-training. This arises in convoluted or over-specified systems when the network capacity significantly exceeds the needed free parameters.
Two approaches address over-training. The first is to use cross-validation and similar techniques to check for the presence of over-training and to select hyperparameters to minimize the generalization error. The second is to use some form of regularization. This concept emerges in a probabilistic (Bayesian) framework, where regularization can be performed by selecting a larger prior probability over simpler models; but also in statistical learning theory, where the goal is to minimize over two quantities: the 'empirical risk' and the 'structural risk', which roughly corresponds to the error over the training set and the predicted error in unseen data due to overfitting.
Supervised neural networks that use a mean squared error (MSE) cost function can use formal statistical methods to determine the confidence of the trained model. The MSE on a validation set can be used as an estimate for variance. This value can then be used to calculate the confidence interval of network output, assuming a normal distribution. A confidence analysis made this way is statistically valid as long as the output probability distribution stays the same and the network is not modified.
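A sketch of this calculation, with illustrative numbers: under the stated normality assumption, the half-width of the interval is a z-score times the square root of the validation MSE.

```python
import numpy as np

def prediction_interval(y_pred, validation_mse, confidence=0.95):
    """Approximate interval for a network output, using validation MSE as the variance estimate."""
    # z-scores for common coverage levels under a normal error assumption
    z = {0.90: 1.645, 0.95: 1.960, 0.99: 2.576}[confidence]
    half_width = z * np.sqrt(validation_mse)
    return y_pred - half_width, y_pred + half_width

print(prediction_interval(y_pred=3.2, validation_mse=0.25))   # approximately (2.22, 4.18)
```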
By assigning a softmax activation function, a generalization of the logistic function, on the output layer of the neural network (or a softmax component in a component-based network) for categorical target variables, the outputs can be interpreted as posterior probabilities. This is useful in classification as it gives a certainty measure on classifications.
The softmax activation function is:
y_i = e^{x_i} / ∑_{j=1}^{c} e^{x_j}
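A direct NumPy implementation of this formula is given below; subtracting the maximum before exponentiating is a standard numerical-stability step, not part of the definition.

```python
import numpy as np

def softmax(x):
    """Softmax over a vector of raw outputs; subtracting the max improves numerical stability."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.1])   # raw outputs of the final layer (illustrative)
probs = softmax(scores)
print(probs, probs.sum())            # interpretable as posterior class probabilities; sums to 1
```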
== Criticism ==
=== Training ===
A common criticism of neural networks, particularly in robotics, is that they require too many training samples for real-world operation.
Any learning machine needs sufficient representative examples in order to capture the underlying structure that allows it to generalize to new cases. Potential solutions include randomly shuffling training examples, using a numerical optimization algorithm that does not take too large steps when changing the network connections following an example, grouping examples in so-called mini-batches, and/or introducing a recursive least squares algorithm for CMAC.
Dean Pomerleau used a neural network to train a robotic vehicle to drive on multiple types of roads (single lane, multi-lane, dirt, etc.), and a large amount of his research was devoted to extrapolating multiple training scenarios from a single training experience and preserving past training diversity so that the system does not become overtrained (if, for example, it is presented with a series of right turns, it should not learn to always turn right).
=== Theory ===
A central claim of ANNs is that they embody new and powerful general principles for processing information. These principles are ill-defined. It is often claimed that they are emergent from the network itself. This allows simple statistical association (the basic function of artificial neural networks) to be described as learning or recognition. In 1997, Alexander Dewdney, a former Scientific American columnist, commented that as a result, artificial neural networks have a
"something-for-nothing quality, one that imparts a peculiar aura of laziness and a distinct lack of curiosity about just how good these computing systems are. No human hand (or mind) intervenes; solutions are found as if by magic; and no one, it seems, has learned anything."
One response to Dewdney is that neural networks have been successfully used to handle many complex and diverse tasks, ranging from autonomously flying aircraft to detecting credit card fraud to mastering the game of Go.
Technology writer Roger Bridgman commented:
Neural networks, for instance, are in the dock not only because they have been hyped to high heaven, (what hasn't?) but also because you could create a successful net without understanding how it worked: the bunch of numbers that captures its behaviour would in all probability be "an opaque, unreadable table...valueless as a scientific resource".
In spite of his emphatic declaration that science is not technology, Dewdney seems here to pillory neural nets as bad science when most of those devising them are just trying to be good engineers. An unreadable table that a useful machine could read would still be well worth having.
Although it is true that analyzing what has been learned by an artificial neural network is difficult, it is much easier to do so than to analyze what has been learned by a biological neural network. Moreover, recent emphasis on the explainability of AI has contributed towards the development of methods, notably those based on attention mechanisms, for visualizing and explaining learned neural networks. Furthermore, researchers involved in exploring learning algorithms for neural networks are gradually uncovering generic principles that allow a learning machine to be successful. For example, Bengio and LeCun (2007) wrote an article regarding local vs non-local learning, as well as shallow vs deep architecture.
Biological brains use both shallow and deep circuits as reported by brain anatomy, displaying a wide variety of invariance. Weng argued that the brain self-wires largely according to signal statistics and therefore, a serial cascade cannot catch all major statistical dependencies.
=== Hardware ===
Large and effective neural networks require considerable computing resources. While the brain has hardware tailored to the task of processing signals through a graph of neurons, simulating even a simplified neuron on von Neumann architecture may consume vast amounts of memory and storage. Furthermore, the designer often needs to transmit signals through many of these connections and their associated neurons – which require enormous CPU power and time.
Some argue that the resurgence of neural networks in the twenty-first century is largely attributable to advances in hardware: from 1991 to 2015, computing power, especially as delivered by GPGPUs (on GPUs), has increased around a million-fold, making the standard backpropagation algorithm feasible for training networks that are several layers deeper than before. The use of accelerators such as FPGAs and GPUs can reduce training times from months to days.
Neuromorphic engineering or a physical neural network addresses the hardware difficulty directly, by constructing non-von-Neumann chips to directly implement neural networks in circuitry. Another type of chip optimized for neural network processing is called a Tensor Processing Unit, or TPU.
=== Practical counterexamples ===
Analyzing what has been learned by an ANN is much easier than analyzing what has been learned by a biological neural network. Furthermore, researchers involved in exploring learning algorithms for neural networks are gradually uncovering general principles that allow a learning machine to be successful. For example, local vs. non-local learning and shallow vs. deep architecture.
=== Hybrid approaches ===
Advocates of hybrid models (combining neural networks and symbolic approaches) say that such a mixture can better capture the mechanisms of the human mind.
=== Dataset bias ===
Neural networks are dependent on the quality of the data they are trained on; low-quality data with imbalanced representativeness can lead to the model learning and perpetuating societal biases. These inherited biases become especially critical when the ANNs are integrated into real-world scenarios where the training data may be imbalanced due to the scarcity of data for a specific race, gender or other attribute. This imbalance can result in the model having inadequate representation and understanding of underrepresented groups, leading to discriminatory outcomes that exacerbate societal inequalities, especially in applications like facial recognition, hiring processes, and law enforcement. For example, in 2018, Amazon had to scrap a recruiting tool because the model favored men over women for jobs in software engineering due to the higher number of male workers in the field. The program would penalize any resume containing the word "women's" or the name of any women's college. However, the use of synthetic data can help reduce dataset bias and increase representation in datasets.
== Gallery ==
== Recent advancements and future directions ==
Artificial neural networks (ANNs) have undergone significant advancements, particularly in their ability to model complex systems, handle large data sets, and adapt to various types of applications. Their evolution over the past few decades has been marked by a broad range of applications in fields such as image processing, speech recognition, natural language processing, finance, and medicine.
=== Image processing ===
In the realm of image processing, ANNs are employed in tasks such as image classification, object recognition, and image segmentation. For instance, deep convolutional neural networks (CNNs) have been important in handwritten digit recognition, achieving state-of-the-art performance. This demonstrates the ability of ANNs to effectively process and interpret complex visual information, leading to advancements in fields ranging from automated surveillance to medical imaging.
=== Speech recognition ===
By modeling speech signals, ANNs are used for tasks like speaker identification and speech-to-text conversion. Deep neural network architectures have introduced significant improvements in large vocabulary continuous speech recognition, outperforming traditional techniques. These advancements have enabled the development of more accurate and efficient voice-activated systems, enhancing user interfaces in technology products.
=== Natural language processing ===
In natural language processing, ANNs are used for tasks such as text classification, sentiment analysis, and machine translation. They have enabled the development of models that can accurately translate between languages, understand the context and sentiment in textual data, and categorize text based on content. This has implications for automated customer service, content moderation, and language understanding technologies.
=== Control systems ===
In the domain of control systems, ANNs are used to model dynamic systems for tasks such as system identification, control design, and optimization. For instance, deep feedforward neural networks are important in system identification and control applications.
=== Finance ===
ANNs are used for stock market prediction and credit scoring:
In investing, ANNs can process vast amounts of financial data, recognize complex patterns, and forecast stock market trends, aiding investors and risk managers in making informed decisions.
In credit scoring, ANNs offer data-driven, personalized assessments of creditworthiness, improving the accuracy of default predictions and automating the lending process.
ANNs require high-quality data and careful tuning, and their "black-box" nature can pose challenges in interpretation. Nevertheless, ongoing advancements suggest that ANNs continue to play a role in finance, offering valuable insights and enhancing risk management strategies.
=== Medicine ===
ANNs are able to process and analyze vast medical datasets. They enhance diagnostic accuracy, especially by interpreting complex medical imaging for early disease detection, and by predicting patient outcomes for personalized treatment planning. In drug discovery, ANNs speed up the identification of potential drug candidates and predict their efficacy and safety, significantly reducing development time and costs. Additionally, their application in personalized medicine and healthcare data analysis allows tailored therapies and efficient patient care management. Ongoing research is aimed at addressing remaining challenges such as data privacy and model interpretability, as well as expanding the scope of ANN applications in medicine.
=== Content creation ===
ANNs such as generative adversarial networks (GAN) and transformers are used for content creation across numerous industries. This is because deep learning models are able to learn the style of an artist or musician from huge datasets and generate completely new artworks and music compositions. For instance, DALL-E is a deep neural network trained on 650 million pairs of images and texts across the internet that can create artworks based on text entered by the user. In the field of music, transformers are used to create original music for commercials and documentaries through companies such as AIVA and Jukedeck. In the marketing industry generative models are used to create personalized advertisements for consumers. Additionally, major film companies are partnering with technology companies to analyze the financial success of a film, such as the partnership between Warner Bros and technology company Cinelytic established in 2020. Furthermore, neural networks have found uses in video game creation, where Non Player Characters (NPCs) can make decisions based on all the characters currently in the game.
== See also ==
== References ==
== Bibliography ==
== External links ==
A Brief Introduction to Neural Networks (D. Kriesel) – Illustrated, bilingual manuscript about artificial neural networks; Topics so far: Perceptrons, Backpropagation, Radial Basis Functions, Recurrent Neural Networks, Self Organizing Maps, Hopfield Networks.
Review of Neural Networks in Materials Science Archived 7 June 2015 at the Wayback Machine
Artificial Neural Networks Tutorial in three languages (Univ. Politécnica de Madrid)
Another introduction to ANN
Next Generation of Neural Networks Archived 24 January 2011 at the Wayback Machine – Google Tech Talks
Performance of Neural Networks
Neural Networks and Information Archived 9 July 2009 at the Wayback Machine
Sanderson G (5 October 2017). "But what is a Neural Network?". 3Blue1Brown. Archived from the original on 7 November 2021 – via YouTube. | Wikipedia/Artificial_Neural_Network |
The French Academy of Sciences (French: Académie des sciences, [akademi de sjɑ̃s]) is a learned society, founded in 1666 by Louis XIV at the suggestion of Jean-Baptiste Colbert, to encourage and protect the spirit of French scientific research. It was at the forefront of scientific developments in Europe in the 17th and 18th centuries, and is one of the earliest Academies of Sciences.
Currently headed by Patrick Flandrin (President of the academy), it is one of the five Academies of the Institut de France.
== History ==
The Academy of Sciences traces its origin to Colbert's plan to create a general academy. He chose a small group of scholars who met on 22 December 1666 in the King's library, near the present-day Bibliothèque Nationale, and thereafter held twice-weekly working meetings there in the two rooms assigned to the group. The first 30 years of the academy's existence were relatively informal, since no statutes had as yet been laid down for the institution.
In contrast to its British counterpart, the academy was founded as an organ of government. Membership openings in Paris were few, and filling positions involved contentious elections. The election process had at least six stages, with rules that allowed chosen candidates to canvass other members and allowed current members to postpone certain stages of the process if the need arose. Elections in the early days of the academy were important activities and made up a large part of its proceedings, with many meetings held regarding the election to fill a single vacancy. Discussion of candidates and the election process was not confined to these meetings: members belonging to the vacancy's respective field would continue discussing potential candidates in private. Being elected into the academy did not necessarily guarantee full membership; in some cases, one would enter the academy as an associate or correspondent before being appointed a full member.
The election process was originally only to replace members from a specific section. For example, if someone whose study was mathematics was either removed or resigned from his position, the following election process nominated only those whose focus was also mathematics in order to fill that discipline's vacancy. That led to some periods of time in which no specialists for specific fields of study could be found, which left positions in those fields vacant since they could not be filled with people in other disciplines.
The needed reform came late in the 20th century, in 1987, when the academy decided against this practice and began filling vacancies with people from new disciplines. The reform aimed not only to further diversify the disciplines represented in the academy, but also to help combat the internal aging of the academy itself. The academy was expected to remain apolitical and to avoid discussion of religious and social issues.
On 20 January 1699, Louis XIV gave the Company its first rules. The academy received the name of Royal Academy of Sciences and was installed in the Louvre in Paris. Following this reform, the academy began publishing a volume each year with information on all the work done by its members and obituaries for members who had died. This reform also codified the method by which members of the academy could receive pensions for their work.
The academy was originally organized by the royal reform hierarchically into the following groups: Pensionaires, Pupils, Honoraires, and Associés.
The reform also added new groups not previously recognized, such as the Vétérans. Some of these roles' membership limits were expanded, and some roles were removed or combined over the course of the academy's history. The Honoraires group established by the 1699 reform, whose members were appointed directly by the King, was recognized until its abolition in 1793.
Membership in the academy first exceeded 100 officially recognised full members in 1976, 310 years after the academy's inception in 1666. The membership increase came with a large-scale reorganization that year. Under this reorganization, 130 resident members, 160 correspondents, and 80 foreign associates could be elected.
A vacancy opens only upon the death of members, as they serve for life. During elections, half of the vacancies are reserved for people less than 55 years old. This was created as an attempt to encourage younger members to join the academy.
The reorganization also divided the academy into 2 divisions:
One division, Division 1, covers the applications of mathematics and physical sciences,
the other, Division 2, covers the applications of chemical, natural, biological, and medical sciences.
On 8 August 1793, the National Convention abolished all the academies. On 22 August 1795, a National Institute of Sciences and Arts was put in place, bringing together the old academies of the sciences, literature and arts, among them the Académie française and the Académie des sciences.
Also in 1795, the academy determined these ten titles (the first four in Division 1 and the others in Division 2) to be its newly accepted branches of scientific study:
Mathematics
Mechanics
Astronomy
Physics
Chemistry
Mineralogy
Botany
Agriculture
Anatomy and Zoology
Medicine and Surgery
The last two sections were bundled because there were many good candidates fit to be elected for those practices and the competition was stiff. Some individuals, such as François Magendie, had made advances in their fields that warranted consideration of new categories. However, even Magendie, who had made breakthroughs in physiology and impressed the academy with his hands-on vivisection experiments, could not get his discipline recognized as its own category. Although Magendie was one of the leading innovators of his time, becoming an official member of the academy was still a struggle, a feat he accomplished in 1821. He further enhanced the academy's standing when he and anatomist Charles Bell produced the widely known Bell–Magendie law.
From 1795 until the First World War in 1914, the French Academy of Sciences was the preeminent organization of French science. Almost all the old members of the previously abolished Académie were formally re-elected and retook their ancient seats. Among the exceptions was Dominique, comte de Cassini, who refused to take his seat. Membership in the academy was not restricted to scientists: in 1798 Napoleon Bonaparte was elected a member of the academy, and three years later its president, in connection with his Egyptian expedition, which had a scientific component. In 1816, the again-renamed "Royal Academy of Sciences" became autonomous while forming part of the Institute of France; the head of state became its patron. In the Second Republic, the name returned to Académie des sciences. During this period, the academy was funded by and accountable to the Ministry of Public Instruction.
The academy came to control French patent laws in the course of the eighteenth century, acting as the liaison of artisans' knowledge to the public domain. As a result, academicians dominated technological activities in France.
The academy's proceedings were published under the name Comptes rendus de l'Académie des Sciences (1835–1965). The Comptes rendus is now a journal series with seven titles. The publications can be found on the site of the French National Library.
In 1818 the French Academy of Sciences launched a competition to explain the properties of light. The civil engineer Augustin-Jean Fresnel entered the competition by submitting a new wave theory of light. Siméon Denis Poisson, one of the members of the judging committee, studied Fresnel's theory in detail. Being a supporter of the particle theory of light, he looked for a way to disprove it. Poisson thought that he had found a flaw when he demonstrated that Fresnel's theory predicts that an on-axis bright spot would exist in the shadow of a circular obstacle, where there should be complete darkness according to the particle theory of light. The Poisson spot is not easily observed in everyday situations, so it was only natural for Poisson to interpret it as an absurd result that should disprove Fresnel's theory. However, the head of the committee, Dominique-François-Jean Arago, who incidentally later became Prime Minister of France, decided to perform the experiment in more detail. He attached a 2-mm metallic disk to a glass plate with wax. To everyone's surprise he succeeded in observing the predicted spot, which convinced most scientists of the wave nature of light.
For three centuries women were not allowed as members of the academy. This meant that many women scientists were excluded, including two-time Nobel Prize winner Marie Curie, Nobel winner Irène Joliot-Curie, mathematician Sophie Germain, and many other deserving women scientists. The first woman admitted as a correspondent member was a student of Curie's, Marguerite Perey, in 1962. The first female full member was Yvonne Choquet-Bruhat in 1979.
Membership in the academy is geared towards representing the demographics of the French population. Increases and changes in the French population in the early 21st century led the academy to expand its reference population sizes through a reform in early 2002.
The overwhelming majority of members leave the academy only at death, with a few exceptions of removals, transfers, and resignations. The last removal of a member from the academy occurred in 1944. Removal from the academy was often for not performing to standards, not performing at all, leaving the country, or for political reasons. On rare occasions, a member has been elected twice and subsequently removed twice; this was the case for Marie-Adolphe Carnot.
== Government interference ==
The most direct involvement of the government in the affairs of the institute came in the initial nomination of members in 1795, but as the nominated members constituted only one third of the membership, and most of these had previously been elected as members of the respective academies under the old regime, few objections were raised. Moreover, these nominated members were then completely free to nominate the remaining members of the institute. Members expected to remain such for life, but interference occurred in a few cases where the government suddenly terminated membership for political reasons. The other main interference came when the government refused to accept the result of academy elections. Government control of the academy was apparent in 1803, when Bonaparte decided on a general reorganization. His principal concern was not the First class but the Second, which included political scientists who were potential critics of his government. Bonaparte abolished the Second class completely and, after a few expulsions, redistributed its remaining members, together with those of the Third class, into a new Second class concerned with literature and a new Third class devoted to the fine arts. Still, this relationship between the academy and the government was not a one-way affair, as members expected to receive payment of an honorarium.
== Decline ==
Although the academy still exists today, its reputation and status were widely questioned after World War I. One factor behind its decline was a development from a meritocracy into a gerontocracy: a shift from leadership by those with demonstrated scientific ability towards leadership by those with seniority. It became known as a sort of "hall of fame" that had lost control, real and symbolic, of the professional scientific diversity in France at the time. Another factor was that in the span of five years, from 1909 to 1914, funding to science faculties dropped considerably, eventually leading to a financial crisis in France.
== Present use ==
Today the academy is one of five academies comprising the Institut de France. Its members are elected for life. Currently, there are 150 full members, 300 corresponding members, and 120 foreign associates. They are divided into two scientific groups: the mathematical and physical sciences and their applications, and the chemical, biological, geological and medical sciences and their applications. The academy currently pursues five missions: encouraging scientific life, promoting the teaching of science, transmitting knowledge between scientific communities, fostering international collaboration, and ensuring a dual role of expertise and advice. The French Academy of Sciences originally focused its development efforts on creating a true co-development Euro-African program beginning in 1997, and has since broadened its scope of action to other regions of the world. The standing committee COPED is in charge of the international development projects undertaken by the academy and its associates. The current president of COPED is Pierre Auger, the vice president is Michel Delseny, and the honorary president is François Gros, all of whom are current members of the academy. COPED has hosted several workshops and colloquia in Paris, involving representatives from African academies, universities or research centers, addressing a variety of themes and challenges of African development and covering a broad spectrum of fields, in particular higher education in the sciences and research practices in basic and applied sciences relevant to development (renewable energy, infectious diseases, animal pathologies, food resources, access to safe water, agriculture, urban health, etc.).
== Current committees and working parties ==
The Academic Standing Committees and Working Parties prepare the advice notes, policy statements and the Academic Reports. Some have a statutory remit, such as the Select Committee, the Committee for International Affairs and the Committee for Scientists' Rights, some are created ad hoc by the academy and approved formally by vote in a members-only session.
Today the academy's standing committees and working parties include:
The Academic Standing Committee in charge of the Biennial Report on Science and Technology
The Academic Standing Committee for Science, Ethics and Society
The Academic Standing Committee for the Environment
The Academic Standing Committee for Space Research
The Academic Standing Committee for Science and Metrology
The Academic Standing Committee for the Science History and Epistemology
The Academic Standing Committee for Science and Safety Issues
The Academic Standing Committee for Science Education and Training
The Academic Standing La main à la pâte Committee
The Academic Standing Committee for the Defense of Scientists' Rights (CODHOS)
The Academic Standing Committee for International Affairs (CORI)
The French Committee for International Scientific Unions (COFUSI)
The Academic Standing Committee for Scientific and Technological International Relations (CARIST)
The Academic Standing Committee for Developing Countries (COPED)
The Inter-academic Group for Development (GID) – Cf. for further reading
The Academic Standing Commission for Sealed Deposits
The Academic Standing Committee for Terminology and Neologisms
The Antoine Lavoisier Standing Committee
The Academic Standing Committee for Prospects in Energy Procurement
The Special Academic Working Party on Scientific Computing
The Special Academic Working Party on Material Sciences and Engineering
== Medals, awards and prizes ==
Each year, the Academy of Sciences distributes about 80 prizes. These include:
Marie Skłodowska-Curie and Pierre Curie Polish-French Science Award, created in 2022.
the Grande Médaille, awarded annually, in rotation, in the relevant disciplines of each division of the academy, to a French or foreign scholar who has contributed to the development of science in a decisive way.
the Lalande Prize, awarded from 1802 through 1970, for outstanding achievement in astronomy
the Valz Prize, awarded from 1877 through 1970, to honor advances in astronomy
the Richard Lounsbery Award, jointly with the National Academy of Sciences
the Prix Jacques Herbrand, for mathematics and physics
the Prix Paul Pascal, for chemistry
the Louis Bachelier Prize for major contributions to mathematical modeling in finance
the Prix Michel Montpetit for computer science and applied mathematics, awarded since 1977
the Leconte Prize, awarded annually since 1886, to recognize important discoveries in mathematics, physics, chemistry, natural history or medicine
the Prix Tchihatcheff (Tchihatchef; Chikhachev)
== People ==
The following are incomplete lists of the officers of the academy. See also Category:Officers of the French Academy of Sciences.
For a list of the academy's members past and present, see Category:Members of the French Academy of Sciences
=== Presidents ===
Source: French Academy of Sciences
=== Treasurers ===
?–1788 Georges-Louis Leclerc, Comte de Buffon
1788–1791 Mathieu Tillet
=== Permanent secretaries ===
==== General ====
==== Mathematical Sciences ====
==== Physical Sciences ====
==== Chemistry and Biology ====
== Publications ==
Publications of the French Academy of Sciences "Histoire de l'Académie royale des sciences" (1700–1790)
== See also ==
French art salons and academies
French Geodesic Mission
History of the metre
Seconds pendulum
Royal Commission on Animal Magnetism
== Notes ==
== References ==
== External links ==
Official website (in French) – English-language version
Complete listing of current members
Notes on the Académie des Sciences from the Scholarly Societies project (includes information on the society journals)
Search the Proceedings of the Académie des sciences in the French National Library (search item: Comptes Rendus)
Comptes rendus de l'Académie des sciences. Série 1, Mathématique in Gallica, the digital library of the BnF. | Wikipedia/Paris_Academy_of_Science |
Surgery is a medical specialty that uses manual and instrumental techniques to diagnose or treat pathological conditions (e.g., trauma, disease, injury, malignancy), to alter bodily functions (e.g., malabsorption created by bariatric surgery such as gastric bypass), to reconstruct or alter aesthetics and appearance (cosmetic surgery), or to remove unwanted tissues (body fat, glands, scars or skin tags) or foreign bodies.
The act of performing surgery may be called a surgical procedure or surgical operation, or simply "surgery" or "operation". In this context, the verb "operate" means to perform surgery. The adjective surgical means pertaining to surgery; e.g. surgical instruments, surgical facility or surgical nurse. Most surgical procedures are performed by a pair of operators: a surgeon, who is the main operator performing the surgery, and a surgical assistant, who provides manual assistance during the procedure. Modern surgical operations typically require a surgical team consisting of the surgeon, the surgical assistant, an anaesthetist (often complemented by an anaesthetic nurse), a scrub nurse (who handles sterile equipment), a circulating nurse and a surgical technologist, while procedures that mandate cardiopulmonary bypass will also have a perfusionist. All surgical procedures are considered invasive and often require a period of postoperative care (sometimes intensive care) for the patient to recover from the iatrogenic trauma inflicted by the procedure. The duration of surgery can span from several minutes to tens of hours depending on the specialty, the nature of the condition, the body parts involved and the circumstances of each procedure, but most surgeries are designed as one-off interventions rather than an ongoing or repeated course of treatment.
In British colloquialism, the term "surgery" can also refer to the facility where surgery is performed, or simply the office/clinic of a physician, dentist or veterinarian.
== Definitions ==
As a general rule, a procedure is considered surgical when it involves cutting of a person's tissues or closure of a previously sustained wound. Other procedures that do not necessarily fall under this rubric, such as angioplasty or endoscopy, may be considered surgery if they involve "common" surgical procedures or settings, such as use of antiseptic measures and sterile fields, sedation/anesthesia, proactive hemostasis, typical surgical instruments, and suturing or stapling. All forms of surgery are considered invasive procedures; so-called "noninvasive surgery" is more appropriately called a minimally invasive procedure, which usually refers to procedures that utilize natural orifices (e.g. most urological procedures), do not penetrate the structure being treated (e.g. endoscopic polyp excision, rubber band ligation, laser eye surgery), are percutaneous (e.g. arthroscopy, catheter ablation, angioplasty and valvuloplasty), or are radiosurgical (e.g. irradiation of a tumor).
=== Types of surgery ===
Surgical procedures are commonly categorized by urgency, type of procedure, body system involved, the degree of invasiveness, and special instrumentation.
Based on timing:
Elective surgery is done to correct a non-life-threatening condition, and is carried out at the person's convenience, or according to the surgeon's and the surgical facility's availability.
Semi-elective surgery is one that is better done early to avoid complications or potential deterioration of the patient's condition, but such risks are sufficiently low that the procedure can be postponed for a short period of time.
Emergency surgery is surgery which must be done without delay to prevent death, serious disability, or the loss of limbs or functions.
Based on purpose:
Exploratory surgery is performed to establish or aid a diagnosis.
Therapeutic surgery is performed to treat a previously diagnosed condition.
Curative surgery is a therapeutic procedure done to permanently remove a pathology.
Plastic surgery is done to improve a body part's function or appearance.
Reconstructive plastic surgery is done to improve the function or subjective appearance of a damaged or malformed body part.
Cosmetic surgery is done to subjectively improve the appearance of an otherwise normal body part.
Bariatric surgery is done to assist weight loss when dietary and pharmaceutical methods alone have failed.
Non-survival surgery, or terminal surgery, is surgery in which euthanasia is performed while the subject is under anesthesia so that the subject will not regain conscious pain perception. This type of surgery is usually performed in animal testing experiments.
By type of procedure:
Amputation involves removing an entire body part, usually a limb or digit; castration is the amputation of the testes; circumcision is the removal of the prepuce from the penis or the clitoral hood from the clitoris (see female circumcision). Replantation involves reattaching a severed body part.
Resection is the removal of all or part of an internal organ and/or connective tissue. A segmental resection specifically removes an independent vascular region of an organ such as a hepatic segment, a bronchopulmonary segment or a renal lobe. Excision is the resection of only part of an organ, tissue or other body part (e.g. skin) without discriminating specific vascular territories. Exenteration is the complete removal of all organs and soft tissue content (especially lymphoid tissues) within a body cavity.
Extirpation is the complete excision or surgical destruction of a body part.
Ablation is destruction of tissue through the use of energy-transmitting devices such as electrocautery/fulguration, laser, focused ultrasound or freezing.
Repair involves the direct closure or restoration of an injured, mutilated or deformed organ or body part, usually by suturing or internal fixation. Reconstruction is an extensive repair of a complex body part (such as joints), often with some degrees of structural/functional replacement and commonly involves grafting and/or use of implants.
Grafting is the relocation and establishment of a tissue from one part of the body to another. A flap is the relocation of a tissue without complete separation of its original attachment, and a free flap is a completely detached flap that carries an intact neurovascular structure ready for grafting onto a new location.
Bypass involves the relocation/grafting of a tubular structure onto another in order to reroute the content flow of that target structure from a specific segment directly to a more distal ("downstream") segment.
Implantation is insertion of artificial medical devices to replace or augment existing tissue.
Transplantation is the replacement of an organ or body part by insertion of another from a different human (or animal) into the person undergoing surgery. Harvesting is the resection of an organ or body part from a live human or animal (known as the donor) for transplantation into another patient (known as the recipient).
By organ system: Surgical specialties are traditionally and academically categorized by the organ, organ system or body region involved. Examples include:
Cardiac surgery — the heart and mediastinal great vessels;
Thoracic surgery — the thoracic cavity including the lungs;
Gastrointestinal surgery — the digestive tract and its accessory organs;
Vascular surgery — the extra-mediastinal great vessels and peripheral circulatory system;
Urological surgery — the genitourinary system;
ENT surgery — ear, nose and throat, also known as head and neck surgery when including the neck region;
Oral and maxillofacial surgery — the oral cavity, jaws, and face;
Neurosurgery — the central nervous system, and;
Orthopedic surgery — the musculoskeletal system.
By degree of invasiveness of surgical procedures:
Conventional open surgery (such as a laparotomy) requires a large incision to access the area of interest, and directly exposes the internal body cavity to the outside.
Minimally-invasive surgery involves much smaller surface incisions or even natural orifices (nostril, mouth, anus or urethra) to insert miniaturized instruments within a body cavity or structure, as in laparoscopic surgery or angioplasty.
Hybrid surgery uses a combination of open and minimally-invasive techniques, and may include hand ports or larger incisions to assist with performance of elements of the procedure.
By equipment used:
Laser surgery involves use of laser ablation to divide tissue instead of a scalpel, scissors or similar sharp-edged instruments.
Cryosurgery uses low-temperature cryoablation to freeze and destroy a target tissue.
Electrosurgery involves use of electrocautery to cut and coagulate tissue.
Microsurgery involves the use of an operating microscope for the surgeon to see and manipulate small structures.
Endoscopic surgery uses optical instruments to relay the image from inside an enclosed body cavity to the outside, and the surgeon performs the procedure using specialized handheld instruments inserted through trocars placed through the body wall. Most modern endoscopic procedures are video-assisted, meaning the images are viewed on a display screen rather than through the eyepiece on the endoscope.
Robotic surgery makes use of robotics such as the Da Vinci or the ZEUS robotic surgical systems, to remotely control endoscopic or minimally-invasive instruments.
=== Terminology ===
Resection and excisional procedures start with a prefix for the target organ to be excised (cut out) and end in the suffix -ectomy. For example, removal of part of the stomach would be called a subtotal gastrectomy.
Procedures involving cutting into an organ or tissue end in -otomy. A surgical procedure cutting through the abdominal wall to gain access to the abdominal cavity is a laparotomy.
Minimally invasive procedures, involving small incisions through which an endoscope is inserted, end in -oscopy. For example, such surgery in the abdominal cavity is called laparoscopy.
Procedures for formation of a permanent or semi-permanent opening called a stoma in the body end in -ostomy, such as creation of a colostomy, a connection of the colon and the abdominal wall. This suffix is also used for a connection between two viscera, such as how an esophagojejunostomy refers to a connection created between the esophagus and the jejunum.
Plastic and reconstruction procedures start with the name for the body part to be reconstructed and end in -plasty. For example, rhino- is a prefix meaning "nose", therefore a rhinoplasty is a reconstructive or cosmetic surgery for the nose. A pyloroplasty refers to a type of reconstruction of the gastric pylorus.
Procedures that involve cutting the muscular layers of an organ end in -myotomy. A pyloromyotomy refers to cutting the muscular layers of the gastric pylorus.
Repair of a damaged or abnormal structure ends in -orrhaphy. This includes herniorrhaphy, another name for a hernia repair.
Reoperation, revision, or "redo" procedures refer to a planned or unplanned return to the operating theater after a surgery is performed to re-address an aspect of patient care. Unplanned reasons for reoperation include postoperative complications such as bleeding or hematoma formation, development of a seroma or abscess, anastomotic leak, tissue necrosis requiring debridement or excision, or in the case of malignancy, close or involved resection margins that may require re-excision to avoid local recurrence. Reoperation can be performed in the acute phase, or it can be also performed months to years later if the surgery failed to solve the indicated problem. Reoperation can also be planned as a staged operation where components of the procedure are performed or reversed under separate anesthesia.
== Description of surgical procedure ==
=== Setting ===
Inpatient surgery is performed in a hospital, and the person undergoing surgery stays at least one night in the hospital after the surgery. Outpatient surgery occurs in a hospital outpatient department or freestanding ambulatory surgery center, and the person who had surgery is discharged the same working day. Office-based surgery occurs in a physician's office, and the person is discharged the same day.
At a hospital, modern surgery is often performed in an operating theater using surgical instruments, an operating table, and other equipment. Among United States hospitalizations for non-maternal and non-neonatal conditions in 2012, more than one-fourth of stays and half of hospital costs involved stays that included operating room (OR) procedures. The environment and procedures used in surgery are governed by the principles of aseptic technique: the strict separation of "sterile" (free of microorganisms) things from "unsterile" or "contaminated" things. All surgical instruments must be sterilized, and an instrument must be replaced or re-sterilized if it becomes contaminated (i.e. handled in an unsterile manner, or allowed to touch an unsterile surface). Operating room staff must wear sterile attire (scrubs, a scrub cap, a sterile surgical gown, sterile latex or non-latex polymer gloves and a surgical mask), and they must scrub hands and arms with an approved disinfectant agent before each procedure.
=== Preoperative care ===
Prior to surgery, the person is given a medical examination, receives certain pre-operative tests, and their physical status is rated according to the ASA physical status classification system. If these results are satisfactory, the person requiring surgery signs a consent form and is given a surgical clearance. If the procedure is expected to result in significant blood loss, an autologous blood donation may be made some weeks prior to surgery. If the surgery involves the digestive system, the person requiring surgery may be instructed to perform a bowel prep by drinking a solution of polyethylene glycol the night before the procedure. People preparing for surgery are also instructed to abstain from food or drink (an NPO order after midnight on the night before the procedure), to minimize the effect of stomach contents on pre-operative medications and reduce the risk of aspiration if the person vomits during or after the procedure.
Some medical systems have a practice of routinely performing chest x-rays before surgery. The premise behind this practice is that the physician might discover some unknown medical condition which would complicate the surgery, and that upon discovering this with the chest x-ray, the physician would adapt the surgery practice accordingly. However, medical specialty professional organizations recommend against routine pre-operative chest x-rays for people who have an unremarkable medical history and presented with a physical exam which did not indicate a chest x-ray. Routine x-ray examination is more likely to result in problems like misdiagnosis, overtreatment, or other negative outcomes than it is to result in a benefit to the person. Likewise, other tests including complete blood count, prothrombin time, partial thromboplastin time, basic metabolic panel, and urinalysis should not be done unless the results of these tests can help evaluate surgical risk.
=== Preparing for surgery ===
A surgical team may include a surgeon, an anesthetist, a circulating nurse, and a "scrub tech", or surgical technician, as well as other assistants who provide equipment and supplies as required. While informed consent discussions may be performed in a clinic or acute care setting, the pre-operative holding area is where documentation is reviewed and where family members can also meet the surgical team. Nurses in the preoperative holding area confirm orders and answer additional questions from the patient's family members prior to surgery. In the pre-operative holding area, the person preparing for surgery changes out of their street clothes and is asked to confirm the details of their surgery as previously discussed during the process of informed consent. A set of vital signs is recorded, a peripheral IV line is placed, and pre-operative medications (antibiotics, sedatives, etc.) are given.
When the patient enters the operating room and is appropriately anesthetized, the team will then position the patient in an appropriate surgical position. If hair is present at the surgical site, it is clipped (instead of shaving). The skin surface within the operating field is cleansed and prepared by applying an antiseptic (typically chlorhexidine gluconate in alcohol, as this is twice as effective as povidone-iodine at reducing the risk of infection). Sterile drapes are then used to cover the borders of the operating field. Depending on the type of procedure, the cephalad drapes are secured to a pair of poles near the head of the bed to form an "ether screen", which separate the anesthetist/anesthesiologist's working area (unsterile) from the surgical site (sterile).
Anesthesia is administered to prevent pain from the trauma of cutting, tissue manipulation, application of thermal energy, and suturing. Depending on the type of operation, anesthesia may be provided locally, regionally, or as general anesthesia. Spinal anesthesia may be used when the surgical site is too large or deep for a local block, but general anesthesia may not be desirable. With local and spinal anesthesia, the surgical site is anesthetized, but the person can remain conscious or minimally sedated. In contrast, general anesthesia may render the person unconscious and paralyzed during surgery. The person is typically intubated to protect their airway and placed on a mechanical ventilator, and anesthesia is produced by a combination of injected and inhaled agents. The choice of surgical method and anesthetic technique aims to solve the indicated problem, minimize the risk of complications, optimize the time needed for recovery, and limit the surgical stress response.
=== Intraoperative phase ===
The intraoperative phase begins when the surgery subject is received in the surgical area (such as the operating theater or surgical department), and lasts until the subject is transferred to a recovery area (such as a post-anesthesia care unit).
An incision is made to access the surgical site. Blood vessels may be clamped or cauterized to prevent bleeding, and retractors may be used to expose the site or keep the incision open. The approach to the surgical site may involve several layers of incision and dissection, as in abdominal surgery, where the incision must traverse skin, subcutaneous tissue, three layers of muscle and then the peritoneum. In certain cases, bone may be cut to further access the interior of the body; for example, cutting the skull for brain surgery or cutting the sternum for thoracic (chest) surgery to open up the rib cage. During surgery, aseptic technique is used to prevent infection or further spreading of the disease. The surgeons' and assistants' hands, wrists and forearms are washed thoroughly for at least 4 minutes to prevent germs from getting into the operative field, then sterile gloves are placed onto their hands. An antiseptic solution is applied to the area of the person's body that will be operated on. Sterile drapes are placed around the operative site. Surgical masks are worn by the surgical team to prevent germs on droplets of liquid from their mouths and noses from contaminating the operative site.
Work to correct the problem in the body then proceeds. This work may involve:
excision – cutting out an organ, tumor, or other tissue.
resection – partial removal of an organ or other bodily structure.
reconnection of organs, tissues, etc., particularly if severed. Resection of organs such as intestines involves reconnection. Internal suturing or stapling may be used. Surgical connection between blood vessels or other tubular or hollow structures such as loops of intestine is called anastomosis.
reduction – the movement or realignment of a body part to its normal position. e.g. Reduction of a broken nose involves the physical manipulation of the bone or cartilage from their displaced state back to their original position to restore normal airflow and aesthetics.
ligation – tying off blood vessels, ducts, or "tubes".
grafts – may be severed pieces of tissue cut from the same (or different) body or flaps of tissue still partly connected to the body but resewn for rearranging or restructuring of the area of the body in question. Although grafting is often used in cosmetic surgery, it is also used in other surgery. Grafts may be taken from one area of the person's body and inserted to another area of the body. An example is bypass surgery, where clogged blood vessels are bypassed with a graft from another part of the body. Alternatively, grafts may be from other persons, cadavers, or animals.
insertion of prosthetic parts when needed. Pins or screws to set and hold bones may be used. Sections of bone may be replaced with prosthetic rods or other parts. Sometimes a plate is inserted to replace a damaged area of skull. Artificial hip replacement has become more common. Heart pacemakers or valves may be inserted. Many other types of prostheses are used.
creation of a stoma, a permanent or semi-permanent opening in the body
in transplant surgery, the donor organ (taken out of the donor's body) is inserted into the recipient's body and reconnected to the recipient in all necessary ways (blood vessels, ducts, etc.).
arthrodesis – surgical connection of adjacent bones so the bones can grow together into one. Spinal fusion is an example of adjacent vertebrae connected allowing them to grow together into one piece.
modifying the digestive tract in bariatric surgery for weight loss.
repair of a fistula, hernia, or prolapse.
repair according to the ICD-10-PCS, in the Medical and Surgical Section 0, root operation Q, means restoring, to the extent possible, a body part to its normal anatomic structure and function. This definition, repair, is used only when the method used to accomplish the repair is not one of the other root operations. Examples would be colostomy takedown, herniorrhaphy, and the surgical suture of a laceration.
other procedures, including:
clearing clogged ducts, blood or other vessels
removal of calculi (stones)
draining of accumulated fluids
debridement – removal of dead, damaged, or diseased tissue
Blood or blood expanders may be administered to compensate for blood lost during surgery. Once the procedure is complete, sutures or staples are used to close the incision. Once the incision is closed, the anesthetic agents are stopped or reversed, and the person is taken off ventilation and extubated (if general anesthesia was administered).
=== Postoperative care ===
After completion of surgery, the person is transferred to the post anesthesia care unit and closely monitored. When the person is judged to have recovered from the anesthesia, he/she is either transferred to a surgical ward elsewhere in the hospital or discharged home. During the post-operative period, the person's general function is assessed, the outcome of the procedure is assessed, and the surgical site is checked for signs of infection. There are several risk factors associated with postoperative complications, such as immune deficiency and obesity. Obesity has long been considered a risk factor for adverse post-surgical outcomes. It has been linked to many disorders such as obesity hypoventilation syndrome, atelectasis and pulmonary embolism, adverse cardiovascular effects, and wound healing complications. If removable skin closures are used, they are removed after 7 to 10 days post-operatively, or after healing of the incision is well under way.
It is not uncommon for surgical drains to be required to remove blood or fluid from the surgical wound during recovery. These drains generally stay in until the volume of drainage tapers off, and are then removed. These drains can become clogged, leading to abscess.
Postoperative therapy may include adjuvant treatment such as chemotherapy, radiation therapy, or administration of medication such as anti-rejection medication for transplants. For postoperative nausea and vomiting (PONV), solutions like saline, water, controlled breathing, placebo and aromatherapy can be used in addition to medication. Other follow-up studies or rehabilitation may be prescribed during and after the recovery period. A recent post-operative care philosophy has been early ambulation: getting the patient moving around, which can be as simple as sitting up or walking, as early as possible. Early ambulation has been found to shorten the patient's length of stay, the amount of time a patient spends in the hospital after surgery before being discharged. In a recent study of lumbar decompressions, the patients' length of stay was decreased by 1–3 days.
The use of topical antibiotics on surgical wounds to reduce infection rates has been questioned. Antibiotic ointments are likely to irritate the skin, slow healing, and could increase the risk of developing contact dermatitis and antibiotic resistance. It has also been suggested that topical antibiotics should only be used when a person shows signs of infection and not as a preventative. However, a systematic review published by Cochrane in 2016 concluded that topical antibiotics applied over certain types of surgical wounds reduce the risk of surgical site infections, when compared to no treatment or use of antiseptics. The review also did not find conclusive evidence to suggest that topical antibiotics increased the risk of local skin reactions or antibiotic resistance.
A retrospective analysis of national administrative data found an association between mortality and the day of the week on which elective surgery is performed, suggesting a higher risk for procedures carried out later in the working week and on weekends. The odds of death were reported to be 44% higher for procedures performed on a Friday and 82% higher for weekend procedures, relative to procedures performed earlier in the week. This "weekday effect" has been postulated to stem from several factors, including poorer availability of services on weekends and a reduced number and experience level of staff over the weekend.
Postoperative pain affects an estimated 80% of people who underwent surgery. While pain is expected after surgery, there is growing evidence that pain may be inadequately treated in many people in the acute period immediately after surgery. It has been reported that the incidence of inadequately controlled pain after surgery ranged from 25.1% to 78.4% across all surgical disciplines. There is insufficient evidence to determine whether giving opioid pain medication pre-emptively (before surgery) reduces postoperative pain or the amount of medication needed after surgery.
Postoperative recovery has been defined as an energy‐requiring process to decrease physical symptoms, reach a level of emotional well‐being, regain functions, and re‐establish activities. Most people are discharged from the hospital or surgical center before they are fully recovered. The recovery process may include complications such as postoperative cognitive dysfunction and postoperative depression.
== Epidemiology ==
=== United States ===
In 2011, of the 38.6 million hospital stays in U.S. hospitals, 29% included at least one operating room procedure. These stays accounted for 48% of the total $387 billion in hospital costs.
The overall number of procedures remained stable from 2001 to 2011. In 2011, over 15 million operating room procedures were performed in U.S. hospitals.
Data from 2003 to 2011 showed that U.S. hospital costs were highest for the surgical service line; the surgical service line costs were $17,600 in 2003 and projected to be $22,500 in 2013. For hospital stays in 2012 in the United States, private insurance had the highest percentage of surgical expenditure, and mean hospital costs were highest for surgical stays.
== Special populations ==
=== Elderly people ===
Older adults have widely varying physical health. Frail elderly people are at significant risk of post-surgical complications and the need for extended care. Assessment of older people before elective surgery can accurately predict the person's recovery trajectories. One frailty scale uses five items: unintentional weight loss, muscle weakness, exhaustion, low physical activity, and slowed walking speed. A healthy person scores 0; a very frail person scores 5. Compared to non-frail elderly people, people with intermediate frailty scores (2 or 3) are twice as likely to have post-surgical complications, spend 50% more time in the hospital, and are three times as likely to be discharged to a skilled nursing facility instead of to their own homes. People who are frail and elderly (score of 4 or 5) have even worse outcomes, with the risk of being discharged to a nursing home rising to twenty times the rate for non-frail elderly people.
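The five-item tally described above is simple enough to illustrate in a few lines of code. The sketch below is a minimal illustration only: it assumes each item is recorded as a present/absent flag and, because the text does not assign a category to a score of 1, it groups scores of 0–1 together as non-frail; it is not a validated clinical instrument.

```python
# Minimal sketch of the five-item frailty tally described above (illustrative only).
# Assumes each item is recorded as a boolean; the 0-1 "non-frail" grouping is an
# assumption, since the text only names the 2-3 and 4-5 categories explicitly.

FRAILTY_ITEMS = (
    "unintentional_weight_loss",
    "muscle_weakness",
    "exhaustion",
    "low_physical_activity",
    "slowed_walking_speed",
)

def frailty_score(findings: dict) -> int:
    """Count how many of the five frailty items are present (0-5)."""
    return sum(bool(findings.get(item, False)) for item in FRAILTY_ITEMS)

def frailty_category(score: int) -> str:
    """Map a score to the categories used in the text."""
    if score <= 1:
        return "non-frail"
    if score <= 3:
        return "intermediate frailty"
    return "frail"

patient = {"exhaustion": True, "slowed_walking_speed": True}  # hypothetical example
score = frailty_score(patient)
print(score, frailty_category(score))  # -> 2 intermediate frailty
```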
=== Children ===
Surgery on children requires considerations that are not common in adult surgery. Children and adolescents are still developing physically and mentally, making it difficult for them to make informed decisions and give consent for surgical treatments. Bariatric surgery in youth is among the controversial topics related to surgery in children.
=== Vulnerable populations ===
Doctors perform surgery with the consent of the person undergoing surgery. Some people are able to give better informed consent than others. Populations such as incarcerated persons, people living with dementia, the mentally incompetent, persons subject to coercion, and other people who are not able to make decisions with the same authority as others, have special needs when making decisions about their personal healthcare, including surgery.
== Global surgery ==
Global surgery has been defined as "the multidisciplinary enterprise of providing improved and equitable surgical care to the world's population, with its core belief as the issues of need, access and quality". Halfdan T. Mahler, the 3rd Director-General of the World Health Organization (WHO), first brought attention to the disparities in surgery and surgical care in 1980 when he stated in his address to the World Congress of the International College of Surgeons: "the vast majority of the world's population has no access whatsoever to skilled surgical care and little is being done to find a solution." As such, surgical care globally has been described as the "neglected stepchild of global health", a term coined by Paul Farmer to highlight the urgent need for further work in this area. Furthermore, Jim Yong Kim, the former President of the World Bank, proclaimed in 2014 that "surgery is an indivisible, indispensable part of health care and of progress towards universal health coverage."
In 2015, the Lancet Commission on Global Surgery (LCoGS) published the landmark report titled "Global Surgery 2030: evidence and solutions for achieving health, welfare, and economic development", describing the large, pre-existing burden of surgical diseases in low- and middle-income countries (LMICs) and future directions for increasing universal access to safe surgery by the year 2030. The Commission highlighted that about 5 billion people lack access to safe and affordable surgical and anesthesia care and 143 million additional procedures were needed every year to prevent further morbidity and mortality from treatable surgical conditions as well as a $12.3 trillion loss in economic productivity by the year 2030. This was especially true in the poorest countries, which account for over one-third of the population but only 3.5% of all surgeries that occur worldwide. It emphasized the need to significantly improve the capacity for Bellwether procedures – laparotomy, caesarean section, open fracture care – which are considered a minimum level of care that first-level hospitals should be able to provide in order to capture the most basic emergency surgical care. In terms of the financial impact on the patients, the lack of adequate surgical and anesthesia care has resulted in 33 million individuals every year facing catastrophic health expenditure – the out-of-pocket healthcare cost exceeding 40% of a given household's income.
In alignment with the LCoGS call for action, the World Health Assembly adopted the resolution WHA68.15 in 2015 that stated, "Strengthening emergency and essential surgical care and anesthesia as a component of universal health coverage." This not only mandated the WHO to prioritize strengthening the surgical and anesthesia care globally, but also led to governments of the member states recognizing the urgent need for increasing capacity in surgery and anesthesia. Additionally, the third edition of Disease Control Priorities (DCP3), published in 2015 by the World Bank, declared surgery as essential and featured an entire volume dedicated to building surgical capacity.
Data from WHO and the World Bank indicate that scaling up infrastructure to enable access to surgical care in regions where it is currently limited or non-existent is a low-cost measure relative to the significant morbidity and mortality caused by lack of surgical treatment. In fact, a systematic review found that the cost-effectiveness ratio – dollars spent per DALY averted – for surgical interventions is on par with or exceeds those of major public health interventions such as oral rehydration therapy, breastfeeding promotion, and even HIV/AIDS antiretroviral therapy. This finding challenged the common misconception that surgical care is a financially prohibitive endeavor not worth pursuing in LMICs.
A key policy framework that arose from this renewed global commitment towards surgical care worldwide is the National Surgical Obstetric and Anesthesia Plan (NSOAP). NSOAP focuses on policy-to-action capacity building for surgical care with tangible steps as follows: (1) analysis of baseline indicators, (2) partnership with local champions, (3) broad stakeholder engagement, (4) consensus building and synthesis of ideas, (5) language refinement, (6) costing, (7) dissemination, and (8) implementation. This approach has been widely adopted and has served as guiding principles between international collaborators and local institutions and governments. Successful implementations have allowed for sustainability in terms of long-term monitoring, quality improvement, and continued political and financial support.
== Human rights ==
Access to surgical care is increasingly recognized as an integral aspect of healthcare and therefore is evolving into a normative derivation of the human right to health. The ICESCR Articles 12.1 and 12.2 define the human right to health as "the right of everyone to the enjoyment of the highest attainable standard of physical and mental health". In August 2000, the UN Committee on Economic, Social and Cultural Rights (CESCR) interpreted this to mean the "right to the enjoyment of a variety of facilities, goods, services, and conditions necessary for the realization of the highest attainable health". Surgical care can thereby be viewed as a positive right – an entitlement to protective healthcare.
Woven through the International Human and Health Rights literature is the right to be free from surgical disease. The 1966 ICESCR Article 12.2a described the need for "provision for the reduction of the stillbirth-rate and of infant mortality and for the healthy development of the child", which was subsequently interpreted to mean "requiring measures to improve… emergency obstetric services". Article 12.2d of the ICESCR stipulates the need for "the creation of conditions which would assure to all medical service and medical attention in the event of sickness", and is interpreted in the 2000 comment to include timely access to "basic preventative, curative services… for appropriate treatment of injury and disability". Obstetric care shares close ties with reproductive rights, which include access to reproductive health.
Surgeons and public health advocates, such as Kelly McQueen, have described surgery as "Integral to the right to health". This is reflected in the establishment of the WHO Global Initiative for Emergency and Essential Surgical Care in 2005, the 2013 formation of the Lancet Commission for Global Surgery, the 2015 World Bank Publication of Volume 1 of its Disease Control Priorities Project "Essential Surgery", and the 2015 World Health Assembly 68.15 passing of the Resolution for Strengthening Emergency and Essential Surgical Care and Anesthesia as a Component of Universal Health Coverage. The Lancet Commission for Global Surgery outlined the need for access to "available, affordable, timely and safe" surgical and anesthesia care; dimensions paralleled in ICESCR General Comment No. 14, which similarly outlines need for available, accessible, affordable and timely healthcare.
== History ==
=== Trepanation ===
Surgical treatments date back to the prehistoric era. The oldest for which there is evidence is trepanation, in which a hole is drilled or scraped into the skull, thus exposing the dura mater in order to treat health problems related to intracranial pressure.
=== Ancient Egypt ===
Prehistoric surgical techniques are seen in Ancient Egypt, where a mandible dated to approximately 2650 BC shows two perforations just below the root of the first molar, indicating the draining of an abscessed tooth. Surgical texts from ancient Egypt date back about 3,500 years. Surgical operations were performed by priests who specialized in medical treatments similar to those of today, and sutures were used to close wounds. Infections were treated with honey.
=== India ===
9,000-year-old skeletal remains of a prehistoric individual from the Indus River valley show evidence of teeth having been drilled. Sushruta Samhita is one of the oldest known surgical texts and its period is usually placed in the first millennium BCE. It describes in detail the examination, diagnosis, treatment, and prognosis of numerous ailments, as well as procedures for various forms of cosmetic surgery, plastic surgery and rhinoplasty.
=== Sri Lanka ===
In 1982, archaeologists excavating the ruins of the ancient site called 'Alahana Pirivena', situated in Polonnaruwa, uncovered the remains of an ancient hospital. The hospital building was 147.5 feet in width and 109.2 feet in length. Among the items discovered at the site were instruments used for complex surgeries, including forceps, scissors, probes, lancets, and scalpels. The instruments discovered may be dated to the 11th century AD.
=== Ancient and Medieval Greece ===
In ancient Greece, temples dedicated to the healer-god Asclepius, known as Asclepieia (Greek: Ασκληπιεία, sing. Asclepieion Ασκληπιείον), functioned as centers of medical advice, prognosis, and healing. In the Asclepieion of Epidaurus, some of the surgical cures listed, such as the opening of an abdominal abscess or the removal of traumatic foreign material, are realistic enough to have taken place. The Greek Galen was one of the greatest surgeons of the ancient world and performed many audacious operations – including brain and eye surgery – that were not tried again for almost two millennia. Hippocrates stated in the oath (c. 400 BCE) "I will not use the knife, even upon those suffering from stones, but I will leave this to those who are trained in this craft."
Researchers from Adelphi University discovered at Paliokastro on Thasos the skeletal remains of ten individuals, four women and six men, who were buried between the fourth and seventh centuries A.D. Their bones illuminated their physical activities, traumas, and even a complex form of brain surgery. According to the researchers: "The very serious trauma cases sustained by both males and females had been treated surgically or orthopedically by a very experienced physician/surgeon with great training in trauma care. We believe it to have been a military physician". The researchers were impressed by the complexity of the brain surgical operation.
In 1991, at the Polystylon fort in Greece, researchers discovered the head of a 14th-century Byzantine warrior. Analysis of the lower jaw revealed that surgery had been performed while the warrior was alive: the jaw had been badly fractured and was tied back together until it healed.
=== Islamic world ===
During the Islamic Golden Age, largely based upon Paul of Aegina's Pragmateia, the writings of Albucasis (Abu al-Qasim Khalaf ibn al-Abbas Al-Zahrawi), an Andalusian-Arab physician and scientist who practiced in the Zahra suburb of Córdoba, were influential. Al-Zahrawi specialized in curing disease by cauterization. He invented several surgical instruments for purposes such as inspection of the interior of the urethra and for removing foreign bodies from the throat, the ear, and other body organs. He was also the first to illustrate the various cannulae and to treat warts with an iron tube and caustic metal as a boring instrument. He describes what is thought to be the first attempt at reduction mammaplasty for the management of gynaecomastia and the first mastectomy to treat breast cancer. He is credited with the performance of the first thyroidectomy. Al-Zahrawi pioneered techniques of neurosurgery and neurological diagnosis, treating head injuries, skull fractures, spinal injuries, hydrocephalus, subdural effusions and headache. The first clinical description of an operative procedure for hydrocephalus was given by Al-Zahrawi, who clearly describes the evacuation of superficial intracranial fluid in hydrocephalic children.
=== Early modern Europe ===
In Europe, the demand grew for surgeons to formally study for many years before practicing; universities such as Montpellier, Padua and Bologna were particularly renowned. In the 12th century, Rogerius Salernitanus composed his Chirurgia, laying the foundation for modern Western surgical manuals. Barber-surgeons generally had a bad reputation that was not to improve until the development of academic surgery as a specialty of medicine, rather than an accessory field. Basic surgical principles for asepsis and the like are known as Halsted's principles.
There were some important advances to the art of surgery during this period. The professor of anatomy at the University of Padua, Andreas Vesalius, was a pivotal figure in the Renaissance transition from classical medicine and anatomy based on the works of Galen, to an empirical approach of 'hands-on' dissection. In his anatomical treatise De humani corporis fabrica, he exposed the many anatomical errors in Galen and advocated that all surgeons should train by engaging in practical dissections themselves.
The second figure of importance in this era was Ambroise Paré (sometimes spelled "Ambrose"), a French army surgeon from the 1530s until his death in 1590. The practice for cauterizing gunshot wounds on the battlefield had been to use boiling oil; an extremely dangerous and painful procedure. Paré began to employ a less irritating emollient, made of egg yolk, rose oil and turpentine. He also described more efficient techniques for the effective ligation of the blood vessels during an amputation.
=== Modern surgery ===
The discipline of surgery was put on a sound, scientific footing during the Age of Enlightenment in Europe. An important figure in this regard was the Scottish surgical scientist, John Hunter, generally regarded as the father of modern scientific surgery. He brought an empirical and experimental approach to the science and was renowned around Europe for the quality of his research and his written works. Hunter reconstructed surgical knowledge from scratch; refusing to rely on the testimonies of others, he conducted his own surgical experiments to determine the truth of the matter. To aid comparative analysis, he built up a collection of over 13,000 specimens of separate organ systems, from the simplest plants and animals to humans.
He greatly advanced knowledge of venereal disease and introduced many new techniques of surgery, including new methods for repairing damage to the Achilles tendon and a more effective method for applying ligature of the arteries in case of an aneurysm. He was also one of the first to understand the importance of pathology, the danger of the spread of infection and how the problem of inflammation of the wound, bone lesions and even tuberculosis often undid any benefit that was gained from the intervention. He consequently adopted the position that all surgical procedures should be used only as a last resort.
Other important 18th- and early 19th-century surgeons included Percival Pott (1713–1788), who described tuberculosis of the spine and first demonstrated that a cancer may be caused by an environmental carcinogen (he noticed a connection between chimney sweeps' exposure to soot and their high incidence of scrotal cancer). Astley Paston Cooper (1768–1841) first performed a successful ligation of the abdominal aorta, and James Syme (1799–1870) pioneered the Syme amputation for the ankle joint and successfully carried out the first hip disarticulation.
Modern pain control through anesthesia was discovered in the mid-19th century. Before the advent of anesthesia, surgery was a traumatically painful procedure and surgeons were encouraged to be as swift as possible to minimize patient suffering. This also meant that operations were largely restricted to amputations and external growth removals. Beginning in the 1840s, surgery began to change dramatically in character with the discovery of effective and practical anaesthetic chemicals such as ether, first used by the American surgeon Crawford Long, and chloroform, discovered by Scottish obstetrician James Young Simpson and later pioneered by John Snow, physician to Queen Victoria. In addition to relieving patient suffering, anaesthesia allowed more intricate operations in the internal regions of the human body. In addition, the discovery of muscle relaxants such as curare allowed for safer applications.
==== Infection and antisepsis ====
The introduction of anesthetics encouraged more surgery, which inadvertently caused more dangerous patient post-operative infections. The concept of infection was unknown until relatively modern times. The first progress in combating infection was made in 1847 by the Hungarian doctor Ignaz Semmelweis, who noticed that medical students fresh from the dissecting room were causing excess maternal deaths compared to midwives. Semmelweis, despite ridicule and opposition, introduced compulsory handwashing for everyone entering the maternal wards and was rewarded with a plunge in maternal and fetal deaths; however, the Royal Society dismissed his advice.
Until the pioneering work of British surgeon Joseph Lister in the 1860s, most medical men believed that chemical damage from exposures to bad air (see "miasma") was responsible for infections in wounds, and facilities for washing hands or a patient's wounds were not available. Lister became aware of the work of French chemist Louis Pasteur, who showed that rotting and fermentation could occur under anaerobic conditions if micro-organisms were present. Pasteur suggested three methods to eliminate the micro-organisms responsible for gangrene: filtration, exposure to heat, or exposure to chemical solutions. Lister confirmed Pasteur's conclusions with his own experiments and decided to use his findings to develop antiseptic techniques for wounds. As the first two methods suggested by Pasteur were inappropriate for the treatment of human tissue, Lister experimented with the third, spraying carbolic acid on his instruments. He found that this remarkably reduced the incidence of gangrene and he published his results in The Lancet. Later, on 9 August 1867, he read a paper before the British Medical Association in Dublin, on the Antiseptic Principle of the Practice of Surgery, which was reprinted in the British Medical Journal. His work was groundbreaking and laid the foundations for a rapid advance in infection control that saw modern antiseptic operating theatres widely used within 50 years.
Lister continued to develop improved methods of antisepsis and asepsis when he realised that infection could be better avoided by preventing bacteria from getting into wounds in the first place. This led to the rise of sterile surgery. Lister introduced the Steam Steriliser to sterilize equipment, instituted rigorous hand washing and later implemented the wearing of rubber gloves. These three crucial advances – the adoption of a scientific methodology toward surgical operations, the use of anaesthetic and the introduction of sterilised equipment – laid the groundwork for the modern invasive surgical techniques of today.
The use of X-rays as an important medical diagnostic tool began with their discovery in 1895 by German physicist Wilhelm Röntgen. He noticed that these rays could penetrate the skin, allowing the skeletal structure to be captured on a specially treated photographic plate.
== Surgical specialties ==
== Learned societies ==
== See also ==
=== List of Surgery-related fields ===
== Notes ==
== References ==
== Further reading ==
Bartolo, M., Bargellesi, S., Castioni, C. A., Intiso, D., Fontana, A., Copetti, M., Scarponi, F., Bonaiuti, D., & Intensive Care and Neurorehabilitation Italian Study Group (2017). Mobilization in early rehabilitation in intensive care unit patients with severe acquired brain injury: An observational study. Journal of rehabilitation medicine, 49(9), 715–722.
Ni, C.-yan, Wang, Z.-hong, Huang, Z.-ping, Zhou, H., Fu, L.-juan, Cai, H., Huang, X.-xuan, Yang, Y., Li, H.-fen, & Zhou, W.-ping. (2018). Early enforced mobilization after liver resection: A prospective randomized controlled trial. International Journal of Surgery, 54, 254–258.
Lei, Y. T., Xie, J. W., Huang, Q., Huang, W., & Pei, F. X. (2021). Benefits of early ambulation within 24 h after total knee arthroplasty: a multicenter retrospective cohort study in China. Military Medical Research, 8(1), 17.
Stethen, T. W., Ghazi, Y. A., Heidel, R. E., Daley, B. J., Barnes, L., Patterson, D., & McLoughlin, J. M. (2018). Walking to recovery: the effects of missed ambulation events on postsurgical recovery after bowel resection. Journal of gastrointestinal oncology, 9(5), 953–961.
Yakkanti, R. R., Miller, A. J., Smith, L. S., Feher, A. W., Mont, M. A., & Malkani, A. L. (2019). Impact of early mobilization on length of stay after primary total knee arthroplasty. Annals of translational medicine, 7(4), 69. | Wikipedia/Resection_(surgery) |
Computer-assisted surgery (CAS) represents a surgical concept and set of methods that use computer technology for surgical planning, and for guiding or performing surgical interventions. CAS is also known by the more or less synonymous terms computer-aided surgery, computer-assisted intervention, image-guided surgery, digital surgery and surgical navigation. CAS has been a leading factor in the development of robotic surgery.
== General principles ==
=== Creating a virtual image of the patient ===
The most important component of CAS is the development of an accurate model of the patient. This can be done through a number of medical imaging technologies, including CT, MRI, x-rays and ultrasound, among others. To generate this model, the anatomical region to be operated on has to be scanned and uploaded into the computer system. It is possible to employ a number of scanning methods, with the datasets combined through data fusion techniques. The final objective is the creation of a 3D dataset that reproduces the exact geometrical situation of the normal and pathological tissues and structures of that region. Of the available scanning methods, CT is preferred, because MRI data sets are known to have volumetric deformations that may lead to inaccuracies. An example dataset might consist of 180 CT slices, 1 mm apart, each having 512 by 512 pixels. The contrasts of the 3D dataset (with its tens of millions of voxels) provide the detail of soft vs hard tissue structures, and thus allow a computer to differentiate, and visually separate for a human, the different tissues and structures. The image data taken from a patient will often include intentional landmark features, in order to be able to later realign the virtual dataset against the actual patient during surgery. See patient registration.
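To give a sense of the data volumes involved, the following sketch allocates a volume with the dimensions quoted above (180 slices of 512 × 512 pixels, 1 mm apart) and reports its size. The 16-bit voxel type and the 0.5 mm in-plane pixel spacing are assumptions made only to keep the example concrete, not values taken from the text or from any particular scanner.

```python
import numpy as np

# Illustrative CT volume with the dimensions mentioned above: 180 slices,
# 512 x 512 pixels each, 1 mm slice spacing. The int16 voxel type and the
# 0.5 mm in-plane spacing are assumptions chosen only for illustration.
n_slices, rows, cols = 180, 512, 512
spacing_mm = (1.0, 0.5, 0.5)                 # (slice, row, column) spacing, assumed

volume = np.zeros((n_slices, rows, cols), dtype=np.int16)

n_voxels = volume.size                        # ~47 million voxels
size_mb = volume.nbytes / 1024 ** 2           # ~90 MB at 2 bytes per voxel
extent_mm = [n * s for n, s in zip(volume.shape, spacing_mm)]

print(f"{n_voxels:,} voxels, {size_mb:.0f} MB, physical extent {extent_mm} mm")
```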
=== Image analysis and processing ===
Image analysis involves the manipulation of the patient's 3D model to extract relevant information from the data. Using the differing contrast levels of the different tissues within the imagery, a model can, for example, be changed to show just hard structures such as bone, or to view the flow of arteries and veins through the brain.
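In practice, one simple way to separate hard from soft tissue is intensity thresholding of the CT data. The sketch below is a hedged illustration only: it assumes the voxel values are calibrated in Hounsfield units and uses roughly 300 HU as a bone cut-off, a typical textbook value rather than one specified in this article, and it runs on synthetic placeholder data rather than a real scan.

```python
import numpy as np

def extract_bone_mask(ct_volume: np.ndarray, threshold_hu: float = 300.0) -> np.ndarray:
    """Return a boolean mask of voxels bright enough to be treated as bone.

    Assumes ct_volume is calibrated in Hounsfield units; 300 HU is an
    approximate, commonly quoted bone threshold used here for illustration.
    """
    return ct_volume >= threshold_hu

# Synthetic placeholder data standing in for a real CT scan.
rng = np.random.default_rng(0)
fake_ct = rng.normal(loc=40.0, scale=300.0, size=(16, 64, 64))
bone = extract_bone_mask(fake_ct)
print(f"{bone.mean():.1%} of voxels classified as bone")
```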
=== Diagnostic, preoperative planning, surgical simulation ===
Using specialized software, the gathered dataset can be rendered as a virtual 3D model of the patient; this model can be easily manipulated by a surgeon to provide views from any angle and at any depth within the volume. Thus the surgeon can better assess the case and establish a more accurate diagnosis. Furthermore, the surgical intervention can be planned and simulated virtually, before actual surgery takes place (computer-aided surgical simulation [CASS]). Using dedicated software, the surgical robot is programmed to carry out the planned actions during the actual surgical intervention.
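A manipulable 3D rendering of this kind is often produced by extracting a surface mesh from the segmented volume. The sketch below illustrates the general idea with the marching cubes algorithm applied to a synthetic sphere standing in for an anatomical structure; it assumes the scikit-image library is available and is not meant to represent the software used by any particular planning system.

```python
import numpy as np
from skimage import measure  # assumes scikit-image is installed

# Synthetic binary volume (a sphere) standing in for a segmented structure.
z, y, x = np.mgrid[-32:32, -32:32, -32:32]
volume = (x ** 2 + y ** 2 + z ** 2 <= 20 ** 2).astype(np.float32)

# Marching cubes extracts a triangle mesh at the chosen iso-level; a 3D viewer
# (e.g. VTK or matplotlib's 3D tools) could then display it from any angle.
verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)
print(f"{len(verts)} vertices, {len(faces)} triangles")
```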
=== Surgical navigation ===
In computer-assisted surgery, the actual intervention is defined as surgical navigation. Using the surgical navigation system, the surgeon operates with special instruments that are tracked by the navigation system. The position of a tracked instrument in relation to the patient's anatomy is shown on images of the patient as the surgeon moves the instrument. The surgeon thus uses the system to 'navigate' the location of an instrument. The feedback the system provides on the instrument's location is particularly useful in situations where the surgeon cannot actually see the tip of the instrument, such as in minimally invasive surgeries.
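Showing a tracked instrument on the patient's images requires a transform from the tracker's coordinate system into the image coordinate system, typically estimated from paired landmarks (see patient registration above). The sketch below is a minimal illustration of that step using the standard SVD-based (Kabsch) rigid fit; the landmark coordinates and instrument-tip position are arbitrary placeholders, not data from any real navigation system.

```python
import numpy as np

def rigid_registration(tracker_pts: np.ndarray, image_pts: np.ndarray):
    """Least-squares rigid transform (SVD/Kabsch) mapping tracker -> image space.

    Both inputs are (N, 3) arrays of corresponding landmark positions.
    Returns rotation R and translation t such that image ~= R @ tracker + t.
    """
    ct, ci = tracker_pts.mean(axis=0), image_pts.mean(axis=0)
    H = (tracker_pts - ct).T @ (image_pts - ci)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = ci - R @ ct
    return R, t

# Arbitrary placeholder landmarks measured in both coordinate systems.
tracker = np.array([[0, 0, 0], [100, 0, 0], [0, 100, 0], [0, 0, 100]], dtype=float)
true_R = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)
image = tracker @ true_R.T + np.array([5.0, -3.0, 12.0])

R, t = rigid_registration(tracker, image)
tip_tracker = np.array([25.0, 40.0, 10.0])   # tracked instrument tip (placeholder)
tip_image = R @ tip_tracker + t              # position displayed on the patient images
print(np.round(tip_image, 2))
```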
=== Robotic surgery ===
Robotic surgery is a term used for correlated actions of a surgeon and a surgical robot (that has been programmed to carry out certain actions during the preoperative planning procedure). A surgical robot is a mechanical device (generally looking like a robotic arm) that is computer-controlled.
Robotic surgery can be divided into three types, depending on the degree of surgeon interaction during the procedure: supervisory-controlled, telesurgical, and shared-control. In a supervisory-controlled system, the procedure is executed solely by the robot, which will perform the pre-programmed actions. A telesurgical system, also known as remote surgery, requires the surgeon to manipulate the robotic arms during the procedure rather than allowing the robotic arms to work from a predetermined program. With shared-control systems, the surgeon carries out the procedure with the use of a robot that offers steady-hand manipulations of the instrument. In most robots, the working mode can be chosen for each separate intervention, depending on the surgical complexity and the particularities of the case.
== Applications ==
Computer-assisted surgery is the beginning of a revolution in surgery. It already makes a great difference in high-precision surgical domains, but it is also used in standard surgical procedures.
=== Computer-assisted neurosurgery ===
Telemanipulators were used for the first time in neurosurgery in the 1980s. This allowed a greater development in brain microsurgery (compensating for the surgeon's physiological tremor by 10-fold) and increased the accuracy and precision of the intervention. It also opened a new gate to minimally invasive brain surgery, further reducing the risk of post-surgical morbidity by avoiding accidental damage to adjacent centers.
Computer-assisted neurosurgery also includes spinal procedures using navigation and robotics systems. Current navigation systems available include Medtronic StealthStation, BrainLab, 7D Surgical, Stryker, and Zeta Surgical Zeta; current robotics systems available include Mazor Renaissance, MazorX, Globus Excelsius GPS, and Brainlab Cirq.
=== Computer-assisted oral and maxillofacial surgery ===
Bone segment navigation is the modern surgical approach in orthognathic surgery (correction of the anomalies of the jaws and skull), in temporo-mandibular joint (TMJ) surgery, or in the reconstruction of the mid-face and orbit.
It is also used in implantology, where the available bone can be seen and the position, angulation and depth of the implants can be simulated before the surgery. During the operation, the surgeon is guided visually and by sound alerts. IGI (Image Guided Implantology) is one of the navigation systems that uses this technology.
==== Computer Assisted Implant Surgery (CAIS) ====
New therapeutic concepts such as guided surgery are being developed and applied in the placement of dental implants. Guided surgery in the field of implant dentistry is currently described as "Computer Assisted Implant Surgery" (CAIS), which presently encompasses three distinct technologies: static, dynamic and robotic. Static CAIS uses prefabricated guides to direct osteotomy and implant placement, dynamic CAIS is based on real-time tracking of the drill's position through optical technology, while robotic CAIS includes implant placement by an autonomous robotic arm.
The prosthetic rehabilitation is also planned and performed in parallel with the surgical procedures. The planning steps are central and are carried out in cooperation between the surgeon, the dentist and the dental technician. Patients who are edentulous in one or both jaws benefit, as the treatment time is reduced.
Regarding the edentulous patients, conventional denture support is often compromised due to moderate bone atrophy, even if the dentures are constructed based on correct anatomic morphology.
Using cone beam computed tomography, the patient and the existing prosthesis are scanned. The prosthesis alone is also scanned. Glass pearls of defined diameter are placed in the prosthesis and used as reference points for the upcoming planning. The resulting data are processed and the position of the implants is determined. The surgeon, using specially developed software, plans the implants based on prosthetic concepts, considering the anatomic morphology. After the planning of the surgical part is completed, a CAD/CAM surgical guide for implant placement is constructed. The mucosa-supported surgical splint ensures the exact placement of the implants in the patient. In parallel with this step, the new implant-supported prosthesis is constructed.
The dental technician, using the data resulting from the previous scans, manufactures a model representing the situation after the implant placement. The prosthetic compounds, abutments, are already prefabricated. The length and the inclination can be chosen. The abutments are connected to the model at a position in consideration of the prosthetic situation. The exact position of the abutments is registered. The dental technician can now manufacture the prosthesis.
The fit of the surgical splint is verified clinically. After that, the splint is attached using a three-point support pin system. Prior to the attachment, irrigation with a chemical disinfectant is advised. The pins are driven through defined sheaths from the vestibular to the oral side of the jaw. Ligament anatomy should be considered, and if necessary decompensation can be achieved with minimal surgical interventions. The proper fit of the template is crucial and should be maintained throughout the whole treatment. Regardless of the mucosal resilience, a correct and stable attachment is achieved through the bone fixation.
The access to the jaw can now be achieved only through the sleeves embedded in the surgical template. Using specific burs through the sleeves, the mucosa is removed. Every bur used carries a sleeve compatible with the sleeves in the template, which ensures that the final position is achieved while preventing any further advance into the alveolar ridge. The remaining procedure is very similar to traditional implant placement: the pilot hole is drilled and then expanded. With the aid of the splint, the implants are finally placed. After that, the splint can be removed.
With the aid of a registration template, the abutments can be attached and connected to the implants at the defined position. At least two abutments should be connected simultaneously to avoid any discrepancy. An important advantage of this technique is the parallel positioning of the abutments. A radiological control is necessary to verify the correct placement and connection of implant and abutment.
In a further step, abutments are covered by gold cone caps, which represent the secondary crowns. Where necessary, the transition of the gold cone caps to the mucosa can be isolated with rubber dam rings.
The new prosthesis corresponds to a conventional total prosthesis, but its base contains cavities so that the secondary crowns can be incorporated. The prosthesis is checked at the terminal position and corrected if needed. The cavities are filled with a self-curing cement and the prosthesis is placed in the terminal position. After the self-curing process, the gold caps are definitively cemented in the prosthesis cavities and the prosthesis can be detached. Excess cement is removed, and some corrections such as polishing or under-filling around the secondary crowns may be necessary.
The new prosthesis is fitted using a construction of telescope double cone crowns. At the end position, the prosthesis buttons down on the abutments to ensure an adequate hold.
In the same sitting, the patient receives the implants and the prosthesis. An interim prosthesis is not necessary. The extent of the surgery is kept to a minimum. Owing to the application of the splint, reflection of soft tissues is not needed. The patient experiences less bleeding, swelling and discomfort. Complications such as injury to neighbouring structures are also avoided.
Using 3D imaging during the planning phase, communication between the surgeon, dentist and dental technician is well supported, and any problems can easily be detected and eliminated. Each specialist accompanies the whole treatment and can interact with the others. As the end result is already planned and all surgical intervention is carried out according to the initial plan, the possibility of any deviation is kept to a minimum. Given the effectiveness of the initial planning, the whole treatment duration is shorter than with other treatment procedures.
=== Computer-assisted ENT surgery ===
Image-guided surgery and CAS in ENT commonly consists of navigating preoperative image data such as CT or cone beam CT to assist with locating or avoiding anatomically important structures such as the optic nerve or the opening to the frontal sinus. For use in middle-ear surgery there has been some application of robotic surgery due to the requirement for high-precision actions.
=== Computer-assisted orthopedic surgery (CAOS) ===
The application of robotic surgery is widespread in orthopedics, especially in routine interventions, like total hip replacement or pedicle screw insertion during spinal fusion. It is also useful in pre-planning and guiding the correct anatomical position of displaced bone fragments in fractures, allowing a good fixation by osteosynthesis, especially for malrotated bones. Early CAOS systems include the HipNav, OrthoPilot, and Praxim. Recently, mini-optical navigation tools called Intellijoint HIP have been developed for hip arthroplasty procedures.
=== Computer-assisted visceral surgery ===
With the advent of computer-assisted surgery, great progress has been made in general surgery towards minimally invasive approaches. Laparoscopy in abdominal and gynecologic surgery is one of the beneficiaries, allowing surgical robots to perform routine operations such as cholecystectomies, or even hysterectomies. In cardiac surgery, shared-control systems can perform mitral valve replacement or ventricular pacing through small thoracotomies. In urology, surgical robots have contributed to laparoscopic approaches for pyeloplasty, nephrectomy and prostatic interventions.
=== Computer-assisted cardiac interventions ===
Applications include atrial fibrillation and cardiac resynchronization therapy. Pre-operative MRI or CT is used to plan the procedure. Pre-operative images, models or planning information can be registered to intra-operative fluoroscopic image to guide procedures.
=== Computer-assisted radiosurgery ===
Radiosurgery is also incorporating advanced robotic systems. CyberKnife is such a system: it has a lightweight linear accelerator mounted on a robotic arm. It is guided towards tumor processes using the skeletal structures as a reference system (Stereotactic Radiosurgery System). During the procedure, real-time X-ray imaging is used to accurately position the device before delivering the radiation beam. The robot can compensate for respiratory motion of the tumor in real time.
== Advantages ==
CAS starts with the premise of a much better visualization of the operative field, thus allowing a more accurate preoperative diagnosis and well-defined surgical planning in a preoperative virtual environment. This way, the surgeon can easily assess most of the surgical difficulties and risks and have a clear idea about how to optimize the surgical approach and decrease surgical morbidity. During the operation, computer guidance improves the geometrical accuracy of the surgical gestures and also reduces redundancy in the surgeon's acts. This significantly improves ergonomics in the operating theatre, decreases the risk of surgical errors, reduces the operating time and improves the surgical outcome.
== Disadvantages ==
There are several disadvantages of computer-assisted surgery. Many systems cost millions of dollars, making them a large investment even for big hospitals. Some believe that improvements in technology, such as haptic feedback, increased processor speeds, and more complex and capable software, will increase the cost of these systems. Another disadvantage is the size of the systems: they have relatively large footprints, an important drawback in today's already-crowded operating rooms, and it may be difficult for both the surgical team and the robot to fit into the operating room.
== See also ==
Advanced Simulation Library is a hardware accelerated multiphysics simulation software
== References ==
== External links ==
Media related to Computer assisted surgery at Wikimedia Commons | Wikipedia/Computer-assisted_surgery |
Stereolithography (SLA or SL; also known as vat photopolymerisation, optical fabrication, photo-solidification, or resin printing) is a form of 3D printing technology used for creating models, prototypes, patterns, and production parts in a layer by layer fashion using photochemical processes by which light causes chemical monomers and oligomers to cross-link together to form polymers. Those polymers then make up the body of a three-dimensional solid. Research in the area had been conducted during the 1970s, but the term was coined by Chuck Hull in 1984 when he applied for a patent on the process, which was granted in 1986. Stereolithography can be used to create prototypes for products in development, medical models, and computer hardware, as well as in many other applications. While stereolithography is fast and can produce almost any design, it can be expensive.
== History ==
Stereolithography or "SLA" printing is an early and widely used 3D printing technology. In the early 1980s, Japanese researcher Hideo Kodama first invented the modern layered approach to stereolithography by using ultraviolet light to cure photosensitive polymers. In 1984, just before Chuck Hull filed his own patent, Alain Le Mehaute, Olivier de Witte and Jean Claude André filed a patent for the stereolithography process. The French inventors' patent application was abandoned by the French General Electric Company (now Alcatel-Alsthom) and CILAS (The Laser Consortium). Le Mehaute believes that the abandonment reflects a problem with innovation in France.
The term “stereolithography” (Greek: stereo-solid and lithography) was coined in 1984 by Chuck Hull when he filed his patent for the process. Hull patented stereolithography as a method of creating 3D objects by successively "printing" thin layers of an object using a medium curable by ultraviolet light, starting from the bottom layer to the top layer. Hull's patent described a concentrated beam of ultraviolet light focused onto the surface of a vat filled with a liquid photopolymer. The beam is focused onto the surface of the liquid photopolymer, creating each layer of the desired 3D object by means of crosslinking (generation of intermolecular bonds in polymers). It was invented with the intent of allowing engineers to create prototypes of their designs in a more time effective manner. After the patent was granted in 1986, Hull co-founded the world's first 3D printing company, 3D Systems, to commercialize it.
Stereolithography's success in the automotive industry allowed 3D printing to achieve industry status and the technology continues to find innovative uses in many fields of study. Attempts have been made to construct mathematical models of stereolithography processes and to design algorithms to determine whether a proposed object may be constructed using 3D printing.
== Technology ==
Stereolithography is an additive manufacturing process that, in its most common form, works by focusing an ultraviolet (UV) laser on to a vat of photopolymer resin. With the help of computer aided manufacturing or computer-aided design (CAM/CAD) software, the UV laser is used to draw a pre-programmed design or shape on to the surface of the photopolymer vat. Photopolymers are sensitive to ultraviolet light, so the resin is photochemically solidified and forms a single layer of the desired 3D object. Then, the build platform lowers one layer and a blade recoats the top of the tank with resin. This process is repeated for each layer of the design until the 3D object is complete. Completed parts must be washed with a solvent to clean wet resin from their surfaces.
It is also possible to print objects "bottom up" by using a vat with a transparent bottom and focusing the UV or deep-blue polymerization laser upward through the bottom of the vat. An inverted stereolithography machine starts a print by lowering the build platform to touch the bottom of the resin-filled vat, then moving upward the height of one layer. The UV laser then writes the bottom-most layer of the desired part through the transparent vat bottom. Then the vat is "rocked", flexing and peeling the bottom of the vat away from the hardened photopolymer; the hardened material detaches from the bottom of the vat and stays attached to the rising build platform, and new liquid photopolymer flows in from the edges of the partially built part. The UV laser then writes the second-from-bottom layer and repeats the process. An advantage of this bottom-up mode is that the build volume can be much bigger than the vat itself, and only enough photopolymer is needed to keep the bottom of the build vat continuously full of photopolymer. This approach is typical of desktop SLA printers, while the right-side-up approach is more common in industrial systems.
Stereolithography requires the use of supporting structures which attach to the elevator platform to prevent deflection due to gravity, resist lateral pressure from the resin-filled blade, or retain newly created sections during the "vat rocking" of bottom up printing. Supports are typically created automatically during the preparation of CAD models and can also be made manually. In either situation, the supports must be removed manually after printing.
Other forms of stereolithography build each layer by LCD masking, or using a DLP projector.
== Materials ==
The liquid materials used for SLA printing are commonly referred to as "resins" and are thermoset polymers. A wide variety of resins are commercially available, and it is also possible to use homemade resins, for example to test different compositions. Material properties vary according to formulation configurations: "materials can be soft or hard, heavily filled with secondary materials like glass and ceramic, or imbued with mechanical properties like high heat deflection temperature or impact resistance". Recently, some studies have tested the possibility of using green or reusable materials to produce "sustainable" resins. The resins can be classified into the following categories:
Standard resins, for general prototyping
Engineering resins, for specific mechanical and thermal properties
Dental and medical resins, for biocompatibility certifications
Castable resins, for zero ash-content after burnout
Biomaterial resins, formulated as aqueous solutions of synthetic polymers like polyethylene glycol, or biological polymers such as gelatin, dextran, or hyaluronic acid.
== Uses ==
=== Medical modeling ===
Stereolithographic models have been used in medicine since the 1990s, for creating accurate 3D models of various anatomical regions of a patient, based on data from computer scans. Medical modelling involves first acquiring a CT, MRI, or other scan. This data consists of a series of cross sectional images of the human anatomy. In these images different tissues show up as different levels of grey. Selecting a range of grey values enables specific tissues to be isolated. A region of interest is then selected and all the pixels connected to the target point within that grey value range are selected. This enables a specific organ to be selected. This process is referred to as segmentation. The segmented data may then be translated into a format suitable for stereolithography. While stereolithography is normally accurate, the accuracy of a medical model depends on many factors, especially the operator performing the segmentation correctly. Errors are possible when making medical models using stereolithography, but these can be avoided with practice and well-trained operators.
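A minimal sketch of the segmentation step just described, assuming a grey-value window and a seed point chosen by the operator; the window limits, seed location, and synthetic volume are illustrative, not taken from a clinical workflow.

```python
import numpy as np
from scipy import ndimage

def segment_organ(volume, lo, hi, seed):
    """Keep only the voxels in the grey-value window that are connected to the seed."""
    in_range = (volume >= lo) & (volume <= hi)   # grey-value window
    labels, _ = ndimage.label(in_range)          # label connected components
    seed_label = labels[seed]
    if seed_label == 0:                          # seed fell outside the window
        raise ValueError("seed voxel is not inside the selected grey-value range")
    return labels == seed_label

# Synthetic volume: a spherical "organ" of grey value 150 in an empty background.
volume = np.zeros((32, 64, 64))
zz, yy, xx = np.ogrid[:32, :64, :64]
volume[(zz - 16) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2 < 10 ** 2] = 150.0

mask = segment_organ(volume, lo=100, hi=200, seed=(16, 32, 32))
print("segmented voxels:", int(mask.sum()))
```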
Stereolithographic models are used as an aid to diagnosis, preoperative planning and implant design and manufacture. This might involve planning and rehearsing osteotomies, for example. Surgeons use models to help plan surgeries but prosthetists and technologists also use models as an aid to the design and manufacture of custom-fitting implants. For instance, medical models created through stereolithography can be used to help in the construction of Cranioplasty plates.
In 2019, scientists at Rice University published an article in the journal Science, presenting soft hydrogel materials for stereolithography used in biological research applications.
=== Prototyping ===
Stereolithography is often used for prototyping parts. For a relatively low price, stereolithography can produce accurate prototypes, even of irregular shapes. Businesses can use those prototypes to assess the design of their product or as publicity for the final product.
== Advantages and disadvantages ==
=== Advantages ===
One of the advantages of stereolithography is its speed; functional parts can be manufactured within a day. The length of time it takes to produce a single part depends upon the complexity of the design and the size. Printing time can last anywhere from hours to more than a day. SLA printed parts, unlike those obtained from FFF/FDM, do not exhibit significant anisotropy (structural non-uniformity) and there's no visible layering pattern. The surface quality is, in general, superior. Prototypes and designs made with stereolithography are strong enough to be machined and can also be used to make master patterns for injection molding or various metal casting processes.
=== Disadvantages ===
Although stereolithography can be used to produce virtually any synthetic design, it is often costly (due to costlier machines compared to FFF, costlier resin and costly post-processing steps such as washing and curing), though the price is coming down. Since 2012, however, public interest in 3D printing has inspired the design of several consumer SLA machines which can cost considerably less.
Beginning in 2016, substitution of the SLA and DLP methods using a high resolution, high contrast LCD panel has brought prices down to below US$200. The layers are created in their entirety since the entire layer is displayed on the LCD screen and is exposed using UV LEDs that lie below. Resolutions of 0.01 mm are attainable.
Another disadvantage is that the photopolymers are sticky, messy, and need to be handled with care. Newly made parts need to be washed, further cured, and dried. The environmental impact of all these processes requires more study to be understood, but in general SLA technologies have not created any biodegradable or compostable forms of resin, while other 3-D printing methods offer some compostable PLA options. The choice of materials is limited compared to FFF, which can process virtually any thermoplastic.
== See also ==
Fused filament fabrication (FFF or FDM)
Selective laser sintering (SLS)
Thermoforming
Laminated object manufacturing (LOM)
.stl - file format
== References ==
== Sources ==
== External links ==
Rapid Prototyping and Stereolithography animation – Animation demonstrates stereolithography and the actions of an SL machine | Wikipedia/Stereolithography_(medicine) |
Natural orifice transluminal endoscopic surgery (NOTES) is a surgical technique whereby "scarless" abdominal operations can be performed with an endoscope passed through a natural orifice (mouth, urethra, anus, vagina, etc.) and then through an internal incision in the stomach, vagina, bladder or colon, thus avoiding any external incisions or scars. Memic's Hominis robotic system is the first and only FDA-authorized surgical robotic platform for NOTES procedures. The system is currently in use for transvaginal hysterectomies through the rectouterine pouch - the removal of the uterus, along with one or both of the fallopian tubes and ovaries, in cases where there is no cancer present - as well as for the removal of ovarian cysts.
== See also ==
Single port laparoscopy
== References ==
Tsin DA, Colombero LT, Lambeck J, Manolas P (2007). "Minilaparoscopy-assisted natural orifice surgery". JSLS. 11 (1): 24–9. PMC 3015810. PMID 17651552. | Wikipedia/Natural_orifice_transluminal_endoscopic_surgery |
Hernia repair is a surgical operation for the correction of a hernia—a bulging of internal organs or tissues through the wall that contains it. It can be of two different types: herniorrhaphy; or hernioplasty. This operation may be performed to correct hernias of the abdomen, groin, diaphragm, brain, or at the site of a previous operation. Hernia repair is often performed as an ambulatory procedure.
== Techniques ==
=== Inguinal hernia repair ===
The first differentiating factor in hernia repair is whether the surgery is done open, or laparoscopically. Open hernia repair is when an incision is made in the skin directly over the hernia. Laparoscopic hernia repair is when minimally invasive cameras and equipment are used and the hernia is repaired with only small incisions adjacent to the hernia. These techniques are similar to the techniques used in laparoscopic gallbladder surgery.
An operation in which the hernia sac is removed without any repair of the inguinal canal is described as a herniotomy. When herniotomy is combined with a reinforced repair of the posterior inguinal canal wall with autogenous (patient's own tissue) or heterogeneous material such as prolene mesh, it is termed hernioplasty as opposed to herniorrhaphy, in which no autogenous or heterogeneous material is used for reinforcement.
=== Stoppa procedure ===
The Stoppa procedure is a tension-free type of hernia repair. It is performed by wrapping the lower part of the parietal peritoneum with prosthetic mesh and placing it at a preperitoneal level over Fruchaud's myopectineal orifice. It was first described in 1975 by Rene Stoppa. This operation is also known as "giant prosthetic reinforcement of the visceral sac" (GPRVS).
This technique has met particular success in the repair of bilateral hernias, large scrotal hernias, and recurrent or rerecurrent hernias in which conventional repair is difficult and which carries a high morbidity and failure rate. The most recent reported recurrence rate (involving 230 patients with 420 hernias and a maximum of 8 years follow-up) was 0.71%. The totally extra-peritoneal repair (TEP) uses exactly the same principles as the Stoppa repair, except that it is performed laparoscopically.
=== Advancements in Hernia Repair ===
Robotic-assisted hernia repair has gained popularity due to its precision, smaller incisions, and quicker recovery times. Using robotic systems, surgeons have greater flexibility and control during the procedure. Although its use is still more expensive and requires specialized equipment, early studies suggest that robotic hernia repair may offer improved outcomes in complex cases, such as large or recurrent hernias.
== References ==
== External links ==
European Hernia Society guidelines on the treatment of inguinal hernia in adult patients.
American Hernia Society
Surgery Methods, Inguinal Hernia, Description, Comparison Archived 2011-06-29 at the Wayback Machine | Wikipedia/Hernia_surgery |
Minimally invasive spine surgery, also known as MISS, has no specific meaning or definition. It implies a lack of severe surgical invasion. The older style of open-spine surgery for a relatively small disc problem used to require a 5-6 inch incision and a month in the hospital. MISS techniques utilize more modern technology, advanced imaging techniques and special medical equipment to reduce tissue trauma, bleeding, radiation exposure, infection risk and hospital stays by minimizing the size of the incision. Modern endoscopic procedures (see below) can be done through a 2 to 5 mm skin opening. By contrast, procedures done with a microscope require skin openings of approximately one inch, or more.
MISS can be used to treat a number of spinal conditions such as degenerative disc disease, disc herniation, fractures, tumors, infections, instability, and deformity. It also makes spine surgery possible for patients who were previously considered too high-risk for traditional surgery due to previous medical history or the complexity of the condition.
== Methods ==
Traditionally, spine surgery has required surgeons to create a 5-6 inch incision down the affected portion of the spine and to pull back the tissue and muscle using retractors in order to reveal the bone. The wound itself takes a long time to heal; the aim of minimally invasive surgery is to reduce tissue trauma and the associated bleeding and risk of infection by minimizing the size of the incision.
Some minimally invasive spine surgery may be performed by a spinal neurosurgeon or an orthopedic surgeon and a trained medical team. Typically, they will begin the operation by delivering a type of anesthesia that numbs a particular part of the body in conjunction with sedation or simply give a general anesthesia that prevents pain and allows the patient to sleep throughout the surgery.
Next, the surgeon may begin taking continuous X-ray images in real time, a process called fluoroscopy, of the affected portion of the spine. This allows them to see what they're operating on, in real-time, throughout the surgery without creating a large incision.
At this point, the surgeon may begin performing the operation, by creating an incision in the skin above the affected portion of the spine and then using a device called an obturator to push the underlying tissue apart; the obturator is inside a tube, which is left behind after the obturator is removed, leaving a channel down to the spine. Small operating tools as well as cameras and a light are used through this tube. In other surgeries this is called a trocar; in spine surgery it is called a "tubular retractor."
The surgeon makes the necessary repairs to the spine, extracting affected disc material out through the tubular retractor and inserting medical devices, such as intervertebral spacers, rods, pedicle screws, facet screws, nucleus replacement devices, and artificial discs, through the retractor.
Robot-assisted surgery is another technique that is used occasionally in minimally invasive spine surgery.
When the procedure is done the tube is removed, and the wound is stitched, stapled, or glued shut.
== Specific procedures ==
There are many spinal procedures that make use of minimally invasive techniques. They can involve cutting away tissue (discectomy), fixing adjacent vertebrae to one another (spinal fusion), and replacing bone or other tissue. The main philosophy is to minimize blood loss and tissue damage and to preserve bone and tissue architecture. The name of the procedure often includes the region of the spine that is operated on, including the cervical spine, thoracic spine, and lumbar spine. These procedures include:
Anterior cervical discectomy
Artificial disc replacement or total disc replacement
Epidural lysis of adhesions, also known as percutaneous adhesiolysis or the Racz procedure
Laminectomy
Laminotomy
OLLIF Oblique lateral lumbar inter body fusion
Percutaneous vertebroplasty, a.k.a. Kyphoplasty
Endoscopic Discectomy
Percutaneous Stenoscopic Lumbar Decompression
Small or ultra-small endoscopic discectomy (called Nano Endoscopic Discectomy or Endoscopic Transforaminal Lumbar Discectomy and Reconfiguration) does not involve bone removal, unlike laminectomy or laminotomy. These procedures therefore do not cause post-laminectomy syndrome (failed back syndrome).
== Risks and benefits ==
Risks include damage to nerves or muscles, a cerebrospinal fluid leak, and typical surgical risks, such as infection or a failure to resolve the condition that prompted the surgery.
Claims are made that MISS has better outcomes than open surgery, with fewer complications and shorter hospital stays, but the data supporting those claims are inconclusive.
== History ==
Humans have been trying to treat spinal pain for at least 5,000 years. The first evidence of spine surgery appeared in Egyptian mummies buried in 3,000 BC. However, Hippocrates is often credited with being the father of spine surgery due to the extensive amount of writing and proposed treatments he produced on the topic. The first operative spine surgery is credited to Paul of Aegina who lived during the 7th century.
However, only within the last 50 years have advances in digital fluoroscopy, image guidance, endoscopy and minimally invasive surgical tools allowed minimally invasive spine surgery to rise to the forefront of spinal procedures.
== References == | Wikipedia/Minimally_Invasive_Spine_Surgery |
A yaw-rate sensor is a gyroscopic device that measures a vehicle's yaw rate, its angular velocity around its vertical axis. The angle between the vehicle's heading and velocity is called its slip angle, which is related to the yaw rate.
== Types ==
There are two types of yaw-rate sensors: piezoelectric and micromechanical.
In the piezoelectric type, the sensor is a tuning fork-shaped structure with four piezoelectric elements, two on top and two below. When the slip angle is zero (no slip), the upper elements produce no voltage as no Coriolis force acts on them. But when cornering, the rotational movement causes the upper part of the tuning fork to leave the oscillatory plane, creating an alternating voltage (and thus an alternating current) proportional to the yaw rate and oscillatory speed. The output signal's sign depends on the direction of rotation.
In the micromechanical type, the Coriolis acceleration is measured by a micromechanical capacitive acceleration sensor placed on an oscillating element. This acceleration is proportional to the product of the yaw rate and oscillatory velocity, the latter of which is maintained electronically at a constant value.
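As a rough numerical illustration of that relationship (assuming the standard Coriolis factor of two, which the text only states as a proportionality), the yaw rate can be recovered from the measured acceleration and the known drive velocity:

```python
# Minimal sketch: Coriolis acceleration a_c = 2 * omega * v, so the yaw rate
# omega follows from the measured a_c and the electronically held drive velocity v.
# The numerical values are illustrative assumptions, not data for any real sensor.
def yaw_rate_from_coriolis(a_coriolis: float, v_oscillation: float) -> float:
    """Recover the yaw rate (rad/s) from the measured Coriolis acceleration."""
    return a_coriolis / (2.0 * v_oscillation)

# Example: a 0.02 m/s^2 Coriolis signal with a 0.1 m/s drive velocity
print(yaw_rate_from_coriolis(0.02, 0.1))  # 0.1 rad/s
```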
== Applications ==
Yaw rate sensors are used in aircraft and electronic stability control systems in cars.
== References ==
== See also ==
Attitude dynamics and control
Ship motions
Aircraft principal axes | Wikipedia/Yaw_rate_sensor |
In materials science, functionally graded materials (FGMs) are characterized by gradual variation in composition and structure over their volume, resulting in corresponding changes in the properties of the material. The materials can be designed for specific functions and applications. Various approaches based on bulk (particulate) processing, preform processing, layer processing and melt processing are used to fabricate functionally graded materials.
== History ==
The concept of FGM was first considered in Japan in 1984 during a space plane project, where a combination of materials would serve as a thermal barrier capable of withstanding a surface temperature of 2000 K and a temperature gradient of 1000 K across a 10 mm section. In recent years this concept has become more popular in Europe, particularly in Germany. A transregional collaborative research centre (SFB Transregio) has been funded since 2006 in order to exploit the potential of grading monomaterials, such as steel, aluminium and polypropylene, by using thermomechanically coupled manufacturing processes.
== General information ==
FGMs can vary in composition, in structure (for example, porosity), or in both, to produce the resulting gradient. The gradient can be categorized as either continuous or discontinuous, the latter exhibiting a stepwise gradient.
There are several examples of FGMs in nature, including bamboo and bone, which alter their microstructure to create a material property gradient. In biological materials, the gradients can be produced through changes in the chemical composition, structure, interfaces, and through the presence of gradients spanning multiple length scales. Specifically within the variation of chemical compositions, the manipulation of the mineralization, the presence of inorganic ions and biomolecules, and the level of hydration have all been known to cause gradients in plants and animals.
The basic structural units of FGMs are elements or material ingredients represented by maxel. The term maxel was introduced in 2005 by Rajeev Dwivedi and Radovan Kovacevic at Research Center for Advanced Manufacturing (RCAM). The attributes of maxel include the location and volume fraction of individual material components.
A maxel is also used in the context of the additive manufacturing processes (such as stereolithography, selective laser sintering, fused deposition modeling, etc.) to describe a physical voxel (a portmanteau of the words 'volume' and 'element'), which defines the build resolution of either a rapid prototyping or rapid manufacturing process, or the resolution of a design produced by such fabrication means.
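A minimal sketch, under assumed field names, of how a maxel could be represented in code as a location plus the volume fractions of its material components; this is an illustration of the idea, not a data structure defined in the cited work.

```python
from dataclasses import dataclass, field

@dataclass
class Maxel:
    """One material element: a position plus the volume fraction of each component."""
    x: float
    y: float
    z: float
    volume_fractions: dict = field(default_factory=dict)  # e.g. {"Ti": 0.7, "HA": 0.3}

m = Maxel(1.0, 2.0, 0.5, {"Ti": 0.7, "HA": 0.3})
print(m)
```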
The transition between the two materials can be approximated through either a power-law or an exponential-law relation:
Power law: {\displaystyle E=E_{o}z^{k}}, where {\displaystyle E_{o}} is the Young's modulus at the surface of the material, z is the depth from the surface, and k is a non-dimensional exponent ({\displaystyle 0<k<1}).
Exponential law: {\displaystyle E=E_{o}e^{\alpha z}}, where {\displaystyle \alpha <0} indicates a hard surface and {\displaystyle \alpha >0} indicates a soft surface.
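A minimal sketch evaluating both grading laws over a normalized depth; the surface modulus, exponent, and decay constant are illustrative assumptions, not data for any particular material.

```python
import numpy as np

def power_law(E0, z, k):
    return E0 * z**k                  # E = E_o * z^k, with 0 < k < 1

def exponential_law(E0, z, alpha):
    return E0 * np.exp(alpha * z)     # E = E_o * exp(alpha * z)

z = np.linspace(0.0, 1.0, 5)          # normalized depth from the surface
print(power_law(200e9, z, 0.5))
print(exponential_law(200e9, z, -1.0))  # alpha < 0: hard surface
```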
== Applications ==
There are many areas of application for FGMs. The concept is to make a composite material by varying the microstructure from one material to another with a specific gradient. This enables the material to combine the best properties of both constituents. Whether the goal is thermal or corrosion resistance, or a combination of malleability and toughness, the strengths of both materials may be used to avoid corrosion, fatigue, fracture and stress corrosion cracking.
There is a myriad of possible applications and industries interested in FGMs. They span from defense, looking at protective armor, to biomedical, investigating implants, to optoelectronics and energy.
The aircraft and aerospace industry and the computer circuit industry are very interested in the possibility of materials that can withstand very high thermal gradients. This is normally achieved by using a ceramic layer connected with a metallic layer.
The Air Vehicles Directorate has conducted a Quasi-static bending test results of functionally graded titanium/titanium boride test specimens which can be seen below. The test correlated to the finite element analysis (FEA) using a quadrilateral mesh with each element having its own structural and thermal properties.
Advanced Materials and Processes Strategic Research Programme (AMPSRA) have done analysis on producing a thermal barrier coating using Zr02 and NiCoCrAlY. Their results have proved successful but no results of the analytical model are published.
The rendition of the term that relates to the additive fabrication processes has its origins at the RMRG (Rapid Manufacturing Research Group) at Loughborough University in the United Kingdom. The term forms a part of a descriptive taxonomy of terms relating to various particulars of the additive CAD-CAM manufacturing processes, originally established as a part of the research conducted by architect Thomas Modeen into the application of the aforementioned techniques in the context of architecture.
Gradient of elastic modulus essentially changes the fracture toughness of adhesive contacts.
Additionally, there has been an increased focus on how to apply FGMs to biomedical applications, specifically dental and orthopedic implants. For example, bone is an FGM that exhibits a change in elasticity and other mechanical properties between the cortical and cancellous bone. It logically follows that FGMs for orthopedic implants would be ideal for mimicking the performance of bone. FGMs for biomedical applications have the potential benefit of preventing stress concentrations that could lead to biomechanical failure and of improving biocompatibility and biomechanical stability. FGMs in relation to orthopedic implants are particularly important as the common materials used (titanium, stainless steel, etc.) are stiffer than bone and thus pose a risk of creating abnormal physiological conditions that alter the stress concentration at the interface between the implant and the bone. If the implant is too stiff it risks causing bone resorption, while an implant that is too flexible can compromise stability at the bone-implant interface. Numerous FEM simulations have been carried out to understand the possible FGM compositions and mechanical gradients that could be implemented into different orthopedic implants, as the gradients and mechanical properties are highly geometry specific.
An example of an FGM for use in orthopedic implants is a carbon fiber-reinforced polymer matrix (CFRP) with yttria-stabilized zirconia (YSZ). Varying the amount of YSZ present as a filler in the material resulted in a flexural strength gradation ratio of 1.95. This high gradation ratio and overall high flexibility show promise for use as a supportive material in bone implants. There are quite a few FGMs being explored using hydroxyapatite (HA) due to its osteoconductivity, which assists with osseointegration of implants. However, HA exhibits lower fracture strength and toughness compared to bone, which requires it to be used in conjunction with other materials in implants. One study combined HA with alumina and zirconia via a spark plasma process to create an FGM that shows a mechanical gradient as well as good cellular adhesion and proliferation.
== Modeling and simulation ==
Numerical methods have been developed for modelling the mechanical response of FGMs, with the finite element method being the most popular one. Initially, the variation of material properties was introduced by means of rows (or columns) of homogeneous elements, leading to a discontinuous step-type variation in the mechanical properties. Later, Santare and Lambros developed functionally graded finite elements, where the mechanical property variation takes place at the element level. Martínez-Pañeda and Gallego extended this approach to commercial finite element software. Contact properties of FGM can be simulated using the Boundary Element Method (which can be applied both to non-adhesive and adhesive contacts). Molecular dynamics simulation has also been implemented to study functionally graded materials. M. Islam studied the mechanical and vibrational properties of functionally graded Cu-Ni nanowires using molecular dynamics simulation.
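The following sketch illustrates element-level property grading in the simplest possible setting, a 1-D bar split into elements whose Young's modulus follows a power law, with the tip elongation obtained by summing element compliances; it is not the Santare and Lambros graded element itself, and all values are illustrative assumptions.

```python
import numpy as np

L, A, F, N = 0.1, 1e-4, 1000.0, 20            # bar length (m), area (m^2), load (N), elements
E0, k = 200e9, 0.5                            # grading parameters (illustrative)

# Element-level modulus: each element gets its own E from the grading law,
# evaluated at the element mid-point (normalized depth from the surface).
z_mid = (np.arange(N) + 0.5) / N
E_elem = E0 * z_mid ** k

# Elongation of a bar of elements in series under an axial load F.
elongation = np.sum(F * (L / N) / (E_elem * A))
print(f"total elongation: {elongation:.3e} m")
```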
Mechanics of functionally graded material structures has been considered by many authors. Recently, a new micro-mechanical model was developed to calculate the effective elastic Young's modulus for graphene-reinforced plate composites. The model considers the average dimensions of the graphene nanoplates, the weight fraction, and the graphene/matrix ratio in the representative volume element. The dynamic behavior of this functionally graded polymer-based composite reinforced with graphene fillers is crucial for engineering applications. | Wikipedia/Functionally_graded_material |
Energy (from Ancient Greek ἐνέργεια (enérgeia) 'activity') is the quantitative property that is transferred to a body or to a physical system, recognizable in the performance of work and in the form of heat and light. Energy is a conserved quantity—the law of conservation of energy states that energy can be converted in form, but not created or destroyed. The unit of measurement for energy in the International System of Units (SI) is the joule (J).
Forms of energy include the kinetic energy of a moving object, the potential energy stored by an object (for instance due to its position in a field), the elastic energy stored in a solid object, chemical energy associated with chemical reactions, the radiant energy carried by electromagnetic radiation, the internal energy contained within a thermodynamic system, and rest energy associated with an object's rest mass. These are not mutually exclusive.
All living organisms constantly take in and release energy. The Earth's climate and ecosystems processes are driven primarily by radiant energy from the sun. The energy industry provides the energy required for human civilization to function, which it obtains from energy resources such as fossil fuels, nuclear fuel, and renewable energy.
== Forms ==
The total energy of a system can be subdivided and classified into potential energy, kinetic energy, or combinations of the two in various ways. Kinetic energy is determined by the movement of an object – or the composite motion of the object's components – while potential energy reflects the potential of an object to have motion, generally being based upon the object's position within a field or what is stored within the field itself.
While these two categories are sufficient to describe all forms of energy, it is often convenient to refer to particular combinations of potential and kinetic energy as its own form. For example, the sum of translational and rotational kinetic and potential energy within a system is referred to as mechanical energy, whereas nuclear energy refers to the combined potentials within an atomic nucleus from either the nuclear force or the weak force, among other examples.
== History ==
The word energy derives from the Ancient Greek: ἐνέργεια, romanized: energeia, lit. 'activity, operation', which possibly appears for the first time in the work of Aristotle in the 4th century BC. In contrast to the modern definition, energeia was a qualitative philosophical concept, broad enough to include ideas such as happiness and pleasure.
In the late 17th century, Gottfried Leibniz proposed the idea of the Latin: vis viva, or living force, which he defined as the product of the mass of an object and its velocity squared; he believed that total vis viva was conserved. To account for slowing due to friction, Leibniz theorized that thermal energy consisted of the motions of the constituent parts of matter, although it would be more than a century until this was generally accepted. The modern analog of this property, kinetic energy, differs from vis viva only by a factor of two. Writing in the early 18th century, Émilie du Châtelet proposed the concept of conservation of energy in the marginalia of her French language translation of Newton's Principia Mathematica, which represented the first formulation of a conserved measurable quantity that was distinct from momentum, and which would later be called "energy".
In 1807, Thomas Young was possibly the first to use the term "energy" instead of vis viva, in its modern sense. Gustave-Gaspard Coriolis described "kinetic energy" in 1829 in its modern sense, and in 1853, William Rankine coined the term "potential energy". The law of conservation of energy was also first postulated in the early 19th century, and applies to any isolated system. It was argued for some years whether heat was a physical substance, dubbed the caloric, or merely a physical quantity, such as momentum. In 1845 James Prescott Joule discovered the link between mechanical work and the generation of heat.
These developments led to the theory of conservation of energy, formalized largely by William Thomson (Lord Kelvin) as the field of thermodynamics. Thermodynamics aided the rapid development of explanations of chemical processes by Rudolf Clausius, Josiah Willard Gibbs, and Walther Nernst. It also led to a mathematical formulation of the concept of entropy by Clausius and to the introduction of laws of radiant energy by Jožef Stefan. According to Noether's theorem, the conservation of energy is a consequence of the fact that the laws of physics do not change over time. Thus, since 1918, theorists have understood that the law of conservation of energy is the direct mathematical consequence of the translational symmetry of the quantity conjugate to energy, namely time.
== Units of measure ==
In the International System of Units (SI), the unit of energy is the joule. It is a derived unit that is equal to the energy expended, or work done, in applying a force of one newton through a distance of one metre. However, energy can also be expressed in many other units not part of the SI, such as ergs, calories, British thermal units, kilowatt-hours and kilocalories, which require a conversion factor when expressed in SI units.
The SI unit of power, defined as energy per unit of time, is the watt, which is a joule per second. Thus, one joule is one watt-second, and 3600 joules equal one watt-hour. The CGS energy unit is the erg and the imperial and US customary unit is the foot pound. Other energy units such as the electronvolt, food calorie or thermodynamic kcal (based on the temperature change of water in a heating process), and BTU are used in specific areas of science and commerce.
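A small sketch collecting conversion factors for the units mentioned above (values rounded; the calorie here is the thermochemical calorie and the BTU the International Table BTU):

```python
# Conversion factors from each unit to SI joules.
TO_JOULE = {
    "J": 1.0,
    "erg": 1e-7,
    "cal": 4.184,             # thermochemical calorie
    "kcal": 4184.0,
    "Wh": 3600.0,
    "kWh": 3.6e6,
    "BTU": 1055.06,           # International Table BTU
    "eV": 1.602176634e-19,
}

def convert(value, from_unit, to_unit):
    return value * TO_JOULE[from_unit] / TO_JOULE[to_unit]

print(convert(1, "kWh", "J"))     # 3.6e6 J
print(convert(1, "BTU", "kcal"))  # about 0.252 kcal
```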
In 1843, English physicist James Prescott Joule, namesake of the unit of measure, discovered that the gravitational potential energy lost by a descending weight attached via a string was equal to the internal energy gained by the water through friction with the paddle.
== Scientific use ==
=== Classical mechanics ===
In classical mechanics, energy is a conceptually and mathematically useful property, as it is a conserved quantity. Several formulations of mechanics have been developed using energy as a core concept.
Work, a function of energy, is force times distance.
{\displaystyle W=\int _{C}\mathbf {F} \cdot \mathrm {d} \mathbf {s} }
This says that the work ({\displaystyle W}) is equal to the line integral of the force F along a path C; for details see the mechanical work article. Work and thus energy is frame dependent. For example, consider a ball being hit by a bat. In the center-of-mass reference frame, the bat does no work on the ball. But, in the reference frame of the person swinging the bat, considerable work is done on the ball.
The total energy of a system is sometimes called the Hamiltonian, after William Rowan Hamilton. The classical equations of motion can be written in terms of the Hamiltonian, even for highly complex or abstract systems. These classical equations have direct analogs in nonrelativistic quantum mechanics.
Another energy-related concept is called the Lagrangian, after Joseph-Louis Lagrange. This formalism is as fundamental as the Hamiltonian, and both can be used to derive the equations of motion or be derived from them. It was invented in the context of classical mechanics, but is generally useful in modern physics. The Lagrangian is defined as the kinetic energy minus the potential energy. Usually, the Lagrange formalism is mathematically more convenient than the Hamiltonian for non-conservative systems (such as systems with friction).
Noether's theorem (1918) states that any differentiable symmetry of the action of a physical system has a corresponding conservation law. Noether's theorem has become a fundamental tool of modern theoretical physics and the calculus of variations. A generalisation of the seminal formulations on constants of motion in Lagrangian and Hamiltonian mechanics (1788 and 1833, respectively), it does not apply to systems that cannot be modeled with a Lagrangian; for example, dissipative systems with continuous symmetries need not have a corresponding conservation law.
=== Chemistry ===
In the context of chemistry, energy is an attribute of a substance as a consequence of its atomic, molecular, or aggregate structure. Since a chemical transformation is accompanied by a change in one or more of these kinds of structure, it is usually accompanied by a decrease, and sometimes an increase, of the total energy of the substances involved. Some energy may be transferred between the surroundings and the reactants in the form of heat or light; thus the products of a reaction have sometimes more but usually less energy than the reactants. A reaction is said to be exothermic or exergonic if the final state is lower on the energy scale than the initial state; in the less common case of endothermic reactions the situation is the reverse.
Chemical reactions are usually not possible unless the reactants surmount an energy barrier known as the activation energy. The speed of a chemical reaction (at a given temperature T) is related to the activation energy E by the Boltzmann's population factor e−E/kT; that is, the probability of a molecule to have energy greater than or equal to E at a given temperature T. This exponential dependence of a reaction rate on temperature is known as the Arrhenius equation. The activation energy necessary for a chemical reaction can be provided in the form of thermal energy.
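A short sketch of the Boltzmann population factor quoted above, showing how strongly the fraction of sufficiently energetic molecules grows with temperature; the activation energy is an illustrative assumption.

```python
import math

k_B = 1.380649e-23        # Boltzmann constant, J/K
E_a = 8.0e-20             # illustrative activation energy per molecule, J (about 0.5 eV)

for T in (300.0, 350.0, 400.0):
    # Probability factor for a molecule to have energy >= E_a at temperature T.
    print(T, math.exp(-E_a / (k_B * T)))
```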
=== Biology ===
In biology, energy is an attribute of all biological systems, from the biosphere to the smallest living organism. Within an organism it is responsible for growth and development of a biological cell or organelle of a biological organism. Energy used in respiration is stored in substances such as carbohydrates (including sugars), lipids, and proteins stored by cells. In human terms, the human equivalent (H-e) (Human energy conversion) indicates, for a given amount of energy expenditure, the relative quantity of energy needed for human metabolism, using as a standard an average human energy expenditure of 6,900 kJ per day and a basal metabolic rate of 80 watts.
For example, if our bodies run (on average) at 80 watts, then a light bulb running at 100 watts is running at 1.25 human equivalents (100 ÷ 80) i.e. 1.25 H-e. For a difficult task of only a few seconds' duration, a person can put out thousands of watts, many times the 746 watts in one official horsepower. For tasks lasting a few minutes, a fit human can generate perhaps 1,000 watts. For an activity that must be sustained for an hour, output drops to around 300; for an activity kept up all day, 150 watts is about the maximum. The human equivalent assists understanding of energy flows in physical and biological systems by expressing energy units in human terms: it provides a "feel" for the use of a given amount of energy.
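A minimal sketch of the human-equivalent conversion, using the 80 W standard given in the text:

```python
BASAL_POWER_W = 80.0   # average human energy expenditure used as the standard

def human_equivalent(power_watts: float) -> float:
    return power_watts / BASAL_POWER_W

print(human_equivalent(100.0))   # a 100 W light bulb, about 1.25 H-e
print(human_equivalent(746.0))   # one horsepower, about 9.3 H-e
```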
Sunlight's radiant energy is also captured by plants as chemical potential energy in photosynthesis, when carbon dioxide and water (two low-energy compounds) are converted into carbohydrates, lipids, proteins and oxygen. Release of the energy stored during photosynthesis as heat or light may be triggered suddenly by a spark in a forest fire, or it may be made available more slowly for animal or human metabolism when organic molecules are ingested and catabolism is triggered by enzyme action.
All living creatures rely on an external source of energy to be able to grow and reproduce – radiant energy from the Sun in the case of green plants and chemical energy (in some form) in the case of animals. The daily 1500–2000 Calories (6–8 MJ) recommended for a human adult are taken as food molecules, mostly carbohydrates and fats, of which glucose (C6H12O6) and stearin (C57H110O6) are convenient examples. The food molecules are oxidized to carbon dioxide and water in the mitochondria
{\displaystyle {\ce {C6H12O6 + 6O2 -> 6CO2 + 6H2O}}}
{\displaystyle {\ce {C57H110O6 + (81 1/2) O2 -> 57CO2 + 55H2O}}}
and some of the energy is used to convert ADP into ATP:
The rest of the chemical energy of the carbohydrate or fat is converted into heat: the ATP is used as a sort of "energy currency", and some of the chemical energy it contains is used for other metabolism when ATP reacts with OH groups and eventually splits into ADP and phosphate (at each stage of a metabolic pathway, some chemical energy is converted into heat). Only a tiny fraction of the original chemical energy is used for work:
gain in kinetic energy of a sprinter during a 100 m race: 4 kJ
gain in gravitational potential energy of a 150 kg weight lifted through 2 metres: 3 kJ
daily food intake of a normal adult: 6–8 MJ
It would appear that living organisms are remarkably inefficient (in the physical sense) in their use of the energy they receive (chemical or radiant energy); most machines manage higher efficiencies. In growing organisms the energy that is converted to heat serves a vital purpose, as it allows the organism tissue to be highly ordered with regard to the molecules it is built from. The second law of thermodynamics states that energy (and matter) tends to become more evenly spread out across the universe: to concentrate energy (or matter) in one specific place, it is necessary to spread out a greater amount of energy (as heat) across the remainder of the universe ("the surroundings"). Simpler organisms can achieve higher energy efficiencies than more complex ones, but the complex organisms can occupy ecological niches that are not available to their simpler brethren. The conversion of a portion of the chemical energy to heat at each step in a metabolic pathway is the physical reason behind the pyramid of biomass observed in ecology. As an example, to take just the first step in the food chain: of the estimated 124.7 Pg/a of carbon that is fixed by photosynthesis, 64.3 Pg/a (52%) are used for the metabolism of green plants, i.e. reconverted into carbon dioxide and heat.
=== Earth sciences ===
In geology, continental drift, mountain ranges, volcanoes, and earthquakes are phenomena that can be explained in terms of energy transformations in the Earth's interior, while meteorological phenomena like wind, rain, hail, snow, lightning, tornadoes and hurricanes are all a result of energy transformations in our atmosphere brought about by solar energy.
Sunlight is the main input to Earth's energy budget which accounts for its temperature and climate stability. Sunlight may be stored as gravitational potential energy after it strikes the Earth, as (for example when) water evaporates from oceans and is deposited upon mountains (where, after being released at a hydroelectric dam, it can be used to drive turbines or generators to produce electricity). Sunlight also drives most weather phenomena, save a few exceptions, like those generated by volcanic events for example. An example of a solar-mediated weather event is a hurricane, which occurs when large unstable areas of warm ocean, heated over months, suddenly give up some of their thermal energy to power a few days of violent air movement.
In a slower process, radioactive decay of atoms in the core of the Earth releases heat. This thermal energy drives plate tectonics and may lift mountains, via orogenesis. This slow lifting represents a kind of gravitational potential energy storage of the thermal energy, which may later be transformed into active kinetic energy during landslides, after a triggering event. Earthquakes also release stored elastic potential energy in rocks, a store that has been produced ultimately from the same radioactive heat sources. Thus, according to present understanding, familiar events such as landslides and earthquakes release energy that has been stored as potential energy in the Earth's gravitational field or elastic strain (mechanical potential energy) in rocks. Prior to this, they represent release of energy that has been stored in heavy atoms since the collapse of long-destroyed supernova stars (which created these atoms).
=== Cosmology ===
In cosmology and astronomy the phenomena of stars, nova, supernova, quasars and gamma-ray bursts are the universe's highest-output energy transformations of matter. All stellar phenomena (including solar activity) are driven by various kinds of energy transformations. Energy in such transformations is either from gravitational collapse of matter (usually molecular hydrogen) into various classes of astronomical objects (stars, black holes, etc.), or from nuclear fusion (of lighter elements, primarily hydrogen).
The nuclear fusion of hydrogen in the Sun also releases another store of potential energy which was created at the time of the Big Bang. At that time, according to theory, space expanded and the universe cooled too rapidly for hydrogen to completely fuse into heavier elements. This meant that hydrogen represents a store of potential energy that can be released by fusion. Such a fusion process is triggered by heat and pressure generated from gravitational collapse of hydrogen clouds when they produce stars, and some of the fusion energy is then transformed into sunlight.
=== Quantum mechanics ===
In quantum mechanics, energy is defined in terms of the energy operator (Hamiltonian) as a time derivative of the wave function. The Schrödinger equation equates the energy operator to the full energy of a particle or a system. Its results can be considered as a definition of measurement of energy in quantum mechanics. The Schrödinger equation describes the space- and time-dependence of a slowly changing (non-relativistic) wave function of quantum systems. The solution of this equation for a bound system is discrete (a set of permitted states, each characterized by an energy level) which results in the concept of quanta. In the solution of the Schrödinger equation for any oscillator (vibrator) and for electromagnetic waves in a vacuum, the resulting energy states are related to the frequency by Planck's relation:
{\displaystyle E=h\nu }
(where {\displaystyle h} is the Planck constant and {\displaystyle \nu } the frequency). In the case of an electromagnetic wave these energy states are called quanta of light or photons.
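As a rough illustration (the numerical values here are indicative only), a photon of green light with frequency {\displaystyle \nu \approx 5.6\times 10^{14}\ {\text{Hz}}} carries an energy of
{\displaystyle E=h\nu \approx (6.63\times 10^{-34}\ {\text{J s}})(5.6\times 10^{14}\ {\text{Hz}})\approx 3.7\times 10^{-19}\ {\text{J}}\approx 2.3\ {\text{eV}}.}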
=== Relativity ===
When calculating kinetic energy (work to accelerate a massive body from zero speed to some finite speed) relativistically – using Lorentz transformations instead of Newtonian mechanics – Einstein discovered an unexpected by-product of these calculations to be an energy term which does not vanish at zero speed. He called it rest energy: energy which every massive body must possess even when being at rest. The amount of energy is directly proportional to the mass of the body:
{\displaystyle E_{0}=m_{0}c^{2},}
where m0 is the rest mass of the body, c is the speed of light in vacuum, and {\displaystyle E_{0}} is the rest energy.
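For instance (a standard textbook figure, not taken from this article), an electron with rest mass {\displaystyle m_{0}\approx 9.11\times 10^{-31}\ {\text{kg}}} has a rest energy of
{\displaystyle E_{0}=m_{0}c^{2}\approx (9.11\times 10^{-31}\ {\text{kg}})(3.00\times 10^{8}\ {\text{m/s}})^{2}\approx 8.2\times 10^{-14}\ {\text{J}}\approx 0.511\ {\text{MeV}},}
which is the energy carried by each photon when an electron and a positron at rest annihilate into two photons, as described below.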
For example, consider electron–positron annihilation, in which the rest energy of these two individual particles (equivalent to their rest mass) is converted to the radiant energy of the photons produced in the process. In this system the matter and antimatter (electrons and positrons) are destroyed and changed to non-matter (the photons). However, the total mass and total energy do not change during this interaction. The photons each have no rest mass but nonetheless have radiant energy which exhibits the same inertia as did the two original particles. This is a reversible process – the inverse process is called pair creation – in which the rest mass of particles is created from the radiant energy of two (or more) annihilating photons.
In general relativity, the stress–energy tensor serves as the source term for the gravitational field, in rough analogy to the way mass serves as the source term in the non-relativistic Newtonian approximation.
Energy and mass are manifestations of one and the same underlying physical property of a system. This property is responsible for the inertia and strength of gravitational interaction of the system ("mass manifestations"), and is also responsible for the potential ability of the system to perform work or heating ("energy manifestations"), subject to the limitations of other physical laws.
In classical physics, energy is a scalar quantity, the canonical conjugate to time. In special relativity energy is also a scalar (although not a Lorentz scalar but a time component of the energy–momentum 4-vector). In other words, energy is invariant with respect to rotations of space, but not invariant with respect to rotations of spacetime (= boosts).
== Transformation ==
Energy may be transformed between different forms at various efficiencies. Items that transform between these forms are called transducers. Examples of transducers include a battery (from chemical energy to electric energy), a dam (from gravitational potential energy to kinetic energy of moving water (and the blades of a turbine) and ultimately to electric energy through an electric generator), and a heat engine (from heat to work).
Examples of energy transformation include generating electric energy from heat energy via a steam turbine, or lifting an object against gravity using electrical energy driving a crane motor. Lifting against gravity performs mechanical work on the object and stores gravitational potential energy in the object. If the object falls to the ground, gravity does mechanical work on the object which transforms the potential energy in the gravitational field to the kinetic energy released as heat on impact with the ground. The Sun transforms nuclear potential energy to other forms of energy; this does not in itself reduce its total mass (since it still contains the same total energy, merely in different forms), but its mass does decrease when the energy escapes out to its surroundings, largely as radiant energy.
There are strict limits to how efficiently heat can be converted into work in a cyclic process, e.g. in a heat engine, as described by Carnot's theorem and the second law of thermodynamics. However, some energy transformations can be quite efficient. The direction of transformations in energy (what kind of energy is transformed to what other kind) is often determined by entropy (equal energy spread among all available degrees of freedom) considerations. In practice all energy transformations are permitted on a small scale, but certain larger transformations are not permitted because it is statistically unlikely that energy or matter will randomly move into more concentrated forms or smaller spaces.
Energy transformations in the universe over time are characterized by various kinds of potential energy, that has been available since the Big Bang, being "released" (transformed to more active types of energy such as kinetic or radiant energy) when a triggering mechanism is available. Familiar examples of such processes include nucleosynthesis, a process ultimately using the gravitational potential energy released from the gravitational collapse of supernovae to "store" energy in the creation of heavy isotopes (such as uranium and thorium), and nuclear decay, a process in which energy is released that was originally stored in these heavy elements, before they were incorporated into the Solar System and the Earth. This energy is triggered and released in nuclear fission bombs or in civil nuclear power generation. Similarly, in the case of a chemical explosion, chemical potential energy is transformed to kinetic and thermal energy in a very short time.
Yet another example is that of a pendulum. At its highest points the kinetic energy is zero and the gravitational potential energy is at its maximum. At its lowest point the kinetic energy is at its maximum and is equal to the decrease in potential energy. If one (unrealistically) assumes that there is no friction or other losses, the conversion of energy between these processes would be perfect, and the pendulum would continue swinging forever.
Energy is also transferred from potential energy ({\displaystyle E_{p}}) to kinetic energy ({\displaystyle E_{k}}) and then back to potential energy constantly. This is referred to as conservation of energy. In this isolated system, energy cannot be created or destroyed; therefore, the initial energy and the final energy will be equal to each other. This can be demonstrated by the following:
{\displaystyle E_{p,{\text{initial}}}+E_{k,{\text{initial}}}=E_{p,{\text{final}}}+E_{k,{\text{final}}}}
The equation can then be simplified further since {\displaystyle E_{p}=mgh} (mass times acceleration due to gravity times the height) and {\textstyle E_{k}={\frac {1}{2}}mv^{2}} (half mass times velocity squared). Then the total amount of energy can be found by adding {\displaystyle E_{p}+E_{k}=E_{\text{total}}}.
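A minimal numerical sketch of this bookkeeping in Python, using illustrative values (the mass and drop height below are assumptions, not figures from the text):

```python
# Conservation of mechanical energy for an idealized, frictionless pendulum:
# all potential energy at the highest point becomes kinetic energy at the lowest.
g = 9.81          # gravitational acceleration, m/s^2
m = 0.50          # bob mass, kg (illustrative)
h = 0.20          # height of highest point above lowest point, m (illustrative)

E_p = m * g * h               # potential energy at the top (kinetic energy is zero there)
E_k = E_p                     # at the lowest point the same energy is entirely kinetic
v = (2 * E_k / m) ** 0.5      # solve E_k = (1/2) m v^2 for the speed at the bottom

print(f"E_total = {E_p:.3f} J, speed at lowest point = {v:.2f} m/s")
```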
=== Conservation of energy and mass in transformation ===
Energy gives rise to weight when it is trapped in a system with zero momentum, where it can be weighed. It is also equivalent to mass, and this mass is always associated with it. Mass is also equivalent to a certain amount of energy, and likewise always appears associated with it, as described in mass–energy equivalence. The formula E = mc2, derived by Albert Einstein (1905), quantifies the relationship between relativistic mass and energy within the concept of special relativity. In different theoretical frameworks, similar formulas were derived by J.J. Thomson (1881), Henri Poincaré (1900), Friedrich Hasenöhrl (1904) and others (see Mass–energy equivalence#History for further information).
Part of the rest energy (equivalent to rest mass) of matter may be converted to other forms of energy (still exhibiting mass), but neither energy nor mass can be destroyed; rather, both remain constant during any process. However, since {\displaystyle c^{2}} is extremely large relative to ordinary human scales, the conversion of an everyday amount of rest mass (for example, 1 kg) from rest energy to other forms of energy (such as kinetic energy, thermal energy, or the radiant energy carried by light and other radiation) can liberate tremendous amounts of energy (~9×10^16 joules, equivalent to 21 megatons of TNT), as can be seen in nuclear reactors and nuclear weapons.
Conversely, the mass equivalent of an everyday amount of energy is minuscule, which is why a loss of energy (loss of mass) from most systems is difficult to measure on a weighing scale, unless the energy loss is very large. Examples of large transformations between rest energy (of matter) and other forms of energy (e.g., kinetic energy into particles with rest mass) are found in nuclear physics and particle physics. Often, however, the complete conversion of matter (such as atoms) to non-matter (such as photons) is forbidden by conservation laws.
=== Reversible and non-reversible transformations ===
Thermodynamics divides energy transformation into two kinds: reversible processes and irreversible processes. An irreversible process is one in which energy is dissipated (spread) into empty energy states available in a volume, from which it cannot be recovered into more concentrated forms (fewer quantum states), without degradation of even more energy. A reversible process is one in which this sort of dissipation does not happen. For example, conversion of energy from one type of potential field to another is reversible, as in the pendulum system described above.
In processes where heat is generated, quantum states of lower energy, present as possible excitations in fields between atoms, act as a reservoir for part of the energy; this part cannot be recovered and converted into other forms of energy with 100% efficiency. In this case, the energy must partly stay as thermal energy and cannot be completely recovered as usable energy, except at the price of some other, heat-like increase in disorder in quantum states elsewhere in the universe (such as an expansion of matter, or a randomization in a crystal).
As the universe evolves with time, more and more of its energy becomes trapped in irreversible states (i.e., as heat or as other kinds of increases in disorder). This has led to the hypothesis of the inevitable thermodynamic heat death of the universe. In this heat death the energy of the universe does not change, but the fraction of energy which is available to do work through a heat engine, or be transformed to other usable forms of energy (through the use of generators attached to heat engines), continues to decrease.
== Conservation of energy ==
The fact that energy can be neither created nor destroyed is called the law of conservation of energy. In the form of the first law of thermodynamics, this states that a closed system's energy is constant unless energy is transferred in or out as work or heat, and that no energy is lost in transfer. The total inflow of energy into a system must equal the total outflow of energy from the system, plus the change in the energy contained within the system. Whenever one measures (or calculates) the total energy of a system of particles whose interactions do not depend explicitly on time, it is found that the total energy of the system always remains constant.
While heat can always be fully converted into work in a reversible isothermal expansion of an ideal gas, for cyclic processes of practical interest in heat engines the second law of thermodynamics states that the system doing work always loses some energy as waste heat. This creates a limit to the amount of heat energy that can do work in a cyclic process, a limit called the available energy. Mechanical and other forms of energy can be transformed in the other direction into thermal energy without such limitations. The total energy of a system can be calculated by adding up all forms of energy in the system.
Richard Feynman said during a 1961 lecture:
There is a fact, or if you wish, a law, governing all natural phenomena that are known to date. There is no known exception to this law – it is exact so far as we know. The law is called the conservation of energy. It states that there is a certain quantity, which we call energy, that does not change in manifold changes which nature undergoes. That is a most abstract idea, because it is a mathematical principle; it says that there is a numerical quantity which does not change when something happens. It is not a description of a mechanism, or anything concrete; it is just a strange fact that we can calculate some number and when we finish watching nature go through her tricks and calculate the number again, it is the same.
Most kinds of energy (with gravitational energy being a notable exception) are subject to strict local conservation laws as well. In this case, energy can only be exchanged between adjacent regions of space, and all observers agree as to the volumetric density of energy in any given space. There is also a global law of conservation of energy, stating that the total energy of the universe cannot change; this is a corollary of the local law, but not vice versa.
This law is a fundamental principle of physics. As shown rigorously by Noether's theorem, the conservation of energy is a mathematical consequence of translational symmetry of time, a property of most phenomena below the cosmic scale that makes them independent of their locations on the time coordinate. Put differently, yesterday, today, and tomorrow are physically indistinguishable. This is because energy is the quantity which is canonical conjugate to time. This mathematical entanglement of energy and time also results in the uncertainty principle – it is impossible to define the exact amount of energy during any definite time interval (though this is practically significant only for very short time intervals). The uncertainty principle should not be confused with energy conservation – rather it provides mathematical limits to which energy can in principle be defined and measured.
Each of the basic forces of nature is associated with a different type of potential energy, and all types of potential energy (like all other types of energy) appear as system mass, whenever present. For example, a compressed spring will be slightly more massive than before it was compressed. Likewise, whenever energy is transferred between systems by any mechanism, an associated mass is transferred with it.
In quantum mechanics energy is expressed using the Hamiltonian operator. On any time scale, the uncertainty in the energy is given by
{\displaystyle \Delta E\Delta t\geq {\frac {\hbar }{2}}}
which is similar in form to the Heisenberg Uncertainty Principle (but not really mathematically equivalent thereto, since H and t are not dynamically conjugate variables, neither in classical nor in quantum mechanics).
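For a sense of scale (an illustrative calculation, not a figure from the text), a process confined to a time interval of {\displaystyle \Delta t=1\ {\text{fs}}=10^{-15}\ {\text{s}}} has an energy uncertainty of at least
{\displaystyle \Delta E\geq {\frac {\hbar }{2\Delta t}}\approx {\frac {1.05\times 10^{-34}\ {\text{J s}}}{2\times 10^{-15}\ {\text{s}}}}\approx 5\times 10^{-20}\ {\text{J}}\approx 0.3\ {\text{eV}},}
which is why the effect is practically significant only for very short time intervals.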
In particle physics, this inequality permits a qualitative understanding of virtual particles, which carry momentum. The exchange of virtual particles with real particles is responsible for the creation of all known fundamental forces (more accurately known as fundamental interactions). Virtual photons are also responsible for the electrostatic interaction between electric charges (which results in Coulomb's law), for spontaneous radiative decay of excited atomic and nuclear states, for the Casimir force, for the Van der Waals force and some other observable phenomena.
== Energy transfer ==
=== Closed systems ===
Energy transfer can be considered for the special case of systems which are closed to transfers of matter. The portion of the energy which is transferred by conservative forces over a distance is measured as the work the source system does on the receiving system. The portion of the energy which does not do work during the transfer is called heat. Energy can be transferred between systems in a variety of ways. Examples include the transmission of electromagnetic energy via photons, physical collisions which transfer kinetic energy, tidal interactions, and the conductive transfer of thermal energy.
Energy is strictly conserved and is also locally conserved wherever it can be defined. In thermodynamics, for closed systems, the process of energy transfer is described by the first law:
{\displaystyle \Delta E=W+Q}
where {\displaystyle \Delta E} is the amount of energy transferred, {\displaystyle W} represents the work done on or by the system, and {\displaystyle Q} represents the heat flow into or out of the system. As a simplification, the heat term, {\displaystyle Q}, can sometimes be ignored, especially for fast processes involving gases, which are poor conductors of heat, or when the thermal efficiency of the transfer is high. For such adiabatic processes,
{\displaystyle \Delta E=W.}
This simplified equation is the one used to define the joule, for example.
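As a concrete illustration of the scale involved: a force of one newton acting through one metre transfers
{\displaystyle W=Fd=(1\ {\text{N}})(1\ {\text{m}})=1\ {\text{J}},}
roughly the work done in lifting a small apple (about 100 g) through one metre against Earth's gravity.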
=== Open systems ===
Beyond the constraints of closed systems, open systems can gain or lose energy in association with matter transfer (this process is illustrated by injection of an air-fuel mixture into a car engine, a system which gains in energy thereby, without addition of either work or heat). Denoting this energy by {\displaystyle E_{\text{matter}}}, one may write
{\displaystyle \Delta E=W+Q+E_{\text{matter}}.}
== Thermodynamics ==
=== Internal energy ===
Internal energy is the sum of all microscopic forms of energy of a system. It is the energy needed to create the system. It is related to the potential energy, e.g., molecular structure, crystal structure, and other geometric aspects, as well as the motion of the particles, in form of kinetic energy. Thermodynamics is chiefly concerned with changes in internal energy and not its absolute value, which is impossible to determine with thermodynamics alone.
=== First law of thermodynamics ===
The first law of thermodynamics asserts that the total energy of a system and its surroundings (but not necessarily thermodynamic free energy) is always conserved and that heat flow is a form of energy transfer. For homogeneous systems, with a well-defined temperature and pressure, a commonly used corollary of the first law is that, for a system subject only to pressure forces and heat transfer (e.g., a cylinder-full of gas) without chemical changes, the differential change in the internal energy of the system (with a gain in energy signified by a positive quantity) is given as
{\displaystyle \mathrm {d} E=T\mathrm {d} S-P\mathrm {d} V\,,}
where the first term on the right is the heat transferred into the system, expressed in terms of temperature T and entropy S (in which entropy increases and its change dS is positive when heat is added to the system), and the last term on the right hand side is identified as work done on the system, where pressure is P and volume V (the negative sign results since compression of the system requires work to be done on it and so the volume change, dV, is negative when work is done on the system).
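A minimal numerical sketch of this bookkeeping for one small, idealized change (the temperature, pressure, and increments below are assumed values for illustration):

```python
# dE = T dS - P dV for a small, idealized change of a simple compressible system.
T  = 300.0      # temperature, K (assumed)
P  = 1.0e5      # pressure, Pa (assumed)
dS = 0.010      # small entropy increase, J/K  -> heat added: T*dS = 3 J
dV = -1.0e-5    # small compression, m^3       -> work done on the system: -P*dV = +1 J

dE = T * dS - P * dV
print(f"dE = {dE:.2f} J")   # 3 J of heat plus 1 J of compression work = 4 J
```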
This equation is highly specific, ignoring all chemical, electrical, nuclear, and gravitational forces, effects such as advection of any form of energy other than heat and PV-work. The general formulation of the first law (i.e., conservation of energy) is valid even in situations in which the system is not homogeneous. For these cases the change in internal energy of a closed system is expressed in a general form by
{\displaystyle \mathrm {d} E=\delta Q+\delta W}
where {\displaystyle \delta Q} is the heat supplied to the system and {\displaystyle \delta W} is the work applied to the system.
=== Equipartition of energy ===
The energy of a mechanical harmonic oscillator (a mass on a spring) is alternately kinetic and potential energy. At two points in the oscillation cycle it is entirely kinetic, and at two points it is entirely potential. Over a whole cycle, or over many cycles, average energy is equally split between kinetic and potential. This is an example of the equipartition principle: the total energy of a system with many degrees of freedom is equally split among all available degrees of freedom, on average.
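A short numerical check of this statement for a single oscillator (a sketch with assumed mass, stiffness, and amplitude; the exact values do not matter):

```python
import math

# For x(t) = A cos(w t), the kinetic and potential energies averaged over one
# full cycle are equal, each coming out to (1/4) k A^2.
m, k, A = 1.0, 4.0, 0.5          # mass (kg), spring constant (N/m), amplitude (m) -- assumed
w = math.sqrt(k / m)             # angular frequency of the oscillator
N = 100_000                      # number of samples over one period
ke_sum = pe_sum = 0.0
for i in range(N):
    t = (i / N) * (2 * math.pi / w)
    x = A * math.cos(w * t)
    v = -A * w * math.sin(w * t)
    ke_sum += 0.5 * m * v * v
    pe_sum += 0.5 * k * x * x

print(ke_sum / N, pe_sum / N)    # both approach 0.25 J here, i.e. (1/4) k A^2
```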
This principle is vitally important to understanding the behavior of a quantity closely related to energy, called entropy. Entropy is a measure of evenness of a distribution of energy between parts of a system. When an isolated system is given more degrees of freedom (i.e., given new available energy states that are the same as existing states), then total energy spreads over all available degrees equally without distinction between "new" and "old" degrees. This mathematical result is part of the second law of thermodynamics. The second law of thermodynamics is simple only for systems which are near or in a physical equilibrium state. For non-equilibrium systems, the laws governing the systems' behavior are still debatable. One of the guiding principles for these systems is the principle of maximum entropy production. It states that nonequilibrium systems behave in such a way as to maximize their entropy production.
== See also ==
== Notes ==
== References ==
== Further reading ==
=== Journals ===
The Journal of Energy History / Revue d'histoire de l'énergie (JEHRHE), 2018–
== External links ==
Differences between Heat and Thermal energy (Archived 2016-08-27 at the Wayback Machine) – BioCab | Wikipedia/Forms_of_energy |
Princeton Plasma Physics Laboratory (PPPL) is a United States Department of Energy national laboratory for plasma physics and nuclear fusion science. Its primary mission is research into and development of fusion as an energy source. It is known for the development of the stellarator and tokamak designs, along with numerous fundamental advances in plasma physics and the exploration of many other plasma confinement concepts.
PPPL grew out of the top-secret Cold War project to control thermonuclear reactions, called Project Matterhorn. The focus of this program changed from H-bombs to fusion power in 1951, when Lyman Spitzer developed the stellarator concept and was granted funding from the Atomic Energy Commission to study the concept. This led to a series of machines in the 1950s and 1960s. In 1961, after declassification, Project Matterhorn was renamed the Princeton Plasma Physics Laboratory.
PPPL's stellarators proved unable to meet their performance goals. In 1968, Soviet claims of excellent performance on their tokamaks generated intense scepticism, and to test them, PPPL's Model C stellarator was converted to a tokamak. It verified the Soviet claims, and since that time, PPPL has been a worldwide leader in tokamak theory and design, building a series of record-breaking machines including the Princeton Large Torus, TFTR and many others. Dozens of smaller machines were also built to test particular problems and solutions, including the ATC, NSTX, and LTX.
PPPL is operated by Princeton University on the Forrestal Campus in Plainsboro Township, New Jersey.
== History ==
=== Formation ===
In 1950, John Wheeler was setting up a secret H-bomb research lab at Princeton University. Lyman Spitzer, Jr., an avid mountaineer, was aware of this program and suggested the name "Project Matterhorn".
Spitzer, a professor of astronomy, had for many years been involved in the study of very hot rarefied gases in interstellar space. While leaving for a ski trip to Aspen in February 1951, his father called and told him to read the front page of the New York Times. The paper had a story about claims released the day before in Argentina that a relatively unknown German scientist named Ronald Richter had achieved nuclear fusion in his Huemul Project. Spitzer ultimately dismissed these claims, and they were later proven erroneous, but the story got him thinking about fusion. While riding the chairlift at Aspen, he struck upon a new concept to confine a plasma for long periods so it could be heated to fusion temperatures. He called this concept the stellarator.
Later that year he took this design to the Atomic Energy Commission in Washington. As a result of this meeting and a review of the invention by scientists throughout the nation, the stellarator proposal was funded in 1951. As the device would produce high-energy neutrons, which could be used for breeding weapon fuel, the program was classified and carried out as part of Project Matterhorn. Matterhorn ultimately ended its involvement in the bomb field in 1954, becoming entirely devoted to the fusion power field.
In 1958, this magnetic fusion research was declassified following the United Nations International Conference on the Peaceful Uses of Atomic Energy. This generated an influx of graduate students eager to learn the "new" physics, which in turn influenced the lab to concentrate more on basic research.
The early figure-8 stellarators included: Model-A, Model-B, Model-B2, Model-B3. Model-B64 was a square with round corners, and Model-B65 had a racetrack configuration. The last and most powerful stellarator at this time was the "racetrack" Model C (operating from 1961 to 1969).
=== Tokamak ===
By the mid-1960s it was clear something was fundamentally wrong with the stellarators, as they leaked fuel at rates far beyond what theory predicted, rates that carried away energy from the plasma that was far beyond what the fusion reactions could ever produce. Spitzer became extremely skeptical that fusion energy was possible and expressed this opinion in very public fashion in 1965 at an international meeting in the UK. At the same meeting, the Soviet delegation announced results about 10 times better than any previous device, which Spitzer dismissed as a measurement error.
At the next meeting in 1968, the Soviets presented considerable data from their devices that showed even greater performance, about 100 times the Bohm diffusion limit. An enormous argument broke out between the AEC and the various labs about whether this was real. When a UK team verified the results in 1969, the AEC suggested that PPPL convert its Model C to a tokamak to test them, as the only lab willing to build one from scratch, Oak Ridge, would need some time to build theirs. Seeing the possibility of being bypassed in the fusion field, PPPL eventually agreed to convert the Model C to what became the Symmetric Tokamak (ST), quickly verifying the approach.
Two small machines followed the ST, exploring ways to heat the plasma, and then the Princeton Large Torus (PLT) to test whether the theory that larger machines would be more stable was true. Starting in 1975, PLT verified these "scaling laws" and then went on to add neutral beam injection from Oak Ridge that resulted in a series of record-setting plasma temperatures, eventually topping out at 78 million kelvins, well beyond what was needed for a practical fusion power system. Its success was major news.
With this string of successes, PPPL had little trouble winning the bid to build an even larger machine, one specifically designed to reach "breakeven" while running on an actual fusion fuel, rather than a test gas. This produced the Tokamak Fusion Test Reactor, or TFTR, which was completed in 1982. After a lengthy breaking-in period, TFTR began slowly increasing the temperature and density of the fuel, while introducing deuterium gas as the fuel. In April 1986, it demonstrated a combination of density and confinement, the so-called fusion triple product, well beyond what was needed for a practical reactor. In July, it reached a temperature of 200 million kelvins, far beyond what was needed. However, when the system was operated with both of these conditions at the same time, a high enough triple product and temperature, the system became unstable. Three years of effort failed to address these issues, and TFTR never reached its goal. The system continued performing basic studies on these problems until being shut down in 1997. Beginning in 1993, TFTR was the first in the world to use 1:1 mixtures of deuterium–tritium. In 1994 it yielded an unprecedented 10.7 megawatts of fusion power.
=== Later designs ===
In 1999, the National Spherical Torus Experiment (NSTX), based on the spherical tokamak concept, came online at the PPPL.
Odd-parity heating was demonstrated in the 4 cm radius PFRC-1 experiment in 2006. PFRC-2 has a plasma radius of 8 cm. Studies of electron heating in PFRC-2 reached 500 eV with pulse lengths of 300 ms.
In 2015, PPPL completed an upgrade to NSTX to produce NSTX-U that made it the most powerful experimental fusion facility, or tokamak, of its type in the world.
In 2017, the group received a Phase II NIAC grant along with two NASA STTRs funding the RF subsystem and superconducting coil subsystem.
In 2024, the lab announced MUSE, a new stellarator. MUSE uses rare-earth permanent magnets with a field strength that can exceed 1.2 teslas. The device uses quasiaxisymmetry, a subtype of quasisymmetry. The research team claimed that its use of quasisymmetry was more sophisticated than that of prior devices. Also in 2024, PPPL announced a reinforcement learning model that could forecast tearing mode instabilities up to 300 milliseconds in advance. That is enough time for the plasma controller to adjust operating parameters to prevent the tear and maintain H-mode performance.
== Directors ==
In 1961 Gottlieb became the first director of the renamed Princeton Plasma Physics Laboratory.
1951–1961: Lyman Spitzer, director of Project Matterhorn
1961–1980: Melvin B. Gottlieb
1981–1990: Harold Fürth
1991–1996: Ronald C. Davidson
1997 (January–July): John A. Schmidt, interim director
1997–2008: Robert J. Goldston
2008–2016: Stewart C. Prager
2016–2017: Terrence K. Brog (interim)
2017–2018: Richard J. Hawryluk (interim)
2018–present: Sir Steven Cowley, 1 July 2018
== Timeline of major research projects and experiments ==
== Other domestic and international research activities ==
Laboratory scientists are collaborating with researchers on fusion science and technology at other facilities, including DIII-D in San Diego, EAST in China, JET in the United Kingdom, KSTAR in South Korea, the LHD in Japan, the Wendelstein 7-X (W7-X) device in Germany, and the International Thermonuclear Experimental Reactor (ITER) in France.
PPPL manages the U.S. ITER project activities together with Oak Ridge National Laboratory and Savannah River National Laboratory. The lab delivered 75% of components for the fusion energy experiment's electrical network in 2017 and has been leading the design and construction of six diagnostic tools for analyzing ITER plasmas. The PPPL physicist Richard Hawryluk served as ITER Deputy Director-General from 2011 to 2013. In 2022, PPPL staff developed with researchers from other national labs and universities over several months a US ITER research plan during the joint Fusion Energy Sciences Research Needs Workshop.
Staff are applying knowledge gained in fusion research to a number of theoretical and experimental areas including materials science, solar physics, chemistry, and manufacturing. PPPL also aims to speed the development of fusion energy through the development of an increased number of public-private partnerships.
=== Plasma science and technology ===
Beam Dynamics and Nonneutral Plasma
Laboratory for Plasma Nanosynthesis (LPN)
=== Theoretical plasma physics ===
DOE Scientific Simulation Initiative
U.S. MHD Working Group
Field Reversed Configuration (FRC) Theory Consortium
Tokamak Physics Design and Analysis Codes
TRANSP Code
National Transport Code Collaboration (NTCC) Modules Library
== Transportation ==
Tiger Transit's Route 3 runs to Forrestal Campus and terminates at PPPL.
== See also ==
Project Sherwood
National Compact Stellarator Experiment (NCSX)
== References ==
== External links ==
Media related to Princeton Plasma Physics Laboratory at Wikimedia Commons
Project Matterhorn Publications and Reports, 1951–1958. Princeton University Library Digital Collections
Princeton Plasma Physics Laboratory Official Website | Wikipedia/Princeton_Plasma_Physics_Laboratory |
A kinetic energy recovery system (KERS) is an automotive system for recovering a moving vehicle's kinetic energy under braking. The recovered energy is stored in a reservoir (for example a flywheel or high voltage batteries) for later use under acceleration. Examples include complex high end systems such as the Zytek, Flybrid, Torotrak and Xtrac used in Formula One racing and simple, easily manufactured and integrated differential based systems such as the Cambridge Passenger/Commercial Vehicle Kinetic Energy Recovery System (CPC-KERS).
Xtrac and Flybrid are both licensees of Torotrak's technologies, which employ a small and sophisticated ancillary gearbox incorporating a continuously variable transmission (CVT). The CPC-KERS is similar as it also forms part of the driveline assembly. However, the whole mechanism including the flywheel sits entirely in the vehicle's hub (looking like a drum brake). In the CPC-KERS, a differential replaces the CVT and transfers torque between the flywheel, drive wheel and road wheel.
== Use in motorsport ==
=== History ===
The first of these systems to be revealed was the Flybrid. This system weighs 24 kg (53 lbs) and has an energy capacity of 400 kJ after allowing for internal losses. A maximum power boost of 60 kW (81.6 PS, 80.4 HP) for 6.67 seconds is available. The 240 mm (9.4 in) diameter flywheel weighs 5.0 kg (11 lbs) and revolves at up to 64,500 rpm. Maximum torque at the flywheel is 18 Nm (13.3 ftlbs), and the torque at the gearbox connection is correspondingly higher for the change in speed. The system occupies a volume of 13 litres.
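As a rough cross-check of these figures (treating the rotor as a uniform solid disc, which is a simplifying assumption; the real rotor geometry differs, and the quoted 400 kJ is the usable capacity after internal losses):

```python
import math

# Ideal kinetic energy of the Flybrid rotor treated as a uniform solid disc.
m = 5.0                        # rotor mass, kg
r = 0.240 / 2                  # rotor radius, m (240 mm diameter)
rpm = 64_500                   # maximum rotor speed
omega = rpm * 2 * math.pi / 60          # angular speed, rad/s (~6750 rad/s)
I = 0.5 * m * r ** 2                    # moment of inertia of a uniform disc
E = 0.5 * I * omega ** 2                # stored kinetic energy, J

print(f"omega = {omega:.0f} rad/s, E = {E/1e3:.0f} kJ")
# Several hundred kJ for the ideal disc -- the same order of magnitude as the
# quoted usable 400 kJ.
```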
As early as 2006, a first KERS based on supercapacitors was studied at EPFL (École Polytechnique Fédérale de Lausanne) in the framework of the development of the "Formula S2000". A 180 kJ system was developed in collaboration with other institutes.
Two minor incidents were reported during testing of various KERS systems in 2008. The first occurred when the Red Bull Racing team tested their KERS battery for the first time in July: it malfunctioned and caused a fire scare that led to the team's factory being evacuated. The second was less than a week later when a BMW Sauber mechanic was given an electric shock when he touched Christian Klien's KERS-equipped car during a test at the Jerez circuit.
=== Formula One ===
Formula One has stated that they support responsible solutions to the world's environmental challenges, and the FIA allowed the use of 60 kW (82 PS; 80 bhp) KERS in the regulations for the 2009 Formula One season. Teams began testing systems in 2008: energy can either be stored as mechanical energy (as in a flywheel) or as electrical energy (as in a battery or supercapacitor).
With the introduction of KERS in the 2009 season, only four teams used it at some point in the season: Ferrari, Renault, BMW and McLaren. Eventually, during the season, Renault and BMW stopped using the system. Nick Heidfeld was the first driver to take a podium position with a KERS equipped car, at the Malaysian Grand Prix. McLaren Mercedes became the first team to win an F1 GP using a KERS equipped car when Lewis Hamilton won the Hungarian Grand Prix on July 26, 2009. Their second KERS equipped car finished fifth. At the following race, Lewis Hamilton became the first driver to take pole position with a KERS car, his teammate, Heikki Kovalainen qualifying second. This was also the first instance of an all KERS front row. On August 30, 2009, Kimi Räikkönen won the Belgian Grand Prix with his KERS equipped Ferrari. It was the first time that KERS contributed directly to a race victory, with second placed Giancarlo Fisichella claiming "Actually, I was quicker than Kimi. He only took me because of KERS at the beginning".
Although KERS was still legal in F1 in the 2010 season, all the teams had agreed not to use it. New rules for the 2011 F1 season which raised the minimum weight limit of the car and driver by 20 kg to 640 kg, along with the FOTA teams agreeing to the use of KERS devices once more, meant that KERS returned for the 2011 season. Use of KERS was still optional as in the 2009 season; and at the start of the 2011 season three teams chose not to use it.
WilliamsF1 developed their own flywheel-based KERS system but decided not to use it in their F1 cars due to packaging issues, and have instead developed their own electrical KERS system. However, they set up Williams Hybrid Power to sell their developments. In 2012 it was announced that the Audi Le Mans R18 hybrid cars would use Williams Hybrid Power.
In 2014, the power capacity of the KERS units was increased from 60 kilowatts (80 bhp) to 120 kilowatts (160 bhp). This was introduced to balance the sport's move from 2.4 litre V8 engines to 1.6 litre V6 turbo engines.
=== Working diagram for KERS ===
=== Autopart makers ===
Bosch Motorsport Service is developing a KERS for use in motor racing. These electricity storage systems for hybrid and engine functions include a lithium-ion battery with scalable capacity or a flywheel, a four to eight kilogram electric motor (with a maximum power level of 60 kW (81 hp)), as well as the KERS controller for power and battery management. Bosch also offers a range of electric hybrid systems for commercial and light-duty applications.
=== Car manufacturers ===
Several automakers have been testing KERS systems. At the 2008 1000 km of Silverstone, Peugeot Sport unveiled the Peugeot 908 HY, a hybrid electric variant of the diesel 908, with KERS. Peugeot planned to campaign the car in the 2009 Le Mans Series season, although it was not allowed to score championship points.
McLaren began testing of their KERS system in September 2008 at Jerez in preparation for the 2009 F1 season, although at that time it was not yet known if they would be operating an electrical or mechanical system. In November 2008, it was announced that Freescale Semiconductor would collaborate with McLaren Electronic Systems to further develop its KERS for McLaren's Formula One cars from 2010 onwards. Both parties believed this collaboration would improve McLaren's KERS system and help the system to transfer its technology to road cars.
Toyota has used a supercapacitor for regeneration on its Supra HV-R hybrid race car that won the Tokachi 24-Hour endurance race in July 2007. This Supra became the first hybrid car in the history of motorsport to win such a race.
At the NAIAS 2011, Porsche unveiled a RSR variant of their Porsche 918 concept car which uses a flywheel-based KERS that sits beside the driver in the passenger compartment and boosts the dual electric motors driving the front wheels and the 565 BHP V8 gasoline engine driving the rear to a combined power output of 767 BHP. This system has many problems including the imbalance caused to the vehicle due to the flywheel. Porsche is currently developing an electrical storage system.
In 2011, Mazda announced i-ELOOP, a system which uses a variable-voltage alternator to convert kinetic energy to electric power during deceleration. The energy, stored in a double-layer capacitor, is used to supply power needed by vehicle electrical systems. When used in conjunction with Mazda's start-stop system, i-Stop, the company claims fuel savings of up to 10%.
Bosch and PSA Peugeot Citroën have developed a hybrid system that uses hydraulics as a way to transfer energy to and from a compressed nitrogen tank. An up to 45% reduction in fuel consumption is claimed, corresponding to 2.9 L/100 km (81 mpg, 69 g CO2/km) on the NEDC cycle for a compact frame like Peugeot 208. The system is claimed to be much more affordable than competing electric and flywheel systems and was expected on road cars by 2016 but was abandoned in 2015.
In 2020, FIAT launched the series of the FIAT Panda mild-hybrid with KERS technology.
=== Motorcycles ===
KTM racing boss Harald Bartol revealed that the factory raced with a secret kinetic energy recovery system fitted to Tomoyoshi Koyama's motorcycle during the 125cc race of the 2008 Valencian Community motorcycle Grand Prix. Koyama finished 7th. The system was later ruled illegal and thus was banned. The Lit C-1 electric motorcycle will also use a KERS as a regenerative braking system.
=== Bicycles ===
KERS is also possible on a bicycle. The EPA, working with students from the University of Michigan, developed the hydraulic Regenerative Brake Launch Assist (RBLA).
This has also been demonstrated by mounting a flywheel on a bike frame and connecting it with a CVT to the back wheel. By shifting the gear, 20% of the kinetic energy can be stored in the flywheel, ready to give an acceleration boost by reshifting the gear.
=== Races ===
Automobile Club de l'Ouest, the organizer behind the annual 24 Hours of Le Mans event and the Le Mans Series, has promoted the use of kinetic energy recovery systems in the LMP1 class since the late 2000s. Peugeot was the first manufacturer to unveil a fully functioning LMP1 car in the form of the 908 HY at the 2008 Autosport 1000 km race at Silverstone.
The 2011 24 Hours of Le Mans saw Hope Racing enter with a Flybrid Systems mechanical KERS, to be the first car ever to compete at the event with a hybrid. The system consisted of high speed slipping clutches which transfer torque to and from the vehicle, coupled to a 60,000 rpm flywheel.
Audi and Toyota both developed LMP1 cars with kinetic energy recovery systems for the 2012 and 2013 24 Hours of Le Mans. The Audi R18 e-tron quattro uses a flywheel-based system, while the Toyota TS030 Hybrid uses a supercapacitor-based system. When Porsche announced its return to Le Mans in 2014, it also unveiled an LMP1 car with a kinetic energy recovery system. The Porsche 919 Hybrid, introduced in 2014, uses a battery system, in contrast to the previous Porsche 911 GT3 R Hybrid that used a flywheel system.
== Use in public transport ==
=== London buses ===
A KERS using a carbon fibre flywheel, originally developed for the Williams Formula One racing team, has been modified for retrofitting to existing London double-decker buses. Buses (500 from the Go-Ahead Group) were fitted with this technology from 2014 to 2016, anticipating a fuel efficiency improvement of approximately 20%. The team who developed the technology were awarded the Dewar Trophy of the Royal Automobile Club in 2015.
=== Parry People Mover ===
Parry People Mover railcars use a small engine and large flywheel to move. The system also supports regenerative braking.
== See also ==
Regenerative brake
Make Cars Green
== References == | Wikipedia/Kinetic_energy_recovery_system |
A control moment gyroscope (CMG) is an attitude control device generally used in spacecraft attitude control systems. A CMG consists of a spinning rotor and one or more motorized gimbals that tilt the rotor’s angular momentum. As the rotor tilts, the changing angular momentum causes a gyroscopic torque that rotates the spacecraft.
== Comparison with Reaction Wheels ==
CMGs and reaction wheels are two common types of spacecraft attitude control actuators and serve the same function, though they differ in mechanics and performance characteristics. The latter apply torque simply by changing rotor spin speed, but the former tilt the rotor's spin axis without necessarily changing its spin speed. CMGs are more mechanically complex than reaction wheels and typically more expensive, but are far more power efficient. For a few hundred watts and about 100 kg of mass, large CMGs have produced thousands of newton meters of torque. A reaction wheel of similar capability would require megawatts of power.
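A back-of-the-envelope comparison makes the difference concrete (the numbers below are illustrative assumptions, not specifications of any particular unit):

```python
import math

# CMG: output torque ~ (stored rotor angular momentum) x (gimbal rate).
h = 4700.0                       # stored angular momentum, N*m*s (large-CMG order of magnitude, assumed)
gimbal_rate = math.radians(3.0)  # gimbal slew rate, rad/s (about 3 deg/s, assumed)
tau_cmg = h * gimbal_rate
print(f"CMG torque            ~ {tau_cmg:.0f} N*m")

# Reaction wheel: output torque = (wheel inertia) x (spin acceleration).
I_wheel = 1.0                    # wheel moment of inertia, kg*m^2 (assumed)
alpha = 0.25                     # achievable spin acceleration, rad/s^2 (assumed)
tau_rw = I_wheel * alpha
print(f"Reaction wheel torque ~ {tau_rw:.2f} N*m")
```

With these assumed figures the CMG delivers hundreds of newton metres by merely tilting its rotor, while the reaction wheel must spend power spinning its rotor up for a fraction of a newton metre.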
== Design varieties ==
=== Single-gimbal ===
The most effective CMGs include only a single gimbal. When the gimbal of such a CMG rotates, the change in direction of the rotor's angular momentum represents a torque that reacts onto the body to which the CMG is mounted, e.g. a spacecraft. Except for effects due to the motion of the spacecraft, this torque is due to a constraint, so it does no mechanical work (i.e., requires no energy). Single-gimbal CMGs exchange angular momentum in a way that requires very little power, with the result that they can apply very large torques for minimal electrical input.
=== Dual-gimbal ===
Such a CMG includes two gimbals per rotor. As an actuator, it is more versatile than a single-gimbal CMG because it is capable of pointing the rotor's angular momentum vector in any direction. However, the torque generated by one gimbal's motion must often be reacted by the other gimbal on its way to the spacecraft, requiring more power for a given torque than a single-gimbal CMG. If the goal is simply to store angular momentum in a mass-efficient way, as in the case of the International Space Station, dual-gimbal CMGs are a good design choice. However, if a spacecraft instead requires large output torque while consuming minimal power, single-gimbal CMGs are a better choice.
=== Variable-speed ===
Most CMGs hold rotor speed constant using relatively small motors to offset changes due to dynamic coupling and non-conservative effects. Some academic research has focused on the possibility of increasing and decreasing rotor speed while the CMG gimbals. Variable-speed CMGs (VSCMGs) offer few practical advantages when considering actuation capability because the output torque from the rotor is typically much smaller than that caused by the gimbal motion. The primary practical benefit of the VSCMG when compared to the conventional CMG is an additional degree of freedom—afforded by the available rotor torque—which can be exploited for continuous CMG singularity avoidance and VSCMG cluster reorientation. Research has shown that the rotor torques required for these two purposes are very small and within the capability of conventional CMG rotor motors. Thus, the practical benefits of VSCMGs are readily available using conventional CMGs with alterations to CMG cluster steering and CMG rotor motor control laws. The VSCMG also can be used as a mechanical battery to store electric energy as kinetic energy of the flywheels.
=== Spacecraft body ===
If a spacecraft has rotating parts, these can be utilized or controlled as CMGs.
== Potential problems ==
=== Singularities ===
At least three single-axis CMGs are necessary for control of spacecraft attitude. However, no matter how many CMGs a spacecraft uses, gimbal motion can lead to relative orientations that produce no usable output torque along certain directions. These orientations are known as singularities and are related to the kinematics of robotic systems that encounter limits on the end-effector velocities due to certain joint alignments. Avoiding these singularities is naturally of great interest, and several techniques have been proposed. David Bailey and others have argued (in patents and in academic publications) that merely avoiding the "divide by zero" error that is associated with these singularities is sufficient. Two more recent patents summarize competing approaches. See also Gimbal lock.
=== Saturation ===
A cluster of CMGs can become saturated, in the sense that it is holding a maximum amount of angular momentum in a particular direction and can hold no more.
As an example, suppose a spacecraft equipped with two or more dual-gimbal CMGs experiences a transient unwanted torque, perhaps caused by reaction from venting waste gas, tending to make it roll clockwise about its forward axis and thus increase its angular momentum along that axis. Then the CMG control program will command the gimbal motors of the CMGs to slant the rotors' spin axes gradually more and more forward, so that the angular momentum vectors of the rotors point more nearly along the forward axis. While this gradual change in rotor spin direction is in progress, the rotors will be creating gyroscopic torques whose resultant is anticlockwise about the forward axis, holding the spacecraft steady against the unwanted waste gas torque.
When the transient torque ends, the control program will stop the gimbal movement, and the rotors will be left pointing more forward than before. The inflow of unwanted forward angular momentum has been routed through the CMGs and dumped into the rotors; the forward component of their total angular momentum vector is now greater than before.
If these events are repeated, the angular momentum vectors of the individual rotors will bunch more and more closely together round the forward direction. In the limiting case, they will all end up parallel, and the CMG cluster will now be saturated in that direction; it can hold no more angular momentum. If the CMGs were initially holding no angular momentum about any other axes, they will end up saturated exactly along the forward axis. If however (for example) they were already holding a little angular momentum in the "up" (yaw left) direction, they will saturate (end up parallel) along an axis pointing forward and slightly up, and so on. Saturation is possible about any axis.
In the saturated condition attitude control is impossible. Since the gyroscopic torques can now only be created at right angles to the saturation axis, roll control about that axis itself is now non-existent. There will also be major difficulties with control about other axes. For example, an unwanted left yaw can only be countered by storing some "up" angular momentum in the CMG rotors. This can only be done by tilting at least one of their axes up, which will slightly reduce the forward component of their total angular momentum. Since they can now store less "right roll" forward angular momentum, they will have to release some back into the spacecraft, which will be forced to start an unwanted roll to the right.
The only remedy for this loss of control is to desaturate the CMGs by removing the excess angular momentum from the spacecraft. The simplest way of doing this is to use reaction control system (RCS) thrusters. In our example of saturation along the forward axis, the RCS will be fired to produce an anticlockwise torque about that axis. The CMG control program will then command the rotor spin axes to begin fanning out away from the forward direction, producing gyroscopic torques whose resultant is clockwise about the forward direction, opposing the RCS as long as it is firing, and so holding the spacecraft steady. This is continued until a suitable amount of forward angular momentum has been drained out of the CMG rotors; it is transformed into the moment of momentum of the moving matter in the RCS thruster exhausts and carried away from the spacecraft.
It is worth noting that "saturation" can only apply to a cluster of two or more CMGs, since it means that their rotor spins have become parallel. It is meaningless to say that a single constant-speed CMG can become saturated; in a sense it is "permanently saturated" in whatever direction the rotor happens to be pointing. This contrasts with a single reaction wheel, which can absorb more and more angular momentum along its fixed axis by spinning faster, until it reaches saturation at its maximum design speed.
=== Anti-parallel alignment ===
There are other undesirable rotor axis configurations apart from saturation, notably anti-parallel alignments. For example, if a spacecraft with two dual-gimbal CMGs gets into a state in which one rotor spin axis is facing directly forward, while the other rotor spin is facing directly aft (i.e. anti-parallel to the first), then all roll control will be lost. This happens for the same reason as for saturation; the rotors can only produce gyroscopic torques at right angles to their spin axes, and here these torques will have no fore-and-aft components and so no influence on roll. However, in this case the CMGs are not saturated at all; their angular momenta are equal and opposite, so the total stored angular momentum adds up to zero. Just as for saturation, however, and for exactly the same reasons, roll control will become increasingly difficult if the CMGs even approach anti-parallel alignment.
In the anti-parallel configuration, although roll control is lost, control about other axes still works well (in contrast to the situation with saturation). An unwanted left yaw can be dealt with by storing some "up" angular momentum, which is easily done by tilting both rotor spin axes slightly up by equal amounts. Since their fore and aft components will still be equal and opposite, there is no change in fore-and-aft angular momentum (it will still be zero) and therefore no unwanted roll. In fact the situation will be improved, because the rotor axes are no longer quite anti-parallel and some roll control will be restored.
Anti-parallel alignment is therefore not quite as serious as saturation but must still be avoided. It is theoretically possible with any number of CMGs; as long as some rotors are aligned parallel along a particular axis, and all the others point in exactly the opposite direction, there is no saturation but still no roll control about that axis. With three or more CMGs the situation can be immediately rectified simply by redistributing the existing total angular momentum among the rotors (even if that total is zero). In practice the CMG control program will continuously redistribute the total angular momentum to avoid the situation arising in the first place.
If there are only two CMGs in the cluster, as in our first example, then anti-parallel alignment will inevitably occur if the total stored angular momentum reaches zero. The remedy is to keep it away from zero, possibly by using RCS firings. This is not very satisfactory, and in practice all spacecraft using CMGs are fitted with at least three. However it sometimes happens that after malfunctions a cluster is left with only two working CMGs, and the control program must be able to deal with this situation.
=== Hitting the gimbal stops ===
Older CMG models like the ones launched with Skylab in 1973 had limited gimbal travel between fixed mechanical stops. On the Skylab CMGs the limits were plus or minus 80 degrees from zero for the inner gimbals, and from plus 220 degrees to minus 130 degrees for the outer ones (so zero was offset by 45 degrees from the centre of travel). Visualising the inner angle as 'latitude' and the outer as 'longitude', it can be seen that for an individual CMG there were 'blind spots' with radius 10 degrees of latitude at the 'North and South poles', and an additional 'blind strip' of width 10 degrees of 'longitude' running from pole to pole, centred on the line of 'longitude' at plus 135 degrees. These 'blind areas' represented directions in which the rotor's spin axis could never be pointed.: 11
Skylab carried three CMGs, mounted with their casings (and therefore their rotor axes when the gimbals were set to zero) facing in three mutually perpendicular directions. This ensured that the six 'polar blind spots' were spaced 90 degrees apart from each other. The 45 degree zero offset then ensured that the three 'blind strips' of the outer gimbals would pass halfway between neighbouring 'polar blind spots' and at a maximum distance from each other. The whole arrangement ensured that the 'blind areas' of the three CMGs never overlapped, and thus that at least two of the three rotor spins could be pointed in any given direction.
The CMG control program was responsible for making sure that the gimbals never hit the stops, by redistributing angular momentum between the three rotors to bring large gimbal angles closer to zero. Since the total angular momentum to be stored had only three degrees of freedom, while the control program could change six independent variables (the three pairs of gimbal angles), the program had sufficient freedom of action to do this while still obeying other constraints such as avoiding anti-parallel alignments.
One advantage of limited gimbal movement such as Skylab's is that singularities are less of a problem. If Skylab's inner gimbals had been able to reach 90 degrees or more away from zero, then the 'North and South poles' could have become singularities; the gimbal stops prevented this.
More modern CMGs such as the four units installed on the ISS in 2000 have unlimited gimbal travel and therefore no 'blind areas'. Thus they do not have to be mounted facing along mutually perpendicular directions; the four units on the ISS all face the same way. The control program need not concern itself with gimbal stops, but on the other hand it must pay more attention to avoiding singularities.
== Applications ==
=== Skylab ===
Skylab, launched in May 1973, was the first manned spacecraft to be fitted with large CMGs for attitude control. Three dual-gimbal CMGs were mounted on the equipment rack of the Apollo Telescope Mount at the hub of the windmill-shaped array of solar panels on the side of the station. They were arranged so that the casings (and therefore the rotors when all gimbals were at their zero positions) pointed in three mutually perpendicular directions. Since the units were dual-gimballed, each one could produce a torque about any axis at right angles to its rotor axis, thus providing some redundancy; if any one of the three failed, the combination of the remaining two could in general still produce a torque around any desired axis.
=== Gyrodynes on Salyut and Mir ===
CMGs were used for attitude control on the Salyut and Mir space stations, where they were called gyrodynes (from the Russian гиродин girodin; this word is also sometimes used – especially by Russian crew – for the CMGs on the ISS). They were first tested on Salyut 3 in 1974, and introduced as standard components from Salyut 6 onwards.
The completed Mir station had 18 gyrodynes altogether, starting with six in the pressurised interior of the Kvant-1 module. These were later supplemented by another six on the unpressurised outside of Kvant-2. According to NPO Energia, putting them outside turned out to be a mistake, as it made gyrodyne replacement much more difficult. A third set of gyrodynes was installed in Kristall during Mir-18.
=== International Space Station ===
The ISS employs a total of four CMGs, mounted on the Z1 truss, as primary actuating devices during normal flight mode operation. The objective of the CMG flight control system is to hold the space station at a fixed attitude relative to the surface of the Earth. In addition, it seeks a torque equilibrium attitude (TEA), in which the combined torque contribution of gravity gradient, atmospheric drag, solar pressure, and geomagnetic interactions is minimized. In the presence of these continual environmental disturbances, the CMGs absorb angular momentum in an attempt to maintain the space station at a desired attitude. The CMGs will eventually saturate (accumulating angular momentum to the point where they can accumulate no more), resulting in loss of effectiveness of the CMG array for control. Some kind of angular momentum management scheme (MMS) is necessary to allow the CMGs to hold a desired attitude and at the same time prevent CMG saturation. Since in the absence of an external torque the CMGs can only exchange angular momentum between themselves without changing the total, external control torques must be used to desaturate the CMGs, that is, bring the angular momentum back to its nominal value. Some methods for unloading CMG angular momentum include the use of magnetic torques, reaction thrusters, and gravity gradient torque. For the space station, the gravity gradient torque approach is preferred because it requires no consumables or external hardware and because the gravity-gradient torque on the ISS can be very high. CMG saturation has been observed during spacewalks, requiring propellant to be used to maintain the desired attitude. In 2006 and 2007, CMG-based experiments demonstrated the viability of zero-propellant maneuvers to adjust the attitude of the ISS by 90° and 180°. By 2016, four Soyuz undockings had been done using CMG-based attitude adjustment, resulting in considerable propellant savings.
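The momentum-management idea described above can be sketched in a few lines of code. This is not the ISS algorithm; the saturation limit, unloading gain, and disturbance torque are invented numbers, and the external "desaturation" torque simply opposes the stored momentum once a threshold is crossed.

```python
# Minimal sketch (not the ISS algorithm) of the momentum-management idea above.
# Environmental torque is integrated into the CMG cluster's stored momentum;
# when the magnitude nears a saturation limit, an external (e.g. gravity-
# gradient) torque is requested to unload it.  All values are made up.
import numpy as np

H_SAT = 14000.0        # illustrative saturation limit, N*m*s
UNLOAD_FRACTION = 0.8  # start unloading at 80% of saturation

def step(h_stored, env_torque, dt, unload_gain=0.01):
    """Advance stored momentum one step; return (new momentum, desat torque)."""
    h = h_stored + env_torque * dt          # CMGs absorb the disturbance
    desat = np.zeros(3)
    if np.linalg.norm(h) > UNLOAD_FRACTION * H_SAT:
        # Request an external torque opposing the stored momentum.
        desat = -unload_gain * h
        h = h + desat * dt
    return h, desat

h = np.zeros(3)
env = np.array([0.5, 0.1, -0.2])            # steady disturbance torque, N*m
for _ in range(4000):                       # 4000 steps of 10 s each
    h, desat = step(h, env, dt=10.0)
print("stored momentum magnitude:", round(np.linalg.norm(h), 1))
```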
=== Tiangong station ===
Tiangong has a total of six CMGs, mounted on the Tianhe core module, with their round housings visible on the side of the module.
=== Proposed ===
As of 2016, the Russian Orbital Segment of the ISS carries no CMGs of its own. However, the proposed but as yet unbuilt Science and Power Module (NEM-1) would be fitted with several externally-mounted CMGs. NEM-1 would be installed on one of the lateral ports of the small Uzlovoy Module or Nodal Module scheduled for completion and launch at some time within the 2016–25 Russian programme. Its twin NEM-2 (if completed) would later be installed symmetrically on the other lateral UM port.
On 24 February 2015, the Scientific and Technical Council of Roscosmos announced that after decommissioning of the ISS (then planned for 2024) the newer Russian modules would be detached and form the nucleus of a small all-Russian space station to be called OPSEK. If this plan is carried out, the CMGs on NEM-1 (and NEM-2, if built) would provide attitude control for the new Russian station.
The proposed space habitat Island 3 was designed to use its two contrarotating habitats as opposed CMGs with net zero momentum, eliminating the need for attitude control thrusters.
== See also ==
Anti-rolling gyro, a system that stabilizes roll motion in ocean-going ships
Reaction wheel – attitude control device used in spacecraft, with no gimbals
== Notes ==
== References ==
== External links ==
CMG applications and fundamental research are undertaken at several institutions.
Georgia Tech's Panagiotis Tsiotras has studied variable-speed CMGs in connection with flywheel energy storage and has built a spacecraft simulator based on them: faculty page
Virginia Tech's Christopher Hall has built a spacecraft simulator as well: faculty page
Texas A&M's John Junkins and Srinivas Vadali have written papers on VSCMGs for use in singularity avoidance: faculty page
Cornell's Mason Peck is researching CMG-driven nanosats with the Violet spacecraft: Violet project page
The Space Systems Group at the University of Florida under Prof. Norman Fitz-Coy has been researching the development of CMGs for pico- and nano-satellites and various steering logics for singularity avoidance: SSG
Professor Brij Agrawal at the Naval Postgraduate School has built two spacecraft simulators, at least one of which uses CMGs: [1]
Honeywell Defense and Space Systems performs research in control moment gyros. They have also developed a spacecraft simulator driven by CMGs: CMG Testbed Video
Naval Postgraduate School's Marcello Romano has studied variable-speed CMGs and has developed a mini single-gimbal control moment gyro for laboratory experiments on spacecraft proximity maneuvers: faculty page | Wikipedia/Control_moment_gyroscope
In digital electronics, analogue electronics and entertainment, the user interface may include media controls, transport controls or player controls, to enact, change, or adjust the process of video playback, audio playback, and the like. These controls are commonly depicted as widely known symbols found in a multitude of products, exemplifying what is known as dominant design.
== Symbols ==
Media control symbols are commonly found on both software and physical media players, remote controls, and multimedia keyboards. Their application is described in ISO/IEC 18035.
The main symbols date back to the 1960s, with the Pause symbol having reportedly been invented at Ampex during that decade for use on reel-to-reel audio recorder controls, due to the difficulty of translating the word "pause" into some languages used in foreign markets. The Pause symbol was designed as a combination of the existing square Stop symbol and the caesura, and was intended to evoke the concept of an interruption or "stutter stop". The right-pointing triangle was adopted to indicate the direction of tape movement during playback. This design choice was straightforward: the arrow pointed in the direction the tape advanced. Over time, this symbol became standardized across various media devices, from cassette players to CD players, and eventually digital interfaces.
== In popular culture ==
=== Consumer products ===
The Play symbol is arguably the most widely used of the media control symbols. In many ways, this symbol has become synonymous with music culture and more broadly the digital download era. As such, there are now a multitude of items such as T-shirts, posters, and tattoos that feature this symbol. Similar cultural references can be observed with the Power symbol which is especially popular among video gamers and technology enthusiasts.
=== Branding ===
Media symbols can be found on an array of advertisements: from live music venues to streaming services.
In 2012, Google rebranded its digital download store to Google Play, using the Play symbol in its logo. The Play symbol has also served as a logo for YouTube since 2017. Television station owners Morgan Murphy Media and TEGNA have begun to incorporate the Play symbol into the logos of their stations to further connect their websites to their over-the-air television presences.
== Use on appliances and other mechanical devices ==
In recent years, there has been a proliferation of electronics that use media control symbols to represent the Run, Stop, and Pause functions. Likewise, user interface programming pertaining to these functions has also been influenced by that of media players.
For example, some washers and dryers with an illuminated Play/pause button are programmed such that the button stays lit while the appliance is running. A line of Philips pasta makers has a Play/pause button for controlling the pasta-making process.
== See also ==
List of international common standards
Power symbol
Miscellaneous Technical
== References == | Wikipedia/Media_controls |
Haptic technology (also kinaesthetic communication or 3D touch) is technology that can create an experience of touch by applying forces, vibrations, or motions to the user. These technologies can be used to create virtual objects in a computer simulation, to control virtual objects, and to enhance remote control of machines and devices (telerobotics). Haptic devices may incorporate tactile sensors that measure forces exerted by the user on the interface. The word haptic, from the Ancient Greek: ἁπτικός (haptikos), means "tactile, pertaining to the sense of touch". Simple haptic devices are common in the form of game controllers, joysticks, and steering wheels.
Haptic technology facilitates investigation of how the human sense of touch works by allowing the creation of controlled haptic virtual objects. Vibrations and other tactile cues have also become an integral part of mobile user experience and interface design. Most researchers distinguish three sensory systems related to sense of touch in humans: cutaneous, kinaesthetic and haptic. All perceptions mediated by cutaneous and kinaesthetic sensibility are referred to as tactual perception. The sense of touch may be classified as passive and active, and the term "haptic" is often associated with active touch to communicate or recognize objects.
== History ==
One of the earliest applications of haptic technology was in large aircraft that use servomechanism systems to operate control surfaces. In lighter aircraft without servo systems, as the aircraft approached a stall, the aerodynamic buffeting (vibrations) was felt in the pilot's controls. This was a useful warning of a dangerous flight condition. Servo systems tend to be "one-way", meaning external forces applied aerodynamically to the control surfaces are not perceived at the controls, resulting in the lack of this important sensory cue. To address this, the missing normal forces are simulated with springs and weights. The angle of attack is measured, and as the critical stall point approaches a stick shaker is engaged which simulates the response of a simpler control system. Alternatively, the servo force may be measured and the signal directed to a servo system on the control, also known as force feedback. Force feedback has been implemented experimentally in some excavators and is useful when excavating mixed material such as large rocks embedded in silt or clay. It allows the operator to "feel" and work around unseen obstacles.
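A minimal sketch of the stick-shaker decision described above is shown below; the critical angle of attack and the warning margin are illustrative values, not figures for any particular aircraft.

```python
# Minimal sketch (illustrative, not avionics code) of the stick-shaker logic
# described above: engage the shaker when the measured angle of attack
# approaches the critical (stall) value.  The threshold and margin are assumptions.
CRITICAL_AOA_DEG = 15.0   # illustrative stall angle of attack
SHAKER_MARGIN_DEG = 2.0   # engage this many degrees before the stall

def stick_shaker_on(aoa_deg):
    """Return True if the stick shaker should run for this angle of attack."""
    return aoa_deg >= CRITICAL_AOA_DEG - SHAKER_MARGIN_DEG

for aoa in (5.0, 12.9, 13.5, 16.0):
    print(aoa, "->", "SHAKE" if stick_shaker_on(aoa) else "ok")
```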
In the 1960s, Paul Bach-y-Rita developed a vision substitution system using a 20x20 array of metal rods that could be raised and lowered, producing tactile "dots" analogous to the pixels of a screen. People sitting in a chair equipped with this device could identify pictures from the pattern of dots poked into their backs.
The first US patent for a tactile telephone was granted to Thomas D. Shannon in 1973. An early tactile man-machine communication system was constructed by A. Michael Noll at Bell Telephone Laboratories, Inc. in the early 1970s and a patent was issued for his invention in 1975.
In 1994, the Aura Interactor vest was developed. The vest is a wearable force-feedback device that monitors an audio signal and uses electromagnetic actuator technology to convert bass sound waves into vibrations that can represent such actions as a punch or kick. The vest plugs into the audio output of a stereo, TV, or VCR and the audio signal is reproduced through a speaker embedded in the vest.
In 1995, Thomas Massie developed the PHANToM (Personal HAptic iNTerface Mechanism) system. It used thimble-like receptacles at the end of computerized arms into which a person's fingers could be inserted, allowing them to "feel" an object on a computer screen.
In 1995, Norwegian Geir Jensen described a wristwatch haptic device with a skin tap mechanism, termed Tap-in. The wristwatch would connect to a mobile phone via Bluetooth, and tapping-frequency patterns would enable the wearer to respond to callers with selected short messages.
In 2015, the Apple Watch was launched. It uses skin tap sensing to deliver notifications and alerts from the mobile phone of the watch wearer.
== Types of mechanical touch sensing ==
Human sensing of mechanical loading in the skin is managed by mechanoreceptors. There are a number of types of mechanoreceptors, but those present in the finger pad are typically placed into two categories: fast-acting (FA) and slow-acting (SA). SA mechanoreceptors are sensitive to relatively large stresses at low frequencies, while FA mechanoreceptors are sensitive to smaller stresses at higher frequencies. The result of this is that, generally, SA sensors can detect textures with amplitudes greater than 200 micrometers, and FA sensors can detect textures with amplitudes from 200 micrometers down to about 1 micrometer, though some research suggests that FA can only detect textures smaller than the fingerprint wavelength. FA mechanoreceptors achieve this high resolution of sensing by detecting the vibrations produced by friction as the fingerprint texture moves over fine surface texture.
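As a rough illustration only, the amplitude figures quoted above can be turned into a toy lookup; the 1-micrometer and 200-micrometer boundaries come from this paragraph and should not be read as precise physiological constants.

```python
# Toy sketch based only on the rough amplitude figures quoted above: which
# mechanoreceptor class is likely to pick up a texture of a given amplitude.
# The 1 um and 200 um boundaries come from the paragraph; this is not a
# physiological model.
def likely_receptor(amplitude_um):
    if amplitude_um > 200.0:
        return "SA (slow-acting)"
    if amplitude_um >= 1.0:
        return "FA (fast-acting)"
    return "below the ~1 um range discussed here"

for a in (500.0, 50.0, 0.5):
    print(a, "um ->", likely_receptor(a))
```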
== Implementation ==
Haptic feedback (often shortened to just haptics) is the use of controlled vibrations at set frequencies and intervals to provide a sensation representative of an in-game action; this includes 'bumps', 'knocks', and 'taps' of one's hand or fingers.
The majority of electronics offering haptic feedback use vibrations, and most use a type of eccentric rotating mass (ERM) actuator, consisting of an unbalanced weight attached to a motor shaft. As the shaft rotates, the spinning of this irregular mass causes the actuator and the attached device to shake. Piezoelectric actuators are also employed to produce vibrations, and offer even more precise motion than linear resonant actuators (LRAs), with less noise and in a smaller platform, but require higher voltages than do ERMs and LRAs.
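The shaking produced by an ERM follows directly from the rotating unbalance: a mass m at radius r spinning at angular speed ω produces a rotating force of magnitude F = m·r·ω². A small sketch with illustrative, non-product-specific numbers:

```python
# Minimal sketch of the force produced by the eccentric rotating mass (ERM)
# described above: a small unbalanced mass m at radius r spinning at angular
# speed omega gives a rotating force of magnitude F = m * r * omega^2.
# The example numbers are illustrative, not a specific product's specs.
import math

def erm_force(mass_kg, radius_m, rpm):
    omega = 2.0 * math.pi * rpm / 60.0   # rad/s
    return mass_kg * radius_m * omega ** 2

# e.g. a 0.5 g mass at 2 mm radius spinning at 10,000 rpm
print(round(erm_force(0.0005, 0.002, 10000), 2), "N")
```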
=== Controller rumble ===
One of the most common forms of haptic feedback in video games is controller rumble. In 1976, Sega's motorbike game Moto-Cross, also known as Fonz, was the first game to use haptic feedback, causing the handlebars to vibrate during a collision with another vehicle.
=== Force feedback ===
Force feedback devices use motors to manipulate the movement of an item held by the user. A common use is in automobile driving video games and simulators, which turn the steering wheel to simulate forces experienced when cornering a real vehicle. Direct-drive wheels, introduced in 2013, are based on servomotors and are the highest-end type of force feedback racing wheel in terms of strength and fidelity.
In 2007, Novint released the Falcon, the first consumer 3D touch device with high resolution three-dimensional force feedback. This allowed the haptic simulation of objects, textures, recoil, momentum, and the physical presence of objects in games.
=== Air vortex rings ===
Air vortex rings are donut-shaped air pockets made up of concentrated gusts of air. Focused air vortices can have the force to blow out a candle or disturb papers from a few yards away. Both Microsoft Research (AirWave) and Disney Research (AIREAL) have used air vortices to deliver non-contact haptic feedback.
=== Ultrasound ===
Focused ultrasound beams can be used to create a localized sense of pressure on a finger without touching any physical object. The focal point that creates the sensation of pressure is generated by individually controlling the phase and intensity of each transducer in an array of ultrasound transducers. These beams can also be used to deliver sensations of vibration, and to give users the ability to feel virtual 3D objects. The first commercially available ultrasound device was the Stratos Explore by Ultrahaptics, which consisted of a 256-transducer array board and a Leap Motion controller for hand tracking.
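The focusing idea described above can be sketched as a phase computation: each transducer is delayed so that all wavefronts arrive at the focal point in phase. The geometry, array size, and drive frequency below are assumptions for illustration, not any vendor's API.

```python
# Minimal sketch (assumed geometry, not a vendor API) of the focusing idea
# described above: choose each transducer's phase so that all waves arrive at
# the focal point in phase.  The phase is set from the path-length difference.
import numpy as np

SPEED_OF_SOUND = 343.0      # m/s in air
FREQ = 40_000.0             # 40 kHz, typical for airborne ultrasound haptics
WAVELENGTH = SPEED_OF_SOUND / FREQ

def focus_phases(transducer_xy, focal_point):
    """Phase (radians) to drive each transducer in a flat array at z = 0."""
    tx = np.asarray(transducer_xy, dtype=float)          # (N, 2) positions
    fp = np.asarray(focal_point, dtype=float)            # (x, y, z)
    # Distance from each transducer (z = 0) to the focal point.
    d = np.sqrt((tx[:, 0] - fp[0]) ** 2 + (tx[:, 1] - fp[1]) ** 2 + fp[2] ** 2)
    # Delay the closer transducers so every wavefront arrives together.
    return (2.0 * np.pi * (d.max() - d) / WAVELENGTH) % (2.0 * np.pi)

# A small 4 x 4 array with 10 mm pitch, focusing 15 cm above its centre.
xs, ys = np.meshgrid(np.arange(4) * 0.01, np.arange(4) * 0.01)
array_xy = np.column_stack([xs.ravel() - 0.015, ys.ravel() - 0.015])
print(np.round(focus_phases(array_xy, (0.0, 0.0, 0.15)), 2))
```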
Another form of tactile feedback results from active touch, when a human scans (runs their finger over) a surface to gain information about its texture. A significant amount of information about a surface's texture on the micrometer scale can be gathered through this action, as vibrations resulting from friction and texture activate mechanoreceptors in the human skin. Toward this goal, plates can be made to vibrate at an ultrasonic frequency, which reduces the friction between the plate and the skin.
=== Electrical stimulation ===
Electrical muscle stimulation (EMS) and transcutaneous electrical nerve stimulation (TENS) can be used to create haptic sensations in the skin or muscles. Notable examples include haptic suits such as the Teslasuit and the Owo haptic vest, and wearable armbands such as the Valkyrie EIR. In addition to improving immersion, e.g. by simulating bullet hits, these technologies aim to create sensations similar to weight and resistance, and can promote muscle training.
== Applications ==
=== Control ===
==== Telepresence ====
Haptic feedback is essential to perform complex tasks via telepresence. The Shadow Hand, an advanced robotic hand, has a total of 129 touch sensors embedded in every joint and finger pad that relay information to the operator. This allows tasks such as typing to be performed from a distance. An early prototype can be seen in NASA's collection of humanoid robots, or robonauts.
==== Teleoperation ====
Teleoperators are remote controlled robotic tools. When the operator is given feedback on the forces involved, this is called haptic teleoperation. The first electrically actuated teleoperators were built in the 1950s at the Argonne National Laboratory by Raymond Goertz to remotely handle radioactive substances. Since then, the use of force feedback has become more widespread in other kinds of teleoperators, such as remote-controlled underwater exploration devices.
Devices such as medical simulators and flight simulators ideally provide the force feedback that would be felt in real life. Simulated forces are generated using haptic operator controls, allowing data representing touch sensations to be saved or played back.
==== Medicine and dentistry ====
Haptic interfaces for medical simulation are being developed for training in minimally invasive procedures such as laparoscopy and interventional radiology, and for training dental students. A Virtual Haptic Back (VHB) was successfully integrated in the curriculum at the Ohio University College of Osteopathic Medicine. Haptic technology has enabled the development of telepresence surgery, allowing expert surgeons to operate on patients from a distance. As the surgeon makes an incision, they feel tactile and resistance feedback as if working directly on the patient.
==== Automotive ====
With the introduction of large touchscreen control panels in vehicle dashboards, haptic feedback technology is used to provide confirmation of touch commands without needing the driver to take their eyes off the road. Additional contact surfaces, for example the steering wheel or seat, can also provide haptic information to the driver, for example, a warning vibration pattern when close to other vehicles.
==== Aviation ====
Force-feedback can be used to increase adherence to a safe flight envelope and thus reduce the risk of pilots entering dangerous states of flight outside the operational borders, while maintaining the pilots' final authority and increasing their situation awareness.
=== Electronic devices ===
==== Video games ====
Haptic feedback is commonly used in arcade games, especially racing video games; as noted above, Sega's 1976 Moto-Cross was the first game to use it. Tatsumi's TX-1 introduced force feedback to car driving games in 1983. The game Earthshaker! added haptic feedback to a pinball machine in 1989.
Simple haptic devices are common in the form of game controllers, joysticks, and steering wheels. Early implementations were provided through optional components, such as the Nintendo 64 controller's Rumble Pak in 1997. In the same year, the Microsoft SideWinder Force Feedback Pro with built-in feedback was released by Immersion Corporation. Many console controllers and joysticks feature built-in feedback devices, which are motors with unbalanced weights that spin, causing the controller to vibrate, including Sony's DualShock technology and Microsoft's Impulse Trigger technology. Some automobile steering wheel controllers, for example, are programmed to provide a "feel" of the road. As the user makes a turn or accelerates, the steering wheel responds by resisting turns or slipping out of control.
Notable introductions include:
2013: The first direct-drive wheel for sim racing is introduced.
2014: A new type of haptic cushion that responds to multimedia inputs by LG Electronics.
2015: Steam Machines (console-like PCs) by Valve include a new Steam Controller that uses weighted electromagnets capable of delivering a wide range of haptic feedback via the unit's trackpads. These controllers' feedback systems are user-configurable, delivering precise feedback with haptic force actuators on both sides of the controller.
2017: The Nintendo Switch's Joy-Con introduced the HD Rumble feature, developed with Immersion Corporation, using actuators from Alps Electric.
2018: The Razer Nari Ultimate, gaming headphones using a pair of wide frequency haptic drivers, developed by Lofelt.
2020: The Sony PlayStation 5 DualSense controller supports vibrotactile haptics provided by voice coil actuators integrated in the palm grips, and force feedback for the Adaptive Triggers provided by two DC rotary motors. The actuators in the hand grip are able to give varied and intuitive feedback about in-game actions; for example, in a sandstorm, the player can feel the wind and sand, and the motors in the Adaptive Triggers support experiences such as virtually drawing an arrow from a bow.
2021: SuperTuxKart 1.3 was released, adding support for force feedback. Force feedback is extremely uncommon for free software games.
==== Mobile devices ====
Tactile haptic feedback is common in cellular devices. In most cases, this takes the form of vibration response to touch. Alpine Electronics uses a haptic feedback technology named PulseTouch on many of their touch-screen car navigation and stereo units. The Nexus One features haptic feedback, according to its specifications. Samsung first launched a phone with haptics in 2007.
Surface haptics refers to the production of variable forces on a user's finger as it interacts with a surface such as a touchscreen.
Notable introductions include:
Tanvas uses an electrostatic technology to control the in-plane forces experienced by a fingertip, as a programmable function of the finger's motion. The TPaD Tablet Project uses an ultrasonic technology to modulate the apparent slipperiness of a glass touchscreen.
In 2013, Apple Inc. was awarded the patent for a haptic feedback system that is suitable for multitouch surfaces. Apple's U.S. Patent for a "Method and apparatus for localization of haptic feedback" describes a system where at least two actuators are positioned beneath a multitouch input device, providing vibratory feedback when a user makes contact with the unit. Specifically, the patent provides for one actuator to induce a feedback vibration, while at least one other actuator uses its vibrations to localize the haptic experience by preventing the first set of vibrations from propagating to other areas of the device. The patent gives the example of a "virtual keyboard"; however, it is also noted that the invention can be applied to any multitouch interface. Apple's iPhones (and MacBooks) featuring the "Taptic Engine" accomplish their vibrations with a linear resonant actuator (LRA), which moves a mass in a reciprocal manner by means of a magnetic voice coil, similar to how AC electrical signals are translated into motion in the cone of a loudspeaker. LRAs are capable of quicker response times than ERMs, and thus can transmit more accurate haptic imagery.
==== Virtual reality ====
Haptics are gaining widespread acceptance as a key part of virtual reality systems, adding the sense of touch to previously visual-only interfaces. Systems are being developed to use haptic interfaces for 3D modeling and design, including systems that allow holograms to be both seen and felt. Several companies are making full-body or torso haptic vests or haptic suits for use in immersive virtual reality to allow users to feel explosions and bullet impacts.
==== Personal computers ====
In 2015, Apple Inc.'s MacBook and MacBook Pro started incorporating a "Tactile Touchpad" design with button functionality and haptic feedback incorporated into the tracking surface. The tactile touchpad allows for a feeling of "give" when clicking despite the fact that the touchpad no longer moves.
=== Sensory substitution ===
==== Sound substitution ====
In December 2015, David Eagleman demonstrated a wearable vest that "translates" speech and other audio signals into series of vibrations. This allowed hearing-impaired people to "feel" sounds on their body; it has since been commercialized as a wristband.
==== Tactile electronic displays ====
A tactile electronic display is a display device that delivers text and graphical information using the sense of touch. Devices of this kind have been developed to assist blind or deaf users by providing an alternative to visual or auditory sensation.
=== Teledildonics ===
Haptic feedback is used within teledildonics, or "sex-technology", in order to remotely connect sex toys and allow users to engage in virtual sex or allow a remote server to control their sex toy. The term was first coined by Ted Nelson in 1975, when discussing the future of love, intimacy and technology. In recent years, teledildonics and sex-technology have expanded to include toys with a two-way connection that allow virtual sex through the communication of vibrations, pressures and sensations. Many "smart" vibrators allow for a one-way connection either between the user, or a remote partner, to allow control of the toy.
=== Neurorehabilitation and balance ===
For individuals with upper limb motor dysfunction, robotic devices utilizing haptic feedback could be used for neurorehabilitation. Robotic devices, such as end-effectors, and both grounded and ungrounded exoskeletons have been designed to assist in restoring control over several muscle groups. Haptic feedback applied by these robotic devices helps in the recovery of sensory function due to its more immersive nature.
Haptic technology can also provide sensory feedback to ameliorate age-related impairments in balance control and prevent falls in the elderly and balance-impaired. The Haptic Cow and Haptic Horse simulators are used in veterinary training.
=== Puzzles ===
Haptic puzzles have been devised in order to investigate goal-oriented haptic exploration, search, learning and memory in complex 3D environments. The goal is to both enable multi-fingered robots with a sense of touch, and gain more insights into human meta-learning.
=== Art ===
Haptic technologies have been explored in the virtual arts, such as sound synthesis, graphic design, and animation. Haptic technology was used to enhance existing art pieces in the Tate Sensorium exhibit in 2015. In music creation, Swedish synthesizer manufacturer Teenage Engineering introduced a haptic subwoofer module for their OP-Z synthesizer, allowing musicians to feel the bass frequencies directly on their instrument.
=== Space ===
The use of haptic technologies may be useful in space exploration, including visits to the planet Mars, according to news reports.
== See also ==
Haptics (disambiguation)
Haptic perception
Linkage (mechanical)
Organic user interface
Sonic interaction design
Stylus (computing)
Tactile imaging
Wired glove
== References ==
== Further reading ==
== External links ==
Haptic technology at HowStuffWorks
What Vibration Frequency Is Best For Haptic Feedback? Archived 2021-09-26 at the Wayback Machine | Wikipedia/Force_feedback |
A gamepad is a type of video game controller held in two hands, where the fingers (especially thumbs) are used to provide input. They are typically the main input device for video game consoles.
== Features ==
Some common additions to the standard pad include shoulder buttons (also called "bumpers") and triggers placed along the edges of the pad (shoulder buttons are usually digital, i.e. merely on/off; while triggers are usually analog); centrally placed start, select, and home buttons, and an internal motor to provide force feedback. Analog triggers, like that of the GameCube controller, are pressure-sensitive and games can read in the amount of pressure applied to one to control the intensity of a certain action, such as how forceful water is to be sprayed in Super Mario Sunshine.
There are programmable joysticks that can emulate keyboard input. Generally they have been made to circumvent the lack of joystick support in some computer games, e.g. the Belkin Nostromo SpeedPad n52. There are several programs that emulate keyboard and mouse input with a gamepad such as the free and open-source cross-platform software antimicro, Enjoy2, or proprietary commercial solutions such as JoyToKey, Xpadder, and Pinnacle Game Profiler.
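The mapping idea behind such tools can be illustrated with the pygame joystick API. The sketch below only prints the mapped key names rather than injecting real OS key events (which antimicro, JoyToKey and similar tools do), and the button-to-key table is an arbitrary example.

```python
# Minimal sketch (using pygame's joystick API) of the mapping idea behind tools
# like antimicro or JoyToKey: gamepad buttons are translated to keyboard keys.
# For simplicity it only prints the mapped key names instead of injecting real
# key events; the button-number-to-key table is an arbitrary example.
import time
import pygame

BUTTON_TO_KEY = {0: "space", 1: "left ctrl", 2: "e", 3: "tab"}  # example mapping

pygame.init()
pygame.joystick.init()
if pygame.joystick.get_count() == 0:
    raise SystemExit("no gamepad detected")
pad = pygame.joystick.Joystick(0)
pad.init()

while True:                                   # stop with Ctrl+C
    pygame.event.pump()                       # let pygame refresh joystick state
    for button, key in BUTTON_TO_KEY.items():
        if pad.get_button(button):
            print("button", button, "-> key:", key)
    time.sleep(0.05)
```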
One common issue with modern game controllers is stick drift, where the analog stick registers movement even when not being touched. This problem can affect gameplay accuracy and responsiveness. To diagnose stick drift, various online stick drift tester tools are available, allowing users to visualize stick movement and detect irregular inputs. These tools, often web-based, help determine whether recalibration, cleaning, or hardware repair is necessary. Some platforms, like Steam, also include built-in calibration settings to mitigate minor drift issues.
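A simple drift check in the same spirit as those testers can be written with the pygame joystick API; the deadzone threshold and sample count below are arbitrary choices, and trigger axes often rest away from zero by design.

```python
# Minimal sketch of a stick-drift check in the spirit of the testers mentioned
# above: with the sticks untouched, sample each analog axis and flag any axis
# whose average reading stays outside a small deadzone.  The 0.08 deadzone and
# sample count are arbitrary choices, not a standard; note that trigger axes
# may legitimately rest at -1.0 on some controllers.
import time
import pygame

DEADZONE = 0.08
SAMPLES = 200

pygame.init()
pygame.joystick.init()
if pygame.joystick.get_count() == 0:
    raise SystemExit("no gamepad detected")
pad = pygame.joystick.Joystick(0)
pad.init()

totals = [0.0] * pad.get_numaxes()
for _ in range(SAMPLES):
    pygame.event.pump()
    for axis in range(pad.get_numaxes()):
        totals[axis] += pad.get_axis(axis)
    time.sleep(0.01)

for axis, total in enumerate(totals):
    mean = total / SAMPLES
    status = "possible drift" if abs(mean) > DEADZONE else "ok"
    print(f"axis {axis}: mean {mean:+.3f} -> {status}")
```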
== History ==
The 1962 video game Spacewar! initially used toggle switches built into the computer readout display to control the game. These switches were awkward and uncomfortable to use, so Alan Kotok and Bob Saunders built and wired in a detached control device for the game. This device has been called the earliest gamepad.
=== Entry into the mass market ===
It would take many years for the gamepad to rise to prominence, as during the 1970s and the early 1980s joysticks and paddles were the dominant video game controllers, though several Atari joystick port-compatible pushbutton controllers were also available. The third generation of video games saw many major changes, and the rise of gamepads to dominance in the video game market.
Nintendo developed a gamepad device for directional inputs, a D-pad with a "cross" design for their Donkey Kong handheld game. This design would be incorporated into their "Game & Watch" series and console controllers such as the standard NES controller. Though developed because they were more compact than joysticks, and thus more appropriate for handheld games, D-pads were soon found by developers to be more comfortable to use than joysticks. The D-pad soon became a ubiquitous element on console gamepads, though to avoid infringing on Nintendo's patent, most controller manufacturers use a cross in a circle shape for the D-pad instead of a simple cross.
=== Continued refinements ===
The original Sega Genesis/Mega Drive control pad has three face buttons, but a six-button pad was later released. The SNES controller also featured six action buttons, with four face buttons arranged in a diamond formation, and two shoulder buttons positioned to be used with the index fingers, a design which has been imitated by most controllers since. The inclusion of six action buttons was influenced by the popularity of the Street Fighter arcade series, which utilized six buttons.
For most of the 1980s and early 1990s, analog joysticks were the predominant form of gaming controller for PCs, while console gaming controllers were mostly digital. This changed in 1996 when all three major consoles introduced an optional analog control. The Sony Dual Analog Controller had twin convex analog thumbsticks, the Sega Saturn 3D Control Pad had a single analog thumbstick, and the Nintendo 64 controller combined digital and analog controllers in a single body, starting a trend to have both an analog stick and a d-pad.
Despite these changes, gamepads essentially continued to follow the template set by the NES controller (a horizontally-oriented controller with two or more action buttons positioned for use with the right thumb, and a directional pad positioned for use with the left thumb).
=== Three-dimensional control ===
Though three-dimensional games rose to prominence in the mid-1990s, controllers continued to mostly operate on two-dimensional principles. Players would have to hold down a button to change the axes along which the controls operate rather than being able to control movement along all three axes at once. One of the first gaming consoles, the Fairchild Channel F, did have a controller which provided six degrees of freedom, but the processing limitations of the console itself prevented there from being any software to take advantage of this ability. In 1994, Logitech introduced the CyberMan, the first practical six-degrees-of-freedom controller; however, it sold poorly due to its high price, poor build quality, and limited software support. Industry insiders blame the CyberMan's high profile and costly failure for the gaming industry's lack of interest in developing 3D control over the next several years.
The Wii Remote is shaped like a television remote control and contains tilt sensors and three-dimensional pointing which the system uses to understand all directions of movement and rotation (back and forth around the pitch, roll, and yaw axes). The controller is also multifunctional and has an expansion port which can be used for a variety of peripherals. An analog stick peripheral, called "Nunchuk," also contains an accelerometer but unlike the Wii Remote, it lacks any pointer functionality.
== Usage across platforms ==
Gamepads are also available for personal computers. Examples of PC gamepads include the Asus Eee Stick, the Gravis PC GamePad, the Microsoft SideWinder and Saitek Cyborg ranges, and the Steam Controller. Third-party USB adapters and software can be employed to utilize console gamepads on PCs; the DualShock 3, DualShock 4, DualSense, Wii Remote and Joy-Con can be used with third-party software on systems with Bluetooth functionality, with USB additionally usable on the DualShock 3, DualShock 4 and DualSense. Xbox 360, Xbox One and Xbox Series X/S controllers are officially supported on Windows with Microsoft-supplied drivers; a dongle can be used to connect them wirelessly, or the controller can be connected directly to the computer over USB (wired versions of Xbox 360 controllers were marketed by Microsoft as PC gamepads, while the Xbox One/Series X/S controllers can be connected to a PC via their Micro USB/USB-C ports).
== Non-gaming use ==
Gamepads or devices closely modelled on them are sometimes used for controlling real machinery and vehicles, as they are familiar to users and (in the case of actual gamepads) provide an off-the-shelf solution. The US Army and US Navy use Xbox controllers for operating devices, and the British Army uses a device modelled on gamepads to operate systems on the Challenger 2 main battle tank.
The Titan submersible notoriously used a gamepad for control.
== See also ==
Computer keyboard
Computer mouse
Game port
Sim racing wheel
== References == | Wikipedia/Trigger_(game_controller) |
The WaveBird Wireless Controller (stylized as WAVEBIRD, commonly abbreviated as WaveBird or WaveBird controller) is a radio frequency-based wireless controller manufactured by Nintendo for use with the GameCube home video game console. Its name is a reference to Dolphin, the GameCube's codename during development. The WaveBird was available for purchase separately as well as in bundles with either Metroid Prime or Mario Party 4, which were exclusive to Kmart in the US.
== Development ==
Nintendo had attempted to create a reliable wireless controller since the development of the Famicom. Its first attempt was for the Advanced Video System (AVS), the precursor to the Nintendo Entertainment System (NES), which included two wireless controllers but was never released.
Nintendo later developed an infrared (IR) adapter called the NES Satellite for the NES. Released in 1989, it used infrared to extend the length of up to four wired controllers, which would plug into the base of the unit rather than the console. The base could then be positioned anywhere within a certain range of the NES without the need for a cable. However, the extension base still needed a direct line of sight with the NES console; line of sight is a significant limitation of IR technology, requiring a clear space between an IR port and controller.
Radio frequency (RF) controllers were not possible in the late 1980s, as the early digital RF links were bulky and used too much power to be useful in battery-powered devices. However, advancements in integrated circuits made radio controllers for game consoles commercially viable only a decade later. The WaveBird, released in 2002, solved previous usability problems of wireless controllers by relying on radio frequency communication instead of infrared, allowing the controller to be used anywhere within 6 meters (20 feet) of the console. Although Nintendo only certifies the WaveBird to work within this 6 meters (20 feet) range, tests have shown that it may work as far as 27.5 meters (90 feet) away on all 16 different channels. This controller would become the first modern wireless gaming controller, leading to the proliferation of wireless console gaming controllers for subsequent gaming generations, starting with the seventh generation's Wii Remote (Wii), DualShock 3 controller (PlayStation 3) and the Xbox 360 controller (Xbox 360).
== Design ==
The WaveBird Wireless Controller was designed and sold by Nintendo. Unlike most wireless controllers of its era, it relies on RF technology (first used in gaming with Atari's CX-42 joysticks) instead of infrared line-of-sight signal transmission, and the controller's radio transceiver operates at 2.4 GHz. The range of the WaveBird controller is officially 6 meters (20 feet) but some users have reported ranges of 18–21 meters (59–69 ft). The WaveBird includes a small receiver unit which must be plugged into the controller port of the GameCube. Made of the same gray-colored plastic as the standard WaveBird, it features a channel-selection wheel and an LED to indicate when a signal is received. Up to sixteen WaveBird controllers may be used in the same area if each is set to a different channel. In 2025, an open-source implementation of the WaveBird protocol was released, called WavePhoenix. It enables the construction of a replacement receiver.
The WaveBird Wireless Controller maintains the same overall aesthetic design as the standard GameCube controller. The components (analog sticks, buttons, and triggers) and layout remain the same, while adding wireless functionality and space for two standard AA batteries. It is somewhat larger and heavier than a standard GameCube controller, with a channel selector dial, an on/off switch, and an orange LED power indicator on the face of the controller in place of the gap between the D-pad and the C-stick. Functionally, the only feature the WaveBird controller lacks compared to the standard controller is the rumble feature, the motors of which would reduce battery life.
=== Colors ===
The WaveBird Wireless Controller was available in most regions only in light gray and platinum colors. In Japan, two limited edition WaveBird models were released through Club Nintendo: 1,000 Special Edition Gundam "Char's Customized Color" WaveBirds (two-toned red with the Neo-Zeon logo) to coincide with the Japan-only GameCube release of Mobile Suit Gundam: Gundam vs. Z Gundam, and a "Club Nintendo" WaveBird (white top with light blue bottom and Club Nintendo logo).
== Use on subsequent consoles ==
Like all GameCube controllers, the WaveBird Wireless Controller is compatible with the original Wii model (RVL-001), for use with GameCube and Virtual Console titles as well as certain Wii games and WiiWare titles. Since the launch of the Wii, the WaveBird has seen increased popularity due to its ability to control these games wirelessly.
Following speculation that Nintendo might re-release the WaveBird due to the popularity of its use on the Wii, a Nintendo representative confirmed that there were no plans to offer WaveBirds in stores again. Although the representative stated that "original GameCube controllers" would be available directly from Nintendo, there is no listing for the WaveBird.
In November 2014, Nintendo released a GameCube controller adapter for use with the Wii U alongside the release of Super Smash Bros. for Wii U. In 2018, shortly after the announcement of Super Smash Bros. Ultimate for the Nintendo Switch, the company added support for the Wii U GameCube controller adapter for the newer hybrid console.
== Legal issues ==
Anascape Ltd, a Texas-based firm, filed a lawsuit against Nintendo for patent infringements regarding Nintendo's controllers. A July 2008 verdict found that a ban would be issued preventing Nintendo from selling several controllers, including the WaveBird, in the United States. Nintendo was free to continue selling the WaveBird pending an appeal to the U.S. Court of Appeals for the Federal Circuit. On April 13, 2010, Nintendo won the appeal and the previous court decision was reversed.
== See also ==
GameCube controller
== References ==
== External links ==
WavePhoenix, an open-source implementation of the WaveBird protocol | Wikipedia/WaveBird_Wireless_Controller |
A voice-user interface (VUI) enables spoken human interaction with computers, using speech recognition to understand spoken commands and answer questions, and typically text to speech to play a reply. A voice command device is a device controlled with a voice user interface.
Voice user interfaces have been added to automobiles, home automation systems, computer operating systems, home appliances like washing machines and microwave ovens, and television remote controls. They are the primary way of interacting with virtual assistants on smartphones and smart speakers. Older automated attendants (which route phone calls to the correct extension) and interactive voice response systems (which conduct more complicated transactions over the phone) can respond to the pressing of keypad buttons via DTMF tones, but those with a full voice user interface allow callers to speak requests and responses without having to press any buttons.
Newer voice command devices are speaker-independent, so they can respond to multiple voices, regardless of accent or dialectal influences. They are also capable of responding to several commands at once, separating vocal messages, and providing appropriate feedback, accurately imitating a natural conversation.
== Overview ==
A VUI is the interface to any speech application. Until recently, controlling a machine by simply talking to it was possible only in science fiction and was considered the domain of artificial intelligence. However, advances in technologies like text-to-speech, speech-to-text, natural language processing, and cloud services have contributed to the mass adoption of these types of interfaces. VUIs have become more commonplace, and people are taking advantage of the value that these hands-free, eyes-free interfaces provide in many situations.
VUIs need to respond to input reliably, or they will be rejected and often ridiculed by their users. Designing a good VUI requires interdisciplinary talents of computer science, linguistics and human factors psychology – all of which are skills that are expensive and hard to come by. Even with advanced development tools, constructing an effective VUI requires an in-depth understanding of both the tasks to be performed, as well as the target audience that will use the final system. The closer the VUI matches the user's mental model of the task, the easier it will be to use with little or no training, resulting in both higher efficiency and higher user satisfaction.
A VUI designed for the general public should emphasize ease of use and provide a lot of help and guidance for first-time callers. In contrast, a VUI designed for a small group of power users (including field service workers), should focus more on productivity and less on help and guidance. Such applications should streamline the call flows, minimize prompts, eliminate unnecessary iterations and allow elaborate "mixed initiative dialogs", which enable callers to enter several pieces of information in a single utterance and in any order or combination. In short, speech applications have to be carefully crafted for the specific business process that is being automated.
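As a toy illustration of the mixed-initiative idea, the sketch below pulls several slots out of one utterance regardless of the order in which the caller supplies them; the slot patterns are invented for the example and are far simpler than a production speech grammar.

```python
# Toy sketch of the "mixed initiative" idea described above: pull several slots
# out of a single utterance, in whatever order the caller provides them.  The
# slot patterns are invented for this example and are far simpler than a real
# natural-language grammar.
import re

SLOT_PATTERNS = {
    "amount":       r"\b(\d+(?:\.\d+)?)\s*dollars?\b",
    "from_account": r"\bfrom\s+(checking|savings)\b",
    "to_account":   r"\b(?:to|into)\s+(checking|savings)\b",
}

def fill_slots(utterance):
    slots = {}
    for name, pattern in SLOT_PATTERNS.items():
        match = re.search(pattern, utterance.lower())
        if match:
            slots[name] = match.group(1)
    return slots

print(fill_slots("Transfer 250 dollars from checking to savings"))
print(fill_slots("Into savings, please move 40 dollars from checking"))
```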
Not all business processes lend themselves equally well to speech automation. In general, the more complex the inquiries and transactions are, the more challenging they will be to automate, and the more likely they will be to fail with the general public. In some scenarios, automation is simply not applicable, so live agent assistance is the only option. A legal advice hotline, for example, would be very difficult to automate. On the flip side, speech is well suited to handling quick and routine transactions, like changing the status of a work order, completing a time or expense entry, or transferring funds between accounts.
== History ==
Early applications for VUI included voice-activated dialing of phones, either directly or through a (typically Bluetooth) headset or vehicle audio system.
In 2007, a CNN business article reported that voice command was over a billion-dollar industry and that companies like Google and Apple were trying to create speech recognition features. In the years since the article was published, the world has witnessed a variety of voice command devices. Additionally, Google created a text-to-speech engine called Pico TTS and Apple released Siri. Voice command devices are becoming more widely available, and innovative ways of using the human voice are always being created. For example, Business Week suggested that the future remote controller would be the human voice. Xbox Live allows such features, and Steve Jobs hinted at such a feature on the then-new Apple TV.
== Voice command software products on computing devices ==
Both Apple Mac and Windows PC provide built in speech recognition features for their latest operating systems.
=== Microsoft Windows ===
Two Microsoft operating systems, Windows 7 and Windows Vista, provide speech recognition capabilities. Microsoft integrated voice commands into their operating systems to provide a mechanism for people who want to limit their use of the mouse and keyboard, but still want to maintain or increase their overall productivity.
==== Windows Vista ====
With Windows Vista voice control, a user may dictate documents and emails in mainstream applications, start and switch between applications, control the operating system, format documents, save documents, edit files, efficiently correct errors, and fill out forms on the Web. The speech recognition software learns automatically every time a user uses it, and speech recognition is available in English (U.S.), English (U.K.), German (Germany), French (France), Spanish (Spain), Japanese, Chinese (Traditional), and Chinese (Simplified). In addition, the software comes with an interactive tutorial, which can be used to train both the user and the speech recognition engine.
==== Windows 7 ====
In addition to all the features provided in Windows Vista, Windows 7 provides a wizard for setting up the microphone and a tutorial on how to use the feature.
==== Mac OS X ====
All Mac OS X computers come pre-installed with speech recognition software. The software is user-independent, and it allows a user to "navigate menus and enter keyboard shortcuts; speak checkbox names, radio button names, list items, and button names; and open, close, control, and switch among applications." However, the Apple website recommends that a user buy a commercial product called Dictate.
=== Commercial products ===
If a user is not satisfied with the built-in speech recognition software, or their OS does not provide built-in speech recognition software, then they may experiment with a commercial product such as Braina Pro or Dragon NaturallySpeaking for Windows PCs, and Dictate, the name of the same software for Mac OS.
== Voice command mobile devices ==
Any mobile device running Android OS, Microsoft Windows Phone, iOS 9 or later, or Blackberry OS provides voice command capabilities. In addition to the built-in speech recognition software for each mobile phone's operating system, a user may download third party voice command applications from each operating system's application store: Apple App store, Google Play, Windows Phone Marketplace (initially Windows Marketplace for Mobile), or BlackBerry App World.
=== Android OS ===
Google has developed an open source operating system called Android, which allows a user to perform voice commands such as: send text messages, listen to music, get directions, call businesses, call contacts, send email, view a map, go to websites, write a note, and search Google.
The speech recognition software has been available for all devices since Android 2.2 "Froyo", but the language setting must be set to English. Google allows the user to change the language, and users are prompted the first time they use the speech recognition feature to choose whether their voice data should be attached to their Google account. If a user decides to opt into this service, it allows Google to train the software to the user's voice.
Google introduced the Google Assistant with Android 7.0 "Nougat". It is much more advanced than the older version.
Amazon.com has the Echo that uses Amazon's custom version of Android to provide a voice interface.
=== Microsoft Windows ===
Windows Phone is Microsoft's mobile operating system. On Windows Phone 7.5, the speech app is user-independent and can be used to: call someone from your contact list, call any phone number, redial the last number, send a text message, call your voice mail, open an application, read appointments, query phone status, and search the web.
In addition, speech can also be used during a phone call, and the following actions are possible during a phone call: press a number, turn the speaker phone on, or call someone, which puts the current call on hold.
Windows 10 introduces Cortana, a voice control system that replaces the formerly used voice control on Windows phones.
=== iOS ===
Apple added Voice Control to its family of iOS devices as a new feature of iPhone OS 3. The iPhone 4S, iPad 3, iPad Mini 1G, iPad Air, iPad Pro 1G, iPod Touch 5G and later, all come with a more advanced voice assistant called Siri. Voice Control can still be enabled through the Settings menu of newer devices. Siri is a user independent built-in speech recognition feature that allows a user to issue voice commands. With the assistance of Siri a user may issue commands like, send a text message, check the weather, set a reminder, find information, schedule meetings, send an email, find a contact, set an alarm, get directions, track your stocks, set a timer, and ask for examples of sample voice command queries. In addition, Siri works with Bluetooth and wired headphones.
Apple introduced Personal Voice as an accessibility feature in iOS 17, launched on September 18, 2023. This feature allows users to create a personalized, machine learning-generated (AI) version of their voice for use in text-to-speech applications. Designed particularly for individuals with speech impairments, Personal Voice helps preserve the unique sound of a user's voice. It enhances Siri and other accessibility tools by providing a more personalized and inclusive user experience. Personal Voice reflects Apple's ongoing commitment to accessibility and innovation.
=== Amazon Alexa ===
In 2014, Amazon introduced Alexa with the Echo, a smart speaker that allowed the consumer to control the device with their voice. It has since grown from a novelty into a platform for controlling home appliances by voice; many appliances, including light bulbs and thermostats, can now be controlled with Alexa. Through voice control, Alexa can connect to smart home technology, allowing users to lock their house, control the temperature, and activate various devices. This form of AI also lets users simply ask a question; in response, Alexa searches for, finds, and recites the answer.
== Speech recognition in cars ==
As car technology improves, more features will be added to cars, and these features could potentially distract a driver. Voice commands for cars, according to CNET, should allow a driver to issue commands without being distracted. CNET stated that Nuance was suggesting that in the future it would create software that resembled Siri, but for cars. Most speech recognition software on the market in 2011 had only about 50 to 60 voice commands, but Ford Sync had 10,000. However, CNET suggested that even 10,000 voice commands were not sufficient given the complexity and the variety of tasks a user may want to do while driving. Voice command for cars differs from voice command for mobile phones and computers because a driver may use the feature to look for nearby restaurants, find gas stations, get driving directions, check road conditions, and locate the nearest hotel. Technology already allows a driver to issue voice commands on both a portable GPS like a Garmin and a car manufacturer's navigation system.
List of Voice Command Systems Provided By Motor Manufacturers:
Ford Sync
Lexus Voice Command
Chrysler UConnect
Honda Accord
GM IntelliLink
BMW
Mercedes
Pioneer
Harman
Hyundai
== Non-verbal input ==
While most voice user interfaces are designed to support interaction through spoken human language, there have also been recent explorations in designing interfaces that take non-verbal human sounds as input. In these systems, the user controls the interface by emitting non-speech sounds such as humming, whistling, or blowing into a microphone.
One such example of a non-verbal voice user interface is Blendie, an interactive art installation created by Kelly Dobson. The piece comprised a classic 1950s-era blender which was retrofitted to respond to microphone input. To control the blender, the user must mimic the whirring mechanical sounds that a blender typically makes: the blender will spin slowly in response to a user's low-pitched growl, and increase in speed as the user makes higher-pitched vocal sounds.
Another example is VoiceDraw, a research system that enables digital drawing for individuals with limited motor abilities. VoiceDraw allows users to "paint" strokes on a digital canvas by modulating vowel sounds, which are mapped to brush directions. Modulating other paralinguistic features (e.g. the loudness of their voice) allows the user to control different features of the drawing, such as the thickness of the brush stroke.
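The mapping idea behind such interfaces can be illustrated with a minimal Python sketch. This is an illustration of the general approach (paralinguistic features mapped to drawing parameters), not VoiceDraw's actual implementation; the vowel labels, the vowel-to-direction table, and the loudness scaling below are all assumptions chosen only for the example, and the upstream vowel classifier and loudness estimator are taken as given.

import math

# Assumed mapping from classified vowels to compass directions (degrees).
VOWEL_TO_ANGLE = {"ee": 0, "ah": 90, "aw": 180, "oo": 270}

def brush_step(vowel: str, loudness: float, step: float = 5.0):
    """Return (dx, dy, thickness) for one drawing step.

    vowel    -- label produced by some upstream vowel classifier (assumed)
    loudness -- normalized loudness in [0, 1], e.g. from an RMS estimate
    """
    angle = math.radians(VOWEL_TO_ANGLE.get(vowel, 0))
    dx, dy = step * math.cos(angle), step * math.sin(angle)
    thickness = 1 + 9 * max(0.0, min(1.0, loudness))  # 1..10 pixels
    return dx, dy, thickness

print(brush_step("ah", 0.6))  # step along the "ah" direction with a medium-thick stroke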
Other approaches include adopting non-verbal sounds to augment touch-based interfaces (e.g. on a mobile phone) to support new types of gestures that wouldn't be possible with finger input alone.
== Design challenges ==
Voice interfaces pose a substantial number of challenges for usability. In contrast to graphical user interfaces (GUIs), best practices for voice interface design are still emergent.
=== Discoverability ===
With purely audio-based interaction, voice user interfaces tend to suffer from low discoverability: it is difficult for users to understand the scope of a system's capabilities. In order for the system to convey what is possible without a visual display, it would need to enumerate the available options, which can become tedious or infeasible. Low discoverability often results in users reporting confusion over what they are "allowed" to say, or a mismatch in expectations about the breadth of a system's understanding.
=== Transcription ===
While speech recognition technology has improved considerably in recent years, voice user interfaces still suffer from parsing or transcription errors in which a user's speech is not interpreted correctly. These errors tend to be especially prevalent when the speech content uses technical vocabulary (e.g. medical terminology) or unconventional spellings such as musical artist or song names.
=== Understanding ===
Effective system design to maximize conversational understanding remains an open area of research. Voice user interfaces that interpret and manage conversational state are challenging to design due to the inherent difficulty of integrating complex natural language processing tasks like coreference resolution, named-entity recognition, information retrieval, and dialog management. Most voice assistants today are capable of executing single commands very well but are limited in their ability to manage dialogue beyond a narrow task or a couple of turns in a conversation.
== Privacy implications ==
Privacy concerns are raised by the fact that voice commands are available to the providers of voice-user interfaces in unencrypted form, and can thus be shared with third parties and be processed in an unauthorized or unexpected manner. In addition to the linguistic content of recorded speech, a user's manner of expression and voice characteristics can implicitly contain information about his or her biometric identity, personality traits, body shape, physical and mental health condition, sex, gender, moods and emotions, socioeconomic status, and geographical origin.
== See also ==
Speech synthesis
List of speech recognition software
Natural-language user interface
User interface design
Voice browser
Speech recognition in Linux
Linguatronic
Voice computing
== References ==
== External links ==
Voice Interfaces: Assessing the Potential by Jakob Nielsen
The Rise of Voice: A Timeline
Voice First Glossary of Terms
Voice First A Reading List | Wikipedia/Voice_control |
A game controller, gaming controller, or simply controller, is an input device or input/output device used with video games or entertainment systems to provide input to a video game. Input devices that have been classified as game controllers include keyboards, mice, gamepads, and joysticks, as well as special purpose devices, such as steering wheels for driving games and light guns for shooting games. Controller designs have evolved to include directional pads, multiple buttons, analog sticks, joysticks, motion detection, touch screens and a plethora of other features.
Game controllers may be input devices that only provide input to the system, or input/output devices that receive data from the system and produce a response (e.g. "rumble" vibration feedback, or sound).
Controllers which are included with the purchase of a home console are referred to as standard controllers, while those that are available to purchase from the console manufacturer or third-party offerings are considered peripheral controllers.
== History ==
One of the first video game controllers was a simple dial and single button, used to control the game Tennis for Two. Controllers have since evolved to include directional pads, multiple buttons, analog sticks, joysticks, motion detection, touch screens and a plethora of other features.
Game controllers have been designed and improved over the years to be as user friendly as possible. The Microsoft Xbox controller, with its shoulder triggers that mimic actual triggers such as those found on guns, has become popular for shooting games.
Before the seventh generation of video game consoles, plugging a controller into one of a console's controller ports was the primary means of using a game controller, although since then they have largely been replaced by wireless controllers, which do not require controller ports on the console but are battery-powered. USB game controllers can also be connected to a computer with a USB port.
== Variants ==
Input devices that have been classified as game controllers include keyboards, mice, gamepads, and joysticks. Special purpose devices, such as steering wheels for driving games and light guns for shooting games, are also game controllers. Some controllers are designed to be best for one type of game, such as steering wheels for driving games, or dance pads for dancing games.
=== Gamepad ===
A gamepad, also known as a joypad, is held in both hands with thumbs and fingers used to provide input. Gamepads can have a number of action buttons combined with one or more omnidirectional control sticks or buttons. Action buttons are generally handled with the digits on the right hand, and the directional input handled with the left. Gamepads are the primary means of input on most modern video game consoles. Due to the ease of use and user-friendly nature of gamepads, they have spread from their origin on traditional consoles to personal computers, where a variety of games and emulators support their input as a replacement for keyboard and mouse input. Most modern game controllers are a variation of a standard gamepad. Common additions include shoulder buttons placed along the edges of the pad, centrally placed buttons labeled start, select, and mode, and an internal motor to provide haptic feedback.
As modern game controllers advance, so too does their usability. Controllers have typically become smaller and more compact so that they fit more easily and comfortably within the user's hand. Modern examples can be drawn from Microsoft's Xbox consoles, whose controllers have changed in a variety of ways from the original Xbox 360 controller to the Xbox One controller introduced in 2013.
=== Paddle ===
A paddle is a controller that features a round wheel and one or more fire buttons. The wheel is typically used to control movement of the player or of an object along one axis of the video screen. As the user turns the wheel further from the default position, the controlled movement in the game becomes correspondingly greater.
Paddle controllers were the first analog controllers and they lost popularity when "paddle and ball" type games fell out of favor. A variation, the Atari driving controller, appeared on the Atari 2600. Designed specifically for the game Indy 500, it functioned almost identically in operation and design to the regular paddle controller. The exceptions were that its wheel could be continuously rotated in either direction, and that it was missing the extra paddle included on the previous model. Unlike a spinner, friction prevented the wheel from gaining momentum.
=== Joystick ===
A joystick is a peripheral that consists of a handheld stick that can be tilted around either of two axes and (sometimes) twisted around a third. The joystick is often used for flight simulators. HOTAS (hands on throttle and stick) controllers, composed of a joystick and throttle quadrant (see below) are a popular combination for flight simulation among its most fanatic devotees.
Most joysticks are designed to be operated with the user's primary hand (e.g. with the right hand of a right-handed person), with the base either held in the opposite hand or mounted on a desk. Arcade controllers are typically joysticks featuring a shaft that has a ball or drop-shaped handle, and one or more buttons for in game actions. Generally the layout has the joystick on the left, and the buttons on the right, although there are instances when this is reversed.
=== Trackball ===
A trackball is a smooth sphere that is manipulated with the palm of one's hand. The user can roll the ball in any direction to control the cursor. It has the advantage that it can be faster than a mouse, depending on the speed of rotation of the physical ball. Another advantage is that it requires less space than a mouse, of which the trackball was a precursor. Notable uses of a trackball as a gaming controller include games such as Centipede, Marble Madness, Golden Tee Golf and SegaSonic the Hedgehog.
=== Throttle quadrant ===
A throttle quadrant is a set of one or more throttle levers that are most often used to simulate throttles or other similar controls in a real vehicle, particularly an aircraft. Throttle quadrants are most popular in conjunction with joysticks or yokes used in flight simulation.
=== Steering wheel ===
A racing wheel, essentially a larger version of a paddle, is used in most racing arcade games as well as more recent racing simulators such as Live for Speed, Grand Prix Legends, GTR2, and Richard Burns Rally. While most arcade racing games have been using steering wheels since Gran Trak 10 in 1974, steering wheels for home systems appeared on fifth-generation consoles such as the PlayStation and Nintendo 64. Many are force feedback (see Force Feedback Wheel), designed to give the same feedback as would be experienced when driving a real car, but the realism of this depends on the game. They usually come with pedals to control the gas and brake. Shifting is taken care of in various ways including paddle shifting systems, simple stick shifters which are moved forward or back to change gears or more complex shifters which mimic those of real vehicles, which may also use a clutch. Some wheels turn only 200 to 270 degrees lock-to-lock but higher-tier models can turn 900 degrees, or 2.5 turns, lock-to-lock, or more. The Namco Jogcon paddle was available for the PlayStation game R4: Ridge Racer Type 4. Unlike "real" video game steering wheels, the Jogcon was designed to fit in the player's hand. Its much smaller wheel (diameter roughly similar to a soda can's) resembles the jog-and-shuttle control wheel used on some VCRs. The Wii game Mario Kart Wii is bundled with the Wii Wheel: a steering wheel-shaped shell that the Wii Remote is placed inside thus using the Wii Remote's motion sensing capabilities to control the kart during the game. Hori also has a steering wheel that is made for the Nintendo 3DS game Mario Kart 7. When the steering wheel is placed on the back of the console, then it will have the same ability as in Mario Kart Wii by using the gyroscope in first-person mode.
=== Yoke ===
A yoke is very similar to a steering wheel except that it resembles the control yoke found on many aircraft and has two axes of movement: not only rotational movement about the shaft of the yoke, but also a forward-and-backward axis equivalent to that of pitch control on the yoke of an aircraft. Some yokes have additional controls attached directly to the yoke for simulation of aircraft functions such as radio push-to-talk buttons. Some flight simulator sets that include yokes also come with various other aircraft controls such as throttle quadrants and pedals. These sets, including the yoke, are intended to be used in a flight simulator.
=== Pedals ===
Pedals may be used for driving simulations or flight simulations and often ship with a steering-wheel-type input device. In the former case, an asymmetric set of pedals can simulate accelerator, brake, and clutch pedals in a real automobile. In the latter case, a symmetric set of pedals simulates rudder controls and toe brakes in an aircraft. As mentioned, most steering wheel controllers come with a set of pedals. There are also variations of the pedal controller, such as the proposed rotating pedal device for a cycling game, which relies on an ergometer to generate user inputs such as pedal rpm and pedal resistance. A variation of this concept surfaced in 2016 when a startup called VirZoom debuted a set of sensors that can be installed in the pedal and handlebars, turning a physical bike into one controller for games on the HTC Vive and Oculus Rift virtual reality (VR) platforms. The same concept is behind a product called Cyber ExerCycle, which is a set of sensors attached to the pedal and connected to the PC via USB for bicycle simulation games such as NetAthlon and Fuel.
=== Mouse and keyboard ===
A mouse and computer keyboard are typical input devices for a personal computer and are currently the main game controllers for computer games. The mouse is often used with a mousepad to achieve greater speed, comfort, accuracy and smoother movement for the gamer. Some video game consoles also have the ability to function with a keyboard and a mouse. The computer keyboard is modelled after the typewriter keyboard and was designed for the input of written text. A mouse is a handheld pointing device used in addition to the keyboard. For games, the keyboard typically controls movement of the character while the mouse is used to control the game camera or used for aiming. While originally designed for general computer input, there are several keyboard and mouse peripherals available which are designed specifically for gaming, often with gaming-specific functions built-in. Examples include peripherals by Razer, the "Zboard" range of keyboards and Logitech's 'G' series. The numeric keypad found on the keyboard is also used as a game controller and can be found on a number of separate devices, most notably early consoles, usually attached to a joystick or a paddle. The keypad is a small grid of keys with at least the digits 0–9. A Gaming keypad is a specialized controller used for FPSs, RTSs and some arcade type games. These controllers can be programmed to allow the emulation of keys, and macros in some cases. These generally resemble a small part of a keyboard but may also feature other inputs such as analog sticks. They were developed because some of these games require a keyboard to play, and some players find this to be awkward for such a task. The mouse and keyboard input is also known by the abbreviation "MnK".
=== Touchscreen ===
A touchscreen is an input device that allows the user to interact with the computer by touching the display screen. The first attempt at a handheld game console with touchscreen controls was Sega's intended successor to the Game Gear, though the device was ultimately shelved and never released due to the high cost of touchscreen technology in the early 1990s. The first released console to use a touchscreen was the Tiger game.com in 1997. Nintendo popularized it for use in video games with the Nintendo DS and Nintendo 3DS; other systems including the Tapwave Zodiac, as well as smartphones and the vast majority of PDAs, have also included this feature. The primary controller for Nintendo's Wii U console, the Wii U GamePad, features an embedded touchscreen. Modern touch screens use a thin, durable, transparent plastic sheet overlaid onto a glass screen. The location of a touch is calculated from the capacitance for the X and Y axes, which varies based upon where the sheet is touched. One touchscreen console developed by Sony is the PlayStation Vita, which has a 5-inch OLED touchscreen. The Nintendo Switch features a 6.2-inch touchscreen.
=== Motion sensing ===
Motion controllers include the Sega Activator, released in 1993 for the Mega Drive (Genesis). Based on the Light Harp invented by Assaf Gurner, it could read the player's physical movements and was the first controller to allow full-body motion sensing. However, it was a commercial failure due to its "unwieldiness and inaccuracy". Nintendo's Wii system released in 2006 utilizes the Wii Remote controller, which uses accelerometers to detect its approximate orientation and acceleration and an image sensor, so it can be used as a pointing device. The Sixaxis, DualShock 3, and PlayStation Move controllers for Sony's PlayStation 3 system have similar motion sensing capabilities. In 2010, Microsoft released the Kinect for the Xbox 360. This motion sensing controller uses cameras to track a player’s movement. Microsoft released a revised version of the Kinect with the launch of the Xbox One. This controller was bundled with the console on launch, and was removed from the default bundle in June 2014. Sony's EyeToy similarly uses cameras to detect the player's motions and translate them into inputs for the game. Controllers with gyroscopes may be used to create a pointer without a camera; for example the Joy-Con and Nintendo Switch Pro Controller are used for this in games such as ports of World of Goo and Super Mario Galaxy from the Wii.
=== Adaptive controllers ===
An adaptive controller is a collection of various input methods that can be combined in multiple ways to create a controller that works for the user. The adaptive controller was designed for people with physical disabilities that would prevent them from using a gamepad or mouse and keyboard. An example is PlayStation's Access controller, which allows for a large joystick, eight buttons on a circular pad, and four ports to plug in additional buttons or accessories. Xbox and Logitech have collaborated to make an adaptive controller with two large touch pads, a D-pad, three buttons, and 16 ports to plug in additional accessories. These accessories can include joysticks, pedals, triggers and buttons.
=== Light gun ===
A light gun is a peripheral used to "shoot" targets on a screen. They usually roughly resemble firearms or ray guns. Their use is normally limited to rail shooters, or shooting gallery games like Duck Hunt and those which came with the Shooting Gallery light gun. A rare example of a non-rail first person shooter game is Taito's 1992 video game Gun Buster, a first-person shooter that used a joystick to move and a light gun to aim. Though light guns have been used in earlier arcade games such as Sega's Periscope in 1966 and Missile in 1969, the first home console light gun was released for the Magnavox Odyssey in 1972; later on, Nintendo would include one standard on their Famicom and NES, called the NES Zapper. Nintendo has also released a "shell" in the style of a light gun for the more recent Wii Remote called the Wii Zapper which comes bundled with the game Link's Crossbow Training.
=== Rhythm game controllers ===
Rhythm game accessories can resemble musical instruments, such as guitars (from multi-button guitars in Guitar Freaks, the Guitar Hero series, and the Rock Band series to real guitars in Rock Band 3 and Rocksmith), keyboards (Rock Band 3), drums (Donkey Konga, Drum Mania, the Rock Band series and the Guitar Hero series), and maracas (Samba de Amigo); these have seen some success in arcades and on home consoles. Other rhythm games are based around the art of DJing or turntablism (DJ Hero), or playing a synthesizer (IIDX), using a turntable-shaped peripheral with buttons.
=== Wireless ===
Wireless versions of many popular controller types (joypads, mice, keyboards) exist, and wireless motion controls are an emerging class for virtual reality.
=== Others ===
Balance board: The Wii Balance Board comes with the game Wii Fit. This was preceded by decades by the Joyboard, made to plug into an Atari 2600, to play skiing and surfing games.
Breathing controllers help their users improve breathing through video games. These controllers have sensors that detect the user's breath, which the user uses to control a video game on a computer, tablet, or smartphone. Alvio is a breathing trainer, symptom tracker and mobile game controller. Zenytime promotes deep, rhythmic breathing to trigger the short-term rewards of controlled breathing (relaxation, improved oxygenation, and so on). Breathing games by Breathing Labs are based on pursed-lip breathing and are used on iPhone/iPad, Windows, macOS and Android devices.
Buzzers: A recent example of specialized, while very simple, game controllers, is the four large "buzzers" (round buttons) supplied with the PlayStation 2 and PlayStation 3 quiz show game series Buzz! (2005–present); both game and controllers clearly being inspired by the television show genre. Another example is the "Big Button Pad" supplied with the Xbox 360 quiz show games Scene It? Lights, Camera, Action and Scene It? Box Office Smash (2007–2008).
Dance pads, essentially a grid of flat pressure-sensitive gamepad buttons set on a mat meant to be stepped on, have seen niche success with the popularity of rhythm games like Dance Dance Revolution and Pump It Up. The dance pad was first introduced by Bandai on the Famicom in 1986 as a part of their "Family Fun Fitness" set, then Exus released the "Foot Craz" pad for the Atari 2600 in 1987. Nintendo purchased the technology from Bandai in 1988 and used it on their "Power Pad", for the Famicom and NES.
Exoskeleton controllers use exoskeleton technology to provide the player with different responses based on the player's body position, speed of movement, and other sensed data. In addition to audio and visual responses, an exoskeleton controller may provide a controlled resistance to movement and other stimuli to provide realism to the action. This not only lets players feel as if they are actually performing the function, but also helps reinforce the correct muscle pattern for the activity being simulated. The Forcetek XIO is an example of an exoskeleton video game controller.
Fishing rod: the first fishing rod controller appeared as an accessory for the Dreamcast console for playing Sega Marine Fishing. Later, other games for PlayStation consoles also used similar controllers.
Floating Interactive Display: at least two commercial systems (Heliodisplay and FogScreen) offer interactive "floating interfaces" which display an image projected in mid-air but can be interacted with by finger similar to a touch screen.
Instrument panels are simulated aircraft instrument panels, either generic or specific to a real aircraft, that are used in place of the keyboard to send commands to a flight simulation program. Some of these are far more expensive than all the rest of a computer system combined. The panels usually only simulate switches, buttons, and controls, rather than output instrument displays.
Train controls: Other instrument-panel-like hardware such as train controls has been produced. The "RailDriver", for example, is designed to work with Trainz, Microsoft Train Simulator and Kuju Rail Simulator. As of January 2009, it is limited in ease of use by the lack of a Windows API for some of the software it is designed to work with. A train controller for a Taito bullet train sim has also been made for the Wii console.
Mechanical motion tracking systems like Gametrak use cables attached to gloves for tracking position of physical elements in three-dimensional space in real time. The Gametrak mechanism contains a retracting cable reel and a small tubular guide arm from which the cable passes out. The guide arm is articulated in a ball joint such that the arm and ball follow the angle at which the cable extends from the mechanism. The distance of the tracked element from the mechanism is determined through components that measure the rotation of the spool drum for the retracting cable reel, and calculating how far the cable is extended (a short sketch of this position calculation appears after this list).
Microphone: A few games have made successes in using a headset or microphone as a secondary controller, such as Hey You, Pikachu!, the Rock Band series, the Guitar Hero series, the SingStar series, Tom Clancy's Endwar, Lips, the Mario Party series, and the SOCOM U.S. Navy SEALs series. The use of these microphones allowed players to issue commands to the game, controlling teammates (as in SOCOM) and other AI characters (e.g., Pikachu). The Nintendo DS features a microphone that is built into the system. It has been used for a variety of purposes, including speech recognition (Nintendogs, Brain Age: Train Your Brain in Minutes a Day!), chatting online between and during gameplay sessions (Pokémon Diamond and Pearl), and minigames that require the player to blow or shout into the microphone (Feel the Magic: XY/XX, WarioWare: Touched!, Mario Party DS).
Mind-controlled headset: As of March 24, 2007 a United States/Australian company called Emotiv Systems began launching a mind-controlled device for video games based on electroencephalography. It was reported by The Wall Street Journal's Don Clark on MSNBC.
NeGcon: a unique controller for racing games on the PlayStation. Physically it resembles a gamepad, but its left and right halves twist relative to each other, making it a variation of the paddle controller.
Optical motion tracking systems such as TrackIR and FreeTrack use a video camera to track an infrared illuminated or emissive headpiece. Small head movements are tracked and then translated into much larger virtual in-game movements, allowing hands-free view control and improved immersiveness.
PCGamerBike: a controller similar to a pair of pedals removed from an exercise bike, set down in front of a chair, and used to precisely control game characters.
Pinball controllers and multi-button consoles for strategy games were released in the past, but their popularity was limited to hardcore fans of the genre.
R.O.B. (Robotic Operating Buddy) is an accessory for the Nintendo Entertainment System (NES), which allowed players to interact with NES games by controlling the robot. Known in Japan as the Famicom Robot, this short-lived accessory jumpstarted Nintendo's involvement in the western market, though only used for Stack-Up and Gyromite. As a character, R.O.B. appeared in later Nintendo games such as Mario Kart DS and Super Smash Bros. Brawl.
The Sega Toylet, an interactive urinal, uses urine as a control method; pressure sensors in the bowl translate the flow of urine into on-screen action.
Steel Battalion for the Xbox was bundled with a full dashboard, with 2 joysticks and over 30 buttons, in an attempt to make it feel like an actual mecha simulator.
SpaceOrb 360 was a 3D mouse for spatial interaction in 6DOF that e.g. could be used with Descent.
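The Gametrak position calculation mentioned above can be illustrated with a minimal Python sketch: the spool rotation gives the extended cable length, and the guide arm's ball joint gives two angles, which together locate the tracked hand via a spherical-to-Cartesian conversion. The encoder resolution and drum circumference below are assumed values for illustration only, not Gametrak's actual specifications.

import math

COUNTS_PER_REV = 360          # assumed encoder resolution on the spool drum
SPOOL_CIRCUMFERENCE_M = 0.05  # assumed effective drum circumference

def cable_length(counts: int) -> float:
    """Extended cable length in metres, from spool rotation counts."""
    return counts / COUNTS_PER_REV * SPOOL_CIRCUMFERENCE_M

def hand_position(counts: int, azimuth_deg: float, elevation_deg: float):
    """3D position from cable length plus the guide arm's two angles."""
    r = cable_length(counts)
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    x = r * math.cos(el) * math.cos(az)
    y = r * math.cos(el) * math.sin(az)
    z = r * math.sin(el)
    return x, y, z

print(hand_position(7200, azimuth_deg=30, elevation_deg=45))
# about 1.0 m of cable extended, pointing 30 degrees around and 45 degrees up from the base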
== Use on PCs and other devices ==
Although gamepads are generally developed for use with consoles, they are also often used for PC gaming and mobile gaming. Modern controllers, such as Sony's DualShock 4 and Nintendo's Switch Pro Controller, support USB and Bluetooth, allowing them to be directly connected to most PCs. Older gamepads can be connected through the use of official or third-party adapters. Controllers typically require the installation of device drivers to be used on contemporary personal computers. The device may be directly supported, or it may require the use of a specialized program which maps controller inputs to mouse and keyboard inputs. Examples of this kind of software include JoyToKey, Xpadder, and antimicro, which is free, open-source, and cross-platform.
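As a minimal sketch of what such mapping software builds on, the Python snippet below polls a connected gamepad using the pygame library and prints stick and button state. It assumes pygame is installed and a controller is plugged in; a real mapper such as JoyToKey or antimicro would go one step further and emit synthetic keyboard or mouse events at the points marked in the comments, which this sketch deliberately leaves as print statements.

import time
import pygame

pygame.init()
pygame.joystick.init()
if pygame.joystick.get_count() == 0:
    raise SystemExit("No game controller detected")

pad = pygame.joystick.Joystick(0)
pad.init()
print("Using controller:", pad.get_name())

while True:
    pygame.event.pump()                      # let pygame refresh device state
    x = pad.get_axis(0)                      # left stick, horizontal (typical mapping)
    y = pad.get_axis(1)                      # left stick, vertical
    pressed = [b for b in range(pad.get_numbuttons()) if pad.get_button(b)]
    if abs(x) > 0.2 or abs(y) > 0.2 or pressed:
        # a mapper would synthesize key or mouse events here
        print(f"stick=({x:+.2f}, {y:+.2f}) buttons={pressed}")
    time.sleep(0.05)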
Some controllers are specially designed for usage outside of consoles. In this case, support for mapping to different devices is built into the controller itself, such as with the Nostromo SpeedPad n52, which can act as either a keyboard, mouse, or joystick; or with the Samsung Android GamePad, designed for use with Android mobile phones.
The choice between gamepads and the mouse and keyboard is a subject of debate, with players of MMORPGs, RTS games, and first-person shooters tending to prefer the mouse and keyboard due to the wider variety of inputs and the high precision of the mouse when compared to an analog stick. Likewise, players of racing games, fighting games, and action RPGs tend to prefer controllers for their analog inputs and ergonomic button layouts.
== See also ==
Human interface device
List of game controllers
== References ==
== External links ==
Media related to Game controllers at Wikimedia Commons | Wikipedia/Game_controller |
A remote control is any device used to control a remote operation.
Remote control may also refer to:
== Film, television and theatre ==
Remote Control (1930 film), a film starring William Haines
Remote Control (1988 film), a film starring Kevin Dillon
Remote Control (1992 film), an Icelandic movie
Remote Control, a 1972 film from Hollis Frampton's Hapax Legomena cycle
Remote Control (game show), a 1987–1990 American game show
Remote Control, an Indian TV series featuring Mansi Parekh
"Remote Control" (Flashpoint), a 2009 episode of Flashpoint
"Remote Control" (The Zeta Project), an episode of The Zeta Project
"Remote Control", an episode of LazyTown
"Remote Control", an episode of Modern Marvels
Remote Control, a musical by Robert Steadman
"The Remote Control", an episode of Pocoyo
== Literature ==
Remote Control (McNab novel), a 1997 novel by Andy McNab starting the Nick Stone Missions novel series
Remote Control (Heath novel), 2007 by Jack Heath
Remote Control (Isaka novel), 2011 by Kōtarō Isaka
Remote Control (novella), a 2021 novella by Nnedi Okorafor
== Music ==
Remote Control (The Tubes album), 1979
Remote Control (TVT album), the 7th volume of the Television's Greatest Hits series of compilation albums by TVT Records
"Remote Control" (Beastie Boys song)
"Remote Control" (The Clash song), 1977
"Remote Control" (The Reddings song), 1980
"Remote Control", a song by Age of Electric
"Remote Control", a song by Suzi Quatro from Main Attraction
"Remote Control", a song by Kanye West from Donda
"Remote Control (Me)", a song by Electric Six from Fire
== Video games ==
Remote Control (video game), a game for the NES produced by Hi Tech Expressions
== Companies ==
Remote Control Productions (American company), a film music company run by Hans Zimmer
Remote Control Productions (German company), a video game studio
Remote Control Records, an Australian record label
== See also ==
Remote control software or remote desktop software
Remote control vehicle
Remote keyless system
Remotely Controlled, an album by Christian humorist Mark Lowry
Universal remote
Radio control
Teleoperation, controlling something remotely. | Wikipedia/Remote_control_(disambiguation) |
An arcade controller is a collective set of input devices designed primarily for use in an arcade cabinet. A typical control set consists of a joystick and a number of push-buttons. Less common setups include devices such as trackballs or steering wheels. These devices are generally produced under the assumption that they will be used in commercial settings, such as in video arcades, where they may be heavily or roughly used. Durability is one of the distinguishing characteristics of "authentic" arcade parts when compared with numerous, low-cost arcade imitations designed for private use in the home.
== Joystick design ==
A typical joystick is a digital input device that registers movement according to the range of motion that it is designed to detect. Most modern joysticks have an 8-way configuration, allowing for movement in the cardinal directions and the diagonals. There also exist common "analog" sticks that in actuality are implemented as 49-way digital, with incremental degrees of movement in each direction. Many vintage arcade games use a 4-way or even 2-way stick rather than an 8-way stick, which can cause compatibility problems that may be mitigated by the use of an appropriate restrictor gate.
=== Joystick types ===
The four most common arcade joystick types are the ball top, the bat top, the 4-button layout, and the keyboard-style WASD layout. In Korea the bat top is by far the most common, while the ball-top style is most common in Japan. For players who find ball and bat tops uncomfortable, the 4-button and WASD layouts are available; this format is most common in Hitbox arcade sticks. The type of joystick is largely a matter of personal preference and comfort, as many different grips, angles, and button types are available.
=== Restrictor gates ===
A restrictor gate limits the joystick's range of motion. The most common reason to use a gate in an actual arcade setting is the retrofitting of an older machine that is not compatible with a new 8-way stick. A classic example of this is Pac-Man. The game was originally designed for a 4-way stick, and is programmed to respond only when a new input occurs. If the user is holding the stick in the down position, then suddenly makes a motion to move to the right, what often happens is that the stick first moves into the down-right diagonal, which the game does not recognize as new input since down is still being held. However, right is also now considered held, and when the user completes the motion to move right, it is also not a new input, and Pac-Man will still be moving down.
In cases such as the above, a typical solution is to use a square (diamond) or clover-shaped restrictor, which prevents the stick from entering the diagonals.
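The behaviour described above can be modelled with a short Python sketch. This is a toy model of a 4-way game that only reacts to newly pressed directions and ignores diagonal states, not Pac-Man's actual code, but it shows why sweeping an 8-way stick through a diagonal can mask the intended turn while a 4-way restrictor gate avoids the problem.

def simulate(frames):
    """frames: per-frame sets of held directions; returns the facing per frame."""
    prev, facing = set(), "down"
    out = []
    for held in frames:
        if len(held) == 1:
            (d,) = held
            if d not in prev:          # only a *newly* pressed direction counts
                facing = d
        # diagonal states (two directions held) are not valid 4-way inputs,
        # but they still mark both directions as "already held"
        prev = held
        out.append(facing)
    return out

# 8-way stick sweeping from down to right passes through the down-right
# diagonal, so "right" is already held by the time the diagonal is released:
print(simulate([{"down"}, {"down", "right"}, {"right"}]))
# -> ['down', 'down', 'down']   (the character keeps moving down)

# A 4-way restrictor gate prevents the diagonal state entirely:
print(simulate([{"down"}, set(), {"right"}]))
# -> ['down', 'down', 'right']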
For home arcade controllers, the most common restrictor gate used in modding is the octagonal gate. This is because most mid-grade and higher controllers use Japanese-style sticks with a traditional square gate—which is chosen because it has an equal area of throw for each direction. However, because of the sharp corners, the stick can get stuck in the diagonals if the player is unaccustomed to using a square gate. The octagonal gate allows for continuous motion along the edges using inertia, more similar to the circular motion of American sticks. In a fighting game, many 360-degree-motion special moves are usually easier to execute with an octagonal gate, while charge characters and characters that rely on many direction-based moves benefit more from a stick with a square gate.
== Regional style ==
Traditionally, there has been a divide between "American-style" (associated with the manufacturer Happ) and "Japanese-style" (common to Sanwa and Seimitsu) designs. American joysticks are generally made from hard plastic, with a tall, thick shaft shaped like a baseball bat (bat-top). Common grips for this type of stick utilize 4-5 fingers for pull and push, but all involve grabbing the stick from the side. The stick has a high resistance due to the amount of leverage that it gives to the user. American buttons have a long stroke, which is associated with a clicking action (which also adds to resistance) to let the player know when the switch has been activated. The buttons are generally concave and designed to be pressed with one or two fingers.
Japanese joysticks have a large spherical ball (ball-top) positioned at the top of a short, thin metal shaft. In contrast to the bat-top, a ball-top grip can be reasonably approached from almost any direction—the side, above, or below, and with different placements of the fingers, according to preference. This gives the stick more flexibility for a general audience; however, the ball itself may be considered awkward to hold. Also, because of the shaft's low mounting height, users with large hands may find the setup to be uncomfortable and constricting. Because of the shorter shaft and lighter grips used with this type of stick, resistance is relatively low. Japanese button design is based on requiring less effort from the player to press, and as such the buttons have short strokes and very little resistance. They do not click, as there is usually no question as to whether the button has been pressed; however, this also means that players may find them too sensitive, and resting fingers on buttons requires more care.
In recent years, with the decline of arcades in the West, some popular Japanese arcade games are no longer considered profitable enough to be worth localizing and producing domestically even though they are eventually ported to home consoles. Because these games rely on newer, often proprietary hardware such as HD flat-screen monitors, entire cabinets for these games must be imported from Japan. It is therefore becoming more common to see cabinets with Japanese-style controls in American arcades.
== Joystick grips ==
There are many ways someone can grip the joystick on an arcade controller. One of these is the broomstick grip, which is commonly used for bat-top joysticks. The grip is done by holding the stick with your fingers and palm, with the thumb resting on top, akin to holding a broomstick handle. Some popular players who use the broomstick grip are Justin Wong and JDCR.
An alternative gripping style is called the wine-glass grip. It is done by holding the shaft of the joystick between the fingers (this can be accomplished with different pairs of fingers, such as the pinky and ring finger). Some players turn their palm 90 degrees to rest on the side of the ball top, while others control the ball top from below much like a conventional wine-glass grip. Some notable players who use the wine-glass grip, or one of its variations, are Kazunoko and Daigo Umehara.
One last type of grip is the open-hand grip. Rather than continuously gripping the joystick, the open-hand grip wavers close to the joystick with a claw-shaped hand. When moving in a direction, the palm, fingers, and thumb are used to push or pull the joystick in one of nine hand-poses, each representing the directions in an octagonal restrictor gate.
== At home ==
Prior to the 2000s, it was generally accepted that most home consoles were not powerful enough to accurately replicate arcade games (such games are known as being "arcade-perfect"). As such, there was correspondingly little effort to bring arcade-quality controls into the home. Though many imitation arcade controllers were produced for various consoles and the PC, most were designed for affordability and few were able to deliver the responsiveness or feel of a genuine arcade setup.
Nevertheless, as early as 1990, SNK released the home version of its arcade Neo-Geo MVS system, called the Neo-Geo AES, which featured the same games on the AES (home console) as on the MVS (arcade system), except for coin-op configuration options unavailable on the home console. SNK made only one type of arcade stick and no gamepad for this console. SNK's sturdy AES joystick was considered by many to be the best arcade stick found on a 2D console at the time.
The company Exar later offered a revised reissue with extra buttons called the "Neo Geo Stick 2" (as well as "2+" and "3" versions) for the PS and PS2 in Japan in 2005, for the Wii in 2008 and the PS3 in 2009, and the "Neo Geo Pad USB" in 2010.
In the 2000s, especially outside Japan, arcade attendance decreased as more gamers migrated to increasingly powerful home consoles. In 1994, the Neo-Geo CD was the first CD console to bring arcade games to home systems in an upgraded form: the soundtracks were rendered in CD quality, but the games were otherwise similar to the AES/MVS versions. It was available with a new D-pad arcade stick hybrid, and was compatible with the older AES arcade sticks as well. While the Neo-Geo CD offered only 2D games, in 1998 the Dreamcast became the first console to deliver both 3D games and near-perfect arcade translations, thanks mostly to the similarity in hardware between it and Sega's NAOMI arcade system. Interest in bringing home the arcade experience grew steadily throughout the decade, with fighting game enthusiasts building their own controllers using parts from arcade manufacturers such as Sanwa Denshi, Happ, and Seimitsu. At the same time, the PC became increasingly competent as an arcade emulator with software such as MAME, and enthusiasts have built entire faux arcade cabinets to bring the total experience home. Arcade-style controllers such as the X-Arcade provided more authentic controls for such setups.
Towards the end of the decade, the popularity of the game Street Fighter IV was credited for reviving interest in playing fighting games at the arcade, and for stimulating demand for arcade-quality controllers when the game was ported to home consoles. In a licensing deal for the home version of SF IV, Mad Catz produced the Street Fighter IV FightStick Tournament Edition, the first commercially available console stick in North America to include genuine (Sanwa) arcade parts. They also released a lower-cost version of the controller with Mad Catz's own imitation parts, but designed the housing so that the parts could easily be replaced, for those who wanted to upgrade later. This in turn generated more publicity for modding and building custom sticks. These controllers have also been given the nickname of "fightstick".
=== Leverless arcade controller ===
A leverless arcade controller, also called a leverless controller or a "Hit Box" (named after the company that produced the first commercially available leverless devices), is a type of controller that has the layout of an arcade stick for its attack buttons but replaces the joystick lever with four buttons that control up, down, left and right. Mixbox is also a well-known manufacturer. Usually, the button for up is placed low on the controller, within reach of the thumbs of both hands for easy use, and such controllers have since become popular in many fighting games. In addition to "leverless", these types of controllers have been called an "all-button controller", "button box", or "cheatbox", since the device lets players do some things a normal arcade stick would cause them to struggle with.
Leverless controllers can be rather difficult to get used to at first, since much of the muscle memory built for a regular stick or gamepad is lost, but the benefits for some games can be considerable: in games like Tekken, difficult just-frame inputs for moves such as specials become much easier, since pressing two buttons is more consistent than timing the movement of a joystick to a button press. It also takes less time to press a button than to move a joystick, which means that movement can be more responsive and players can perform some moves faster on reaction. Users of leverless devices have to be careful about SOCD (Simultaneous Opposite Cardinal Directions), in which conflicting inputs, such as pressing both the left and right (or up and down) buttons at the same time, can cause control issues in game. If the game is not programmed to handle these instances of SOCD, issues such as input lockups, delays, and movements not registering properly can occur, breaking the flow of a game.
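SOCD handling ("cleaning") is usually resolved in the controller's firmware or in the game before the direction is acted on. The Python sketch below shows one common cleaning policy (left+right resolves to neutral, up+down resolves to up); this is only an illustration of the idea, and actual policies vary between games, tournaments, and controller firmware.

def clean_socd(up: bool, down: bool, left: bool, right: bool):
    """Resolve simultaneous opposite cardinal directions before use."""
    if left and right:
        left = right = False          # opposite horizontals cancel to neutral
    if up and down:
        down = False                  # up wins over down in this policy
    return up, down, left, right

print(clean_socd(up=False, down=True, left=True, right=True))
# -> (False, True, False, False): holding both horizontals yields "down" only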
== References == | Wikipedia/Arcade_controller |
A paddle is a game controller with a round wheel and one or more fire buttons, where the wheel is typically used to control movement of the player object along one axis of the video screen. A paddle controller rotates through a fixed arc (usually about 330 degrees); it has a stop at each end.
The name paddle is derived from the first game to use it, Pong, a video game simulation of table tennis, whose racquets are commonly called paddles. Even though the simulated paddles appeared on-screen (as small line segments), it was the hand controllers used to move the line segments that actually came to bear the name.
Some famous video games using paddles are Pong, Breakout, and Night Driver.
== Design ==
The paddle wheel is usually mechanically coupled to a potentiometer, so as to generate an output voltage level varying with the wheel's angle relative to a fixed reference position. A paddle is thus an absolute position controller. That is, without any previous knowledge, the sensor can be read and the result directly indicates the position of the paddle knob. This is in contrast to a rotary encoder-based device or "spinner".
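As a minimal sketch of absolute-position sensing, the Python snippet below converts a raw analog-to-digital reading of the paddle's wiper voltage into an angle and a screen position. The ADC resolution and 330-degree travel are taken from the description above; read_adc() is a hypothetical placeholder for whatever acquisition hardware is actually used.

ADC_MAX = 1023        # e.g. a 10-bit converter (assumed)
ARC_DEGREES = 330     # typical paddle travel between its two end stops

def read_adc() -> int:
    """Placeholder: return the raw ADC count for the paddle's wiper voltage."""
    return 512

def paddle_position(screen_width: int = 640):
    raw = read_adc()
    angle = raw / ADC_MAX * ARC_DEGREES             # absolute angle from the left stop
    x = round(raw / ADC_MAX * (screen_width - 1))   # mapped directly to a screen column
    return angle, x

print(paddle_position())   # -> roughly (165 degrees, screen x of about 320)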
== Applications ==
Paddles first appeared in video arcade games with Atari Inc.'s Pong in 1972, while the first console to use paddles was Magnavox's Odyssey that same year. The Apple II shipped with paddles until 1980. The Atari 2600 used paddles for several of its games, as did early home computers such as the VIC-20. True (potentiometer-based) paddles are almost never employed any more because they stop reading accurately when the potentiometer contacts get dirty or worn, because turning them too far can break them and because they require more-expensive analog sensing, whereas quadrature encoder-based controllers can be sensed digitally. Any recent game that has paddle-type control uses a quadrature encoder instead, even if the game uses paddles on screen (like Arkanoid).
== Similar controllers ==
On the Atari 2600, the paddle controllers look very similar to the driving controllers. The driving controllers emulated the steering wheel controls found in contemporary games, where one spun the wheel to cause the car to turn one direction or the other, and stopped the spinning to drive straight. The driving controllers for Atari consoles operated in the same way, although they did not have a wheel; the controller was reduced to a single large knob identical to the one on the paddles.
In comparison to the driving controllers, paddle controllers rotate just under one full rotation before hitting a hard stop. They also come in pairs that plug into a single port, whereas the driving controllers were one to a port. Finally, they have a picture of a tennis racquet and the word "paddle" on it, as opposed to a racing car and the word "driving". Because two controllers connect to each port and the 2600 has two controller ports, four players simultaneously can play in games that support it. The Atari paddles are also compatible with the Atari 8-bit computers, which initially had four controller ports allowing eight paddles. Super Breakout is one example that supported up to 8 players.
Atari also offered driving controllers for use with games like Indy 500, which requires wheels that can spin around continuously in one direction. Driving controllers have a picture of a car and the word "driving" on it and a single controller attaches to each controller port. The driving controller is not compatible with paddle games. Like a mechanical computer mouse, the driving controller is a quadrature encoder-based device and thus only sensed relative position, not absolute position. This controller is functionally identical to the spin-dial controller used in Atari's Tempest arcade game. Since only one controller attaches to each port, only two people can play driving games simultaneously.
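The relative sensing used by the driving controller can be illustrated with a short Python sketch of standard quadrature decoding: two phase-offset signals A and B are sampled, and each valid transition between Gray-code states adds or subtracts one count. This is a generic sketch of the technique, not Atari's original hardware logic.

# Transition table: (previous AB state, new AB state) -> step direction.
QUAD_STEP = {
    (0b00, 0b01): +1, (0b01, 0b11): +1, (0b11, 0b10): +1, (0b10, 0b00): +1,
    (0b00, 0b10): -1, (0b10, 0b11): -1, (0b11, 0b01): -1, (0b01, 0b00): -1,
}

def decode(samples):
    """samples: iterable of (A, B) bit pairs read from the encoder."""
    position, prev = 0, None
    for a, b in samples:
        state = (a << 1) | b
        if prev is not None:
            position += QUAD_STEP.get((prev, state), 0)  # 0 = no change or invalid jump
        prev = state
    return position

# One full clockwise Gray-code cycle (00 -> 01 -> 11 -> 10 -> 00) = +4 counts.
print(decode([(0, 0), (0, 1), (1, 1), (1, 0), (0, 0)]))  # -> 4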
Several similar relative spinner controllers have emerged as part of the home-built arcade cabinet scene to facilitate play of such games as Tempest, including spinners from Oscar Controls and the SlikStik Tornado spinner. These devices are typically made to plug directly into a computer as a single-axis mouse.
== See also ==
Joystick
Racing wheel
Scroll wheel
== References == | Wikipedia/Paddle_(game_controller) |
Radio control (often abbreviated to RC) is the use of control signals transmitted by radio to remotely operate a device. Examples of simple radio control systems are garage door openers and keyless entry systems for vehicles, in which a small handheld radio transmitter unlocks or opens doors. Radio control is also used for control of model vehicles from a hand-held radio transmitter. Industrial, military, and scientific research organizations make use of radio-controlled vehicles as well. A rapidly growing application is control of unmanned aerial vehicles (UAVs or drones) for both civilian and military uses, although these have more sophisticated control systems than traditional applications.
== History ==
The idea of controlling unmanned vehicles (for the most part in an attempt to improve the accuracy of torpedoes for military purposes) predates the invention of radio. The latter half of the 1800s saw development of many such devices, connected to an operator by wires, including the first practical application invented by German engineer Werner von Siemens in 1870.
Getting rid of the wires via using a new wireless technology, radio, appeared in the late 1890s. In 1897 British engineer Ernest Wilson and C. J. Evans patented a radio-controlled torpedo or demonstrated radio-controlled boats on the Thames river (accounts of what they did vary). At an 1898 exhibition at Madison Square Garden, Nikola Tesla demonstrated a small boat that used a coherer-based radio control. With an eye towards selling the idea to the US government as a torpedo, Tesla's 1898 patent included a clockwork frequency changer so an enemy could not take control of the device.
In 1903, the Spanish engineer Leonardo Torres Quevedo introduced a radio-based control system called the "Telekino" at the Paris Academy of Sciences. In the same year, he applied for several patents in other countries. It was intended as a way of testing the Astra-Torres airship, a dirigible of his own design, without risking human lives. Unlike the previous mechanisms, which carried out actions of the 'on/off' type, Torres established a system for controlling any mechanical or electrical device with different states of operation. This method required a transmitter capable of sending a family of different code words by means of a binary telegraph key signal, and a receiver, which was able to set up a different state of operation in the device being used, depending on the code word. It was able to select different positions for the steering engine and different velocities for the propelling engine independently, and also to act on other mechanisms at the same time, such as an electric light (switching it on and off) and a flag (raising or lowering it), for a total of up to 19 different actions. In 1904, Torres chose to carry out the first test on a three-wheeled land vehicle with a range of 20 to 30 meters. In 1906, in the presence of an audience which included King Alfonso XIII of Spain, Torres demonstrated the invention in the Port of Bilbao, guiding the electrically powered launch Vizcaya from the shore with people on board, at a distance of over 2 km.
In 1904, Bat, a Windermere steam launch, was controlled using experimental radio control by its inventor, Jack Kitchen. In 1909 French inventor Gabet demonstrated what he called his "Torpille Radio-Automatique", a radio-controlled torpedo.
In 1917, Archibald Low, as head of the secret Royal Flying Corps (RFC) experimental works at Feltham, was the first person to use radio control successfully on an aircraft, a 1917 Aerial Target. It was "piloted" from the ground by future world aerial speed record holder Henry Segrave. Low's systems encoded the command transmissions as a countermeasure to prevent enemy intervention. By 1918 the secret D.C.B. Section of the Royal Navy's Signals School, Portsmouth under the command of Eric Robinson V.C. used a variant of the Aerial Target’s radio control system to control from ‘mother’ aircraft different types of naval vessels including a submarine.
During World War I American inventor John Hays Hammond, Jr. developed many techniques used in subsequent radio control including developing remote controlled torpedoes, ships, anti-jamming systems and even a system allowing his remote-controlled ship targeting an enemy ship's searchlights. In 1922 he installed radio control gear on the obsolete US Navy battleship USS Iowa so it could be used as a target ship (sunk in gunnery exercise in March 1923).
The Soviet Red Army used remotely controlled teletanks during the 1930s in the Winter War against Finland and fielded at least two teletank battalions at the beginning of the Great Patriotic War. A teletank is controlled by radio from a control tank at a distance of 500–1500 m, the two constituting a telemechanical group. There were also remotely controlled cutters and experimental remotely controlled planes in the Red Army.
The United Kingdom's World War One development of their radio-controlled 1917 'Aerial Target' (AT) and 1918 'Distant Control Boat' (DCB) using Low's control systems led eventually to their 1930s fleet of "Queen Bee". This was a remotely controlled unmanned version of the de Havilland "Tiger Moth" aircraft for Navy fleet gunnery firing practice. The "Queen Bee" was superseded by the similarly named Airspeed Queen Wasp, a purpose-built target aircraft of higher performance.
== Second World War ==
Radio control was further developed during World War II, primarily by the Germans who used it in a number of missile projects. Their main effort was the development of radio-controlled missiles and glide bombs for use against shipping, a target otherwise both difficult and dangerous to attack. However, by the end of the war, the Luftwaffe was having similar problems attacking Allied bombers and developed a number of radio command guided surface-to-air anti-aircraft missiles, none of which saw service.
The effectiveness of the Luftwaffe's systems, primarily comprising the series of Telefunken Funk-Gerät (or FuG) 203 Kehl twin-axis, single joystick-equipped transmitters mounted in the deploying aircraft, and Telefunken's companion FuG 230 Straßburg receiver placed in the ordnance to be controlled during deployment and used by both the Fritz X unpowered, armored anti-ship bomb and the powered Henschel Hs 293 guided bomb, was greatly reduced by British efforts to jam their radio signals, eventually with American assistance. After initial successes, the British launched a number of commando raids to collect the missile radio sets. Jammers were then installed on British ships, and the weapons basically "stopped working". The German development teams then turned to wire-guided missiles once they realized what was going on, but the systems were not ready for deployment until the war had already moved to France.
The German Kriegsmarine operated FL-Boote (ferngelenkte Sprengboote) which were radio controlled motor boats filled with explosives to attack enemy shipping from 1944.
Both the British and US also developed radio control systems for similar tasks, to avoid the huge anti-aircraft batteries set up around German targets. However, no system proved usable in practice, and the one major US effort, Operation Aphrodite, proved to be far more dangerous to its users than to the target. The American Azon guided free-fall ordnance, however, proved useful in both the European and CBI Theaters of World War II.
Radio control systems of this era were generally electromechanical in nature, using small metal "fingers" or "reeds" with different resonant frequencies each of which would operate one of a number of different relays when a particular frequency was received. The relays would in turn then activate various actuators acting on the control surfaces of the missile. The controller's radio transmitter would transmit the different frequencies in response to the movements of a control stick; these were typically on/off signals. The radio gear used to control the rudder function on the American-developed Azon guided ordnance, however, was a fully proportional control, with the "ailerons", solely under the control of an on-board gyroscope, serving merely to keep the ordnance from rolling.
These systems were widely used until the 1960s, when the increasing use of solid state systems greatly simplified radio control. The electromechanical systems using reed relays were replaced by similar electronic ones, and the continued miniaturization of electronics allowed more signals, referred to as control channels, to be packed into the same package. While early control systems might have two or three channels using amplitude modulation, modern systems include twenty or more using frequency modulation.
== Radio-controlled models ==
The first general use of radio control systems in models started in the early 1950s with single-channel self-built equipment; commercial equipment came later. The advent of transistors greatly reduced the battery requirements, since the current requirements at low voltage were greatly reduced and the high voltage battery was eliminated. In both tube and early transistor sets the model's control surfaces were usually operated by an electromagnetic 'escapement' controlling the stored energy in a rubber-band loop, allowing simple on/off rudder control (right, left, and neutral) and sometimes other functions such as motor speed.
Crystal-controlled superheterodyne receivers with better selectivity and stability made control equipment more capable and less costly. Multi-channel developments were of particular use to aircraft, which needed a minimum of three control dimensions (yaw, pitch and motor speed), as opposed to boats, which required only one or two.
As the electronics revolution took off, single-signal channel circuit design became redundant, and instead radios provided proportionally coded signal streams which a servomechanism could interpret, using pulse-width modulation (PWM).
More recently, high-end hobby systems using pulse-code modulation (PCM) have come on the market, providing a computerized digital bit-stream signal to the receiving device instead of the earlier PWM encoding. Even with this coding, however, loss of transmission during flight has become more common, in part because of the increasingly crowded radio environment. Some more modern FM-signal receivers that still use PWM encoding can, thanks to more advanced computer chips, be made to lock onto and use the individual signal characteristics of a particular PWM-type RC transmitter's emissions alone, without needing a special "code" transmitted along with the control information as PCM encoding has always required.
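In the common hobby PWM scheme, each channel is carried as a pulse roughly 1 to 2 ms wide repeated about every 20 ms, and the servo converts the pulse width into a deflection. The Python sketch below shows that mapping; the exact endpoints and total travel vary between manufacturers, so the numbers here are illustrative assumptions rather than a specification.

```python
def pulse_width_to_angle(pulse_us, min_us=1000, max_us=2000, travel_deg=90.0):
    """Map a hobby-RC PWM pulse width (microseconds) to a servo deflection
    in degrees, centred on 1500 us. The 1000-2000 us range and the total
    travel are typical but manufacturer-dependent (illustrative values)."""
    pulse_us = max(min_us, min(max_us, pulse_us))          # clamp out-of-range pulses
    centre = (min_us + max_us) / 2.0
    return (pulse_us - centre) / (max_us - min_us) * travel_deg

for pw in (1000, 1250, 1500, 1750, 2000):
    print(f"{pw} us -> {pulse_width_to_angle(pw):+.1f} deg")
```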
In the early 21st century, 2.4 gigahertz spread spectrum RC control systems have become increasingly common in the control of model vehicles and aircraft, and are now made by most radio manufacturers. These radio systems range in price from a few thousand dollars down to under US$30. Some manufacturers also offer conversion kits for older digital 72 MHz or 35 MHz receivers and radios. Because most 2.4 GHz spread spectrum RC systems use a frequency-agile mode of operation such as FHSS, and so do not remain on one fixed frequency while in use, the older "exclusive use" frequency-control provisions that model flying sites needed for VHF-band RC systems, which operated on a single set frequency unless serviced to change it, are no longer as necessary.
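One reason frequency-agile systems avoid the old fixed-frequency conflicts is that each paired transmitter and receiver derives its own hopping pattern, typically from a shared identifier exchanged during binding. The sketch below illustrates the idea with a simple seeded pseudo-random generator; the channel count and algorithm are assumptions for illustration, not any manufacturer's actual protocol.

```python
import random

def hop_sequence(pairing_id, n_channels=80, length=16):
    """Derive a pseudo-random channel-hopping sequence from a shared pairing
    ID; transmitter and receiver regenerate the same sequence independently.
    Channel count and generator are illustrative, not a real protocol."""
    rng = random.Random(pairing_id)
    return [rng.randrange(n_channels) for _ in range(length)]

# Both ends compute the identical hop pattern from the pairing ID alone,
# so no single fixed frequency needs to be reserved at the flying site.
print(hop_sequence(pairing_id=0xA5F3))
print(hop_sequence(0xA5F3) == hop_sequence(0xA5F3))  # True
```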
== Modern military and aerospace applications ==
Remote control military applications are typically not radio control in the direct sense, directly operating flight control surfaces and propulsion power settings, but instead take the form of instructions sent to a completely autonomous, computerized automatic pilot. Instead of a "turn left" signal that is applied until the aircraft is flying in the right direction, the system sends a single instruction that says "fly to this point".
Some of the most outstanding examples of remote radio control of a vehicle are the Mars rovers, such as Sojourner.
== Industrial radio remote control ==
Today radio control is used in industry for such devices as overhead cranes and switchyard locomotives. Radio-controlled teleoperators are used for purposes such as inspection, and specially equipped vehicles are used for disarming bombs. Some remotely controlled devices are loosely called robots, but are more properly categorized as teleoperators since they do not operate autonomously, but only under the control of a human operator.
An industrial radio remote control can either be operated by a person, or by a computer control system in a machine to machine (M2M) mode. For example, an automated warehouse may use a radio-controlled crane that is operated by a computer to retrieve a particular item. Industrial radio controls for some applications, such as lifting machinery, are required to be of a fail-safe design in many jurisdictions.
Industrial remote controls work differently from most consumer products. When the receiver picks up the radio signal sent by the transmitter, it verifies that the signal is on the correct frequency and that any security codes match. Once the verification is complete, the receiver sends an instruction to a relay, which is activated. The relay activates the function in the application that corresponds to the transmitter's button; this could be engaging an electrical directional motor in an overhead crane.
A receiver usually contains several relays; in something as complex as an overhead crane, perhaps twelve or more relays are required to control all directions, while in a receiver which opens a gate, two relays are often sufficient.
Industrial remote controls are subject to increasingly strict safety requirements. For example, a remote control must not lose its safety functions in the event of a malfunction. This can be addressed by using redundant relays with forced (force-guided) contacts.
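A rough sketch of that receive-verify-actuate flow in Python; the frame fields, command names, and checksum are invented for illustration and do not correspond to any particular product's protocol. The key point is that any failed check drops every output.

```python
def checksum(payload):
    """Toy checksum over the payload contents (illustrative only)."""
    return sum(ord(c) for c in repr(sorted(payload.items()))) % 256

def fail_safe(relays):
    """De-energise every output. With redundant, force-guided relays both
    contact sets must open, so one welded contact cannot keep a motor running."""
    return {name: False for name in relays}

def process_frame(frame, expected_id, relays):
    """Receive-verify-actuate flow: any failed check drops all outputs."""
    if frame.get("system_id") != expected_id:                # security code must match
        return fail_safe(relays)
    if frame.get("checksum") != checksum(frame.get("payload", {})):
        return fail_safe(relays)
    command = frame["payload"].get("button")                 # e.g. "hoist_up"
    if command in (None, "stop") or command not in relays:
        return fail_safe(relays)
    relays = fail_safe(relays)                               # only one motion at a time
    relays[command] = True
    return relays

relays = {"hoist_up": False, "hoist_down": False, "travel_left": False, "travel_right": False}
payload = {"button": "hoist_up"}
frame = {"system_id": 42, "payload": payload, "checksum": checksum(payload)}
print(process_frame(frame, expected_id=42, relays=relays))
```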
== See also ==
Precision-guided munition
Radio-controlled airplane
Radio-controlled boat
Radio-controlled car
Radio-controlled helicopter
Remote control
Remote control vehicle
Telecommand
Teletank
== Notes and references ==
== Further reading ==
Bill Yenne, Attack of the drones: a history of unmanned aerial combat, Zenith Imprint, 2004, ISBN 0-7603-1825-5
Laurence R. Newcome, Unmanned aviation: a brief history of unmanned aerial vehicles, AIAA, 2004, ISBN 1-56347-644-4. | Wikipedia/Radio_control |
In computing, a motion controller is a type of input device that uses accelerometers, gyroscopes, cameras, or other sensors to track motion.
Motion controllers see use as game controllers, for virtual reality and other simulation purposes, and as pointing devices for smart TVs and personal computers.
Many of the technologies needed for motion controllers are often combined in smartphones, where they provide a variety of functions and allow mobile applications to use the phone itself as a motion controller.
== Technologies ==
Motion controllers have used a variety of different sensors in different combinations to detect and measure movements, sometimes as separate inputs and sometimes together to provide a more precise or more reliable input. In modern devices most of the sensors are specialized integrated circuits. The following items are examples of current and historical methods of tracking motion.
=== Inertial motion sensors ===
Inertial measurement units (IMUs) combine gyroscopes, which measure the rate of rotation (angular velocity), with accelerometers, which measure linear acceleration. These are often found together on the same integrated circuit and can be used together to provide six degrees of freedom (6DOF) tracking.
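As a concrete illustration of combining the two sensor types, the sketch below fuses one gyroscope axis with an accelerometer tilt estimate using a simple complementary filter. Real controllers use more sophisticated fusion and full 3D orientation, so the single-axis simplification and constants here are assumptions for illustration only.

```python
import math

def complementary_filter(prev_angle_deg, gyro_rate_dps, accel_x, accel_z, dt, alpha=0.98):
    """Fuse one gyroscope axis with an accelerometer tilt estimate (single-axis
    pitch for illustration). The gyro integrates smoothly but drifts; the
    accelerometer is noisy but drift-free and slowly corrects the estimate."""
    gyro_angle = prev_angle_deg + gyro_rate_dps * dt           # integrate angular rate
    accel_angle = math.degrees(math.atan2(accel_x, accel_z))   # tilt from the gravity vector
    return alpha * gyro_angle + (1.0 - alpha) * accel_angle

# Example: 100 Hz samples, device held still at roughly a 10-degree tilt,
# with a small constant gyro bias that the accelerometer term cancels out.
angle = 0.0
for _ in range(300):
    angle = complementary_filter(angle, gyro_rate_dps=0.5, accel_x=0.17, accel_z=0.98, dt=0.01)
print(round(angle, 1))   # settles near 10 degrees
```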
=== Cameras ===
Image sensors are used in conjunction with computer vision. Placed on handheld or worn devices, or in the environment, they detect the relative locations of other devices and of the environment, or detect the movements of any or all parts of a user's body. They may be used in combination with paired light emitters that are tracked directly when seen by the camera, or indirectly through reflections of infrared light.
=== Magnetometer ===
A magnetic field sensor in a device may be used to detect the direction of the earth's magnetic field, or the direction to a nearby base station.
=== Mechanical ===
Mechanical sensing methods using potentiometers, Hall effect sensors, and incremental encoders have historically seen use as the basis for motion tracking but they have since mostly been replaced for that purpose by MEMS and other types of integrated circuit technologies. These sensors are used to track mechanical connections between a control element and a static object such as an arcade cabinet.
Weighing scales using load cells have been used to detect balance changes and other body movements through changes in weight distribution and momentary fluctuation in measured weight.
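A board with a load cell in each corner can estimate balance shifts by computing a center of pressure from the four readings. The sketch below shows one such calculation; the sensor spacing and readings are invented for illustration rather than taken from any specific product.

```python
def center_of_pressure(front_left, front_right, back_left, back_right,
                       half_width=0.2, half_depth=0.15):
    """Estimate the center of pressure (metres from the board centre) from four
    corner load-cell readings (kilograms or newtons; the units cancel out).
    Sensor positions are illustrative; a real board would use calibrated offsets."""
    total = front_left + front_right + back_left + back_right
    if total <= 0:
        return 0.0, 0.0
    x = ((front_right + back_right) - (front_left + back_left)) / total * half_width
    y = ((front_left + front_right) - (back_left + back_right)) / total * half_depth
    return x, y

# Leaning slightly forward and to the right:
print(center_of_pressure(18.0, 22.0, 14.0, 16.0))
```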
Unrelated to their use in motion tracking, mechanical sensors continue to see much use in joysticks and other controls that are found on motion controllers and other input devices.
=== Other ===
Ultrasonic triangulation and mercury switches were seen in optional peripherals for home video game consoles in the 1980s.
== History ==
Early uses of motion controllers included the Sega AM2 arcade game Hang-On, which was controlled using an arcade cabinet resembling a motorbike that the player moved with their body. This began the "Taikan" trend, the use of motion-controlled hydraulic arcade cabinets in many arcade games of the late 1980s, two decades before motion controls became popular on video game consoles.
The Sega VR headset was an early unreleased VR device with built-in motion tracking, first announced in 1991. Its sensors tracked the player's movement and head position. Another early example is the 2000 light gun shooter arcade game Police 911, which used motion tracking technology to detect the player's movements, which are reflected by the player character within the game. The Atari Mindlink was an early proposed motion controller for the Atari 2600, which measured the movement of the user's eyebrows with a fitted headband.
The Sega Activator was based on the Light Harp invented by Assaf Gurner. It was released as an optional accessory for the Mega Drive (Genesis) in 1993 and could read the player's physical movements using full-body motion tracking. It was a commercial failure due to its "unwieldiness and inaccuracy".
Motion controllers became more widely distributed with the seventh generation of video game consoles. The Nintendo Wii console's Wii Remote controller used an image sensor, so it could be used as a pointing device, along with an accelerometer to track straight-line motions and the direction of gravity. The Nunchuk accessory, for use in the second hand, also featured an accelerometer. A later line of accessories and refreshed controllers labeled with the Motion Plus feature added gyroscopic sensors to track all three axes of rotation, independent of whether the controller had line of sight to the sensor bar.
The PlayStation 3 launched with the Sixaxis controller included, which featured three-axis accelerometer motion tracking and a single-axis gyroscope, while omitting the haptic feedback (vibration) seen in other contemporary controllers, with Sony citing interference concerns. Both features were included in the later DualShock 3 controller refresh.
Several wand-based devices with accelerometer and gyroscopic sensors followed, including the ASUS Eee Stick, Sony PlayStation Move (adding computer vision via the PlayStation Eye to aid in position tracking), and HP Swing. Other systems used different mechanisms for input, such as Microsoft's Kinect, which combined infrared structured light and computer vision, and the Razer Hydra, which used a magnetometer.
Nintendo and Sony would adopt motion tracking using gyroscopes and accelerometers as a standard hardware feature in successive generations, starting with their handheld consoles, the 3DS and the PS Vita, both of which had the required three-axis accelerometers and gyroscopes. In the eighth generation of video game consoles Nintendo and Sony included those sensors as a standard feature of their two-handed game controllers, the Wii U GamePad and the DualShock 4. The consoles also supported some motion controllers from the previous generation, depending on the individual game.
Valve's Steam Controller was designed solely for use with PCs and required its Steam software. Its 6DOF sensors were made available for use by games published on Steam, and options available to users allowed the use of its gyroscope as a pointer control. Its motion tracking features would later be adapted for the Steam Deck.
A wave of virtual reality headsets released in the 2010s adopted forms of 6DOF motion controllers; the HTC Vive was bundled with wand-like controllers, while controllers known as Oculus Touch were released initially as an optional accessory for Oculus Rift in December 2016, and became part of its standard equipment in July 2017. Both controllers are tracked using infrared emitters placed in the play space. Oculus later switched to an "inside-out" tracking system for Oculus Quest and Rift S, where the controllers are tracked by cameras in the headset itself.
The Nintendo Switch hybrid home/portable console and its included Joy-Con controllers feature 6DOF sensors in each controller in the pair as well as in the main body of the console. The optional Nintendo Switch Pro Controller and Poké Ball Plus controllers also feature 6DOF sensors.
In the ninth generation the Sony PlayStation 5 continues to provide similar motion tracking for the included DualSense controllers, while supporting the use of older generations of motion controllers when playing backwards compatible games.
== Notable controllers ==
EyeToy (PlayStation 2)
Xbox Live Vision (Xbox 360)
Wii Remote (Wii and Wii U)
Sixaxis (PlayStation 3)
DualShock 3, 4 and DualSense (PlayStation 3, PlayStation 4 and PlayStation 5)
PlayStation Move (PlayStation 3, PlayStation 4 and PlayStation 5)
Wii U GamePad (Wii U)
Kinect (Xbox 360, Xbox One, Windows)
Razer Hydra
Xavix
Joy-Con and Nintendo Switch Pro Controller (Nintendo Switch)
Steam Controller
Steam Deck
Nex Playground
== See also ==
3D motion controller
Flick Stick
Gesture recognition
Motion capture
== References == | Wikipedia/Motion_controller |
A remote control locomotive (also called an RCL) is a railway locomotive that can be operated with a remote control. It differs from a conventional locomotive in that a remote control system has been installed in one or more locomotives within the consist, which uses either a mechanical or radio transmitter and receiver system. The locomotive is operated by a person not physically at the controls within the locomotive cab. They have been in use for many years in the railroad industry, including industrial applications such as bulk material load-out, manufacturing, process and industrial switching. The systems are designed to be fail-safe so that if communication is lost the locomotive is brought to a stop automatically.
== History ==
=== United Kingdom ===
One of the earliest remote control locomotives was the GWR Autocoach, which replaced the GWR steam rail motors on both operational cost and maintenance grounds. When running 'autocoach first', the regulator is operated by a linkage to a rotating shaft running the length of the locomotive, passing below the cab floor. This engages (via a telescopic coupling) with another shaft running the full length below the floor of the autocoach. This shaft is turned by a second regulator lever in the cab of the autocoach. The driver can operate the regulator, brakes and whistle from the far (cab) end of the autocoach; the fireman remains on the locomotive and (in addition to firing) also controls the valve gear settings. The driver can also warn of the train's approach using a large mechanical gong, prominently mounted high on the cab end of the autocoach, which is operated by stamping on a pedal on the floor of the cab. The driver, guard and fireman communicate with each other by an electric bell system.
=== North America ===
In North America remote controlled locomotives have been in use since the 1980s. In 1988, the US Occupational Safety and Health Administration issued a hazard information bulletin regarding their use. By 1999 Canadian National had 115 locomotives equipped with remote control equipment, covering 70% of flat-yard switching and all of its hump yard operations. Canadian National estimated a savings of CA$20 million per year vs. traditional switching operations.
The Brotherhood of Locomotive Engineers and Trainmen expressed concerns about remote control locomotives. The union said that remote control locomotives were not as efficient as traditional engineer-in-cab switching operations while being more dangerous.
In 2001, the US Federal Railroad Administration (FRA) recommended minimal guidelines for the operation of remote control locomotives.
Union Pacific developed remote-control enabled locomotives, referred to as control car remote control locomotives (CCRCL). A CCRCL is a stripped-down locomotive fitted with remote control equipment. A CCRCL has no motive power and must be coupled to a standard locomotive.
== Present ==
Modern remote control systems are now based on digital signal technology, with most using time-division multiplexing transmission to cut down on the number of cables or radio bandwidth required for integrated control.
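The idea behind time-division multiplexing here is that each control channel is given a fixed slot in a repeating serial frame, so many functions can share one wire or radio link. A minimal Python sketch follows, with a made-up slot layout, sync word, and checksum rather than any real train-control bus format.

```python
def build_tdm_frame(channel_values, sync_word=0x7E):
    """Pack several control channels into one fixed-slot serial frame.
    Slot order, sync word, and checksum are illustrative assumptions."""
    frame = [sync_word]
    for value in channel_values:           # one time slot per channel, in fixed order
        frame.append(value & 0xFF)
    frame.append(sum(frame) & 0xFF)        # simple checksum slot
    return bytes(frame)

def parse_tdm_frame(frame, n_channels):
    """Receiver side: recover each channel from its fixed slot position."""
    assert frame[-1] == (sum(frame[:-1]) & 0xFF), "corrupted frame"
    return list(frame[1:1 + n_channels])

frame = build_tdm_frame([0x10, 0x80, 0xFF, 0x00])   # e.g. throttle, brake, doors, lights
print(parse_tdm_frame(frame, 4))
```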
The UK's InterCity 125 was the first passenger train to use TDM technology, introduced from 1976 to allow it to control up to eight carriages sandwiched between two Class 43 power cars.
Locotrol is a product of GE Transportation that enables distributed power sending signals from the lead locomotive to the remote units via radio control. Locotrol is installed on more than 8,500 locomotives around the world. Users of the system include BHP Iron Ore, Westrail and Aurizon in Australia.
== References == | Wikipedia/Remote_control_locomotive |
In applied mathematics and astrodynamics, in the theory of dynamical systems, a crisis is the sudden appearance or disappearance of a strange attractor as the parameters of a dynamical system are varied. This global bifurcation occurs when a chaotic attractor comes into contact with an unstable periodic orbit or its stable manifold. As the orbit approaches the unstable orbit it will diverge away from the previous attractor, leading to a qualitatively different behaviour. Crises can produce intermittent behaviour.
Grebogi, Ott, Romeiras, and Yorke distinguished between three types of crises:
In the first type, a boundary or exterior crisis, the attractor is suddenly destroyed as the parameters are varied. In the postbifurcation state the motion is transiently chaotic, moving chaotically along the former attractor before being attracted to a fixed point, periodic orbit, quasiperiodic orbit, another strange attractor, or diverging to infinity.
In the second type of crisis, an interior crisis, the size of the chaotic attractor suddenly increases. The attractor encounters an unstable fixed point or periodic solution that is inside the basin of attraction.
In the third type, an attractor merging crisis, two or more chaotic attractors merge to form a single attractor as the critical parameter value is passed.
Note that the reverse case (sudden appearance, shrinking or splitting of attractors) can also occur. The latter two crises are sometimes called explosive bifurcations.
While crises are "sudden" as a parameter is varied, the dynamics of the system over time can show long transients before orbits leave the neighbourhood of the old attractor. Typically there is a time constant τ for the length of the transient that diverges as a power law, τ ∼ |p − pc|^(−γ), near the critical parameter value pc. The exponent γ is called the critical crisis exponent. There also exist systems where the divergence is stronger than a power law, so-called super-persistent chaotic transients.
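As a concrete illustration of this scaling, the logistic map x → rx(1−x) undergoes a boundary crisis at r = 4: for r slightly above 4 the former chaotic attractor becomes a chaotic saddle, and typical orbits escape the unit interval only after a long chaotic transient. The Python sketch below estimates the mean transient length for a few parameter values; the commonly quoted exponent γ ≈ 1/2 for this map is used only as an approximate, assumed reference value.

```python
import numpy as np

def mean_transient_length(r, n_orbits=500, max_iter=200_000, seed=0):
    """Average number of logistic-map iterations x -> r*x*(1-x) before an
    orbit leaves the unit interval. Escape is only possible for r > 4,
    i.e. just past the boundary crisis at r_c = 4; orbits that survive
    max_iter iterations are simply ignored here."""
    rng = np.random.default_rng(seed)
    lengths = []
    for x in rng.uniform(0.0, 1.0, n_orbits):
        for n in range(max_iter):
            x = r * x * (1.0 - x)
            if x < 0.0 or x > 1.0:          # orbit has left the former attractor
                lengths.append(n)
                break
    return float(np.mean(lengths)) if lengths else float("inf")

# The mean transient grows as r approaches the crisis value from above,
# roughly like (r - 4) ** (-gamma), with gamma about 1/2 for this map.
for eps in (1e-2, 1e-3, 1e-4):
    print(f"r = 4 + {eps:g}: mean transient ~ {mean_transient_length(4 + eps):.0f} iterations")
```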
== See also ==
Intermittency
Bifurcation diagram
Phase portrait
== References ==
== External links ==
Scholarpedia: Crises | Wikipedia/Crisis_(dynamical_systems) |
Terrestrial locomotion has evolved as animals adapted from aquatic to terrestrial environments. Locomotion on land raises different problems than that in water, with reduced friction being replaced by the increased effects of gravity.
As viewed from evolutionary taxonomy, there are three basic forms of animal locomotion in the terrestrial environment:
legged – moving by using appendages
limbless locomotion – moving without legs, primarily using the body itself as a propulsive structure
rolling – rotating the body over the substrate
Some terrains and terrestrial surfaces permit or demand alternative locomotive styles. A sliding component to locomotion becomes possible on slippery surfaces (such as ice and snow), where locomotion is aided by potential energy, or on loose surfaces (such as sand or scree), where friction is low but purchase (traction) is difficult. Humans, especially, have adapted to sliding over terrestrial snowpack and terrestrial ice by means of ice skates, snow skis, and toboggans.
Aquatic animals adapted to polar climates, such as ice seals and penguins also take advantage of the slipperiness of ice and snow as part of their locomotion repertoire. Beavers are known to take advantage of a mud slick known as a "beaver slide" over a short distance when passing from land into a lake or pond. Human locomotion in mud is improved through the use of cleats. Some snakes use an unusual method of movement known as sidewinding on sand or loose soil. Animals caught in terrestrial mudflows are subject to involuntary locomotion; this may be beneficial to the distribution of species with limited locomotive range under their own power. There is less opportunity for passive locomotion on land than by sea or air, though parasitism (hitchhiking) is available toward this end, as in all other habitats.
Many species of monkeys and apes use a form of arboreal locomotion known as brachiation, with forelimbs as the prime mover. Some elements of the gymnastic sport of uneven bars resemble brachiation, but most adult humans do not have the upper body strength required to sustain brachiation. Many other species of arboreal animal with tails will incorporate their tails into the locomotion repertoire, if only as a minor component of their suspensory behaviors.
Locomotion on irregular, steep surfaces requires agility and the dynamic balance known as sure-footedness. Mountain goats are famed for navigating vertiginous mountainsides where the least misstep could lead to a fatal fall.
Many species of animals must sometimes locomote while safely conveying their young. Most often this task is performed by adult females. Some species are specially adapted to conveying their young without occupying their limbs, such as marsupials with their special pouch. In other species, the young are carried on the mother's back, and the offspring have instinctual clinging behaviours. Many species incorporate specialized transportation behaviours as a component of their locomotion repertoire, such as the dung beetle when rolling a ball of dung, which combines both rolling and limb-based elements.
The remainder of this article focuses on the anatomical and physiological distinctions involving terrestrial locomotion from the taxonomic perspective.
== Legged locomotion ==
Movement on appendages is the most common form of terrestrial locomotion; it is the basic form of locomotion of two major groups with many terrestrial members, the vertebrates and the arthropods. Important aspects of legged locomotion are posture (the way the body is supported by the legs), the number of legs, and the functional structure of the leg and foot. There are also many gaits, ways of moving the legs to locomote, such as walking, running, or jumping.
=== Posture ===
Appendages can be used for movement in many ways: the posture, the way the body is supported by the legs, is an important aspect. There are three main ways in which vertebrates support themselves with their legs – sprawling, semi-erect, and fully erect. Some animals may use different postures in different circumstances, depending on the posture's mechanical advantages. There is no detectable difference in energetic cost between stances.
The "sprawling" posture is the most primitive, and is the original limb posture from which the others evolved. The upper limbs are typically held horizontally, while the lower limbs are vertical, though upper limb angle may be substantially increased in large animals. The body may drag along the ground, as in salamanders, or may be substantially elevated, as in monitor lizards. This posture is typically associated with trotting gaits, and the body flexes from side-to-side during movement to increase step length. All limbed reptiles, excluding birds, and salamanders use this posture, as does the platypus and several species of frogs that walk. Unusual examples can be found among amphibious fish, such as the mudskipper, which drag themselves across land on their sturdy fins. Among the invertebrates, most arthropods – which includes the most diverse group of animals, the insects – have a stance best described as sprawling. There is also anecdotal evidence that some octopus species (such as the genus Pinnoctopus) can also drag themselves across land a short distance by hauling their body along by their tentacles (for example to pursue prey between rockpools) – there may be video evidence of this. The semi-erect posture is more accurately interpreted as an extremely elevated sprawling posture. This mode of locomotion is typically found in large lizards such as monitor lizards and tegus.
Mammals and birds typically have a fully erect posture, though each evolved it independently. In these groups the legs are placed beneath the body. This is often linked with the evolution of endothermy, as it avoids Carrier's constraint and thus allows prolonged periods of activity. The fully erect stance is not necessarily the "most-evolved" stance; evidence suggests that crocodilians evolved a semi-erect stance in their forelimbs from ancestors with fully erect stance as a result of adapting to a mostly aquatic lifestyle, though their hindlimbs are still held fully erect. For example, the Mesozoic prehistoric crocodilian Erpetosuchus is believed to have had a fully erect stance and been terrestrial.
=== Number of legs ===
The number of locomotory appendages varies much between animals, and sometimes the same animal may use different numbers of its legs in different circumstances. The best contender for unipedal movement is the springtail, which while normally hexapedal, hurls itself away from danger using its furcula, a tail-like forked rod that can be rapidly unfurled from the underside of its body.
A number of species move and stand on two legs, that is, they are bipedal. The group that is exclusively bipedal is the birds, which have either an alternating or a hopping gait. There are also a number of bipedal mammals. Most of these move by hopping – including the macropods such as kangaroos and various jumping rodents. Only a few mammals such as humans and the ground pangolin commonly show an alternating bipedal gait. In humans, alternating bipedalism is characterized by a bobbing motion, which is due to the utilization of gravity when falling forward. This form of bipedalism has demonstrated significant energy savings. Cockroaches and some lizards may also run on their two hind legs.
With the exception of the birds, terrestrial vertebrate groups with legs are mostly quadrupedal – the mammals, reptiles, and the amphibians usually move on four legs. There are many quadrupedal gaits.
The most diverse group of animals on earth, the insects, are included in a larger taxon known as hexapods, most of which are hexapedal, walking and standing on six legs. Exceptions among the insects include praying mantises and water scorpions, which are quadrupeds with their front two legs modified for grasping, some butterflies such as the Lycaenidae (blues and hairstreaks) which use only four legs, and some kinds of insect larvae that may have no legs (e.g., maggots), or additional prolegs (e.g., caterpillars).
Spiders and many of their relatives move on eight legs – they are octopedal. However, some creatures move on many more legs. Terrestrial crustaceans may have a fair number – woodlice having fourteen legs. Also, as previously mentioned, some insect larvae such as caterpillars and sawfly larvae have up to five (caterpillars) or nine (sawflies) additional fleshy prolegs in addition to the six legs normal for insects.
Some species of invertebrate have even more legs, the unusual velvet worm having stubby legs along the length of its body, with several dozen pairs of them. Centipedes have one pair of legs per body segment, with typically around 50 legs, but some species have over 200. The terrestrial animals with the most legs are the millipedes. They have two pairs of legs per body segment, with common species having between 80 and 400 legs overall – with the rare species Illacme plenipes having up to 750 legs.
Animals with many legs typically move them in metachronal rhythm, which gives the appearance of waves of motion travelling forward or backward along their rows of legs. Millipedes, caterpillars, and some small centipedes move with the leg waves travelling forward as they walk, while larger centipedes move with the leg waves travelling backward.
=== Leg and foot structure ===
The legs of tetrapods, the main group of terrestrial vertebrates (a category that also includes amphibious fish), have internal bones, with externally attached muscles for movement, and the basic form has three key joints: the shoulder joint, the knee joint, and the ankle joint, at which the foot is attached. Within this form there is much variation in structure and shape. An alternative form of vertebrate 'leg' to the tetrapod leg is the fins found on amphibious fish. A few tetrapods, such as the macropods, have also adapted their tails as additional locomotory appendages.
The fundamental form of the vertebrate foot has five digits; however, some animals have fused digits, giving them fewer, and some early fishapods had more; Acanthostega had eight toes. Among tetrapods, only the ichthyosaurs evolved more than five digits, during their transition back from land to water, as their limb terminations became flippers. Feet have evolved many forms depending on the animal's needs. One key variation is where on the foot the animal's weight is placed. Some vertebrates (amphibians, reptiles, and some mammals such as humans, bears, and rodents) are plantigrade. This means the weight of the body is placed on the heel of the foot, giving it strength and stability. Most mammals, such as cats and dogs, are digitigrade, walking on their toes, which gives them what many people mistake for a “backward knee”, which is really their ankle. The extension of the joint helps store momentum and acts as a spring, allowing digitigrade creatures more speed. Digitigrade mammals are also often adept at quiet movement. Birds are also digitigrade. Hooved mammals are known as ungulates, walking on the fused tips of their fingers and toes. These range from odd-toed ungulates, such as horses, rhinos, and a few wild African ungulates, to even-toed ungulates, such as pigs, cows, deer, and goats.
Mammals whose limbs have adapted to grab objects have what are called prehensile limbs. This term can be attributed to front limbs as well as tails for animals such as monkeys and some rodents. All animals that have prehensile front limbs are plantigrade, even if their ankle joint looks extended (squirrels are a good example).
Among terrestrial invertebrates there are a number of leg forms. The arthropod legs are jointed and supported by hard external armor, with the muscles attached to the internal surface of this exoskeleton. The other group of legged terrestrial invertebrates, the velvet worms, have soft stumpy legs supported by a hydrostatic skeleton. The prolegs that some caterpillars have in addition to their six more-standard arthropod legs have a similar form to those of velvet worms, and suggest a distant shared ancestry.
=== Gaits ===
Animals show a vast range of gaits, the order that they place and lift their appendages in locomotion. Gaits can be grouped into categories according to their patterns of support sequence. For quadrupeds, there are three main categories: walking gaits, running gaits, and leaping gaits. In one system (relating to horses), there are 60 discrete patterns: 37 walking gaits, 14 running gaits, and 9 leaping gaits.
Walking is the most common gait, where some feet are on the ground at any given time, and found in almost all legged animals. In an informal sense, running is considered to occur when at some points in the stride all feet are off the ground in a moment of suspension. Technically, however, moments of suspension occur in both running gaits (such as trot) and leaping gaits (such as canter and gallop). Gaits involving one or more moments of suspension can be found in many animals, and compared to walking they are faster but more energetically costly forms of locomotion.
Animals will use different gaits for different speeds, terrain, and situations. For example, horses show four natural gaits, the slowest horse gait is the walk, then there are three faster gaits which, from slowest to fastest, are the trot, the canter, and the gallop. Animals may also have unusual gaits that are used occasionally, such as for moving sideways or backwards. For example, the main human gaits are bipedal walking and running, but they employ many other gaits occasionally, including a four-legged crawl in tight spaces.
In walking, and for many animals running, the motion of legs on either side of the body alternates, i.e. is out of phase. Other animals, such as a horse when galloping, or an inchworm, alternate between their front and back legs.
In saltation (hopping) all legs move together, instead of alternating. As a main means of locomotion, this is usually found in bipeds, or semi-bipeds. Among the mammals saltation is commonly used among kangaroos and their relatives, jerboas, springhares, kangaroo rats, hopping mice, gerbils, and sportive lemurs. Certain tendons in the hind legs of kangaroos are very elastic, allowing kangaroos to effectively bounce along conserving energy from hop to hop, making saltation a very energy efficient way to move around in their nutrient poor environment. Saltation is also used by many small birds, frogs, fleas, crickets, grasshoppers, and water fleas (a small planktonic crustacean).
Most animals move in the direction of their head. However, there are some exceptions. Crabs move sideways, and naked mole rats, which live in tight tunnels, can move backward or forward with equal facility. Crayfish can move backward much faster than they can move forward.
Gait analysis is the study of gait in humans and other animals. This may involve videoing subjects with markers on particular anatomical landmarks and measuring the forces of their footfall using floor transducers (strain gauges). Skin electrodes may also be used to measure muscle activity.
== Limbless locomotion ==
There are a number of terrestrial and amphibious limbless vertebrates and invertebrates. These animals, due to lack of appendages, use their bodies to generate propulsive force. These movements are sometimes referred to as "slithering" or "crawling", although neither are formally used in the scientific literature and the latter term is also used for some animals moving on all four limbs. All limbless animals come from cold-blooded groups; there are no endothermic limbless animals, i.e. there are no limbless birds or mammals.
=== Lower body surface ===
Where the foot is important to the legged mammal, for limbless animals the underside of the body is important. Some animals such as snakes or legless lizards move on their smooth dry underside. Other animals have various features that aid movement. Molluscs such as slugs and snails move on a layer of mucus that is secreted from their underside, reducing friction and protecting from injury when moving over sharp objects. Earthworms have small bristles (setae) that hook into the substrate and help them move. Some animals, such as leeches, have suction cups on either end of the body allowing two-anchor movement.
=== Type of movement ===
Some limbless animals, such as leeches, have suction cups on either end of their body, which allow them to move by anchoring the rear end, extending the front end forward and anchoring it, and then drawing the rear end up, and so on. This is known as two-anchor movement. A legged animal, the inchworm, also moves like this, clasping with appendages at either end of its body.
Limbless animals can also move using pedal locomotory waves, rippling the underside of the body. This is the main method used by molluscs such as slugs and snails, and also large flatworms, some other worms, and even earless seals. The waves may move in the opposite direction to motion, known as retrograde waves, or in the same direction as motion, known as direct waves. Earthworms move by retrograde waves, alternately swelling and contracting down the length of their body, the swollen sections being held in place using setae. Aquatic molluscs such as limpets, which are sometimes out of the water, tend to move using retrograde waves. However, terrestrial molluscs such as slugs and snails tend to use direct waves. Lugworms and seals also use direct waves.
Most snakes move using lateral undulation, in which a lateral wave travels down the snake's body in the opposite direction to the snake's motion and pushes the snake off irregularities in the ground. This mode of locomotion requires these irregularities to function. Another form of locomotion, rectilinear locomotion, is used at times by some snakes, especially large ones such as pythons and boas. Here, large scales on the underside of the body known as scutes are used to push backwards and downwards. This is effective on a flat surface and is used for slow, silent movement, such as when stalking prey. Snakes use concertina locomotion for moving slowly in tunnels; here the snake alternately braces parts of its body against its surroundings. Finally, the caenophidian snakes use the fast and unusual method of movement known as sidewinding on sand or loose soil. The snake repeatedly throws the front part of its body in the direction of motion and then brings the back part of its body into line crosswise.
== Rolling ==
Although animals have never evolved wheels for locomotion, a small number of animals will move at times by rolling their whole body. Rolling animals can be divided into those that roll under the force of gravity or wind and those that roll using their own power.
=== Gravity or wind assisted ===
The web-toed salamander, a 10-centimetre (3.9 in) salamander, lives on steep hills in the Sierra Nevada mountains. When disturbed or startled it coils itself up into a ball, often causing it to roll downhill.
The pebble toad (Oreophrynella nigra) lives atop tepuis in the Guiana Highlands of South America. When threatened, often by tarantulas, it rolls into a ball and, typically being on an incline, rolls away under gravity like a loose pebble.
Namib wheeling spiders (Carparachne spp.), found in the Namib desert, will actively roll down sand dunes. This action can be used to successfully escape predators such as the Pompilidae tarantula wasps, which lay their eggs in a paralyzed spider for their larvae to feed on when they hatch. The spiders flip their body sideways and then cartwheel over their bent legs. The rotation is fast, the golden wheel spider (Carparachne aureoflava) moving up to 20 revolutions per second, moving the spider at 1 metre per second (3.3 ft/s).
Coastal tiger beetle larvae, when threatened, can flick themselves into the air and curl their bodies to form a wheel, which the wind blows, often uphill, as far as 25 m (80 ft) and as fast as 11 km/h (3 m/s; 7 mph). They also may have some ability to steer themselves in this state.
Pangolins, a type of mammal covered in thick scales, roll into a tight ball when threatened. Pangolins have been reported to roll away from danger, by both gravity and self-powered methods. A pangolin in hill country in Sumatra, to flee from a researcher, ran to the edge of a slope and curled into a ball to roll down the slope, crashing through the vegetation, and covering an estimated 30 metres (100 ft) or more in 10 seconds.
=== Self-powered ===
Caterpillars of the mother-of-pearl moth, Pleuroptya ruralis, when attacked, will touch their heads to their tails and roll backwards, up to 5 revolutions at about 40 centimetres per second (16 in/s), which is about 40 times its normal speed.
Nannosquilla decemspinosa, a species of long-bodied, short-legged mantis shrimp, lives in shallow sandy areas along the Pacific coast of Central and South America. When stranded by a low tide the 3 cm (1.2 in) stomatopod lies on its back and performs backwards somersaults over and over. The animal moves up to 2 metres (6.5 ft) at a time by rolling 20–40 times, with speeds of around 72 revolutions per minute. That is 1.5 body lengths per second (3.5 cm/s or 1.4 in/s). Researchers estimate that the stomatopod acts as a true wheel around 40% of the time during this series of rolls. The remaining 60% of the time it has to "jumpstart" a roll by using its body to thrust itself upwards and forwards.
Pangolins have also been reported to roll away from danger by self-powered methods. Witnessed by a lion researcher in the Serengeti in Africa, a group of lions surrounded a pangolin, but could not get purchase on it when it rolled into a ball, and so the lions sat around it waiting and dozing. Surrounded by lions, it would unroll itself slightly and give itself a push to roll some distance, until by doing this multiple times it could get far enough away from the lions to be safe. Moving like this would allow a pangolin to cover distance while still remaining in a protective armoured ball.
Moroccan flic-flac spiders, if provoked or threatened, can escape by doubling their normal walking speed using forward or backward flips similar to acrobatic flic-flac movements.
== Limits and extremes ==
The fastest terrestrial animal is the cheetah, which can attain maximal sprint speeds of approximately 104 km/h (64 mph). The fastest running lizard is the black iguana, which has been recorded moving at speed of up to 34.9 km/h (21.7 mph).
== See also ==
== References ==
== Bibliography ==
Alexander, R McNeill (2003). Principles of Animal Locomotion. Princeton University Press. ISBN 978-0-691-08678-1.
== External links ==
Adaptations of running animals
Crocodile stance
Tetrapod stance
Lecture on crawling (slithering) at Berkeley
Animation of earthworm movement by a propagating retrograde wave | Wikipedia/Terrestrial_locomotion |
Statistical language acquisition, a branch of developmental psycholinguistics, studies the process by which humans develop the ability to perceive, produce, comprehend, and communicate with natural language in all of its aspects (phonological, syntactic, lexical, morphological, semantic) through the use of general learning mechanisms operating on statistical patterns in the linguistic input. The statistical learning account claims that infants' language learning is based on pattern perception rather than an innate biological grammar. Several statistical elements, such as the frequency of words, frequent frames, phonotactic patterns, and other regularities, provide information on language structure and meaning and so facilitate language acquisition.
== Philosophy ==
Fundamental to the study of statistical language acquisition is the centuries-old debate between rationalism (or its modern manifestation in the psycholinguistic community, nativism) and empiricism, with researchers in this field falling strongly in support of the latter category. Nativism is the position that humans are born with innate domain-specific knowledge, especially inborn capacities for language learning. Ranging from seventeenth century rationalist philosophers such as Descartes, Spinoza, and Leibniz to contemporary philosophers such as Richard Montague and linguists such as Noam Chomsky, nativists posit an innate learning mechanism with the specific function of language acquisition.
In modern times, this debate has largely surrounded Chomsky's support of a universal grammar, properties that all natural languages must have, through the controversial postulation of a language acquisition device (LAD), an instinctive mental 'organ' responsible for language learning which searches all possible language alternatives and chooses the parameters that best match the learner's environmental linguistic input. Much of Chomsky's theory is founded on the poverty of the stimulus (POTS) argument, the assertion that a child's linguistic data is so limited and corrupted that learning language from this data alone is impossible. As an example, many proponents of POTS claim that because children are never exposed to negative evidence, that is, information about what phrases are ungrammatical, the language structure they learn would not resemble that of correct speech without a language-specific learning mechanism. Chomsky's argument for an internal system responsible for language, biolinguistics, poses a three-factor model. "Genetic endowment" allows the infant to extract linguistic info, detect rules, and have universal grammar. "External environment" illuminates the need to interact with others and the benefits of language exposure at an early age. The last factor encompasses the brain properties, learning principles, and computational efficiencies that enable children to pick up on language rapidly using patterns and strategies.
Standing in stark contrast to this position is empiricism, the epistemological theory that all knowledge comes from sensory experience. This school of thought often characterizes the nascent mind as a tabula rasa, or blank slate, and can in many ways be associated with the nurture perspective of the "nature vs. nurture debate". This viewpoint has a long historical tradition that parallels that of rationalism, beginning with seventeenth century empiricist philosophers such as Locke, Bacon, Hobbes, and, in the following century, Hume. The basic tenet of empiricism is that information in the environment is structured enough that its patterns are both detectable and extractable by domain-general learning mechanisms. In terms of language acquisition, these patterns can be either linguistic or social in nature.
Chomsky is very critical of this empirical theory of language acquisition. He has said, "It's true there's been a lot of work on trying to apply statistical models to various linguistic problems. I think there have been some successes, but a lot of failures." He claims the idea of using statistical methods to acquire language is simply a mimicry of the process, rather than a true understanding of how language is acquired.
== Experimental paradigms ==
=== Headturn Preference Procedure (HPP) ===
One of the most used experimental paradigms in investigations of infants' capacities for statistical language acquisition is the Headturn Preference Procedure (HPP), developed by Stanford psychologist Anne Fernald in 1985 to study infants' preferences for prototypical child-directed speech over normal adult speech. In the classic HPP paradigm, infants are allowed to freely turn their heads and are seated between two speakers with mounted lights. The light of either the right or left speaker then flashes as that speaker provides some type of audial or linguistic input stimulus to the infant. Reliable orientation to a given side is taken to be an indication of a preference for the input associated with that side's speaker. This paradigm has since become increasingly important in the study of infant speech perception, especially for input at levels higher than syllable chunks, though with some modifications, including using the listening times instead of the side preference as the relevant dependent measure.
=== Conditioned Headturn Procedure ===
Similar to HPP, the Conditioned Headturn Procedure also makes use of an infant's differential preference for a given side as an indication of a preference for, or more often a familiarity with, the input or speech associated with that side. Used in studies of prosodic boundary markers by Gout et al. (2004) and later by Werker in her classic studies of categorical perception of native-language phonemes, infants are conditioned by some attractive image or display to look in one of two directions every time a certain input is heard, a whole word in Gout's case and a single phonemic syllable in Werker's. After the conditioning, new or more complex input is then presented to the infant, and their ability to detect the earlier target word or distinguish the input of the two trials is observed by whether they turn their head in expectation of the conditioned display or not.
=== Anticipatory eye movement ===
While HPP and the Conditioned Headturn Procedure allow for observations of behavioral responses to stimuli and after the fact inferences about what the subject's expectations must have been to motivate this behavior, the Anticipatory Eye Movement paradigm allows researchers to directly observe a subject's expectations before the event occurs. By tracking subjects' eye movements researchers have been able to investigate infant decision-making and the ways in which infants encode and act on probabilistic knowledge to make predictions about their environments. This paradigm also offers the advantage of comparing differences in eye movement behavior across a wider range of ages than others.
=== Artificial languages ===
Artificial languages, that is, small-scale languages that typically have an extremely limited vocabulary and simplified grammar rules, are a commonly used paradigm for psycholinguistic researchers. Artificial languages allow researchers to isolate variables of interest and wield a greater degree of control over the input the subject will receive. Unfortunately, the overly simplified nature of these languages and the absence of a number of phenomena common to all human natural languages such as rhythm, pitch changes, and sequential regularities raise questions of external validity for any findings obtained using this paradigm, even after attempts have been made to increase the complexity and richness of the languages used. In particular, the reduced complexity of an artificial language fails to capture a child's need to recognize a given syllable regardless of the sound variability inherent to natural speech, though "it is possible that the complexity of natural language actually facilitates learning."
As such, artificial language experiments are typically conducted to explore what the relevant linguistic variables are, what sources of information infants are able to use and when, and how researchers can go about modeling the learning and acquisition process. Aslin and Newport, for example, have used artificial languages to explore what features of linguistic input make certain patterns salient and easily detectable by infants, allowing them to easily contrast the detection of syllable repetition with that of word-final syllables and make conclusions about the conditions under which either feature is recognized as important.
=== Audio and audiovisual recordings ===
Statistical learning has been shown to play a large role in language acquisition, but social interaction appears to be a necessary component of learning as well. In one study, infants presented with audio or audiovisual recordings of Mandarin speakers failed to distinguish the phonemes of the language. This implies that simply hearing the sounds is not sufficient for language learning; social interaction cues the infant to track the relevant statistics. Speech directed particularly at infants is known as "child-directed" speech; it is more repetitive and associative, which makes it easier to learn. These "child-directed" interactions could also be the reason why it is easier to learn a language as a child than as an adult.
=== Bilinguals ===
Studies of bilingual infants, such as a study Bijeljac-Babic, et al., on French-learning infants, have offered insight to the role of prosody in language acquisition. The Bijeljac-Babic study found that language dominance influences "sensitivity to prosodic contrasts." Although this was not a study on statistical learning, its findings on prosodic pattern recognition might have implications for statistical learning.
It is possible that the kinds of language experience and knowledge gained through the statistical learning of the first language influence one's acquisition of a second language. Some research points to the possibility that the difficulty of learning a second language may derive from the structural patterns and language cues that one has already picked up during acquisition of the first language. In that sense, the knowledge and processing skills gained from statistical acquisition of the first language may act as a complicating factor when one tries to learn a new language with different sentence structures, grammatical rules, and speech patterns.
== Important findings ==
=== Phonetic category learning ===
The first step in developing knowledge of a system as complex as natural language is learning to distinguish the important language-specific classes of sounds, called phonemes, that distinguish meaning between words. UBC psychologist Janet Werker, since her influential series of experiments in the 1980s, has been one of the most prominent figures in the effort to understand the process by which human babies develop these phonological distinctions. While adults who speak different languages are unable to distinguish meaningful sound differences in other languages that do not delineate different meanings in their own, babies are born with the ability to universally distinguish all speech sounds. Werker's work has shown that while infants at six to eight months are still able to perceive the difference between certain Hindi and English consonants, they have completely lost this ability by 11 to 13 months.
It is now commonly accepted that children use some form of perceptual distributional learning, by which categories are discovered by clumping similar instances of an input stimulus, to form phonetic categories early in life. Developing children have been found to be effective judges of linguistic authority, screening the input they model their language on by shifting their attention less to speakers who mispronounce words. Infants also use statistical tracking to calculate the likelihood that particular phonemes will follow each other.
=== Parsing ===
Parsing is the process by which a continuous speech stream is segmented into its discrete meaningful units, e.g. sentences, words, and syllables. Saffran (1996) represents a singularly seminal study in this line of research. Infants were presented with two minutes of continuous speech of an artificial language from a computerized voice to remove any interference from extraneous variables such as prosody or intonation. After this presentation, infants were able to distinguish words from nonwords, as measured by longer looking times in the second case.
An important concept in understanding these results is that of transitional probability, the likelihood of an element, in this case a syllable, following or preceding another element. In this experiment, syllables that went together in words had a much higher transitional probability than did syllables at word boundaries that just happened to be adjacent. Remarkably, infants, after a short two-minute presentation, were able to keep track of these statistics and recognize high probability words. Further research has since replicated these results with natural languages unfamiliar to infants, indicating that learning infants also keep track of the direction (forward or backward) of the transitional probabilities. Though the neural processes behind this phenomenon remain largely unknown, recent research reports increased activity in the left inferior frontal gyrus and the middle frontal gyrus during the detection of word boundaries.
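To make the statistic concrete, the forward transitional probability of syllable B given syllable A is the frequency of the pair AB divided by the frequency of A. The Python sketch below computes these probabilities over a continuous stream built from a made-up three-word artificial language (illustrative stimuli, not Saffran's actual materials); within-word transitions come out higher than transitions that span word boundaries.

```python
from collections import Counter
from itertools import chain

def transitional_probabilities(stream):
    """Forward transitional probability P(B|A) = count(A followed by B) / count(A)
    over a list of syllables."""
    pairs = Counter(zip(stream, stream[1:]))
    firsts = Counter(stream[:-1])
    return {(a, b): n / firsts[a] for (a, b), n in pairs.items()}

# Three made-up trisyllabic 'words' concatenated without pauses, in a fixed order.
words = [["tu", "pi", "ro"], ["go", "la", "bu"], ["da", "ko", "ti"]]
order = [0, 1, 2, 0, 2, 1, 0, 1, 2, 2, 0, 1]
stream = list(chain.from_iterable(words[i] for i in order))

tps = transitional_probabilities(stream)
print(tps[("tu", "pi")])   # within-word transition: 1.0
print(tps[("ro", "go")])   # transition across a word boundary: 0.75
```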
The development of syllable-ordering biases is an important step along the way to full language development. The ability to categorize syllables and group together frequently co-occurring sequences may be critical in the development of a protolexicon, a set of common language-specific word templates based on characteristic patterns in the words an infant hears. The development of this protolexicon may in turn allow for the recognition of new types of patterns, e.g. the high frequency of word-initially stressed consonants in English, which would allow infants to further parse words by recognizing common prosodic phrasings as autonomous linguistic units, restarting the dynamic cycle of word and language learning.
=== Referent-label associations ===
The question of how novice language-users are capable of associating learned labels with the appropriate referent, the person or object in the environment which the label names, has been at the heart of philosophical considerations of language and meaning from Plato to Quine to Hofstadter. This problem, that of finding some solid relationship between word and object, of finding a word's meaning without succumbing to an infinite recursion of dictionary look-up, is known as the symbol grounding problem.
Researchers have shown that this problem is intimately linked with the ability to parse language, and that those words that are easy to segment due to their high transitional probabilities are also easier to map to an appropriate referent. This serves as further evidence of the developmental progression of language acquisition, with children requiring an understanding of the sound distributions of natural languages to form phonetic categories, parse words based on these categories, and then use these parses to map them to objects as labels.
The developmentally earliest understanding of word-to-referent associations has been reported at six months old, with infants comprehending the words 'mommy' and 'daddy' or their familial or cultural equivalents. Further studies have shown that infants quickly develop in this capacity and by seven months are capable of learning associations between moving images and nonsense words and syllables.
It is important to note that there is a distinction, often confounded in acquisition research, between mapping a label to a specific instance or individual and mapping a label to an entire class of objects. This latter process is sometimes referred to as generalization or rule learning. Research has shown that if input is encoded in terms of perceptually salient dimensions rather than specific details and if patterns in the input indicate that a number of objects are named interchangeably in the same context, a language learner will be much more likely to generalize that name to every instance with the relevant features. This tendency is heavily dependent on the consistency of context clues and the degree to which word contexts overlap in the input. These differences are furthermore linked to the well-known patterns of under and overgeneralization in infant word learning. Research has also shown that the frequency of co-occurrence of referents is tracked as well, which helps create associations and dispel ambiguities in object-referent models.
The ability to appropriately generalize to whole classes of yet unseen words, coupled with the abilities to parse continuous speech and keep track of word-ordering regularities, may be the critical skills necessary to develop proficiency with and knowledge of syntax and grammar.
=== Differences in autistic populations ===
According to recent research, there is no neural evidence of statistical language learning in children with autism spectrum disorders. When exposed to a continuous stream of artificial speech, children without autism displayed less cortical activity in the dorsolateral frontal cortices (specifically the middle frontal gyrus) as cues for word boundaries increased. However, activity in these networks remained unchanged in autistic children, regardless of the verbal cues provided. This evidence, highlighting the importance of proper frontal lobe function, supports the "executive functions" theory, which has been used to explain some of the biologically related causes of autistic language deficits. With impaired working memory, decision making, planning, and goal setting, which are vital functions of the frontal lobe, autistic children are at a loss when it comes to socializing and communication (Ozonoff et al., 2004). Additionally, researchers have found that the level of communicative impairment in autistic children was inversely correlated with signal increases in these same regions during exposure to artificial languages. Based on this evidence, researchers have concluded that children with autism spectrum disorders do not have the neural architecture to identify word boundaries in continuous speech. Early word segmentation skills have been shown to predict later language development, which could explain why language delay is a hallmark feature of autism spectrum disorders.
=== Statistical language learning across situations ===
Language learning takes place in different contexts, with both the infant and the caregiver engaging in social interactions. Recent research has investigated how infants and adults use cross-situational statistics to learn not only the meanings of words but also the constraints within a context. For example, Smith and his colleagues proposed that infants learn language by acquiring a bias to extend object labels to similar objects drawn from well-defined categories. Important to this view is the idea that the constraints that assist learning of words are not independent of the input itself or the infant's experience. Rather, constraints come about as infants learn about the ways that the words are used and begin to pay attention to certain characteristics of objects that have been used in the past to represent the words.
An inductive learning problem can arise because words are often used in ambiguous situations in which more than one possible referent is available. This can lead to confusion for infants, as they may not be able to determine which object a word is intended to label. Smith and Yu proposed that one way to resolve such ambiguous situations is to track the word-referent pairings over multiple scenes. For instance, an infant who hears a word in the presence of object A and object B will be unsure of whether the word is the referent of object A or object B. However, if the infant then hears the label again in the presence of object B and object C, the infant can conclude that object B is the referent of the label because object B consistently pairs with the label across different situations.
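A minimal sketch of this cross-situational bookkeeping is shown below. The scenes, words and object names are hypothetical; the code simply accumulates word-object co-occurrence counts across scenes and picks the most consistently co-occurring object, mirroring the object-B example above.

```python
from collections import defaultdict

# Hypothetical scenes: the set of words heard paired with the set of objects in view.
scenes = [
    ({"dax"},        {"A", "B"}),   # "dax" with A and B -> ambiguous on its own
    ({"dax", "wug"}, {"B", "C"}),   # "dax" again, now with B and C
    ({"wug"},        {"C", "D"}),
]

# Accumulate how often each word co-occurs with each candidate object.
cooc = defaultdict(lambda: defaultdict(int))
for heard_words, visible_objects in scenes:
    for w in heard_words:
        for o in visible_objects:
            cooc[w][o] += 1

for w, counts in cooc.items():
    best = max(counts, key=counts.get)
    print(w, dict(counts), "-> most consistent referent:", best)
# "dax" co-occurs with B in both of its scenes, so B wins; "wug" likewise resolves to C.
```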
== Computational models ==
Computational models have long been used to explore the mechanisms by which language learners process and manipulate linguistic information. Models of this type allow researchers to systematically control important learning variables that are oftentimes difficult to manipulate at all in human participants.
=== Associative models ===
Associative neural network models of language acquisition are one of the oldest types of cognitive model, using distributed representations and changes in the weights of the connections between the nodes that make up these representations to simulate learning in a manner reminiscent of the plasticity-based neuronal reorganization that forms the basis of human learning and memory. Associative models represent a break with classical cognitive models, characterized by discrete and context-free symbols, in favor of a dynamical systems approach to language better capable of handling temporal considerations.
A precursor to this approach, and one of the first model types to account for the dimension of time in linguistic comprehension and production was Elman's simple recurrent network (SRN). By making use of a feedback network to represent the system's past states, SRNs were able in a word-prediction task to cluster input into self-organized grammatical categories based solely on statistical co-occurrence patterns.
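A stripped-down illustration of the recurrent state update that defines an SRN is sketched below. The toy vocabulary, layer sizes and random weights are all illustrative assumptions, and no training loop is included; the point is only that the previous hidden state is fed back as context when the next word is predicted.

```python
import numpy as np

rng = np.random.default_rng(0)

vocab = ["boy", "girl", "sees", "dog"]     # toy vocabulary (illustrative)
V, H = len(vocab), 8                       # one-hot input size and hidden units

# Randomly initialised weights; a real model would learn these by backpropagation.
W_xh = rng.normal(scale=0.1, size=(H, V))  # input -> hidden
W_hh = rng.normal(scale=0.1, size=(H, H))  # context (previous hidden state) -> hidden
W_hy = rng.normal(scale=0.1, size=(V, H))  # hidden -> next-word scores

def one_hot(word):
    v = np.zeros(V)
    v[vocab.index(word)] = 1.0
    return v

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# The defining feature of an SRN: the hidden state persists between time steps.
h = np.zeros(H)
for word in ["boy", "sees"]:
    h = np.tanh(W_xh @ one_hot(word) + W_hh @ h)   # new state mixes input and context
    p_next = softmax(W_hy @ h)                     # distribution over the next word
    print(word, "->", dict(zip(vocab, p_next.round(2))))
```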
Early successes such as these paved the way for dynamical systems research into linguistic acquisition, answering many questions about early linguistic development but leaving many others unanswered, such as how these statistically acquired lexemes are represented. Of particular importance in recent research has been the effort to understand the dynamic interaction of learning (e.g. language-based) and learner (e.g. speaker-based) variables in lexical organization and competition in bilinguals. In the ceaseless effort to move toward more psychologically realistic models, many researchers have turned to a subset of associative models, self-organizing maps (SOMs), as established, cognitively plausible models of language development.
SOMs have been helpful to researchers in identifying and investigating the constraints and variables of interest in a number of acquisition processes, and in exploring the consequences of these findings on linguistic and cognitive theories. By identifying working memory as an important constraint both for language learners and for current computational models, researchers have been able to show that manipulation of this variable allows for syntactic bootstrapping, drawing not just categorical but actual content meaning from words' positional co-occurrence in sentences.
=== Probabilistic models ===
Some recent models of language acquisition have centered around methods of Bayesian Inference to account for infants' abilities to appropriately parse streams of speech and acquire word meanings. Models of this type rely heavily on the notion of conditional probability (the probability of A given B), in line with findings concerning infants' use of transitional probabilities of words and syllables to learn words.
Models that make use of these probabilistic methods have been able to merge the previously dichotomous language acquisition perspectives of social theories that emphasize the importance of learning speaker intentions and statistical and associative theories that rely on cross-situational contexts into a single joint-inference problem. This approach has led to important results in explaining acquisition phenomena such as mutual exclusivity, one-trial learning or fast mapping, and the use of social intentions.
While these results seem to be robust, these models' abilities to handle more complex situations, such as mapping multiple referents to a single label, mapping multiple labels to a single referent, and bilingual language acquisition, have yet to be explored in comparison with associative models' successes in these areas. Hope remains, though, that these model types may be merged to provide a comprehensive account of language acquisition.
=== C/V hypothesis ===
Along the lines of probabilistic frequencies, the C/V hypothesis states that language hearers rely on consonant frequencies rather than vowels to distinguish between words (lexical distinctions) in continuous speech, while vowels are more pertinent to rhythmic identification. Several follow-up studies supported this finding by showing that vowels are processed independently of their local statistical distribution.
Other research has shown that the consonant-vowel ratio doesn't influence the sizes of lexicons when comparing distinct languages. In the case of languages with a higher consonant ratio, children may depend more on consonant neighbors than rhyme or vowel frequency.
=== Algorithms for language acquisition ===
Some models of language acquisition have been based on adaptive parsing and grammar induction algorithms.
== References == | Wikipedia/Computational_models_of_language_acquisition |
MindModeling@Home is an inactive non-profit, volunteer computing research project for the advancement of cognitive science. MindModeling@Home is hosted by Wright State University and the University of Dayton in Dayton, Ohio.
In BOINC, it is listed under the area of cognitive science, in the category "Cognitive science and artificial intelligence". It can only operate on a 64-bit Microsoft Windows, Mac OS X, or Linux operating system, preferably on a computer with multiple cores. Unlike some other BOINC projects, it is not compatible with mobile devices.
== Research focus ==
N-2 Repetition: understanding how people have a harder time returning to a task from another one
Observing how people read through their eye movement for the purpose of helping people reduce eye strain and processing what they read better and faster.
Modeling decision-making: revolving around decisions made from visual processing (focus and filtering)
Integrated Learning Models (ILM) to create algorithms based on how people learn and make decisions
How the brain performs tasks sequentially and simultaneously by measuring its blood flow
== Problems ==
Its status is inactive. However, it is "not down or closed," as its servers are still running.
The projects are long; prolonged amounts of computing time can overheat a computer. The solution is to stop work on the project until the computer cools down.
It is subject to power outages, as seen on October 7, 2018
When the website will be out of beta mode is unknown, as it has been in beta since 2007
== Scientific results ==
Godwin H.J., Walenchok S. et al. Faster than the speed of rejection: Object identification processes during visual search for multiple targets. J Exp Psychol Hum Percept Perform. 41–4, (2016).
Moore L. R., Gunzelmann G. An interpolation approach for fitting computationally intensive models. Cognitive Systems Research 19, (2014).
Moore L.R. Cognitive model exploration and optimization: a new challenge for computational science. Comput Math Organ Theory 17, 296–313. (2011).
Moore L.R., Kopala M., Mielke T. et al. Simultaneous performance exploration and optimized search with volunteer computing. 19th ACM International Symposium on High Performance Distributed Computing, (2010).
Harris J., Gluck K.A., Moore L.R. MindModeling@Home. . . and Anywhere Else You Have Idle Processors. 9th International Conference on Cognitive Modelling, (2009).
Gluck K., Scheutz M. Combinatorics meets processing power: Large-scale computational resources for BRIMS. 16th Conference on Behavior Representation in Modeling and Simulation, BRIMS. 1. 73–83. (2007).
== See also ==
List of volunteer computing projects
== References ==
== External links ==
Official website
BOINC
Video of the MindModeling@Home trailer on YouTube
MindModeling@Home screensaver video on YouTube | Wikipedia/MindModeling@Home |
Structural dynamics is a type of structural analysis which covers the behavior of a structure subjected to dynamic (actions having high acceleration) loading. Dynamic loads include people, wind, waves, traffic, earthquakes, and blasts. Any structure can be subjected to dynamic loading. Dynamic analysis can be used to find dynamic displacements and time histories, and to perform modal analysis.
Structural analysis is mainly concerned with finding out the behavior of a physical structure when subjected to force. This action can be in the form of load due to the weight of things such as people, furniture, wind, snow, etc. or some other kind of excitation such as an earthquake, shaking of the ground due to a blast nearby, etc. In essence all these loads are dynamic, including the self-weight of the structure because at some point in time these loads were not there. The distinction is made between the dynamic and the static analysis on the basis of whether the applied action has enough acceleration in comparison to the structure's natural frequency. If a load is applied sufficiently slowly, the inertia forces (Newton's first law of motion) can be ignored and the analysis can be simplified as static analysis.
A static load is one which varies very slowly. A dynamic load is one which changes with time fairly quickly in comparison to the structure's natural frequency. If it changes slowly, the structure's response may be determined with static analysis, but if it varies quickly (relative to the structure's ability to respond), the response must be determined with a dynamic analysis.
Dynamic analysis for simple structures can be carried out manually, but for complex structures finite element analysis can be used to calculate the mode shapes and frequencies.
== Displacements ==
A dynamic load can have a significantly larger effect than a static load of the same magnitude due to the structure's inability to respond quickly to the loading (by deflecting). The increase in the effect of a dynamic load is given by the dynamic amplification factor (DAF) or dynamic load factor (DLF):
{\displaystyle {\text{DAF}}={\text{DLF}}={\frac {u_{\max }}{u_{\text{static}}}}}
where u is the deflection of the structure due to the applied load.
Graphs of dynamic amplification factors vs non-dimensional rise time (tr/T) exist for standard loading functions (for an explanation of rise time, see time history analysis below). Hence the DAF for a given loading can be read from the graph, the static deflection can be easily calculated for simple structures and the dynamic deflection found.
== Time history analysis ==
A full time history will give the response of a structure over time during and after the application of a load. To find the full time history of a structure's response, the structure's equation of motion must be solved.
=== Example ===
A simple single degree of freedom system (a mass, M, on a spring of stiffness k, for example) has the following equation of motion:
{\displaystyle M{\ddot {x}}+kx=F(t)}
where {\displaystyle {\ddot {x}}} is the acceleration (the second derivative of the displacement with respect to time) and x is the displacement.
If the loading F(t) is a Heaviside step function (the sudden application of a constant load), the solution to the equation of motion is:
{\displaystyle x={\frac {F_{0}}{k}}[1-\cos(\omega t)]}
where {\displaystyle \omega ={\sqrt {\frac {k}{M}}}} and the fundamental natural frequency is {\displaystyle f={\frac {\omega }{2\pi }}}.
The static deflection of a single degree of freedom system is:
{\displaystyle x_{\text{static}}={\frac {F_{0}}{k}}}
so we can write, by combining the above formulae:
{\displaystyle x=x_{\text{static}}[1-\cos(\omega t)]}
This gives the (theoretical) time history of the structure due to a load F(t), under the simplifying assumption that there is no damping.
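The closed-form result above is easy to verify numerically. The sketch below uses arbitrary illustrative values of mass, stiffness and load; it evaluates the undamped step response and confirms that the peak displacement is twice the static deflection, i.e. a DAF of 2 for a suddenly applied constant load.

```python
import numpy as np

M, k, F0 = 10.0, 4000.0, 100.0             # kg, N/m, N (illustrative values)
omega = np.sqrt(k / M)                     # natural circular frequency, rad/s
x_static = F0 / k                          # static deflection under F0

t = np.linspace(0.0, 2.0, 2001)            # a couple of seconds is several periods here
x = x_static * (1.0 - np.cos(omega * t))   # undamped response to a Heaviside step load

print("x_static =", x_static, "m")
print("x_max    =", x.max(), "m")
print("DAF      =", x.max() / x_static)    # approaches 2 for the undamped step response
```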
Although this is too simplistic to apply to a real structure, the Heaviside step function is a reasonable model for the application of many real loads, such as the sudden addition of a piece of furniture, or the removal of a prop to a newly cast concrete floor. However, in reality loads are never applied instantaneously – they build up over a period of time (this may be very short indeed). This time is called the rise time.
As the number of degrees of freedom of a structure increases it very quickly becomes too difficult to calculate the time history manually – real structures are analysed using non-linear finite element analysis software.
== Damping ==
Any real structure will dissipate energy (mainly through friction). This can be modelled by modifying the DAF:
{\displaystyle {\text{DAF}}=1+e^{-c\pi }}
where {\displaystyle c={\frac {\text{damping coefficient}}{\text{critical damping coefficient}}}} and is typically 2–10% depending on the type of construction:
Bolted steel ~6%
Reinforced concrete ~5%
Welded steel ~2%
Brick masonry ~10%
Methods to increase damping
One of the widely used methods to increase damping is to attach a layer of material with a high Damping Coefficient, for example rubber, to a vibrating structure.
== Modal analysis ==
A modal analysis calculates the frequency modes or natural frequencies of a given system, but not necessarily its full time-history response to a given input. The natural frequency of a system is dependent only on the stiffness of the structure and the mass which participates with the structure (including self-weight). It is not dependent on the load function.
It is useful to know the modal frequencies of a structure as it allows you to ensure that the frequency of any applied periodic loading will not coincide with a modal frequency and hence cause resonance, which leads to large oscillations.
The method is:
Find the natural modes (the shape adopted by a structure) and natural frequencies
Calculate the response of each mode
Optionally superpose the response of each mode to find the full modal response to a given loading
=== Energy method ===
It is possible to calculate the frequency of different mode shapes of a system manually by the energy method. For a given mode shape of a multiple-degree-of-freedom system, an "equivalent" mass, stiffness and applied force can be found for a single degree of freedom system. For simple structures the basic mode shapes can be found by inspection, but it is not a conservative method. Rayleigh's principle states:
"The frequency ω of an arbitrary mode of vibration, calculated by the energy method, is always greater than – or equal to – the fundamental frequency ωn."
For an assumed mode shape {\displaystyle {\bar {u}}(x)} of a structural system with mass M; bending stiffness, EI (Young's modulus, E, multiplied by the second moment of area, I); and applied force, F(x):
{\displaystyle {\text{Equivalent mass, }}M_{\text{eq}}=\int M{\bar {u}}^{2}\,du}
{\displaystyle {\text{Equivalent stiffness, }}k_{\text{eq}}=\int EI\left({\frac {d^{2}{\bar {u}}}{dx^{2}}}\right)^{2}\,dx}
{\displaystyle {\text{Equivalent force, }}F_{\text{eq}}=\int F{\bar {u}}\,dx}
then, as above:
{\displaystyle \omega ={\sqrt {\frac {k_{\text{eq}}}{M_{\text{eq}}}}}}
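As a worked illustration of these integrals, the sketch below evaluates them numerically for a uniform cantilever with an assumed mode shape ū(x) = (x/L)², treating M and F as a mass and a load per unit length and integrating over the span; all numerical values are arbitrary. Consistent with Rayleigh's principle quoted above, the resulting frequency is higher than the exact first natural frequency of a uniform cantilever.

```python
import numpy as np

# Illustrative uniform cantilever: mass per unit length m, bending stiffness EI, length L.
L, m, EI = 3.0, 100.0, 2.0e6              # m, kg/m, N*m^2 (arbitrary values)
F = 1000.0                                # uniform load intensity, N/m (arbitrary)

x = np.linspace(0.0, L, 10001)
u_bar = (x / L) ** 2                      # assumed (not exact) first mode shape
u_bar_xx = np.full_like(x, 2.0 / L**2)    # its second derivative d^2(u_bar)/dx^2

M_eq = np.trapz(m * u_bar ** 2, x)        # equivalent mass
k_eq = np.trapz(EI * u_bar_xx ** 2, x)    # equivalent stiffness
F_eq = np.trapz(F * u_bar, x)             # equivalent force

omega = np.sqrt(k_eq / M_eq)
print("F_eq =", round(F_eq, 1), "N")
print("omega (energy method) =", round(omega, 1), "rad/s")
print("omega (exact, 3.516*sqrt(EI/(m*L^4))) =",
      round(3.516 * np.sqrt(EI / (m * L ** 4)), 1), "rad/s")
# The energy-method estimate overshoots the exact value, as Rayleigh's principle states.
```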
=== Modal response ===
The complete modal response to a given load F(x,t) is
{\displaystyle v(x,t)=\sum u_{n}(x,t)}. The summation can be carried out by one of three common methods:
Superpose complete time histories of each mode (time consuming, but exact)
Superpose the maximum amplitudes of each mode (quick but conservative)
Superpose the square root of the sum of squares (good estimate for well-separated frequencies, but unsafe for closely spaced frequencies)
To superpose the individual modal responses manually, having calculated them by the energy method:
Assuming that the rise time tr is known (T = 2π/ω), it is possible to read the DAF from a standard graph. The static displacement can be calculated with
{\displaystyle u_{\text{static}}={\frac {F_{1,{\text{eq}}}}{k_{1,{\text{eq}}}}}}. The dynamic displacement for the chosen mode and applied force can then be found from:
{\displaystyle u_{\max }=u_{\text{static}}{\text{DAF}}}
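The per-mode recipe above, together with the combination rules listed earlier, can be put together in a few lines. The modal static displacements and chart-read DAF values below are made-up numbers used purely for illustration; the code compares the conservative absolute-sum estimate with the SRSS estimate.

```python
import numpy as np

# Hypothetical per-mode results: u_max,n = u_static,n * DAF_n for the first three modes.
u_static = np.array([0.020, 0.006, 0.002])   # m, equivalent static displacement per mode
daf      = np.array([1.8,   1.4,   1.1])     # DAF read from a standard chart per mode

u_max = u_static * daf                       # maximum response of each mode

abs_sum = u_max.sum()                        # superpose maxima: quick but conservative
srss    = np.sqrt((u_max ** 2).sum())        # square root of the sum of squares

print("per-mode maxima [m]:", u_max)
print("absolute sum   [m] :", round(abs_sum, 4))
print("SRSS estimate  [m] :", round(srss, 4))
```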
== Modal participation factor ==
For real systems there is often mass participating in the forcing function (such as the mass of ground in an earthquake) and mass participating in inertia effects (the mass of the structure itself, Meq). The modal participation factor Γ is a comparison of these two masses. For a single degree of freedom system Γ = 1.
{\displaystyle \Gamma ={\frac {\sum M_{n}{\bar {u}}_{n}}{\sum M_{n}{\bar {u}}_{n}^{2}}}}
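A small numerical example of this ratio is given below for a hypothetical three-storey building with lumped storey masses and an assumed mode shape normalised to 1 at the roof; the values are illustrative only.

```python
import numpy as np

M_n   = np.array([2.0e4, 2.0e4, 1.5e4])   # lumped storey masses, kg (hypothetical)
u_bar = np.array([0.33, 0.67, 1.00])      # assumed mode shape, normalised at the roof

gamma = (M_n * u_bar).sum() / (M_n * u_bar ** 2).sum()
print("modal participation factor =", round(gamma, 3))
# For a single degree of freedom (one mass, u_bar = 1) the same ratio gives exactly 1.
```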
== External links ==
Structural Dynamics and Vibration Laboratory of McGill University
Frequency response function from modal parameters
Structural Dynamics Tutorials & Matlab scripts
AIAA Exploring Structural Dynamics (http://www.exploringstructuraldynamics.org/ ) – Structural Dynamics in Aerospace Engineering: Interactive Demos, Videos & Interviews with Practicing Engineers | Wikipedia/Structural_dynamics |
Graphitization is a process of transforming a carbonaceous material, such as coal or the carbon in certain forms of iron alloys, into graphite.
== Process ==
The graphitization process involves a restructuring of the molecular structure of the carbon material. In the initial state, these materials can have an amorphous structure or a crystalline structure different from graphite. Graphitization generally occurs at high temperatures (up to 3,000 °C (5,430 °F)), and can be accelerated by catalysts such as iron or nickel.
When carbonaceous material is exposed to high temperatures for an extended period of time, the carbon atoms begin to rearrange and form layered crystal planes. In the structure of graphite, carbon atoms are arranged in flat hexagonal sheets that are stacked on top of each other. These crystal planes give graphite its characteristic flake structure, giving it specific properties such as good electrical and thermal conductivity, low friction and excellent lubrication.
== Interest ==
Graphitization can be observed in various contexts. For example, it occurs naturally during the formation of certain types of coal or graphite in the Earth's crust. It can also be artificially induced during the manufacture of specific carbon materials, such as graphite electrodes used in fuel cells, nuclear reactors or metallurgical applications.
Graphitization is of particular interest in the field of metallurgy. Some iron alloys, such as cast iron, can undergo graphitization heat treatment to improve their mechanical properties and machinability. During this process, the carbon dissolved in the iron alloy matrix separates and restructures as graphite, which gives the cast iron its specific characteristics, such as improved ductility and wear resistance.
== Notes and references == | Wikipedia/Graphitization |
Penta-graphene is a hypothetical carbon allotrope composed entirely of carbon pentagons and resembling the Cairo pentagonal tiling. Penta-graphene was proposed in 2014 on the basis of analyses and simulations. Further calculations predicted that it is unstable in its pure form, but can be stabilized by hydrogenation. Due to its atomic configuration, penta-graphene has an unusual negative Poisson's ratio and very high ideal strength believed to exceed that of a similar material, graphene.
Penta-graphene contains both sp2 and sp3 hybridized carbon atoms. Contrary to graphene, which is a good conductor of electricity, penta-graphene is predicted to be an insulator with an indirect band gap of 4.1–4.3 eV. Its hydrogenated form is called penta-graphane. It has a diamond-like structure with sp3 and no sp2 bonds, and therefore a wider band gap (ca. 5.8 eV) than penta-graphene. Chiral penta-graphene nanotubes have also been studied as metastable allotropes of carbon.
== References ==
== External links ==
Nazir, Muhammad Azhar; Hassan, Arzoo; Shen, Yiheng; Wang, Qian (2022). "Research progress on penta-graphene and its related materials: Properties and applications". Nano Today. 44: 101501. doi:10.1016/j.nantod.2022.101501. S2CID 248767647. | Wikipedia/Penta-graphene |
Phagraphene () is a proposed graphene allotrope composed of 5-6-7 carbon rings. Phagraphene was proposed in 2015 based on systematic evolutionary structure searching. Theoretical calculations showed that phagraphene is not only dynamically and thermally stable, but also has distorted Dirac cones. The direction-dependent cones are robust against external strain with tuneable Fermi velocities.
Higher-energy allotropes named haeckelites contain penta-, hexa- and hepta-carbon rings. Three types (rectangular, oblique and hexagonal) were proposed as early as 2000. These metastable allotropes exhibit trivial intrinsic metallic behavior.
Phagraphene is predicted to have a potential energy of 193.2 kcal/mol. The bond order is 1.33, the same as for graphene.
== PHA/graphene ==
An unrelated material called PHA/graphene is a polyhydroxyalkanoate graphene composite.
== References == | Wikipedia/Phagraphene |
Graphene is a semimetal whose conduction and valence bands meet at the Dirac points, which are six locations in momentum space, the vertices of its hexagonal Brillouin zone, divided into two non-equivalent sets of three points. The two sets are labeled K and K′. The sets give graphene a valley degeneracy of gv = 2. By contrast, for traditional semiconductors the primary point of interest is generally Γ, where momentum is zero. Four electronic properties separate it from other condensed matter systems.
== Electronic spectrum ==
Electrons propagating through graphene's honeycomb lattice effectively lose their mass, producing quasi-particles that are described by a 2D analogue of the Dirac equation rather than the Schrödinger equation for spin-1⁄2 particles.
=== Dispersion relation ===
When atoms are placed onto the graphene hexagonal lattice, the overlap between the pz(π) orbitals and the s or the px and py orbitals is zero by symmetry. The pz electrons forming the π bands in graphene can be treated independently. Within this π-band approximation, using a conventional tight-binding model, the dispersion relation (restricted to first-nearest-neighbor interactions only) that produces energy of the electrons with wave vector
{\displaystyle \mathbf {k} =[k_{x},k_{y}]} is
{\displaystyle E(\mathbf {k} )=\pm \,\gamma _{0}{\sqrt {1+4\cos ^{2}{{\tfrac {1}{2}}ak_{x}}+4\cos {{\tfrac {1}{2}}ak_{x}}\cdot \cos {{\tfrac {\sqrt {3}}{2}}ak_{y}}}}}
with the nearest-neighbor (π orbitals) hopping energy γ0 ≈ 2.8 eV and the lattice constant a ≈ 2.46 Å. The conduction and valence bands, respectively, correspond to the different signs. With one pz electron per atom in this model the valence band is fully occupied, while the conduction band is vacant. The two bands touch at the zone corners (the K point in the Brillouin zone), where there is a zero density of states but no band gap. The graphene sheet thus displays a semimetallic (or zero-gap semiconductor) character. Two of the six Dirac points are independent, while the rest are equivalent by symmetry. In the vicinity of the K-points the energy depends linearly on the wave vector, similar to a relativistic particle. Since an elementary cell of the lattice has a basis of two atoms, the wave function has an effective 2-spinor structure.
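A short numerical check of this dispersion relation is sketched below, using the hopping energy and lattice constant quoted above. Evaluating the band at the zone centre gives 3γ0, and evaluating it at a zone-corner K point (taken here as kx = 4π/3a, ky = 0 in this convention) gives zero, where the conduction and valence bands touch.

```python
import numpy as np

gamma0 = 2.8   # nearest-neighbour hopping energy, eV
a = 2.46       # lattice constant, angstrom (so k is in 1/angstrom)

def energy(kx, ky, sign=+1.0):
    """Nearest-neighbour pi-band tight-binding dispersion quoted above."""
    inner = (1.0
             + 4.0 * np.cos(0.5 * a * kx) ** 2
             + 4.0 * np.cos(0.5 * a * kx) * np.cos(0.5 * np.sqrt(3.0) * a * ky))
    inner = max(inner, 0.0)   # guard against tiny negative values from rounding
    return sign * gamma0 * np.sqrt(inner)

print("Gamma point (k = 0)   :", energy(0.0, 0.0), "eV")                   # 3*gamma0 = 8.4 eV
print("K point (4*pi/3a, 0)  :", energy(4 * np.pi / (3 * a), 0.0), "eV")   # ~0 eV, bands touch
```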
As a consequence, at low energies, even neglecting the true spin, the electrons can be described by an equation that is formally equivalent to the massless Dirac equation. Hence, the electrons and holes are called Dirac fermions. This pseudo-relativistic description is restricted to the chiral limit, i.e., to vanishing rest mass M0, which leads to additional features:
{\displaystyle -iv_{F}\,{\vec {\sigma }}\cdot \nabla \psi (\mathbf {r} )\,=\,E\psi (\mathbf {r} ).}
Here vF ≈ 10^6 m/s (0.003 c) is the Fermi velocity in graphene, which replaces the velocity of light in the Dirac theory; {\displaystyle {\vec {\sigma }}} is the vector of the Pauli matrices; {\displaystyle \psi (\mathbf {r} )} is the two-component wave function of the electrons, and E is their energy.
The equation describing the electrons' linear dispersion relation is
{\displaystyle E(k)=\hbar v_{\text{F}}k}
where the wavevector {\displaystyle \textstyle k={\sqrt {k_{x}^{2}+k_{y}^{2}}}} is measured from the Dirac points (the zero of energy is chosen here to coincide with the Dirac points). The equation uses a pseudospin matrix formula that describes two sublattices of the honeycomb lattice.
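As a quick sense of scale for this linear dispersion, the sketch below evaluates E = ħvF k for an illustrative wavevector of 0.1 nm⁻¹ measured from a Dirac point, using the Fermi velocity quoted above.

```python
hbar = 1.054571817e-34       # reduced Planck constant, J*s
e_charge = 1.602176634e-19   # elementary charge, C (to convert J -> eV)
v_F = 1.0e6                  # Fermi velocity, m/s (value quoted above)

k = 1.0e8                    # 1/m, i.e. 0.1 nm^-1 from the Dirac point (illustrative)
E = hbar * v_F * k           # linear Dirac dispersion E = hbar * v_F * k
print("E =", round(E / e_charge, 4), "eV")   # ~0.066 eV; energy grows linearly with k
```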
=== 'Massive' electrons ===
Graphene's unit cell has two identical carbon atoms and two zero-energy states: one in which the electron resides on atom A, the other in which the electron resides on atom B. However, if the two atoms in the unit cell are not identical, the situation changes. Hunt et al. showed that placing hexagonal boron nitride (h-BN) in contact with graphene can alter the potential felt at atom A versus atom B enough that the electrons develop a mass and accompanying band gap of about 30 meV.
The mass can be positive or negative. An arrangement that slightly raises the energy of an electron on atom A relative to atom B gives it a positive mass, while an arrangement that raises the energy of atom B produces a negative electron mass. The two versions behave alike and are indistinguishable via optical spectroscopy. An electron traveling from a positive-mass region to a negative-mass region must cross an intermediate region where its mass once again becomes zero. This region is gapless and therefore metallic. Metallic modes bounding semiconducting regions of opposite-sign mass is a hallmark of a topological phase and display much the same physics as topological insulators.
If the mass in graphene can be controlled, electrons can be confined to massless regions by surrounding them with massive regions, allowing the patterning of quantum dots, wires and other mesoscopic structures. It also produces one-dimensional conductors along the boundary. These wires would be protected against backscattering and could carry currents without dissipation.
== Single-atom wave propagation ==
Electron waves in graphene propagate within a single-atom layer, making them sensitive to the proximity of other materials such as high-κ dielectrics, superconductors and ferromagnetics.
== Electron transport ==
Graphene displays remarkable electron mobility at room temperature, with reported values in excess of 15000 cm2⋅V−1⋅s−1. Hole and electron mobilities were expected to be nearly identical. The mobility is nearly independent of temperature between 10 K and 100 K, which implies that the dominant scattering mechanism is defect scattering. Scattering by graphene's acoustic phonons intrinsically limits room temperature mobility to 200000 cm2⋅V−1⋅s−1 at a carrier density of 10^12 cm−2, 10×10^6 times greater than copper.
The corresponding resistivity of graphene sheets would be 10−6 Ω⋅cm. This is less than the resistivity of silver, the lowest otherwise known at room temperature. However, on SiO2 substrates, scattering of electrons by optical phonons of the substrate is a larger effect than scattering by graphene's own phonons. This limits mobility to 40000 cm2⋅V−1⋅s−1.
Charge transport is affected by adsorption of contaminants such as water and oxygen molecules. This leads to non-repetitive and large hysteresis I-V characteristics. Researchers must carry out electrical measurements in vacuum. Graphene surfaces can be protected by a coating with materials such as SiN, PMMA and h-BN. In January 2015, the first stable graphene device operation in air over several weeks was reported, for graphene whose surface was protected by aluminum oxide. In 2015 lithium-coated graphene was observed to exhibit superconductivity and in 2017 evidence for unconventional superconductivity was demonstrated in single layer graphene placed on the electron-doped (non-chiral) d-wave superconductor Pr2−xCexCuO4 (PCCO).
Electrical resistance in 40-nanometer-wide nanoribbons of epitaxial graphene changes in discrete steps. The ribbons' conductance exceeds predictions by a factor of 10. The ribbons can act more like optical waveguides or quantum dots, allowing electrons to flow smoothly along the ribbon edges. In copper, resistance increases in proportion to length as electrons encounter impurities.
Transport is dominated by two modes. One is ballistic and temperature independent, while the other is thermally activated. Ballistic electrons resemble those in cylindrical carbon nanotubes. At room temperature, resistance increases abruptly at a particular length—the ballistic mode at 16 micrometres and the other at 160 nanometres.
Graphene electrons can cover micrometer distances without scattering, even at room temperature.
Despite zero carrier density near the Dirac points, graphene exhibits a minimum conductivity on the order of
{\displaystyle 4e^{2}/h}. The origin of this minimum conductivity is unclear. However, rippling of the graphene sheet or ionized impurities in the SiO2 substrate may lead to local puddles of carriers that allow conduction. Several theories suggest that the minimum conductivity should be {\displaystyle 4e^{2}/(\pi h)}; however, most measurements are of order {\displaystyle 4e^{2}/h} or greater and depend on impurity concentration.
Near zero carrier density graphene exhibits positive photoconductivity and negative photoconductivity at high carrier density. This is governed by the interplay between photoinduced changes of both the Drude weight and the carrier scattering rate.
Graphene doped with various gaseous species (both acceptors and donors) can be returned to an undoped state by gentle heating in vacuum. Even for dopant concentrations in excess of 10^12 cm−2 carrier mobility exhibits no observable change. Graphene doped with potassium in ultra-high vacuum at low temperature can reduce mobility 20-fold. The mobility reduction is reversible on removing the potassium.
Due to graphene's two dimensions, charge fractionalization (where the apparent charge of individual pseudoparticles in low-dimensional systems is less than a single quantum) is thought to occur. It may therefore be a suitable material for constructing quantum computers using anyonic circuits.
In 2018, superconductivity was reported in twisted bilayer graphene.
== Excitonic properties ==
First-principles calculations with quasiparticle corrections and many-body effects are used to explore the electronic and optical properties of graphene-based materials. The approach is described as having three stages. With GW calculations, the properties of graphene-based materials are accurately investigated, including bulk graphene, nanoribbons, edge- and surface-functionalized armchair ribbons, hydrogen-saturated armchair ribbons, the Josephson effect in graphene SNS junctions with a single localized defect, and armchair ribbon scaling properties.
== Magnetic properties ==
In 2014 researchers magnetized graphene by placing it on an atomically smooth layer of magnetic yttrium iron garnet. The graphene's electronic properties were unaffected. Prior approaches involved doping. The dopant's presence negatively affected its electronic properties.
=== Strong magnetic fields ===
In magnetic fields of ~10 tesla, additional plateaus of Hall conductivity at
{\displaystyle \sigma _{xy}=\nu e^{2}/h} with {\displaystyle \nu =0,\pm {1},\pm {4}} are observed. The observation of a plateau at {\displaystyle \nu =3} and the fractional quantum Hall effect at {\displaystyle \nu =1/3} were reported.
These observations with {\displaystyle \nu =0,\pm 1,\pm 3,\pm 4} indicate that the four-fold degeneracy (two valley and two spin degrees of freedom) of the Landau energy levels is partially or completely lifted. One hypothesis is that the magnetic catalysis of symmetry breaking is responsible for lifting the degeneracy.
== Spin transport ==
Graphene is claimed to be an ideal material for spintronics due to its small spin–orbit interaction and the near absence of nuclear magnetic moments in carbon (as well as a weak hyperfine interaction). Electrical spin current injection and detection has been demonstrated up to room temperature. Spin coherence length above 1 micrometre at room temperature was observed, and control of the spin current polarity with an electrical gate was observed at low temperature.
Spintronic and magnetic properties can be present in graphene simultaneously. Low-defect graphene nanomeshes manufactured using a non-lithographic method exhibit large-amplitude ferromagnetism even at room temperature. Additionally a spin pumping effect is found for fields applied in parallel with the planes of few-layer ferromagnetic nanomeshes, while a magnetoresistance hysteresis loop is observed under perpendicular fields.
== Dirac fluid ==
Charged particles in high-purity graphene behave as a strongly interacting, quasi-relativistic plasma. The particles move in a fluid-like manner, traveling along a single path and interacting with high frequency. The behavior was observed in a graphene sheet faced on both sides with a h-BN crystal sheet.
== Anomalous quantum Hall effect ==
The quantum Hall effect is a quantum mechanical version of the Hall effect, which occurs when a magnetic field causes a perpendicular (transverse) current in a material. In the quantum Hall effect, the transverse conductivity {\displaystyle \sigma _{xy}} is quantized in integer multiples of a basic quantity, {\displaystyle e^{2}/h},
where e is the elementary electric charge and h is the Planck constant. This phenomenon is typically observed in very clean silicon or gallium arsenide solids at temperatures around 3 K and high magnetic fields.
=== Quantum Hall effect in graphene ===
Graphene, a single layer of carbon atoms, exhibits an unusual form of the quantum Hall effect. In graphene, the steps of conductivity quantization are shifted by 1/2 compared to the standard sequence and have an additional factor of 4. This can be expressed as:
{\displaystyle \sigma _{xy}=\pm {4\cdot \left(N+1/2\right)e^{2}}/h}
where N is the Landau level. The factor of 4 arises due to the double valley and double spin degeneracies of electrons in graphene. These anomalies can be observed even at room temperature (about 20 °C or 293 K).
=== Behavior of electrons in graphene ===
This anomalous behavior is due to graphene's massless Dirac electrons. In a magnetic field, these electrons form a Landau level at the Dirac point with an energy that is precisely zero. This is a result of the Atiyah–Singer index theorem and causes the "+1/2" term in the Hall conductivity for neutral graphene.
In bilayer graphene, the quantum Hall effect is also observed but with only one of the two anomalies. The Hall conductivity in bilayer graphene is given by:
{\displaystyle \sigma _{xy}=\pm {4\cdot N\cdot e^{2}}/h}
In this case, the first plateau at N = 0 is absent, meaning bilayer graphene remains metallic at the neutrality point.
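The contrast between the two plateau sequences above can be made concrete with a few lines of code that list the Hall conductivity plateaus, in units of e²/h, for the first few Landau levels of monolayer and bilayer graphene.

```python
# Hall conductivity plateaus in units of e^2/h, from the two formulas above.
monolayer = [4 * (N + 0.5) for N in range(4)]   # +/-2, 6, 10, 14: half-integer-shifted steps
bilayer   = [4 * N for N in range(1, 5)]        # +/-4, 8, 12, 16: the N = 0 plateau is absent

print("monolayer graphene:", monolayer)
print("bilayer graphene  :", bilayer)
```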
=== Additional observations in graphene ===
Unlike normal metals, graphene's longitudinal resistance shows maxima, not minima, for integral values of the Landau filling factor in Shubnikov–de Haas oscillations. This is termed the integral quantum Hall effect. These oscillations exhibit a phase shift of π, known as Berry's phase, which is due to the zero effective mass of carriers near the Dirac points. Despite this zero effective mass, the temperature dependence of the oscillations indicates a non-zero cyclotron mass for the carriers.
=== Experimental observations ===
Graphene samples prepared on nickel films and on both the silicon and carbon faces of silicon carbide show the anomalous quantum Hall effect in electrical measurements. Graphitic layers on the carbon face of silicon carbide exhibit a clear Dirac spectrum in angle-resolved photoemission experiments. This effect is also observed in cyclotron resonance and tunneling experiments.
== Casimir effect ==
The Casimir effect is an interaction between disjoint neutral bodies provoked by the fluctuations of the electrodynamical vacuum. Mathematically it can be explained by considering the normal modes of electromagnetic fields, which explicitly depend on the boundary (or matching) conditions on the interacting bodies' surfaces. Since graphene/electromagnetic field interaction is strong for a one-atom-thick material, the Casimir effect is of interest.
== Van der Waals force ==
The Van der Waals force (or dispersion force) is also unusual, obeying an inverse cubic, asymptotic power law in contrast to the usual inverse quartic.
== Effect of substrate ==
The electronic properties of graphene are significantly influenced by the supporting substrate. The Si(100)/H surface does not perturb graphene's electronic properties, whereas the interaction between it and the clean Si(100) surface changes its electronic states significantly. This effect results from the covalent bonding between C and surface Si atoms, modifying the π-orbital network of the graphene layer. The local density of states shows that the bonded C and Si surface states are highly disturbed near the Fermi energy.
== Comparison with nanoribbon ==
If graphene is confined in the in-plane direction, it is referred to as a nanoribbon, and its electronic structure is different. If the ribbon edge is "zig-zag", the bandgap is zero; if it is "armchair", the bandgap is non-zero.
== References ==
=== Works cited ===
Geim, A. K.; Novoselov, K. S. (2007). "The rise of graphene". Nature Materials. 6 (3): 183–191. arXiv:cond-mat/0702595. Bibcode:2007NatMa...6..183G. doi:10.1038/nmat1849. PMID 17330084. S2CID 14647602.
== External links ==
Wolfram demonstration for graphene BZ and electronic dispersion | Wikipedia/Electronic_properties_of_graphene |
Graphene is the only form of carbon (or solid material) in which every atom is available for chemical reaction from two sides (due to the 2D structure). Atoms at the edges of a graphene sheet have special chemical reactivity. Graphene has the highest ratio of edge atoms of any allotrope. Defects within a sheet increase its chemical reactivity. The onset temperature of reaction between the basal plane of single-layer graphene and oxygen gas is below 260 °C (530 K). Graphene combusts at 350 °C (620 K). Graphene is commonly modified with oxygen- and nitrogen-containing functional groups and analyzed by infrared spectroscopy and X-ray photoelectron spectroscopy. However, determination of structures of graphene with oxygen- and nitrogen- functional groups requires the structures to be well controlled.
Contrary to the ideal 2D structure of graphene, chemical applications of graphene need either structural or chemical irregularities, as perfectly flat graphene is chemically inert. In other words, the definition of an ideal graphene is different in chemistry and physics.
Graphene placed on a soda-lime glass (SLG) substrate under ambient conditions exhibited spontaneous n-doping (1.33 × 10^13 e/cm2) via surface-transfer. On p-type copper indium gallium diselenide (CIGS) semiconductor itself deposited on SLG n-doping reached 2.11 × 10^13 e/cm2.
== Oxide ==
Using paper-making techniques on dispersed, oxidized and chemically processed graphite in water, monolayer flakes form a single sheet and create strong bonds. These sheets, called graphene oxide paper, have a measured tensile modulus of 32 GPa. The chemical property of graphite oxide is related to the functional groups attached to graphene sheets. These can change the polymerization pathway and similar chemical processes. Graphene oxide flakes in polymers display enhanced photo-conducting properties. Graphene is normally hydrophobic and impermeable to all gases and liquids (vacuum-tight). However, when formed into graphene oxide-based capillary membrane, both liquid water and water vapor flow through as quickly as if the membrane was not present.
== Chemical modification ==
Soluble fragments of graphene can be prepared in the laboratory through chemical modification of graphite. First, microcrystalline graphite is treated with an acidic mixture of sulfuric acid and nitric acid. A series of oxidation and exfoliation steps produce small graphene plates with carboxyl groups at their edges. These are converted to acid chloride groups by treatment with thionyl chloride; next, they are converted to the corresponding graphene amide via treatment with octadecylamine. The resulting material (circular graphene layers of 5.3 angstrom thickness) is soluble in tetrahydrofuran, tetrachloromethane and dichloroethane.
Refluxing single-layer graphene oxide (SLGO) in solvents leads to size reduction and folding of individual sheets as well as loss of carboxylic group functionality by up to 20%, indicating thermal instabilities of SLGO sheets dependent on their preparation methodology. When using thionyl chloride, acyl chloride groups result, which can then form aliphatic and aromatic amides with a reactivity conversion of around 70–80%.
Hydrazine reflux is commonly used for reducing SLGO to SLG(R), but titrations show that only around 20–30% of the carboxylic groups are lost, leaving a significant number available for chemical attachment. Analysis of such SLG(R) reveals that the system is unstable. Stirring at room temperature with HCl (< 1.0 M) leads to around 60% loss of COOH functionality. Room temperature treatment of SLGO with carbodiimides leads to the collapse of the individual sheets into star-like clusters that exhibit poor subsequent reactivity with amines (c. 3–5% conversion of the intermediate to the final amide). It is apparent that conventional chemical treatment of carboxylic groups on SLGO generates morphological changes of individual sheets that lead to a reduction in chemical reactivity, which may potentially limit their use in composite synthesis. Therefore, other chemical reaction types have been explored. SLGO has also been grafted with polyallylamine, cross-linked through epoxy groups. When filtered into graphene oxide paper, these composites exhibit increased stiffness and strength relative to unmodified graphene oxide paper.
Full hydrogenation from both sides of graphene sheet results in graphane, but partial hydrogenation leads to hydrogenated graphene. Similarly, both-side fluorination of graphene (or chemical and mechanical exfoliation of graphite fluoride) leads to fluorographene (graphene fluoride), while partial fluorination (generally halogenation) provides fluorinated (halogenated) graphene.
== Graphene ligand/ Graphene complex ==
Graphene can be a ligand to form a graphene complex by introducing functional groups and coordinating metal ions. Structures of graphene ligands are similar to e.g. metal-porphyrin complex, metal-phthalocyanine complex and metal-phenanthroline complex. Copper and nickel ions can be coordinated with graphene ligands.
== References == | Wikipedia/Graphene_chemistry |
A rapidly increasing list of graphene production techniques have been developed to enable graphene's use in commercial applications.
Isolated 2D crystals cannot be grown via chemical synthesis beyond small sizes even in principle, because the rapid growth of phonon density with increasing lateral size forces 2D crystallites to bend into the third dimension. However, other routes to 2D materials exist:
Fundamental forces place seemingly insurmountable barriers in the way of creating [2D crystals]... The nascent 2D crystallites try to minimize their surface energy and inevitably morph into one of the rich variety of stable 3D structures that occur in soot.
But there is a way around the problem. Interactions with 3D structures stabilize 2D crystals during growth. So one can make 2D crystals sandwiched between or placed on top of the atomic planes of a bulk crystal. In that respect, graphene already exists within graphite... One can then hope to fool Nature and extract single-atom-thick crystallites at a low enough temperature that they remain in the quenched state prescribed by the original higher-temperature 3D growth.
The early approaches of cleaving multi-layer graphite into single layers or growing it epitaxially by depositing a layer of carbon onto another material have been supplemented by numerous alternatives. In all cases, the graphene must bond to some substrate to retain its 2d shape.
== Exfoliation ==
As of 2014 exfoliation produced graphene with the lowest number of defects and highest electron mobility.
=== Adhesive tape ===
Andre Geim and Konstantin Novoselov initially used adhesive tape to split graphite into graphene. Achieving single layers typically requires multiple exfoliation steps, each producing a slice with fewer layers, until only one remains. After exfoliation the flakes are deposited on a silicon wafer. Crystallites larger than 1 mm and visible to the naked eye can be obtained.
=== Robotic pixel assembly of van der Waals solids ===
Robotic pixel assembly method for manufacturing vdW solids provides high-speed and controllable design (area, geometry, and angle). In this approach, robotic assembly of prepatterned ‘pixels’ made from atomically thin two-dimensional components forms heterojunction devices. In the first implementation of this approach, the process takes place within a high-vacuum environment to allow clean interfaces.
=== Wedge-based ===
In this method, a sharp single-crystal diamond wedge penetrates into the graphite source to exfoliate layers. This method uses highly ordered pyrolytic graphite (HOPG) as the starting material. The experiments were supported by molecular dynamic simulations.
=== Graphite oxide reduction ===
P. Boehm reported producing monolayer flakes of reduced graphene oxide in 1962. Rapid heating of graphite oxide and exfoliation yields highly dispersed carbon powder with a few percent of graphene flakes. Reduction of graphite oxide monolayer films, e.g. by hydrazine with annealing in argon/hydrogen also yielded graphene films. Later the oxidation protocol was enhanced to yield graphene oxide with an almost intact carbon framework that allows efficient removal of functional groups, neither of which was originally possible. The measured charge carrier mobility exceeded 1,000 cm2⋅V−1⋅s−1. Spectroscopic analysis of reduced graphene oxide has been conducted.
=== Liquid phase exfoliation: Shearing ===
In 2014 defect-free, unoxidized graphene-containing liquids were made from graphite using mixers that produce local shear rates greater than 10×10^4 s−1. The method was claimed to be applicable to other 2D materials, including boron nitride, molybdenum disulfide and other layered crystals. The liquid phase shear technique with the aid of surfactant is more suitable for pristine graphene exfoliation at room temperature and avoiding multi-step preparation.
=== Liquid Phase Exfoliation: Sonication ===
==== Solvent-aided ====
Dispersing graphite in a proper liquid medium can produce graphene by sonication in a process known as liquid phase exfoliation. Graphene is separated from graphite by centrifugation, producing graphene concentrations initially up to 0.01 mg/ml in N-methylpyrrolidone (NMP) and later to 2.1 mg/ml in NMP. Using a suitable ionic liquid as the dispersing liquid medium produced concentrations of 5.33 mg/ml. Graphene concentration produced by this method can be low, probably because of the large energy required to fragment the crystal during sonication.
Adding a surfactant to a solvent prior to sonication prevents restacking by adsorbing to the graphene's surface. This allows the production of aqueous suspensions, but removing the surfactant requires chemical treatments.
==== Immiscible liquids ====
Sonicating graphite at the interface of two immiscible liquids, most notably heptane and water, produced macro-scale graphene films. The graphene sheets are adsorbed to the high energy interface between the heptane and the water, where they are kept from restacking. The graphene remained at the interface even when exposed to force in excess of 300,000 g. The solvents may then be evaporated. The sheets are up to ~95% transparent and conductive.
=== Molten salts ===
Graphite particles can be corroded in molten salts to form a variety of carbon nanostructures including graphene. Hydrogen cations, dissolved in molten Lithium chloride, can be discharged on cathodically polarized graphite rods, which then intercalate into the graphite structure, peeling graphite to produce graphene. The graphene nanosheets produced displayed a single-crystalline structure with a lateral size of several hundred nanometers and a high degree of crystallinity and thermal stability.
=== Electrochemical synthesis ===
Electrochemical synthesis can exfoliate graphene. Varying a pulsed voltage controls thickness, flake area, number of defects and affects its properties. The process begins by bathing the graphite in a solvent for intercalation. The process can be tracked by monitoring the solution's transparency with an LED and photodiode.
== Laser-Induced Graphene (LIG) ==
In 2014, a laser-based single-step scalable approach to graphene production was published by Professor James M. Tour's Research Group at Rice University. The technique directly converts the surface of commercial polymer films into porous three-dimensional graphene patterns, using a CO2 infrared laser. The sp3-carbon atoms were photothermally converted to sp2-carbon atoms by pulsed laser irradiation. The resulting material exhibits high electrical conductivity, and has been demonstrated in a variety of applications, including interdigitated electrodes for in-plane microsupercapacitors with specific capacitances of >4 mF cm−2 and power densities of ~9 mW cm−2. Laser-induced production of graphene is compatible with roll-to-roll manufacturing processes, and provides a highly-accessible route to flexible electronics, functional nanocomposites, and advanced energy storage devices. Furthermore, the technique has been extended to a wide variety of carbon sources, such as wood, paper, and cloth, and likewise, other wavelengths of lasers were also demonstrated to form graphene.
== Laser-Induced Graphene Fibers (LIGF) and Laser-Induced Graphene Scrolls (LIGS) ==
In 2018, Professor James M. Tour's Research Group at Rice University published the synthesis of Laser-Induced Graphene Fibers and Laser-Induced Graphene Scrolls. The new morphologies, which were accessible through tuning of laser parameters, found applications in areas such as air filtration and functional nanocomposites.
== Flash Joule Heating ==
In 2019, flash Joule heating (transient high-temperature electrothermal heating) was discovered to be a method to synthesize turbostratic graphene in bulk powder form. The method involves electrothermally converting various carbon sources, such as carbon black, coal, and food waste into micron-scale flakes of graphene. More recent works demonstrated the use of mixed plastic waste, waste rubber tires, and pyrolysis ash as carbon feedstocks. The graphenization process is kinetically controlled, and the energy dose is chosen to preserve the carbon in its graphenic state (excessive energy input leads to subsequent graphitization through annealing).
== Hydrothermal self-assembly ==
Graphene has been prepared by using a sugar (e.g. glucose, fructose, etc.) as the carbon source. This substrate-free "bottom-up" synthesis is safer, simpler and more environmentally friendly than exfoliation. The method can control thickness, ranging from monolayer to multilayers.
== Epitaxy ==
Epitaxy refers to the deposition of a crystalline overlayer on a crystalline substrate, where there is registry between the two. In some cases epitaxial graphene layers are coupled to surfaces weakly enough (by van der Waals forces) to retain the two-dimensional electronic band structure of isolated graphene. Examples of this weak coupling are epitaxial graphene on SiC and on Pt(111). On the other hand, the epitaxial graphene layer on some metals can be strongly bonded to the surface with covalent bonds, and the properties of such covalently bonded graphene can differ from those of free-standing graphene. An example of this strong coupling is epitaxial graphene on Ru(0001). However, the coupling is strong only for the first graphene layer on Ru(0001): the second layer is more weakly coupled to the first and already has properties very close to those of free-standing graphene.
=== Chemical vapor deposition ===
Chemical vapor deposition (CVD) is a common form of epitaxy. In CVD, solid material is deposited onto a heated substrate through decomposition or chemical reaction of compounds contained in the gas passing over it. The reactants, generally in the gaseous or vapor phase, react on or near the surface of the substrate, which is held at an elevated temperature, and the reaction deposits atoms or molecules over the entire substrate surface. CVD processes are widely used for growing epitaxial layers, such as a silicon epitaxial layer on a single-crystal silicon substrate (homoepitaxy, commonly referred to simply as epitaxy) or an epitaxial layer on sapphire (heteroepitaxy). A variant of CVD, called vapor-phase epitaxy (VPE), produces only a single-crystal deposited layer; it is usually carried out for specific combinations of substrate and layer materials under special deposition conditions.
== Epitaxy of graphene ==
Epitaxial graphene films can be grown on various crystalline surfaces. The atomic lattice of the substrate helps to orientationally register the carbon atoms of the graphene layer. The chemical interaction of the graphene with the substrate can vary from weak to strong, which also modifies the properties of the graphene layer. The need for epitaxial graphene arises from the challenges of incorporating carbon nanotubes in large-scale integrated electronic architectures. Research on 2D graphene was thus initiated by experiments on graphene grown epitaxially on single-crystal silicon carbide. While significant control has been achieved in growing and characterizing epitaxial graphene, challenges remain in fully exploiting the potential of these structures. The promise lies in the hope that charge carriers in these graphene structures, like those in carbon nanotubes, remain ballistic. If so, it could revolutionize the world of electronics.
=== Silicon carbide ===
Heating silicon carbide (SiC) to high temperatures (>1100 °C) under low pressures (~10−6 torr) reduces it to graphene. This process produces epitaxial graphene with dimensions dependent upon the size of the wafer. The polarity of the SiC used for graphene formation, silicon- or carbon-polar, highly influences the thickness, mobility and carrier density.
Graphene's electronic band structure (the so-called Dirac cone structure) was first visualized in this material. Weak anti-localization is observed in this material, but not in exfoliated graphene produced by the drawing method. Large, temperature-independent mobilities approach those of exfoliated graphene placed on silicon oxide, but are lower than the mobilities of suspended graphene produced by the drawing method. Even without transfer, graphene on SiC exhibits massless Dirac fermions. The graphene–substrate interaction can be further passivated.
The weak van der Waals force that coheres multilayer stacks does not always affect the individual layers' electronic properties. That is, while the electronic properties of certain multilayered epitaxial graphenes are identical to that of a single layer, other properties are affected, as they are in bulk graphite. This effect is well understood theoretically and is related to the symmetry of the interlayer interactions.
Epitaxial graphene on SiC can be patterned using standard microelectronics methods. A band gap can be created and tuned by laser irradiation.
=== Silicon/germanium/hydrogen ===
A normal silicon wafer coated with a layer of germanium (Ge) dipped in dilute hydrofluoric acid strips the naturally forming germanium oxide groups, creating hydrogen-terminated germanium. Chemical vapor deposition deposits a layer of graphene on top. The graphene can be peeled from the wafer using a dry process and is then ready for use. The wafer can be reused. The graphene is wrinkle-free, high quality and low in defects.
=== Metal single crystal substrates ===
Metal single crystals are often used as substrates in graphene growth since they form a smooth and chemically uniform growth platform. Chemical uniformity in particular is an important advantage of metal single-crystal surfaces: on oxide surfaces, for example, the oxidized component and the oxygen form very different adsorption sites. A typical metal single-crystal substrate surface is a hexagonally close-packed surface, since this is also the geometry of the carbon atoms in a graphene layer. Common surfaces with hexagonally close-packed geometry are, for example, FCC(111) and HCP(0001) surfaces. Similar surface geometries alone do not ensure perfect graphene adsorption, however, since the distances between surface metal atoms and carbon atoms can differ, resulting in moiré patterns. Common metal surfaces for graphene growth are Pt(111), Ir(111), Ni(111), Ru(0001), Co(0001) and Cu(111), but at least Fe(110), Au(111), Pd(111), Re(10-10) and Rh(111) have also been used.
==== Preparation methods of metal single crystal substrates ====
There are several methods for manufacturing good-quality metal single-crystal substrates. The Czochralski and Bridgman–Stockbarger methods are common industrial methods for bulk metal crystal manufacturing. In these methods, the metal is first melted, after which it is allowed to crystallize around a seed crystal. After crystallization, the crystal is cut into wafers. Another commonly used method, especially in research, is epitaxy, which enables the growth of numerous different metal single-crystal surfaces on commonly available single crystals such as monocrystalline silicon. The advantage of epitaxy over the industrial methods is its low material consumption: with epitaxy, substrates with nanometer-scale thickness can be manufactured, in contrast to complete self-supporting wafers. This is especially important for rare and expensive metals like rhenium and gold.
=== Ruthenium(0001) ===
Graphene can be grown on the ruthenium(0001) surface by CVD, temperature programmed growth (TPG) or segregation. In CVD, a hot ruthenium surface is exposed to a carbon-containing molecule such as methane or ethene, which results in graphene formation. It has been observed that graphene can grow only “downhill” from the ruthenium surface steps, not uphill.
Graphene bonds strongly to the surface with covalent bonds and is separated from it by only 1.45 Å. This affects the electronic structure of the graphene layer, which behaves differently from a free-standing graphene layer. However, CVD graphene growth on ruthenium is not totally self-terminating, and multilayer graphene formation is possible. The second and higher layers cannot bond to the existing graphene layers as strongly as the first layer bonds to the metal surface, which results in a larger separation of about 3 Å between the graphene layers. The second layer thus interacts much more weakly with the substrate and has electronic properties very similar to those of free-standing graphene.
Due to the strong bonding of graphene to the ruthenium surface, only the R0 orientation is observed for the graphene layer. However, different studies have reported different moiré repeat distances, varying around graphene(11 × 11) on Ru(10 × 10). The moiré pattern also causes strong corrugation of the graphene layer, with a peak height of as much as 1.5 Å.
=== Iridium(111) ===
Graphene is commonly deposited on iridium(111) by CVD, but temperature programmed growth (TPG) is also possible. In CVD, a hot iridium surface is exposed to ethylene. Ethylene decomposes on the surface through pyrolysis, and the carbon formed adsorbs to the surface, forming a graphene monolayer; thus, only monolayer growth is possible. The formed graphene layer is weakly bound to the iridium substrate and is located about 3.3 Å above the surface. The graphene layer and the Ir(111) substrate also form a moiré pattern with a period of around 25 Å, depending on the orientation of the graphene on Ir(111). There are many different possibilities for the orientation of the graphene layer, the most common being R0 and R30. The graphene layer is also corrugated due to the moiré pattern, with a height varying from 0.04 Å to 0.3 Å. Due to the long-range order of these ripples, minigaps in the electronic band structure (Dirac cone) become visible.
=== Platinum(111) ===
Graphene sheets have been grown by dosing ethylene onto a clean platinum(111) single-crystal substrate at temperatures above 1000 °C in ultra-high vacuum (UHV). The graphene monolayer interacts weakly with the Pt(111) surface beneath it, as confirmed by its ‘V’-shaped local density of states. Kim et al. reported the electronic properties of graphene nanoislands whose geometry is affected by varying the annealing temperature, providing a fundamental understanding of graphene growth. The effect of annealing on the average size and density of graphene islands grown on Pt(111) has been widely studied. Sutter et al. reported thermal-stress-driven wrinkle propagation in the graphene sheet, as observed by low-energy electron microscopy during cooling after growth. The lattice mismatch gives rise to moiré patterns with small (e.g., (3 × 3)G) and large unit cells (e.g., (8 × 8)G).
=== Nickel(111) ===
High-quality sheets of few-layer graphene exceeding 1 cm2 (0.2 sq in) in area have been synthesized via CVD on thin nickel films using multiple techniques. First the film is exposed to argon gas at 900–1000 degrees Celsius. Methane is then mixed into the gas, and the methane's dissociated carbon is absorbed into the film. The solution is then cooled and the carbon diffuses out of the nickel to form graphene films. CVD-grown graphene on the Ni(111) surface forms a (1 × 1) structure, i.e. the lattice constants of Ni and graphene match and no moiré pattern is formed. There are still different possible adsorption sites for the carbon atoms on nickel; at least top, hcp hollow, fcc hollow and bridge sites have been reported.
Another method used temperatures compatible with conventional CMOS processing, using a nickel-based alloy with a gold catalyst. This process dissolves carbon atoms inside a transition metal melt at a certain temperature and then precipitates the dissolved carbon at lower temperatures as single layer graphene (SLG).
The metal is first melted in contact with a carbon source, possibly a graphite crucible inside which the melt is carried out or graphite powder/chunks that are placed in the melt. Keeping the melt in contact with the carbon at a specific temperature dissolves the carbon atoms, saturating the melt based on the metal–carbon binary phase diagram. Lowering the temperature decreases carbon's solubility and the excess carbon precipitates onto the melt. The floating layer can be either skimmed or frozen for later removal.
Depending on the conditions, different morphologies, including thick graphite, few-layer graphene (FLG) and SLG, were observed on the metal substrate. Raman spectroscopy proved that SLG had grown on the nickel substrate. The SLG Raman spectrum featured no D and D′ bands, indicating its pristine nature. Since nickel is not Raman active, direct Raman spectroscopy of graphene layers on top of the nickel is achievable.
Another approach covered a sheet of silicon dioxide glass (the substrate) on one side with a nickel film. Graphene deposited via chemical vapor deposition formed into layers on both sides of the film, one on the exposed top side, and one on the underside, sandwiched between nickel and glass. Peeling the nickel and the top layer of graphene left an intervening layer of graphene on the glass. While the top graphene layer could be harvested from the foil as in earlier methods, the bottom layer was already in place on the glass. The quality and purity of the attached layer was not assessed.
=== Cobalt(0001) ===
Graphene on cobalt(0001) is grown similarly to graphene on a Ni substrate. A Co(0001) film is first grown on a tungsten(110) substrate, after which chemical vapor deposition of propylene at 450 °C enables graphene growth on Co(0001). This results in a p(1 × 1) structure along with structures indicating domains of graphene slightly rotated with respect to the Co lattice. Graphene structures grown on Co(0001) are found to be identical to those grown on Ni(111) upon structural and electronic characterization. Co(0001) is ferromagnetic, but the graphene monolayer grown on top of it was found not to diminish the spin polarization. Unlike its Ni(111) counterpart, graphene grown on Co(0001) does not show the Rashba effect.
=== Copper ===
Copper foil, at room temperature, very low pressure, and in the presence of small amounts of methane, produces high-quality graphene. The growth automatically stops after a single layer forms. Arbitrarily large films can be created. The single layer growth is due to the low concentration of carbon in methane. The process is surface-based rather than relying on absorption into the metal and then diffusion of carbon into graphene layers on the surface. The room temperature process eliminates the need for postproduction steps and reduces production from a ten-hour, nine- to ten-step procedure to a single step that takes five minutes. A chemical reaction between the hydrogen plasma formed from the methane and ordinary air molecules in the chamber generates cyano radicals. These charged molecules scour away surface imperfections, providing a pristine substrate. The graphene deposits form lines that merge into each other, forming a seamless sheet that contributes to mechanical and electrical integrity.
Larger hydrocarbons such as ethane and propane produce bilayer coatings. Atmospheric pressure CVD growth produces multilayer graphene on copper (similar to nickel).
The material has fewer defects, which in higher temperature processes result from thermal expansion/contraction. Ballistic transport was observed in the resulting material.
=== Tin ===
Tin has recently been used for the synthesis of graphene at 250 °C. Low-temperature, transfer-free graphene growth on substrates is a major goal of graphene research for practical applications. Transfer-free graphene growth on a SiO2-covered Si (SiO2/Si) substrate at 250 °C, based on a solid–liquid–solid reaction, has been achieved using tin.
=== Sodium ethoxide pyrolysis ===
Gram-quantities were produced by the reduction of ethanol by sodium metal, followed by pyrolysis of the ethoxide product and washing with water to remove sodium salts.
=== Roll-to-roll ===
Large scale roll-to-roll production of graphene based on chemical vapor deposition, was first demonstrated in 2010. In 2014 a two-step roll-to-roll manufacturing process was announced. The first roll-to-roll step produces the graphene via chemical vapor deposition, and the second step binds the graphene to a substrate. In 2018, researchers at MIT refined the roll-to-roll process, creating a promising way to produce large amounts of graphene.
=== Cold wall ===
Growing graphene in an industrial resistive-heating cold wall CVD system was claimed to produce graphene 100 times faster than conventional CVD systems, cut costs by 99 percent, and produce material with enhanced electronic qualities.
Cold wall CVD technique can be used to study the underlying surface science involved in graphene nucleation and growth as it allows unprecedented control of process parameters like gas flow rates, temperature and pressure as demonstrated in a recent study. The study was carried out in a home-built vertical cold wall system utilizing resistive heating by passing direct current through the substrate. It provided conclusive insight into a typical surface-mediated nucleation and growth mechanism involved in two-dimensional materials grown using catalytic CVD under conditions sought out in the semiconductor industry.
== Nanotube slicing ==
Graphene can be created by cutting open carbon nanotubes. In one such method multi-walled carbon nanotubes are cut open in solution by action of potassium permanganate and sulfuric acid. In another method graphene nanoribbons were produced by plasma etching of nanotubes partly embedded in a polymer film.
== Langmuir-Blodgett (LB) ==
In applications where the thickness and packing density of the graphene layer need to be carefully controlled, the Langmuir-Blodgett method has been used. In addition to directly forming a graphene layer, another widely studied approach is to form a graphene oxide layer, which can then be reduced to graphene.
The benefits of LB deposition include accurate control over the layered architecture of the graphene; a layer-by-layer deposition process that is amenable to assembling any combination of thin carbon layers on a substrate; and an assembly process that operates at room temperature and produces high throughput while being amenable to automation and mass production.
== Carbon dioxide reduction ==
A highly exothermic reaction combusts magnesium in an oxidation–reduction reaction with carbon dioxide, producing a variety of carbon nanoparticles including graphene and fullerenes. The carbon dioxide reactant may be either solid (dry-ice) or gaseous. The products of this reaction are carbon and magnesium oxide.
== Spin coating ==
In 2014, carbon nanotube-reinforced graphene was made via spin coating and annealing functionalized carbon nanotubes. The resulting material was stronger, more flexible and more conductive than conventional graphene.
== Supersonic spray ==
Supersonic acceleration of droplets through a Laval nozzle was used to deposit small droplets of reduced graphene-oxide in suspension on a substrate. The droplets disperse evenly, evaporate rapidly and display reduced flake aggregations. In addition, the topological defects (Stone-Wales defect and C2 vacancies) originally in the flakes disappeared. The result was a higher quality graphene layer. The energy of the impact stretches the graphene and rearranges its carbon atoms into flawless hexagonal graphene with no need for post-treatment. The high amount of energy also allows the graphene droplets to heal any defects in the graphene layer that occur during this process.
Another approach sprays buckyballs at supersonic speeds onto a substrate. The balls cracked open upon impact, and the resulting unzipped cages then bond together to form a graphene film. The buckyballs are released into a helium or hydrogen gas, which expands at supersonic speeds, carrying the carbon balls with it. The buckyballs achieve energies of around 40 keV without changing their internal dynamics. This material contains hexagons and pentagons that come from the original structures. The pentagons could introduce a band gap.
== Intercalation ==
Producing graphene via intercalation splits graphite into single-layer graphene by inserting guest molecules or ions between the graphite layers. Graphite was first intercalated in 1841 using a strong oxidizing or reducing agent that damaged the material's desirable properties. Kovtyukhova developed a widely used oxidative intercalation method in 1999. In 2014, she achieved intercalation using non-oxidizing Brønsted acids (phosphoric, sulfuric, dichloroacetic and alkylsulfonic acids), without any oxidizing agents. The new method has yet to achieve output sufficient for commercialization.
== Reduction of Graphene Oxide through Laser Irradiation ==
Applying a layer of graphite oxide film to a DVD and burning it in a DVD writer produced a thin graphene film with high electrical conductivity (1738 siemens per meter) and specific surface area (1520 square meters per gram) that was highly resistant and malleable.
== Microwave-assisted oxidation ==
In 2012, a microwave-assisted, scalable approach was reported to directly synthesize graphene of different sizes from graphite in one step. The resulting graphene does not need any post-reduction treatment as it contains very little oxygen. This approach avoids the use of potassium permanganate in the reaction mixture. It was also reported that, with microwave radiation assistance, graphene oxide with or without holes can be synthesized by controlling the microwave time. This method uses a recipe similar to Hummers' method, but uses microwave heating instead of conventional heating. Microwave heating can dramatically shorten the reaction time from days to seconds.
== Ion implantation ==
Accelerating carbon ions under an electrical field into a semiconductor made of thin Ni films on a substrate of SiO2/Si, creates a wafer-scale (4 inches (100 mm)) wrinkle/tear/residue-free graphene layer that changes the semiconductor's physical, chemical and electrical properties. The process uses 20 keV and a dose of 1 × 1015 cm−2 at a relatively low temperature of 500 °C. This was followed by high-temperature activation annealing (600–900 °C) to form an sp2-bonded structure.
== Heated vegetable oil ==
Researchers heated soybean oil in a furnace for ≈30 minutes. The heat decomposed the oil into elemental carbon that deposited on nickel foil as single/few-layer graphene.
== Bacteria processing of graphene oxide ==
Graphene oxide can be converted to graphene using the bacterium Shewanella oneidensis.
== Graphene characterization techniques ==
=== Low-energy and photoemission electron microscopy ===
Low-energy electron microscopy (LEEM) and photoemission electron microscopy (PEEM) are techniques suited to performing dynamic observations of surfaces with nanometer resolution in a vacuum. With LEEM, it is possible to carry out low-energy electron diffraction (LEED) and micro-LEED experiments. LEED is the standard method for studying the surface structure of a crystalline material. Low-energy electrons (20–200 eV) impact the surface and elastically backscattered electrons illuminate a diffraction pattern on a fluorescent screen. The LEED method is a surface-sensitive technique as electrons have low energy and are not able to penetrate deep into the sample. For example, a micro-sized LEED revealed the presence of rotational variations of graphene on SiC substrate.
=== Raman spectroscopy and microscopy ===
Raman spectroscopy can provide information about the number of layers in graphene stacks, the atomic structure of graphene edges, disorder and defects, the stacking order between different layers, the effect of strain, and charge transfer. Graphene has three main features in its Raman spectrum, called the D, G, and 2D (also called G′) modes, which appear at about 1350, 1583 and 2700 cm−1, respectively.
=== Scanning tunneling microscopy ===
In scanning tunneling microscopy (STM), a sharp tip scans the surface of a sample at tip–sample distances small enough that electrons can quantum-tunnel from the tip to the sample surface or vice versa. STM can be performed in constant-current or constant-height mode. Low-temperature STM measurements provide the thermal stability required for high-resolution imaging and spectroscopic analysis. The first atomically resolved images of graphene grown on a platinum substrate were obtained using STM in the 1990s.
=== Atomic and electrostatic force microscopy ===
Atomic force microscopy (AFM) is mostly used to measure the force between atoms located at the sharp point of the tip (located on the cantilever) and atoms at the sample surface. The bending of the cantilever as a result of the interaction between the tip and the sample is detected and converted to an electrical signal. The electrostatic force microscopy mode of AFM has been used to detect the surface potential of graphene layers as a function of thickness variation allowing for quantification of potential difference maps showing distinction between graphene layers of different thicknesses.
=== Transmission electron microscopy ===
Transmission electron microscopy (TEM) uses electrons to generate high-resolution images, since using electrons overcomes the wavelength limitations of visible light. TEM on graphene should be done with an electron energy below 80 keV to induce fewer defects, because this is the threshold electron energy for damaging a single-wall carbon nanotube. There are other difficulties in studying graphene by TEM: in a plane-view geometry (top-view graphene), the substrate causes strong electron scattering, and a thick substrate makes it impossible to detect the graphene layer. For a cross-sectional view, detecting monolayer graphene is difficult because it requires simulation of the TEM images.
=== Scanning electron microscopy ===
In scanning electron microscopy (SEM), a high-energy electron beam (ranging from a few hundred eV to a few keV) is used to generate a variety of signals at the surface of a sample. These signals, which come from electron–sample interactions, reveal information about the sample, including surface morphology, crystalline structure, and chemical composition. SEM is also used to characterize the growth of graphene on SiC. Because of its atomic thickness, graphene is usually detected with secondary electrons, which probe only the sample surface. In SEM imaging, different kinds of contrast can be observed, such as thickness, roughness, and edge contrast; brighter areas correspond to the thinner parts of the graphene layers. The roughness contrast of a graphene layer is due to the different numbers of secondary electrons detected. Defects such as wrinkles, ruptures, and folds can be studied through different contrast in SEM images.
== See also ==
Exfoliated graphite nano-platelets
Metal-organic framework
Two-dimensional polymer
HSMG (High Strength Metallurgical Graphene)
== References == | Wikipedia/Graphene_production_techniques |
Epitaxial graphene growth on silicon carbide (SiC) by thermal decomposition is a method to produce large-scale few-layer graphene (FLG).
Graphene is one of the most promising nanomaterials for the future because of its various characteristics, like strong stiffness and high electric and thermal conductivity.
Still, reproducible production of graphene is difficult; thus, many different techniques have been developed.
The main advantage of epitaxial graphene growth on silicon carbide over other techniques is to obtain graphene layers directly on a semiconducting or semi-insulating substrate which is commercially available.
== History ==
The thermal decomposition of bulk SiC was first reported in 1965 by Badami, who annealed SiC in vacuum at around 2180 °C for an hour to obtain a graphite lattice. In 1975, Van Bommel et al. then succeeded in forming monolayer graphite on both the C-face and the Si-face of hexagonal SiC. The experiment was carried out under UHV at a temperature of 800 °C, and evidence for a graphene structure was found in LEED patterns and in the change of the carbon Auger peak from a carbide character to a graphite character.
New insights into the electronic and physical properties of graphene, such as the Dirac nature of the charge carriers, the half-integer quantum Hall effect and 2D electron gas behaviour, were first obtained on multilayer graphene by de Heer et al. at the Georgia Institute of Technology in 2004.
Still, the 2010 Nobel Prize in Physics ″for groundbreaking experiments regarding the two-dimensional material graphene″ was awarded to Andre Geim and Konstantin Novoselov. An official online document of the Royal Swedish Academy of Sciences about this award came under fire. Walter de Heer raised several objections to the work of Geim and Novoselov, who apparently had measured many-layer graphene, also called graphite, which has different electronic and mechanical properties.
Emtsev et al. improved the whole procedure in 2009 by annealing the SiC-samples at high temperatures over 1650 °C in an argon environment to obtain morphologically superior graphene.
== Process ==
The underlying process is the desorption of atoms from an annealed surface, in this case a SiC sample. Because the vapor pressure of carbon is negligible compared to that of silicon, the Si atoms desorb at high temperatures and leave behind the carbon atoms, which form graphitic layers, also called few-layer graphene (FLG). Different heating mechanisms, such as e-beam heating or resistive heating, lead to the same result. The heating process takes place in a vacuum to avoid contamination. Approximately three bilayers of SiC are necessary to set free enough carbon atoms for the formation of one graphene layer. This number can be calculated from the molar densities.
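A minimal sketch of that molar-density estimate is given below. It compares the areal density of carbon in one graphene sheet with the carbon contributed by one Si–C bilayer of the SiC(0001) surface, assuming textbook lattice constants for graphene (≈0.246 nm) and for the SiC basal plane (≈0.308 nm).

```python
import math

# Assumed (textbook) lattice constants
a_graphene = 0.246  # nm, graphene lattice constant (2 C atoms per hexagonal unit cell)
a_sic = 0.308       # nm, basal-plane lattice constant of 4H/6H-SiC (1 C atom per bilayer per cell)

def hex_cell_area(a: float) -> float:
    """Area of a hexagonal 2D unit cell with lattice constant a."""
    return math.sqrt(3) / 2 * a ** 2

carbon_density_graphene = 2 / hex_cell_area(a_graphene)  # C atoms per nm^2 in graphene
carbon_density_sic_bilayer = 1 / hex_cell_area(a_sic)    # C atoms per nm^2 freed per SiC bilayer

bilayers_needed = carbon_density_graphene / carbon_density_sic_bilayer
print(f"Graphene: {carbon_density_graphene:.1f} C atoms/nm^2")
print(f"One SiC bilayer: {carbon_density_sic_bilayer:.1f} C atoms/nm^2")
print(f"SiC bilayers per graphene layer: {bilayers_needed:.2f}")  # ~3.1, i.e. about three bilayers
```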
Today's challenge is to improve this process for industrial fabrication. The FLG obtained so far has a non-uniform thickness distribution which leads to different electronic properties. Because of this, there's a demand for growing uniform large-area FLG with the desired thickness in a reproducible way. Also, the impact of the SiC substrate on the physical properties of FLG is not totally understood yet.
The thermal decomposition process of SiC in high/ultra-high vacuum works well and appears promising for the large-scale production of graphene-based devices, but some problems remain to be solved. Using this technique, the resulting graphene consists of small grains with varying thickness (30–200 nm). These grains occur due to morphological changes of the SiC surface at high temperatures. On the other hand, at relatively low temperatures, poor quality results from the high sublimation rate.
The growth procedure was improved to a more controllable technique by annealing the SiC-samples at high temperatures over 1650 °C in an argon environment.
The silicon atoms desorbed from the surface collide with the argon atoms, and some are reflected back to the surface. This decreases the Si evaporation rate. Carrying out the experiment at higher temperatures further enhances surface diffusion, which leads to a restructuring of the surface that is completed before the graphene layer forms. As an additional advantage, the graphene domains are larger (up to 50 × 50 μm2) than in the initial process (3 × 50 μm2).
Of course, the technology always undergoes changes to improve the graphene quality. One of them is the so-called confinement controlled sublimation (CCS) method. Here, the SiC sample is placed in a graphite enclosure equipped with a small leak. By controlling the evaporation rate of the silicon through this leak, a regulation of the graphene growth rate is possible. Therefore, high-quality graphene layers are obtained in a near-equilibrium environment.
The quality of the graphene can also be controlled by annealing in the presence of an external silicon flux. By using disilane gas, the silicon vapor pressure can be controlled.
== Crystallographic orientation between the SiC and graphene layers ==
SiC has two polar faces, and growth can therefore take place on either the SiC(0001) (silicon-terminated) face or the SiC(000-1) (carbon-terminated) face of 4H-SiC and 6H-SiC wafers. The different faces result in different growth rates and electronic properties.
=== Silicon-terminated face ===
On the SiC(0001) face, large-area single-crystalline monolayer graphene can be grown at a low growth rate, and these graphene layers have good reproducibility. In this case, the graphene layer does not grow directly on top of the substrate but on a complex (6√3 × 6√3)R30° structure. This structure is non-conducting, rich in carbon and partially covalently bonded to the underlying SiC substrate; it therefore provides a template for subsequent graphene growth and acts as an electronic ″buffer layer″. The buffer layer forms a non-interacting interface with the graphene layer on top of it, so monolayer graphene grown on SiC(0001) is electronically identical to a freestanding graphene monolayer. By changing growth parameters such as annealing temperature and time, the number of graphene layers on SiC(0001) can be controlled. The graphene always maintains its epitaxial relationship with the SiC substrate, and the topmost graphene layer, which originates from the initial buffer layer, is continuous everywhere across the substrate steps and across the boundaries between regions with different numbers of graphene layers.
The buffer layer does not exhibit the intrinsic electronic structure of graphene but induces considerable n-doping in the overlying monolayer graphene film.
This is a source of electronic scattering and therefore leads to major problems for future electronic device applications based on SiC-supported graphene structures.
This buffer layer can be transformed into monolayer graphene by decoupling it from the SiC substrate using an intercalation process.
It is also possible to grow graphene off-axis on 6H-SiC(0001) wafers. Ouerghi et al. obtained a perfectly uniform graphene monolayer on the terraces by limiting the silicon sublimation rate with N2 and silicon fluxes in UHV at an annealing temperature of 1300 °C.
Growth on the 3C-SiC(111) face is also possible; for this, annealing temperatures above 1200 °C are necessary. First, the SiC loses silicon atoms and the top layer rearranges into a SiC(√3 × √3)R30° structure. The loss of further silicon atoms leads to a new intermediate, distorted SiC(3/2 × √3)R30° stage which almost matches the graphene (2 × 2) structure. With the loss of the residual silicon atoms, this evolves into graphene. The first four layers of cubic SiC(111) are arranged in the same order as in SiC(0001), so the findings are applicable to both structures.
=== Carbon-terminated face ===
Growth on the SiC(000-1) (carbon-terminated) face is much faster than on the SiC(0001) (silicon-terminated) face. The number of layers is also higher, around 5 to 100, and the films are polycrystalline. In early reports, the regions of graphene growth were described as ″islands″, since they appear in microscopy images as pockets of graphene on the substrate surface.
Hite et al., however, found that these islands are positioned at a lower level than the surrounding surface and referred to them as graphene-covered basins (GCBs). The suggestion is that crystallographic defects in the substrate act as nucleation sites for these GCBs. During the growth of the graphene layers, the GCBs coalesce with each other. Because of their different possible orientations, sizes and thicknesses, the resulting graphene film contains misoriented grains of varying thickness, leading to large orientational disorder. When graphene is grown on the carbon-terminated face, every layer is rotated with respect to the previous one by an angle between 0° and 30° relative to the substrate. Because of this, the symmetry between the atoms in the unit cell is not broken in multilayers, and every layer has the electronic properties of an isolated graphene monolayer.
== Evaluation of number of graphene layers ==
To optimize the growth conditions, it is important to know the number of graphene layers. This number can be determined by using the quantized oscillations of the electron reflectivity. Electrons have a wave character: when they impinge on the graphene surface, they can be reflected either from the graphene surface or from the graphene–SiC interface, and the reflected waves can interfere with each other. The electron reflectivity therefore changes periodically as a function of the incident electron energy and the FLG thickness; for example, thinner FLG gives longer oscillation periods. The most suitable technique for these measurements is low-energy electron microscopy (LEEM).
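A simple way to picture these oscillations is a Fabry–Pérot-like phase model: the electron wave accumulates a round-trip phase 2k(E)·d between the graphene surface and the graphene–SiC interface, and reflectivity extrema occur whenever that phase passes through a multiple of 2π. The sketch below evaluates this toy model only to illustrate why thinner films give longer oscillation periods; the free-electron dispersion, the inner-potential value and the interlayer spacing are simplifying assumptions, not parameters from an actual LEEM analysis.

```python
import math

HBAR = 1.054571817e-34    # J*s
M_E = 9.1093837015e-31    # kg
EV = 1.602176634e-19      # J
LAYER_SPACING = 0.335e-9  # m, assumed graphite interlayer distance

def k_in_film(energy_ev: float, inner_potential_ev: float = 10.0) -> float:
    """Free-electron wavevector inside the film (toy model with an assumed inner potential)."""
    return math.sqrt(2 * M_E * (energy_ev + inner_potential_ev) * EV) / HBAR

def round_trip_phase(energy_ev: float, n_layers: int) -> float:
    """Round-trip phase 2*k*d accumulated across an n-layer graphene film."""
    return 2 * k_in_film(energy_ev) * n_layers * LAYER_SPACING

def count_extrema(n_layers: int, e_min: float = 0.0, e_max: float = 8.0, steps: int = 4000) -> int:
    """Count how often the round-trip phase crosses a multiple of 2*pi in [e_min, e_max] eV;
    more crossings mean a shorter oscillation period."""
    crossings = 0
    prev = round_trip_phase(e_min, n_layers) // (2 * math.pi)
    for i in range(1, steps + 1):
        e = e_min + (e_max - e_min) * i / steps
        cur = round_trip_phase(e, n_layers) // (2 * math.pi)
        if cur != prev:
            crossings += 1
        prev = cur
    return crossings

for n in (2, 4, 8):
    print(f"{n} layers: ~{count_extrema(n)} reflectivity extrema between 0 and 8 eV")
```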
A fast method to evaluate the number of layers is to use an optical microscope in combination with contrast-enhancing techniques. Single-layer graphene domains and substrate terraces can be resolved on the surface of SiC. The method is particularly suitable for quick evaluation of the surface.
== Applications ==
Epitaxial graphene on SiC is considered a potential material for high-end electronics. It is expected to surpass silicon in terms of key parameters such as feature size, speed and power consumption and is therefore one of the most promising materials for future applications.
=== Saturable absorber ===
Graphene grown by thermal decomposition on a two-inch 6H-SiC wafer can be used to modulate a high-energy pulsed laser. Because of its saturable absorption, the graphene can be used as a passive Q-switch.
=== Metrology ===
The quantum Hall effect in epitaxial graphene can serve as a practical standard for electrical resistance. The potential of epitaxial graphene on SiC for quantum metrology has been demonstrated since 2010, with quantum Hall resistance quantization accurate to three parts per billion in monolayer epitaxial graphene. Over the years, parts-per-trillion accuracy in the Hall resistance quantization and giant quantum Hall plateaus have been demonstrated. Developments in encapsulation and doping of epitaxial graphene have led to the commercialisation of epitaxial graphene quantum resistance standards.
=== Hall sensors ===
=== Other ===
The graphene on SiC can be also an ideal platform for structured graphene (transducers, membranes).
== Open problems ==
Limitations in terms of wafer sizes, wafer costs and availability of micromachining processes have to be taken into account when using SiC wafers.
Another problem is directly coupled with the advantage of growing graphene directly on a commercially available semiconducting or semi-insulating substrate: there is as yet no perfect method to transfer the graphene to other substrates. For transferable graphene, epitaxial growth on copper is a promising method. Carbon's solubility in copper is extremely low, so mainly surface diffusion and nucleation of carbon atoms are involved. Because of this and the growth kinetics, the graphene thickness is limited to predominantly a monolayer. The big advantage is that the graphene can be grown on Cu foil and subsequently transferred to other substrates, for example SiO2.
== See also ==
Carbide-derived carbon
== References == | Wikipedia/Epitaxial_graphene_growth_on_silicon_carbide |
Nature Materials is a monthly peer-reviewed scientific journal published by Nature Portfolio. It was launched in September 2002. Vincent Dusastre is the launching and current chief editor.
== Aims and scope ==
Nature Materials is focused on all topics within the combined disciplines of materials science and engineering. Topics published in the journal are presented from the view of the impact that materials research has on other scientific disciplines such as (for example) physics, chemistry, and biology. Coverage in this journal encompasses fundamental research and applications from synthesis to processing, and from structure to composition. Coverage also includes basic research and applications of properties and performance of materials. Materials are specifically described as "substances in the condensed states (liquid, solid, colloidal)", and which are "designed or manipulated for technological ends."
Furthermore, Nature Materials functions as a forum for the materials science community. It publishes interdisciplinary research results from across all areas of materials research and fosters exchange between scientists working in the different disciplines. The readership of this journal consists of scientists, in both academia and industry, involved in either developing materials or working with materials-related concepts. Finally, Nature Materials regards materials research as significantly influential on the development of society.
== Coverage ==
Research areas covered in the journal include:
Engineering and structural materials (metals, alloys, ceramics, composites)
Organic and soft materials (glasses, colloids, liquid crystals, polymers)
Bio-inspired, biomedical and biomolecular materials
Optical, photonic and optoelectronic materials
Magnetic materials
Materials for electronics
Superconducting materials
Catalytic and separation materials
Materials for energy
Nanoscale materials and processes
Computation, modelling and materials theory
Surfaces and thin films
Design, synthesis, processing and characterization techniques
In addition to primary research, Nature Materials also publishes review articles, news and views, research highlights about important papers published in other journals, commentaries, correspondence, interviews and analysis of the broad field of materials science.
== Abstracting and indexing ==
Nature Materials is indexed in the following databases:
Chemical Abstracts Service – CASSI
Science Citation Index
Science Citation Index Expanded
Current Contents – Physical, Chemical & Earth Sciences
BIOSIS Previews
== References ==
== External links ==
Nature Materials
Nature Materials editors | Wikipedia/Nature_Materials |
GrapheneOS is an open-source, privacy- and security-focused Android operating system that runs on selected Google Pixel devices, including smartphones, tablets and foldables.
== History ==
The main developer, Daniel Micay, originally worked on CopperheadOS, until a schism over software licensing between the co-founders of Copperhead Limited led to Micay's dismissal from the company in 2018. After the incident, Micay continued working on the Android Hardening project, which was renamed as GrapheneOS and announced in April 2019.
In March 2022, two GrapheneOS apps, "Secure Camera" and "Secure PDF Viewer", were released on the Google Play Store.
Also in March 2022, GrapheneOS reportedly released Android 12L for Google Pixel devices before Google did, second to ProtonAOSP.
In May 2023, Micay announced he would step down as lead developer of GrapheneOS and as a GrapheneOS Foundation director. As of September 2024, the GrapheneOS Foundation's Federal Corporation Information lists Micay as one of its directors.
== Features ==
=== Sandboxed Google Play ===
By default Google apps are not installed with GrapheneOS, but users can install a sandboxed version of Google Play Services from the pre-installed "App Store". The sandboxed Google Play Services allows access to the Google Play Store and apps dependent on it, along with features including push notifications and in-app payments.
Around January 2024, Android Auto support was added to GrapheneOS, allowing users to install it via the App Store. The sandboxed Google Play compatibility layer settings add a new permission menu with four toggles for granting the minimal access required for wired Android Auto, wireless Android Auto, audio routing and phone calls.
=== Security and privacy features ===
GrapheneOS introduces revocable network access and sensors permission toggles for each installed app. GrapheneOS also introduces a PIN scrambling option for the lock screen.
GrapheneOS randomizes Wi-Fi MAC addresses per connection (to a Wi-Fi network) by default, instead of the Android per-network default.
GrapheneOS includes automatic phone reboot when not in use, automatic WiFi and Bluetooth disabling, and system-level disabling of USB-C port, microphone, camera, and sensors for apps. Additionally, it offers the "Contact Scopes" feature, which allows users to select which contacts an app can access.
A hardened Chromium-based web browser and WebView implementation known as Vanadium is developed by GrapheneOS and included as the default web browser and WebView. It includes automatic updates, process- and site-level sandboxing, and built-in ad and tracker blocking.
Auditor, a hardware-based attestation app developed by GrapheneOS to "provide strong hardware-based verification of the authenticity and integrity of the firmware/software on the device", is also included.
Apps like Secure Camera and Secure PDF Viewer offer advanced privacy features such as automatic removal of Exif metadata and protection against malicious code in PDF files.
== Installation ==
GrapheneOS currently is only compatible with Google Pixel devices, due to specific requirements that GrapheneOS has for adding support for a new device, including an unlockable bootloader and proper implementation of verified boot.
The operating system can be installed from various platforms, including Windows, macOS, Linux, and Android devices. Two installation methods are available: a WebUSB-based installer, recommended for most users, and a command-line based installer, intended for more experienced users.
== Reception ==
In 2019, Georg Pichler of Der Standard, and other news sources, quoted Edward Snowden saying on Twitter, "If I were configuring a smartphone today, I'd use Daniel Micay's GrapheneOS as the base operating system."
In discussing why services should not force users to install proprietary apps, Lennart Mühlenmeier of netzpolitik.org suggested GrapheneOS as an alternative to Apple or Google.
Svět Mobilně and Webtekno repeated the suggestions that GrapheneOS is a good security- and privacy-oriented replacement for standard Android.
In a detailed review of GrapheneOS for Golem.de, Moritz Tremmel and Sebastian Grüner said they were able to use GrapheneOS similarly to other Android systems, while enjoying more freedom from Google, without noticing differences from "additional memory protection, but that's the way it should be." They concluded GrapheneOS cannot change how "Android devices become garbage after three years at the latest", but "it can better secure the devices during their remaining life while protecting privacy."
In June 2021, reviews of GrapheneOS, KaiOS, AliOS, and Tizen OS, were published in Cellular News. The review of GrapheneOS called it "arguably the best mobile operating system in terms of privacy and security." However, they criticized GrapheneOS for its inconvenience to users, saying "GrapheneOS is completely de-Googled and will stay that way forever—at least according to the developers." They also noticed a "slight performance decrease" and said "it might take two full seconds for an app—even if it’s just the Settings app—to fully load."
In March 2022, writing for How-To Geek, Joe Fedewa said that Google apps were not included due to concerns over privacy, and that GrapheneOS also did not include a default app store. Instead, Fedewa suggested, F-Droid could be used.
In 2022, Jonathan Lamont of MobileSyrup reviewed GrapheneOS installed on a Pixel 3, after one week of use. He called GrapheneOS install process "straightforward" and concluded that he liked GrapheneOS overall, but criticized the post-install as "often not a seamless experience like using an unmodified Pixel or an iPhone", attributing his experience to his "over-reliance on Google apps" and the absence of some "smart" features in GrapheneOS default keyboard and camera apps, in comparison to software from Google.
In his initial impressions post a week prior, Lamont said that after an easy install there were issues with permissions for Google's Messages app, and difficulty importing contacts; Lamont then concluded, "Anyone looking for a straightforward experience may want to avoid GrapheneOS or other privacy-oriented Android experiences since the privacy gains often come at the expense of convenience and ease of use."
In July 2022, Charlie Osborne of ZDNET suggested that individuals who suspect a Pegasus infection use a secondary device with GrapheneOS for secure communication.
In January 2023, a Swiss startup company, Apostrophy AG, announced AphyOS, which is a subscription fee-based Android operating system and services "built atop" GrapheneOS.
== See also ==
Comparison of mobile operating systems
List of custom Android distributions
Security-focused operating system
== References and notes ==
== External links ==
Official website | Wikipedia/GrapheneOS |
Graphene nanoribbons (GNRs, also called nano-graphene ribbons or nano-graphite ribbons) are strips of graphene with width less than 100 nm. Graphene ribbons were introduced as a theoretical model by Mitsutaka Fujita and coauthors to examine the edge and nanoscale size effect in graphene. Some earlier studies of graphitic ribbons within the area of conductive polymers in the field of synthetic metals include works by Kazuyoshi Tanaka, Tokio Yamabe and co-authors, Steven Kivelson and Douglas J. Klein. While Tanaka, Yamabe and Kivelson studied so-called zigzag and armchair edges of graphite, Klein introduced a different edge geometry that is frequently referred to as a bearded edge.
== Production ==
=== Nanotomy ===
Large quantities of width-controlled GNRs can be produced via graphite nanotomy, where applying a sharp diamond knife on graphite produces graphite nanoblocks, which can then be exfoliated to produce GNRs as shown by Vikas Berry. GNRs can also be produced by "unzipping" or axially cutting nanotubes. In one such method multi-walled carbon nanotubes were unzipped in solution by action of potassium permanganate and sulfuric acid. In another method GNRs were produced by plasma etching of nanotubes partly embedded in a polymer film. More recently, graphene nanoribbons were grown onto silicon carbide (SiC) substrates using ion implantation followed by vacuum or laser annealing. The latter technique allows any pattern to be written on SiC substrates with 5 nm precision.
=== Epitaxy ===
GNRs were grown on the edges of three-dimensional structures etched into silicon carbide wafers. When the wafers are heated to approximately 1,000 °C (1,270 K; 1,830 °F), silicon is preferentially driven off along the edges, forming nanoribbons whose structure is determined by the pattern of the three-dimensional surface. The ribbons had perfectly smooth edges, annealed by the fabrication process. Electron mobility measurements surpassing one million cm2 V−1 s−1 correspond to a sheet resistance of one ohm per square — two orders of magnitude lower than in two-dimensional graphene.
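The connection between mobility and sheet resistance quoted here follows from the standard two-dimensional Drude relation R_s = 1/(n e μ). The snippet below evaluates it; the carrier density used is an assumed, illustrative value (the actual density in those sidewall ribbons is not given in this article), so the result should be read only as an order-of-magnitude check.

```python
E_CHARGE = 1.602176634e-19  # C

def sheet_resistance(mobility_cm2_per_vs: float, carrier_density_per_cm2: float) -> float:
    """Sheet resistance (ohms per square) from the 2D Drude formula R_s = 1/(n*e*mu)."""
    return 1.0 / (carrier_density_per_cm2 * E_CHARGE * mobility_cm2_per_vs)

# Assumed illustrative values: mu = 1e6 cm^2/(V*s), n = 6e12 cm^-2
print(f"R_s ~ {sheet_resistance(1e6, 6e12):.2f} ohm/sq")  # roughly 1 ohm per square
```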
=== Chemical vapor deposition ===
Nanoribbons narrower than 10 nm grown on a germanium wafer act like semiconductors, exhibiting a band gap. Inside a reaction chamber, using chemical vapor deposition, methane is used to deposit hydrocarbons on the wafer surface, where they react with each other to produce long, smooth-edged ribbons. The ribbons were used to create prototype transistors. At a very slow growth rate, the graphene crystals naturally grow into long nanoribbons on a specific germanium crystal facet. By controlling the growth rate and growth time, the researchers achieved control over the nanoribbon width.
Recently, researchers from SIMIT (Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences) reported a strategy to grow graphene nanoribbons with controlled widths and smooth edges directly onto dielectric hexagonal boron nitride (h-BN) substrates. The team used nickel nanoparticles to etch monolayer-deep, nanometre-wide trenches into h-BN, and subsequently filled them with graphene using chemical vapour deposition. Modifying the etching parameters allows the trench width to be tuned to less than 10 nm, and the resulting sub-10-nm ribbons display bandgaps of almost 0.5 eV. Integrating these nanoribbons into field-effect transistor devices reveals on–off ratios greater than 10^4 at room temperature, as well as high carrier mobilities of ~750 cm2 V−1 s−1.
=== Multistep nanoribbon synthesis ===
A bottom-up approach was investigated. In 2017 dry contact transfer was used to press a fiberglass applicator coated with a powder of atomically precise graphene nanoribbons on a hydrogen-passivated Si(100) surface under vacuum. 80 of 115 GNRs visibly obscured the substrate lattice with an average apparent height of 0.30 nm. The GNRs do not align to the Si lattice, indicating a weak coupling. The average bandgap over 21 GNRs was 2.85 eV with a standard deviation of 0.13 eV.
The method unintentionally overlapped some nanoribbons, allowing the study of multilayer GNRs. Such overlaps could be formed deliberately by manipulation with a scanning tunneling microscope. Hydrogen depassivation left no band gap. Covalent bonds between the Si surface and the GNR lead to metallic behavior. The Si surface atoms move outward, and the GNR changes from flat to distorted, with some C atoms moving in toward the Si surface.
== Electronic structure ==
The electronic states of GNRs depend largely on the edge structure (armchair or zigzag). In a zigzag edge, each successive edge segment is at the opposite angle to the previous one. In an armchair edge, each pair of segments is rotated by 120°/−120° with respect to the prior pair. Zigzag edges provide an edge-localized state with non-bonding molecular orbitals near the Fermi energy. GNRs are expected to show large changes in optical and electronic properties due to quantization.
Calculations based on tight binding theory predict that zigzag GNRs are always metallic while armchairs can be either metallic or semiconducting, depending on their width. However, density functional theory (DFT) calculations show that armchair nanoribbons are semiconducting with an energy gap scaling with the inverse of the GNR width. Experiments verified that energy gaps increase with decreasing GNR width. Graphene nanoribbons with controlled edge orientation have been fabricated by scanning tunneling microscope (STM) lithography. Energy gaps up to 0.5 eV in a 2.5 nm wide armchair ribbon were reported.
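The width dependence of the armchair-GNR gap can be reproduced with a minimal nearest-neighbour tight-binding sketch like the one below. It builds the 2N-atom unit cell of an N-dimer-line armchair ribbon, diagonalizes the Bloch Hamiltonian on a k-grid, and prints the resulting gap. The hopping energy t = 2.7 eV is an assumed, commonly used textbook value, and edge relaxation and third-neighbour hoppings (which open small gaps even for N = 3m + 2) are deliberately ignored, so this reproduces only the idealized metallic/semiconducting pattern.

```python
import numpy as np

T_HOP = 2.7        # eV, assumed nearest-neighbour hopping
A_CC = 1.0         # carbon-carbon distance (arbitrary length units)
PERIOD = 3 * A_CC  # translation period of an armchair ribbon along its axis

def agnr_sites(n_dimer_lines: int) -> np.ndarray:
    """Atom positions in one unit cell of an N-dimer-line armchair GNR (2N atoms)."""
    sites = []
    for j in range(n_dimer_lines):
        y = j * np.sqrt(3) / 2 * A_CC
        xs = (0.0, 1.0) if j % 2 == 0 else (1.5, 2.5)
        sites.extend((x, y) for x in xs)
    return np.array(sites)

def band_gap(n_dimer_lines: int, n_k: int = 201) -> float:
    """Band gap (eV) at half filling from the nearest-neighbour Bloch Hamiltonian."""
    sites = agnr_sites(n_dimer_lines)
    n_sites = len(sites)
    gap = np.inf
    for k in np.linspace(0.0, np.pi / PERIOD, n_k):
        h = np.zeros((n_sites, n_sites), dtype=complex)
        for i in range(n_sites):
            for j in range(n_sites):
                for cell in (-1, 0, 1):  # neighbouring unit cells along the ribbon axis
                    dx = sites[j, 0] + cell * PERIOD - sites[i, 0]
                    dy = sites[j, 1] - sites[i, 1]
                    if abs(np.hypot(dx, dy) - A_CC) < 1e-3:  # nearest-neighbour bond
                        h[i, j] += -T_HOP * np.exp(1j * k * cell * PERIOD)
        energies = np.linalg.eigvalsh(h)
        # The spectrum is symmetric about E = 0, so the gap is twice the smallest |E|.
        gap = min(gap, 2 * np.min(np.abs(energies)))
    return gap

for n in range(4, 11):
    tag = "metallic (N = 3m + 2)" if n % 3 == 2 else "semiconducting"
    print(f"N = {n:2d}: gap = {band_gap(n):.3f} eV  ({tag})")
```

Running the loop shows the gap shrinking with increasing width for the semiconducting families and vanishing for N = 5 and N = 8, the N = 3m + 2 pattern predicted by simple tight-binding theory.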
Zigzag nanoribbons are semiconducting and present spin-polarized edges. Their gap opens thanks to an unusual antiferromagnetic coupling between the magnetic moments at opposite edge carbon atoms. This gap size is inversely proportional to the ribbon width, and its behavior can be traced back to the spatial distribution properties of the edge-state wave functions and the mostly local character of the exchange interaction that originates the spin polarization. Therefore, the quantum confinement, inter-edge superexchange, and intra-edge direct exchange interactions in zigzag GNRs are important for their magnetism and band gap. The edge magnetic moment and band gap of zigzag GNRs are inversely proportional to the electron/hole concentration, and they can be controlled by alkaline adatoms.
Their 2D structure, high electrical and thermal conductivity and low noise also make GNRs a possible alternative to copper for integrated circuit interconnects. Research is exploring the creation of quantum dots by changing the width of GNRs at select points along the ribbon, creating quantum confinement. Heterojunctions inside single graphene nanoribbons have been realized, among which structures that have been shown to function as tunnel barriers.
Graphene nanoribbons possess semiconductive properties and may be a technological alternative to silicon semiconductors capable of sustaining microprocessor clock speeds in the vicinity of 1 THz. Field-effect transistors less than 10 nm wide have been created with GNRs ("GNRFETs") with an Ion/Ioff ratio greater than 10^6 at room temperature.
=== Electronic structure in external fields ===
The electronic properties in external field such as static electric or magnetic field have been extensively studied. The various levels of the tight-binding model as well as first principles calculations have been employed for such studies.
For zigzag nanoribbons the most interesting effect under an external electric field is the induction of half-metallicity. In a simple tight-binding model the effect of an external in-plane field applied across the ribbon width is a band gap opening between the edge states. However, first-principles spin-polarized calculations demonstrate that the spin-up and spin-down species behave differently: for one spin projection the band gap closes whereas for the other it increases. As a result, at some critical value of the field, the ribbon becomes metallic for one spin projection (up or down) and insulating for the other. In this way, half-metallicity that may be useful for spintronics applications is induced.
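As a rough, spinless illustration of the tight-binding statement above (not of the full spin-polarized calculation), the zigzag ribbon at fixed momentum k can be mapped onto a chain running across the ribbon width with hoppings alternating between 2t·cos(ka/2) and t; adding a linear on-site potential that mimics the in-plane field then splits the otherwise degenerate edge levels. The field strength below is an arbitrary illustrative value.

import numpy as np

# Spinless toy model: zigzag ribbon at momentum k, mapped to a chain across the width.
# Assumed values (illustrative only): t = 2.7 eV, 8 zigzag chains, field in eV per angstrom.
t, acc = 2.7, 1.42
n_sites = 16                                           # 2 atoms per zigzag chain
y = np.cumsum([0.0] + [acc / 2 if r % 2 == 0 else acc for r in range(n_sites - 1)])

def chain_hamiltonian(ka, field):
    hop = [2 * t * np.cos(ka / 2) if b % 2 == 0 else t for b in range(n_sites - 1)]
    H = np.diag(field * (y - y.mean()))                # transverse potential across the width
    return H + np.diag(hop, 1) + np.diag(hop, -1)

for field in (0.0, 0.02):
    E = np.linalg.eigvalsh(chain_hamiltonian(0.95 * np.pi, field))
    gap = E[n_sites // 2] - E[n_sites // 2 - 1]        # splitting of the two edge levels
    print(f"field = {field} eV/A -> edge-state gap = {gap:.3f} eV")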
Armchair ribbons behave differently from their zigzag siblings. They usually feature a band gap that closes under an external in-plane electric field. At some critical value of the field the gap fully closes, forming a Dirac-cone linear crossing, see Fig. 9d in Ref. This intriguing result has been corroborated by density functional theory calculations and explained in a simplified tight-binding model. It does not depend on the chemical composition of the ribbon edges; for example, both fluorine and chlorine atoms can be used for ribbon edge passivation instead of the usual hydrogen. The effect can also be induced by chemical co-doping, i.e. by placing nitrogen and boron atoms atop the ribbon at its opposite sides. In model terms, the effect can be explained by a pair of cis-polyacetylene chains placed at a distance corresponding to the ribbon width and subjected to different gate potentials.
Bearded ribbons with Klein-type edges behave in the tight-binding approximation similarly to zigzag ribbons: the band gap opens between the edge states. Due to the chemical instability of this edge configuration, such ribbons are normally excluded from publications. Whether they can, at least hypothetically, exhibit half-metallicity in external in-plane fields similar to zigzag nanoribbons is not yet clear.
A vast family of cousins of the above ribbons, which have two similar edges, is the class of ribbons combining non-equivalent edge geometries in a single ribbon. One of the simplest examples is a half-bearded nanoribbon. Such ribbons could, in principle, be more stable than nanoribbons with two bearded edges because they could be realized via asymmetric hydrogenation of zigzag ribbons. In the nearest-neighbor tight-binding model and in non-spin-polarized density functional theory calculations such ribbons exhibit a chiral anomaly structure. The fully flat band of a pristine half-bearded nanoribbon subjected to an in-plane external electric field demonstrates unidirectional linear dispersions with group velocities of opposite directions around each of the two Dirac points. At high fields, the linear bands around the Dirac points transform into wiggly cubic-like dispersions. This nontrivial behavior is favorable for field-tunable dissipationless transport. The drastic transformation from a fully flat to a linear and then cubic-like band allows for a continuum k·p model description based on the Dirac equation. The Dirac equation, supplemented with suitable boundary conditions breaking the inversion/mirror symmetry and a single field-strength parameter, admits an analytic solution in terms of Airy-like special functions.
== Mechanical properties ==
While it is difficult to prepare graphene nanoribbons with precise geometry for a real tensile test due to the limited resolution at the nanometer scale, the mechanical properties of the two most common graphene nanoribbons (zigzag and armchair) have been investigated by computational modeling using density functional theory, molecular dynamics, and the finite element method. Since the two-dimensional graphene sheet, with its strong bonding, is known to be one of the stiffest materials, the Young's modulus of graphene nanoribbons also exceeds 1 TPa.
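The ~1 TPa figure and the sheet-level stiffness values quoted for graphene later in this article (in N/m) are related through an assumed effective thickness, conventionally taken as the graphite interlayer spacing of 0.335 nm; the conversion is a one-line calculation, sketched here with that assumption made explicit.

# Converting a 2D (sheet) modulus in N/m to an effective 3D modulus, assuming the
# conventional graphene thickness of 0.335 nm (a convention, not a measured thickness).
E_2d = 342.0              # N/m, nanoindentation value for graphene quoted later in this article
thickness = 0.335e-9      # m
E_3d = E_2d / thickness   # Pa
print(f"{E_3d / 1e12:.2f} TPa")   # ~1.02 TPa, consistent with the >1 TPa figure above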
The Young's modulus, shear modulus, and Poisson's ratio of graphene nanoribbons vary with size (length and width) and shape. These mechanical properties are anisotropic and are usually discussed for two in-plane directions, parallel and perpendicular to the one-dimensional periodic direction. The mechanical properties differ slightly from those of two-dimensional graphene sheets because of the distinct geometry, bond length, and bond strength, particularly at the edges of graphene nanoribbons. It is possible to tune these nanomechanical properties with further chemical doping that changes the bonding environment at the edges. As the width of a graphene nanoribbon increases, its mechanical properties converge toward the values measured on graphene sheets.
One analysis predicted a high Young's modulus of around 1.24 TPa for armchair graphene nanoribbons by the molecular dynamics method. It also showed nonlinear elastic behavior, with higher-order terms in the stress-strain curve. In the higher-strain region, even higher-order terms (>3) would be needed to fully describe the nonlinear behavior. Other scientists also reported nonlinear elasticity by the finite element method, and found that the Young's modulus, tensile strength, and ductility of armchair graphene nanoribbons are all greater than those of zigzag graphene nanoribbons. Another report predicted linear elasticity for strains between −0.02 and 0.02 on zigzag graphene nanoribbons by a density functional theory model. Within the linear region, the electronic properties would be relatively stable under the slightly changing geometry. The energy gap changes from −0.02 eV to 0.02 eV for strains between −0.02 and 0.02, which suggests feasibility for future engineering applications.
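Such nonlinear elastic responses are commonly parametrized as σ = Eε + Dε², with E the Young's modulus and D a (typically negative) third-order elastic constant; fitting a quadratic to a computed stress-strain curve recovers both. The sketch below uses placeholder numbers purely to show the fitting step, not values from the studies cited above.

import numpy as np

# Illustrative fit of a nonlinear stress-strain curve, sigma = E*eps + D*eps^2.
E_true, D_true = 1.0e3, -2.0e3                 # GPa, placeholder values
eps = np.linspace(0.0, 0.15, 31)
sigma = E_true * eps + D_true * eps**2         # synthetic "computed" data
D_fit, E_fit, _ = np.polyfit(eps, sigma, 2)    # polyfit returns highest-order coefficient first
print(f"E = {E_fit:.0f} GPa, D = {D_fit:.0f} GPa")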
The tensile strength of armchair graphene nanoribbons is 175 GPa, with a large ductility of 30.26% fracture strain, which exceeds the 130 GPa and 25% measured experimentally on monolayer graphene. As expected, graphene nanoribbons with smaller widths break down sooner, since the fraction of weaker edge bonds is higher. When the tensile strain on a graphene nanoribbon reaches its maximum, C-C bonds start to break and then form much larger rings, making the material weaker until fracture.
== Optical properties ==
The earliest numerical results on the optical properties of graphene nanoribbons were obtained by Lin and Shyu in 2000. The different selection rules for optical transitions in graphene nanoribbons with armchair and zigzag edges were reported. These results were supplemented by a comparative study of zigzag nanoribbons with single-wall armchair carbon nanotubes by Hsu and Reichl in 2007. It was demonstrated that the selection rules in zigzag ribbons are different from those in carbon nanotubes and that the eigenstates in zigzag ribbons can be classified as either symmetric or antisymmetric. Also, it was predicted that edge states should play an important role in the optical absorption of zigzag nanoribbons. Optical transitions between the edge and bulk states should enrich the low-energy region (below 3 eV) of the absorption spectrum with strong absorption peaks. An analytical derivation of the numerically obtained selection rules was presented in 2011. The selection rule for incident light polarized parallel (longitudinally) to the zigzag ribbon axis is that ΔJ = J2 − J1 is odd, where J1 and J2 enumerate the energy bands, while for perpendicular polarization ΔJ is even. Intraband (intersubband) transitions between the conduction or valence sub-bands are also allowed in parallel polarization if ΔJ is even. For perpendicular polarization the intraband transitions between the conduction or valence sub-bands are allowed when ΔJ is odd.
For graphene nanoribbons with armchair edges the selection rules are that ΔJ is odd for perpendicular polarization and ΔJ = 0 for parallel polarization of the incident light. Similar to carbon nanotubes, the intersubband transitions in parallel polarization are forbidden for armchair graphene nanoribbons, though they are allowed for perpendicular polarization with ΔJ being odd. Since the energy bands of armchair nanoribbons and zigzag carbon nanotubes can be aligned when Nt = 2Nr + 4, where Nt and Nr are the numbers of atoms in the unit cell of the tube and ribbon, respectively, the selection rules for parallel polarization give rise to an exact correlation between optical absorption peaks of these two types of nanostructures.
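A compact way to restate these parity rules is as a small predicate on the band indices; the sketch below simply encodes the rules as written above (the function name and interface are invented for illustration, and no electronic-structure calculation is performed).

def transition_allowed(J1, J2, edge, polarization, intraband=False):
    """Parity selection rules for GNR optical transitions, as stated in the text above."""
    dJ = J2 - J1
    if edge == "zigzag":
        if not intraband:                        # transitions between valence and conduction bands
            return dJ % 2 == (1 if polarization == "parallel" else 0)
        return dJ % 2 == (0 if polarization == "parallel" else 1)
    if edge == "armchair":
        if not intraband:
            return dJ == 0 if polarization == "parallel" else dJ % 2 == 1
        return polarization == "perpendicular" and dJ % 2 == 1
    raise ValueError("edge must be 'zigzag' or 'armchair'")

print(transition_allowed(1, 2, "zigzag", "parallel"))      # True: delta J is odd
print(transition_allowed(1, 3, "armchair", "parallel"))    # False: only delta J = 0 is allowed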
Despite the different selection rules in single-wall armchair carbon nanotubes and zigzag graphene nanoribbons, a hidden correlation of the absorption peaks originating from the bulk states is predicted. The correlation of the absorption peaks in armchair tubes and zigzag ribbons takes place when the matching condition Nt = 2Nr + 4 holds, even though the energy bands of such a tube and ribbon do not align precisely. A similar correlation between bulk absorption peaks can be obtained for armchair nanotubes and nanoribbons with bearded edges, but in this case the matching condition alters to Nt = 2Nr + 2. These results, obtained within the nearest-neighbor approximation of the tight-binding model, have been corroborated by first-principles density functional theory calculations for zigzag nanoribbons and armchair tubes taking into account exchange and correlation effects.
First-principles calculations with quasiparticle corrections and many-body effects have explored the electronic and optical properties of graphene-based materials. With GW calculations, the properties of graphene-based materials are accurately investigated, including graphene nanoribbons, edge- and surface-functionalized armchair graphene nanoribbons, and scaling properties in armchair graphene nanoribbons.
== Analyses ==
Graphene nanoribbons can be analyzed by scanning tunneling microscopy, Raman spectroscopy, infrared spectroscopy, and X-ray photoelectron spectroscopy. For example, the out-of-plane bending vibration of one C-H on one benzene ring, called SOLO, which is similar to the zigzag edge, has been reported to appear at 899 cm−1 for zigzag GNRs, whereas that of two C-H on one benzene ring, called DUO, which is similar to the armchair edge, has been reported to appear at 814 cm−1 for armchair GNRs, according to calculated IR spectra. However, analyses of graphene nanoribbons on substrates are difficult using infrared spectroscopy, even with a reflection absorption spectrometry method. Thus, a large quantity of graphene nanoribbons is necessary for infrared spectroscopy analyses.
== Reactivity ==
Zigzag edges are known to be more reactive than armchair edges, as observed in the dehydrogenation reactivities of compounds with zigzag edges (tetracene) and armchair edges (chrysene). Zigzag edges also tend to be oxidized more readily than armchair edges without gasification. Longer zigzag edges can be more reactive, as can be seen from the dependence of acene reactivity on length.
== Applications ==
=== Polymeric nanocomposites ===
Graphene nanoribbons and their oxidized counterparts called graphene oxide nanoribbons have been investigated as nano-fillers to improve the mechanical properties of polymeric nanocomposites. Increases in the mechanical properties of epoxy composites on loading of graphene nanoribbons were observed. An increase in the mechanical properties of biodegradable polymeric nanocomposites of poly(propylene fumarate) at low weight percentage was achieved by loading of oxidized graphene nanoribbons, fabricated for bone tissue engineering applications.
=== Contrast agent for bioimaging ===
Hybrid imaging modalities, such as photoacoustic (PA) tomography (PAT) and thermoacoustic (TA) tomography (TAT) have been developed for bioimaging applications. PAT/TAT combines advantages of pure ultrasound and pure optical imaging/radio frequency (RF), providing good spatial resolution, great penetration depth and high soft-tissue contrast. GNR synthesized by unzipping single- and multi-walled carbon nanotubes have been reported as contrast agents for photoacoustic and thermoacoustic imaging and tomography.
=== Catalysis ===
In catalysis, GNRs offer several advantageous features that make them attractive as catalysts or catalyst supports. Firstly, their high surface-to-volume ratio provides abundant active sites for catalytic reactions. This enhanced surface area enables efficient interaction with reactant molecules, leading to improved catalytic performance.
Secondly, the edge structure of GNRs plays a crucial role in catalysis. The zigzag and armchair edges of GNRs possess distinctive electronic properties, making them suitable for specific reactions. For instance, the presence of unsaturated carbon atoms at the edges can serve as active sites for adsorption and reaction of various molecules.
Moreover, GNRs can be functionalized or doped with heteroatoms to tailor their catalytic properties further. Functionalization with specific groups or doping with elements like silicon, nitrogen, boron, or transition metals can introduce additional active sites or modify the electronic structure, allowing for selective catalytic transformations.
== See also ==
== References ==
== External links ==
WOLFRAM Demonstrations Project: Electronic Band Structure of Armchair and Zigzag Graphene Nanoribbons
Graphene nanoribbons on arxiv.org | Wikipedia/Graphene_nanoribbon |
Graphyne is an allotrope of carbon. Although it has been studied in theoretical models, it is very difficult to synthesize and only small amounts of uncertain purity have been created. Its structure is a one-atom-thick planar sheet of sp- and sp2-bonded carbon atoms arranged in a crystal lattice. It can be seen as a lattice of benzene rings connected by acetylene bonds. The material is called graphyne-n when the benzene rings are connected by n sequential acetylene units, and graphdiyne for the particular case of n = 2 (diacetylene links).
Depending on the content of acetylene groups, graphyne can be considered a mixed hybridization, spk, where k can be 1 or 2, and thus differs from the hybridization of graphene (considered pure sp2) and diamond (pure sp3).
First-principles calculations showed that periodic graphyne structures and their boron nitride analogues are stable. The calculations used phonon dispersion curves and ab-initio finite temperature, quantum mechanical molecular dynamics simulations.
== History ==
Graphyne was first theoretically proposed by Baughman et al. in 1987. In 2010, Li et al. developed the first successful methodology for creating graphdiyne films using the Glaser–Hay cross-coupling reaction with hexaethynylbenzene. The proposed approach makes it possible to synthesize nanometer-scale graphdiyne and graphtetrayne, which lack long-range order. In 2019, Cui and co-workers reported on a mechanochemical technique for obtaining graphyne using benzene and calcium carbide. Although a gram-scale graphyne can be obtained using this approach, graphynes with long-range crystallinity over a large area remain elusive.
In 2022, synthesis of multi-layered γ‑graphyne was successfully performed through the polymerization of 1,3,5-tribromo-2,4,6-triethynylbenzene under Sonogashira coupling conditions. Near-infrared spectroscopy and cyclic voltammetry of the material determined the bandgap as 0.48 ± 0.05 eV, which agrees with the theoretical prediction for graphyne-based materials.
== Synthesis ==
Despite numerous efforts using different approaches, no synthesis method has been discovered that produces high-quality graphyne. The small impure amounts created to date do not allow characterization sufficient to verify the theoretical properties.
== Structure ==
Through the use of computer models, scientists have predicted several properties of the substance based on assumed geometries of the lattice. Its proposed structures are derived from inserting acetylene bonds in place of carbon-carbon single bonds in a graphene lattice. Graphyne is theorized to exist in multiple geometries, owing to the multiple possible arrangements of sp and sp2 hybridized carbon. The proposed geometries include a hexagonal lattice structure and a rectangular lattice structure. Of the theorized structures, the rectangular lattice of 6,6,12-graphyne may hold the most potential for future applications.
== Properties ==
Models predict that graphyne has the potential for Dirac cones on its double- and triple-bonded carbon atoms. Due to the Dirac cones, the conduction and valence bands meet in a linear fashion at a single point at the Fermi level. The advantage of this scheme is that electrons behave as if they have no mass, resulting in energies that are proportional to the momentum of the electrons. Like in graphene, hexagonal graphyne has electric properties that are direction-independent. However, due to the symmetry of the proposed rectangular 6,6,12-graphyne, the electric properties would change along different directions in the plane of the material. This unique feature of its symmetry allows graphyne to self-dope, meaning that it has two different Dirac cones lying slightly above and below the Fermi level. The self-doping effect of 6,6,12-graphyne can be effectively tuned by applying in-plane external strain.
Graphyne samples synthesized to date have shown a melting point of 250–300 °C and low reactivity in decomposition reactions with oxygen, heat, and light.
== Potential applications ==
It has been hypothesized that graphyne is preferable to graphene for specific applications owing to its particular energy structure, namely direction-dependent Dirac cones. The directional dependency of 6,6,12-graphyne could allow for electrical grating on the nanoscale. This could lead to the development of faster transistors and nanoscale electronic devices. Recently it was demonstrated that photoinduced electron transfer from electron-donating partners to γ-graphyne is favorable and occurs on nanosecond to sub-picosecond time scales.
== References ==
== External links ==
Rawat, Sachin (2022-08-05). "Graphene is a Nobel Prize-winning "wonder material." Graphyne might replace it". Big Think. Retrieved 2022-08-07.
Wang, Xiluan; Shi, Gaoquan (2015). "An introduction to the chemistry of graphene". Physical Chemistry Chemical Physics. 17 (43). Royal Society of Chemistry (RSC): 28484–28504. Bibcode:2015PCCP...1728484W. doi:10.1039/c5cp05212b. PMID 26465215. | Wikipedia/Graphyne |
Fluorographene (or perfluorographane, graphene fluoride) is a fluorocarbon derivative of graphene. It is a two dimensional carbon sheet of sp3 hybridized carbons, with each carbon atom bound to one fluorine. The chemical formula is (CF)n. In comparison, Teflon (polytetrafluoroethylene), -(CF2)n-, consists of carbon "chains" with each carbon bound to two fluorines.
Unlike fluorographene, graphene is unsaturated (sp2 hybridized) and completely carbon. The hydrocarbon analogue to fluorographene is sp3 hybridized graphane. Similar to other fluorocarbons (e.g. perfluorohexane), fluorographene is highly insulating. Fluorographene is thermally stable, resembling polytetrafluoroethylene; however, chemically it is reactive. It can be transformed back into graphene by reaction with potassium iodide at high temperatures. During reactions of fluorographene with NaOH and NaSH simultaneous reductive defluorination and substitution are observed. The reactivity of fluorographene represents a facile way towards graphene derivatives.
== Preparation ==
The material was first created in 2010 by growing graphene on copper foil exposed to xenon difluoride at 30 °C. It was discovered soon after that fluorographene could also be prepared by combining cleaved graphene on a gold grid while being exposed to xenon difluoride at 70 °C. Also in 2010 Withers et al. described exfoliation of fluorinated graphite (monolayer, 24% fluorination) and Cheng et al. reported reversible graphene fluorination. Stoichiometric fluorographene was also prepared by chemical exfoliation of graphite fluoride. It was also shown that graphene fluoride can be transformed back into graphene via reaction with iodine, which forms graphene iodide as a short lived intermediate.
== Structure ==
The structure of fluorographene can be derived from the structure of graphite monofluoride (CF)n, which consists of weakly bound stacked fluorographene layers, and its most stable conformation (predicted for the monocrystal) contains an infinite array of trans-linked cyclohexane chairs with covalent C–F bonds in an AB stacking sequence. The estimated C-F distance is 136-138 pm, C-C distance is 157-158 pm and the C-C-C angle is 110°. Possible fluorographene conformations have been extensively investigated computationally.
== Electronic properties ==
Fluorographene is considered a wide-gap semiconductor, because its I-V characteristics are strongly nonlinear with a nearly gate-independent resistance greater than 1 GΩ. In addition, fluorescence and NEXAFS measurements indicate a band gap higher than 3.8 eV. Theoretical calculations show that estimating the fluorographene band gap is a rather challenging task, as the GGA functional provides a band gap of 3.1 eV, the hybrid functional (HSE06) 4.9 eV, and the GW approximation 8.1 eV on top of PBE or 8.3 eV on top of HSE06. The optical transition calculated by the Bethe-Salpeter equation is equal to 5.1 eV and points to an extremely strong exciton binding energy of 1.9 eV. It has recently been demonstrated that using fluorographene as a passivation layer in field-effect transistors (FETs) featuring a graphene channel significantly increases the carrier mobility.
== Reaction ==
Fluorographene is susceptible to nucleophilic substitution and reductive defluorination, which makes it an extraordinary precursor material for the synthesis of numerous graphene derivatives. Both of these channels can be used to chemically manipulate fluorographene, and they can be tuned by suitable conditions, e.g., the solvent. In 2010 it was shown that fluorographene can be transformed to graphene by treatment with KI. Nucleophiles can substitute the fluorine atoms and induce partial or full defluorination. The fluorographene reactivity is triggered by point defects. Knowledge of fluorographene reactivity can be used for the synthesis of new graphene derivatives, which contain either i) a mixture of F and other functional groups (like, e.g., thiofluorographene containing both -F and -SH) or ii) selectively only the functional group (and no -F groups). Alkyl and aryl groups can be selectively attached to graphene using a Grignard reaction with fluorographene, and this reaction leads to a high degree of graphene functionalization. The very promising and selective graphene derivative cyanographene (graphene nitrile) was synthesized by reaction of NaCN with fluorographene. This material was further used for the synthesis of graphene acid, i.e., graphene functionalized by -COOH groups over its surface, and it was shown that this graphene acid can be effectively conjugated with amines and alcohols. These findings open new doors for high-yield and selective graphene functionalization.
== Other halogenated graphenes ==
Recent studies have also revealed that, similar to fluorination, full chlorination of graphene can be achieved. The resulting structure is called chlorographene. However, other theoretical calculations have questioned the stability of chlorographene under ambient conditions.
Graphene can also be fluorinated or halofluorinated by a CVD method with fluorocarbons, hydrofluorocarbons, or halofluorocarbons, by heating the carbon material while it is in contact with the fluoroorganic substance, to form partially fluorinated carbons (so-called Fluocar materials).
An overview of the preparation, reactivity and properties of halogenated graphenes is available in the journal ACS Nano free of charge.
== See also ==
"Chlorographene"
Organofluorine chemistry
Organofluorine compound
Diamond
Graphane
== References == | Wikipedia/Fluorographene |
In materials science, the term single-layer materials or 2D materials refers to crystalline solids consisting of a single layer of atoms. These materials are promising for some applications but remain the focus of research. Single-layer materials derived from single elements generally carry the -ene suffix in their names, e.g. graphene. Single-layer materials that are compounds of two or more elements have -ane or -ide suffixes. 2D materials can generally be categorized as either 2D allotropes of various elements or as compounds (consisting of two or more covalently bonding elements).
It is predicted that there are hundreds of stable single-layer materials. The atomic structure and calculated basic properties of these and many other potentially synthesisable single-layer materials, can be found in computational databases. 2D materials can be produced using mainly two approaches: top-down exfoliation and bottom-up synthesis. The exfoliation methods include sonication, mechanical, hydrothermal, electrochemical, laser-assisted, and microwave-assisted exfoliation.
== Single element materials ==
=== C: graphene and graphyne ===
Graphene
Graphene is a crystalline allotrope of carbon in the form of a nearly transparent (to visible light) one atom thick sheet. It is hundreds of times stronger than most steels by weight. It has the highest known thermal and electrical conductivity, displaying current densities 1,000,000 times that of copper. It was first produced in 2004.
Andre Geim and Konstantin Novoselov won the 2010 Nobel Prize in Physics "for groundbreaking experiments regarding the two-dimensional material graphene". They first produced it by lifting graphene flakes from bulk graphite with adhesive tape and then transferring them onto a silicon wafer.
Graphyne
Graphyne is another 2-dimensional carbon allotrope whose structure is similar to graphene's. It can be seen as a lattice of benzene rings connected by acetylene bonds. Depending on the content of the acetylene groups, graphyne can be considered a mixed hybridization, spn, where 1 < n < 2, compared to graphene (pure sp2) and diamond (pure sp3).
First-principle calculations using phonon dispersion curves and ab-initio finite temperature, quantum mechanical molecular dynamics simulations showed graphyne and its boron nitride analogues to be stable.
The existence of graphyne was conjectured before 1960. In 2010, graphdiyne (graphyne with diacetylene groups) was synthesized on copper substrates.
In 2022 a team claimed to have successfully used alkyne metathesis to synthesise graphyne, though this claim is disputed. However, after an investigation, the team's paper was retracted by the publisher, citing fabricated data.
Later during 2022 synthesis of multi-layered γ‑graphyne was successfully performed through the polymerization of 1,3,5-tribromo-2,4,6-triethynylbenzene under Sonogashira coupling conditions.
Recently, it has been claimed to be a competitor for graphene due to the potential of direction-dependent Dirac cones.
=== B: borophene ===
Borophene is a crystalline atomic monolayer of boron and is also known as a boron sheet. First predicted by theory in the mid-1990s in a freestanding state, and then demonstrated as distinct monoatomic layers on substrates by Zhang et al., different borophene structures were experimentally confirmed in 2015.
=== Ge: germanene ===
Germanene is a two-dimensional allotrope of germanium with a buckled honeycomb structure. Experimentally synthesized germanene exhibits a honeycomb structure. This honeycomb structure consists of two hexagonal sub-lattices that are vertically displaced by 0.2 Å from each other.
=== Si: silicene ===
Silicene is a two-dimensional allotrope of silicon, with a hexagonal honeycomb structure similar to that of graphene. Its growth is scaffolded by a pervasive Si/Ag(111) surface alloy beneath the two-dimensional layer.
=== Sn: stanene ===
Stanene is a predicted topological insulator that may display dissipationless currents at its edges near room temperature. It is composed of tin atoms arranged in a single layer, in a manner similar to graphene. Its buckled structure leads to high reactivity against common air pollutants such as NOx and COx and it is able to trap and dissociate them at low temperature.
A structure determination of stanene using low energy electron diffraction has shown ultra-flat stanene on a Cu(111) surface.
=== Pb: plumbene ===
Plumbene is a two-dimensional allotrope of lead, with a hexagonal honeycomb structure similar to that of graphene.
=== P: phosphorene ===
Phosphorene is a 2-dimensional, crystalline allotrope of phosphorus. Its mono-atomic hexagonal structure makes it conceptually similar to graphene. However, phosphorene has substantially different electronic properties; in particular it possesses a nonzero band gap while displaying high electron mobility. This property potentially makes it a better semiconductor than graphene.
The synthesis of phosphorene mainly consists of micromechanical cleavage or liquid-phase exfoliation methods. The former has a low yield while the latter produces free-standing nanosheets in a solvent and not on a solid support. Bottom-up approaches like chemical vapor deposition (CVD) remain unexplored because of phosphorene's high reactivity. Therefore, in the current scenario, the most effective method for large-area fabrication of thin films of phosphorene consists of wet assembly techniques like Langmuir-Blodgett, involving assembly followed by deposition of nanosheets on solid supports.
=== Sb: antimonene ===
Antimonene is a two-dimensional allotrope of antimony, with its atoms arranged in a buckled honeycomb lattice. Theoretical calculations predicted that antimonene would be a stable semiconductor in ambient conditions with suitable performance for (opto)electronics. Antimonene was first isolated in 2016 by micromechanical exfoliation and it was found to be very stable under ambient conditions. Its properties make it also a good candidate for biomedical and energy applications.
In a study made in 2018, antimonene modified screen-printed electrodes (SPE's) were subjected to a galvanostatic charge/discharge test using a two-electrode approach to characterize their supercapacitive properties. The best configuration observed, which contained 36 nanograms of antimonene in the SPE, showed a specific capacitance of 1578 F g−1 at a current of 14 A g−1. Over 10,000 of these galvanostatic cycles, the capacitance retention values drop to 65% initially after the first 800 cycles, but then remain between 65% and 63% for the remaining 9,200 cycles. The 36 ng antimonene/SPE system also showed an energy density of 20 mW h kg−1 and a power density of 4.8 kW kg−1. These supercapacitive properties indicate that antimonene is a promising electrode material for supercapacitor systems. A more recent study, concerning antimonene modified SPEs shows the inherent ability of antimonene layers to form electrochemically passivated layers to facilitate electroanalytical measurements in oxygenated environments, in which the presence of dissolved oxygens normally hinders the analytical procedure. The same study also depicts the in-situ production of antimonene oxide/PEDOT:PSS nanocomposites as electrocatalytic platforms for the determination of nitroaromatic compounds.
=== Bi: bismuthene ===
Bismuthene, the two-dimensional (2D) allotrope of bismuth, was predicted to be a topological insulator. It was predicted in 2015 that bismuthene retains its topological phase when grown on silicon carbide. The prediction was successfully realized and synthesized in 2016. At first glance the system is similar to graphene, as the Bi atoms arrange in a honeycomb lattice. However, the bandgap is as large as 800 meV due to the large spin–orbit interaction (coupling) of the Bi atoms and their interaction with the substrate. Thus, room-temperature applications of the quantum spin Hall effect come into reach. It has been reported to be the largest nontrivial bandgap 2D topological insulator in its natural state. Top-down exfoliation of bismuthene has been reported in various instances, with recent works promoting the implementation of bismuthene in the field of electrochemical sensing. Emdadul et al. predicted the mechanical strength and phonon thermal conductivity of monolayer β-bismuthene through atomic-scale analysis. The obtained room-temperature (300 K) fracture strength is ~4.21 N/m along the armchair direction and ~4.22 N/m along the zigzag direction. At 300 K, its Young's moduli are reported to be ~26.1 N/m and ~25.5 N/m, respectively, along the armchair and zigzag directions. In addition, the predicted phonon thermal conductivity of ~1.3 W/m∙K at 300 K is considerably lower than that of other analogous 2D honeycombs, making it a promising material for thermoelectric operations.
=== Au: goldene ===
On 16 April 2024, scientists from Linköping University in Sweden reported that they had produced goldene, a single layer of gold atoms 100 nm wide. Lars Hultman, a materials scientist on the team behind the new research, is quoted as saying "we submit that goldene is the first free-standing 2D metal, to the best of our knowledge", meaning that it is not attached to any other material, unlike plumbene and stanene. Researchers from New York University Abu Dhabi (NYUAD) previously reported having synthesised goldene in 2022; however, various other scientists have contended that the NYUAD team failed to prove they made a single-layer sheet of gold, as opposed to a multi-layer sheet. Goldene is expected to be used primarily for its optical properties, with applications such as sensing or as a catalyst.
=== Metals ===
Single and double atom layers of platinum in a two-dimensional film geometry have been demonstrated. These atomically thin platinum films are epitaxially grown on graphene, which imposes a compressive strain that modifies the surface chemistry of the platinum, while also allowing charge transfer through the graphene. Single-atom layers of palladium with thicknesses down to 2.6 Å, and rhodium with thicknesses of less than 4 Å, have been synthesized and characterized with atomic force microscopy and transmission electron microscopy.
A 2D titanium formed by additive manufacturing (laser powder bed fusion) achieved greater strength than any known material (50% greater than magnesium alloy WE54). The material was arranged in a tubular lattice with a thin band running inside, merging two complementary lattice structures. This reduced by half the stress at the weakest points in the structure.
=== 2D supracrystals ===
The supracrystals of 2D materials have been proposed and theoretically simulated. These monolayer crystals are built of supra atomic periodic structures where atoms in the nodes of the lattice are replaced by symmetric complexes. For example, in the hexagonal structure of graphene patterns of 4 or 6 carbon atoms would be arranged hexagonally instead of single atoms, as the repeating node in the unit cell.
== 2D alloys ==
Two-dimensional alloys (or surface alloys) are a single atomic layer of alloy that is incommensurate with the underlying substrate. One example is the 2D ordered alloys of Pb with Sn and with Bi. Surface alloys have been found to scaffold two-dimensional layers, as in the case of silicene.
== Compounds ==
Boron nitride nanosheet
Titanate nanosheet
Borocarbonitrides
MXenes
2D silica
Niobium bromide and Niobium chloride (Nb3[X]8)
=== Transition metal dichalcogenide monolayers ===
The most commonly studied two-dimensional transition metal dichalcogenide (TMD) is monolayer molybdenum disulfide (MoS2). Several phases are known, notably the 1T and 2H phases. The naming convention reflects the structure: the 1T phase has one "sheet" (consisting of a layer of S-Mo-S; see figure) per unit cell in a trigonal crystal system, while the 2H phase has two sheets per unit cell in a hexagonal crystal system. The 2H phase is more common, as the 1T phase is metastable and spontaneously reverts to 2H without stabilization by additional electron donors (typically surface S vacancies).
The 2H phase of MoS2 (Pearson symbol hP6; Strukturbericht designation C7) has space group P63/mmc. Each layer contains Mo surrounded by S in trigonal prismatic coordination. Conversely, the 1T phase (Pearson symbol hP3) has space group P-3m1, and octahedrally-coordinated Mo; with the 1T unit cell containing only one layer, the unit cell has a c parameter slightly less than half the length of that of the 2H unit cell (5.95 Å and 12.30 Å, respectively). The different crystal structures of the two phases result in differences in their electronic band structure as well. The d-orbitals of 2H-MoS2 are split into three bands: dz2, dx2-y2,xy, and dxz,yz. Of these, only the dz2 is filled; this combined with the splitting results in a semiconducting material with a bandgap of 1.9eV. 1T-MoS2, on the other hand, has partially filled d-orbitals which give it a metallic character.
Because the structure consists of in-plane covalent bonds and inter-layer van der Waals interactions, the electronic properties of monolayer TMDs are highly anisotropic. For example, the conductivity of MoS2 in the direction parallel to the planar layer (0.1–1 ohm−1cm−1) is ~2200 times larger than the conductivity perpendicular to the layers. There are also differences between the properties of a monolayer compared to the bulk material: the Hall mobility at room temperature is drastically lower for monolayer 2H MoS2 (0.1–10 cm2V−1s−1) than for bulk MoS2 (100–500 cm2V−1s−1). This difference arises primarily due to charge traps between the monolayer and the substrate it is deposited on.
MoS2 has important applications in (electro)catalysis. As with other two-dimensional materials, properties can be highly geometry-dependent; the surface of MoS2 is catalytically inactive, but the edges can act as active sites for catalyzing reactions. For this reason, device engineering and fabrication may involve considerations for maximizing catalytic surface area, for example by using small nanoparticles rather than large sheets or depositing the sheets vertically rather than horizontally. Catalytic efficiency also depends strongly on the phase: the aforementioned electronic properties of 2H MoS2 make it a poor candidate for catalysis applications, but these issues can be circumvented through a transition to the metallic (1T) phase. The 1T phase has more suitable properties, with a current density of 10 mA/cm2, an overpotential of −187 mV relative to RHE, and a Tafel slope of 43 mV/decade (compared to 94 mV/decade for the 2H phase).
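The Tafel slopes quoted above translate directly into how much extra overpotential each tenfold increase in current density costs, via the Tafel relation η2 − η1 = b·log10(j2/j1); a quick calculation with the quoted slopes shows why the smaller slope of the 1T phase matters.

import math

# Extra overpotential needed to raise the current density tenfold, delta_eta = b * log10(j2/j1).
# The slopes are the ones quoted above; the current-density step is an arbitrary example.
for phase, b in (("1T", 43e-3), ("2H", 94e-3)):            # Tafel slopes in V per decade
    extra = b * math.log10(100 / 10)                       # e.g. going from 10 to 100 mA/cm^2
    print(f"{phase}-MoS2: additional overpotential = {extra * 1e3:.0f} mV")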
=== Graphane ===
While graphene has a hexagonal honeycomb lattice structure with alternating double bonds emerging from its sp2-bonded carbons, graphane, which still maintains the hexagonal structure, is the fully hydrogenated version of graphene with every sp3-hybridized carbon bonded to a hydrogen (chemical formula (CH)n). Furthermore, while graphene is planar due to its double-bonded nature, graphane is rugged, with the hexagons adopting different out-of-plane structural conformers like the chair or boat, to allow for the ideal 109.5° angles which reduce ring strain, in a direct analogy to the conformers of cyclohexane.
Graphane was first theorized in 2003, was shown to be stable using first principles energy calculations in 2007, and was first experimentally synthesized in 2009. There are various experimental routes available for making graphane, including the top-down approaches of reduction of graphite in solution or hydrogenation of graphite using plasma/hydrogen gas as well as the bottom-up approach of chemical vapor deposition. Graphane is an insulator, with a predicted band gap of 3.5 eV; however, partially hydrogenated graphene is a semi-conductor, with the band gap being controlled by the degree of hydrogenation.
=== Germanane ===
Germanane is a single-layer crystal composed of germanium with one hydrogen bonded in the z-direction for each atom. Germanane's structure is similar to that of graphane; bulk germanium does not adopt this structure. Germanane is produced in a two-step route starting with calcium germanide. From this material, the calcium (Ca) is removed by de-intercalation with HCl to give a layered solid with the empirical formula GeH. The Ca sites in Zintl-phase CaGe2 interchange with the hydrogen atoms in the HCl solution, producing GeH and CaCl2.
=== SLSiN ===
SLSiN (an acronym for single-layer silicon nitride), a novel 2D material introduced as the first post-graphene member of the Si3N4 family, was first discovered computationally in 2020 via density-functional-theory-based simulations. This new material is inherently 2D, an insulator with a band gap of about 4 eV, and stable both thermodynamically and in terms of lattice dynamics.
== Combined surface alloying ==
Often single-layer materials, specifically elemental allotropes, are connected to the supporting substrate via surface alloys. By now, this phenomenon has been proven via a combination of different measurement techniques for silicene, for which the alloy is difficult to prove by a single technique, and hence had not been expected for a long time. Hence, such scaffolding surface alloys beneath two-dimensional materials can also be expected below other two-dimensional materials, significantly influencing the properties of the two-dimensional layer. During growth, the alloy acts as both foundation and scaffold for the two-dimensional layer, for which it paves the way.
== Organic ==
Ni3(HITP)2 is an organic, crystalline, structurally tunable electrical conductor with a high surface area. HITP is an organic chemical (2,3,6,7,10,11-hexaaminotriphenylene). It shares graphene's hexagonal honeycomb structure. Multiple layers naturally form perfectly aligned stacks, with identical 2-nm openings at the centers of the hexagons. Room temperature electrical conductivity is ~40 S cm−1, comparable to that of bulk graphite and among the highest for any conducting metal-organic frameworks (MOFs). The temperature dependence of its conductivity is linear at temperatures between 100 K and 500 K, suggesting an unusual charge transport mechanism that has not been previously observed in organic semiconductors.
The material was claimed to be the first of a group formed by switching metals and/or organic compounds. The material can be isolated as a powder or a film with conductivity values of 2 and 40 S cm−1, respectively.
== Polymer ==
Using melamine (carbon and nitrogen ring structure) as a monomer, researchers created 2DPA-1, a 2-dimensional polymer sheet held together by hydrogen bonds. The sheet forms spontaneously in solution, allowing thin films to be spin-coated. The polymer has a yield strength twice that of steel, and it resists six times more deformation force than bulletproof glass. It is impermeable to gases and liquids.
== Combinations ==
Single layers of 2D materials can be combined into layered assemblies. For example, bilayer graphene is a material consisting of two layers of graphene. One of the first reports of bilayer graphene was in the seminal 2004 Science paper by Geim and colleagues, in which they described devices "which contained just one, two, or three atomic layers". Layered combinations of different 2D materials are generally called van der Waals heterostructures. Twistronics is the study of how the angle (the twist) between layers of two-dimensional materials can change their electrical properties.
== Characterization ==
Microscopy techniques such as transmission electron microscopy, 3D electron diffraction, scanning probe microscopy, scanning tunneling microscope, and atomic-force microscopy are used to characterize the thickness and size of the 2D materials. Electrical properties and structural properties such as composition and defects are characterized by Raman spectroscopy, X-ray diffraction, and X-ray photoelectron spectroscopy.
=== Mechanical characterization ===
The mechanical characterization of 2D materials is difficult due to the ambient reactivity and substrate constraints present in many 2D materials. To this end, many mechanical properties are calculated using molecular dynamics simulations or molecular mechanics simulations. Experimental mechanical characterization is possible for 2D materials that can survive the conditions of the experimental setup and can be deposited on suitable substrates or exist in a free-standing form. Many 2D materials also possess out-of-plane deformation, which further complicates measurements.
Nanoindentation testing is commonly used to experimentally measure the elastic modulus, hardness, and fracture strength of 2D materials. From these directly measured values, models exist which allow the estimation of fracture toughness, work hardening exponent, residual stress, and yield strength. These experiments are run using dedicated nanoindentation equipment or an atomic force microscope (AFM). Nanoindentation experiments are generally run with the 2D material as a linear strip clamped on both ends and indented by a wedge, or with the 2D material as a circular membrane clamped around its circumference and indented by a curved tip in the center. The strip geometry is difficult to prepare but allows for easier analysis due to the linear resulting stress fields. The circular drum-like geometry is more commonly used and can be easily prepared by exfoliating samples onto a patterned substrate. The stress applied to the film in the clamping process is referred to as the residual stress. In the case of very thin layers of 2D materials, bending stress is generally ignored in indentation measurements, with bending stress becoming relevant in multilayer samples. Elastic modulus and residual stress values can be extracted by determining the linear and cubic portions of the experimental force-displacement curve. The fracture stress of the 2D sheet is extracted from the applied stress at failure of the sample. AFM tip size was found to have little effect on elastic property measurement, but the breaking force was found to have a strong tip-size dependence due to stress concentration at the apex of the tip. Using these techniques, the elastic modulus and yield strength of graphene were found to be 342 N/m and 55 N/m, respectively.
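For the circular-membrane geometry, the measured force-displacement curve is typically fitted with a linear-plus-cubic form, F(δ) = c1·δ + c3·δ³, where c1 reflects the residual (pre-)tension and c3 the 2D elastic modulus together with geometric factors; the exact prefactors depend on the membrane model used and are not reproduced here. A minimal least-squares sketch on synthetic data:

import numpy as np

# Linear-plus-cubic fit of a membrane nanoindentation curve, F = c1*d + c3*d^3.
# The coefficients used to generate the data are placeholders, not measured values.
rng = np.random.default_rng(0)
d = np.linspace(0.0, 100e-9, 50)                            # indentation depth (m)
F = 0.5 * d + 5e14 * d**3 + rng.normal(0.0, 1e-9, d.size)   # synthetic force (N) with noise
A = np.column_stack([d, d**3])
(c1, c3), *_ = np.linalg.lstsq(A, F, rcond=None)
print(f"c1 (pretension term) = {c1:.2f} N/m, c3 (stiffness term) = {c3:.2e} N/m^3")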
Poisson's ratio measurements in 2D materials are generally straightforward. To get a value, a 2D sheet is placed under stress and the displacement responses are measured, or an MD calculation is run. The unique structures found in 2D materials have been found to result in auxetic behavior in phosphorene and graphene and a Poisson's ratio of zero in triangular-lattice borophene.
The shear modulus of graphene has been extracted by measuring a resonance frequency shift in a double paddle oscillator experiment as well as with MD simulations.
Fracture toughness of 2D materials in Mode I (KIC) has been measured directly by stretching pre-cracked layers and monitoring crack propagation in real-time. MD simulations as well as molecular mechanics simulations have also been used to calculate fracture toughness in Mode I. In anisotropic materials, such as phosphorene, crack propagation was found to happen preferentially along certain directions. Most 2D materials were found to undergo brittle fracture.
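In such pre-cracked tests, the toughness is usually estimated from the classic center-crack relation K_IC ≈ σ_f·sqrt(π·a), where σ_f is the stress at failure and a the half-length of the pre-crack; the numbers below are placeholders used only to show the arithmetic.

import math

# Griffith-type estimate K_IC = sigma_f * sqrt(pi * a) for a central pre-crack.
sigma_f = 5e9          # failure stress (Pa), placeholder
a = 50e-9              # half-length of the pre-crack (m), placeholder
K_IC = sigma_f * math.sqrt(math.pi * a)
print(f"K_IC ~ {K_IC / 1e6:.1f} MPa*m^0.5")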
== Applications ==
The major expectation held amongst researchers is that given their exceptional properties, 2D materials will replace conventional semiconductors to deliver a new generation of electronics.
=== Biological applications ===
Research on 2D nanomaterials is still in its infancy, with the majority of research focusing on elucidating the unique material characteristics and few reports focusing on biomedical applications of 2D nanomaterials. Nevertheless, recent rapid advances in 2D nanomaterials have raised important yet exciting questions about their interactions with biological moieties. 2D nanoparticles such as carbon-based 2D materials, silicate clays, transition metal dichalcogenides (TMDs), and transition metal oxides (TMOs) provide enhanced physical, chemical, and biological functionality owing to their uniform shapes, high surface-to-volume ratios, and surface charge.
Two-dimensional (2D) nanomaterials are ultrathin nanomaterials with a high degree of anisotropy and chemical functionality. 2D nanomaterials are highly diverse in terms of their mechanical, chemical, and optical properties, as well as in size, shape, biocompatibility, and degradability. These diverse properties make 2D nanomaterials suitable for a wide range of applications, including drug delivery, imaging, tissue engineering, biosensors, and gas sensors among others. However, their low-dimension nanostructure gives them some common characteristics. For example, 2D nanomaterials are the thinnest materials known, which means that they also possess the highest specific surface areas of all known materials. This characteristic makes these materials invaluable for applications requiring high levels of surface interactions on a small scale. As a result, 2D nanomaterials are being explored for use in drug delivery systems, where they can adsorb large numbers of drug molecules and enable superior control over release kinetics. Additionally, their exceptional surface area to volume ratios and typically high modulus values make them useful for improving the mechanical properties of biomedical nanocomposites and nanocomposite hydrogels, even at low concentrations. Their extreme thinness has been instrumental for breakthroughs in biosensing and gene sequencing. Moreover, the thinness of these molecules allows them to respond rapidly to external signals such as light, which has led to utility in optical therapies of all kinds, including imaging applications, photothermal therapy (PTT), and photodynamic therapy (PDT).
Despite the rapid pace of development in the field of 2D nanomaterials, these materials must be carefully evaluated for biocompatibility in order to be relevant for biomedical applications. The newness of this class of materials means that even the relatively well-established 2D materials like graphene are poorly understood in terms of their physiological interactions with living tissues. Additionally, the complexities of variable particle size and shape, impurities from manufacturing, and protein and immune interactions have resulted in a patchwork of knowledge on the biocompatibility of these materials.
== See also ==
Monolayer
Two-dimensional semiconductor
Transition metal dichalcogenide monolayers
== References ==
== External links ==
"What Are 2D Materials, and Why Do They Interest Scientists?" in Columbia News (March 6, 2024)
"Twenty years of 2D materials" in Nature Physics (January 16, 2024)
== Additional reading ==
Xu, Yang; Cheng, Cheng; Du, Sichao; Yang, Jianyi; Yu, Bin; Luo, Jack; Yin, Wenyan; Li, Erping; Dong, Shurong; Ye, Peide; Duan, Xiangfeng (2016). "Contacts between Two- and Three-Dimensional Materials: Ohmic, Schottky, and p–n Heterojunctions". ACS Nano. 10 (5): 4895–4919. doi:10.1021/acsnano.6b01842. PMID 27132492.
Briggs, Natalie; Subramanian, Shruti; Lin, Zhong; Li, Xufan; Zhang, Xiaotian; Zhang, Kehao; Xiao, Kai; Geohegan, David; Wallace, Robert; Chen, Long-Qing; Terrones, Mauricio; Ebrahimi, Aida; Das, Saptarshi; Redwing, Joan; Hinkle, Christopher; Momeni, Kasra; van Duin, Adri; Crespi, Vin; Kar, Swastik; Robinson, Joshua A. (2019). "A roadmap for electronic grade 2D materials". 2D Materials. 6 (2): 022001. Bibcode:2019TDM.....6b2001B. doi:10.1088/2053-1583/aaf836. OSTI 1503991. S2CID 188118830.
Shahzad, F.; Alhabeb, M.; Hatter, C. B.; Anasori, B.; Man Hong, S.; Koo, C. M.; Gogotsi, Y. (2016). "Electromagnetic interference shielding with 2D transition metal carbides (MXenes)". Science. 353 (6304): 1137–1140. Bibcode:2016Sci...353.1137S. doi:10.1126/science.aag2421. PMID 27609888.
"Graphene Uses & Applications". Graphenea. Retrieved 2014-04-13.
Cao, Yameng; Robson, Alexander J.; Alharbi, Abdullah; Roberts, Jonathan; Woodhead, Christopher Stephen; Noori, Yasir Jamal; Gavito, Ramon Bernardo; Shahrjerdi, Davood; Roedig, Utz (2017). "Optical identification using imperfections in 2D materials". 2D Materials. 4 (4): 045021. arXiv:1706.07949. Bibcode:2017TDM.....4d5021C. doi:10.1088/2053-1583/aa8b4d. S2CID 35147364.
Kolesnichenko, Pavel; Zhang, Qianhui; Zheng, Changxi; Fuhrer, Michael; Davis, Jeffrey (2021). "Multidimensional analysis of excitonic spectra of monolayers of tungsten disulphide: toward computer-aided identification of structural and environmental perturbations of 2D materials". Machine Learning: Science and Technology. 2 (2): 025021. arXiv:2003.01904. doi:10.1088/2632-2153/abd87c. | Wikipedia/Single-layer_materials |
Graphene quantum dots (GQDs) are graphene nanoparticles with a size less than 100 nm. Due to their exceptional properties such as low toxicity, stable photoluminescence, chemical stability and pronounced quantum confinement effect, GQDs are considered as a novel material for biological, opto-electronics, energy and environmental applications.
== Properties ==
Graphene quantum dots (GQDs) consist of one or a few layers of graphene and are smaller than 100 nm in size. They are chemically and physically stable, have a large surface-to-mass ratio and can be dispersed in water easily due to functional groups at the edges. The fluorescence emission of GQDs can extend across a broad spectral range, including the UV, visible, and IR. The origin of GQD fluorescence emission is a subject of debate, as it has been related to quantum confinement effects, defect states and functional groups that might depend on the pH when GQDs are dispersed in water. Their electronic structure depends sensitively on the crystallographic orientation of their edges; for example, zigzag-edge GQDs with 7–8 nm diameter show metallic behavior. In general, their energy gap decreases when the number of graphene layers or the number of carbon atoms per graphene layer is increased.
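A common back-of-the-envelope way to see why the gap shrinks as the dot grows is the Dirac-carrier confinement estimate E_gap ~ ħ·v_F/d, with v_F ≈ 10^6 m/s the graphene Fermi velocity. This is only an order-of-magnitude scaling (prefactors of order unity are ignored) and does not replace the edge- and functional-group-dependent effects discussed above.

# Order-of-magnitude confinement gap for a graphene quantum dot, E ~ hbar*v_F/d.
# The scaling and the neglect of order-unity prefactors are simplifying assumptions.
hbar = 1.054571817e-34     # J*s
v_F = 1.0e6                # m/s, graphene Fermi velocity
eV = 1.602176634e-19       # J per eV
for d_nm in (2, 5, 10, 20):
    gap = hbar * v_F / (d_nm * 1e-9) / eV
    print(f"d = {d_nm:2d} nm -> E_gap ~ {gap:.2f} eV")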
== Health and safety ==
The toxicity of graphene-family nanoparticles is a matter of ongoing research. The toxicity (both in vivo toxicity and cytotoxicity) of GQDs is related to a variety of factors including particle size, method of synthesis, chemical doping and so on. Many authors claim that GQDs are biocompatible and cause only low toxicity as they are composed only of organic material, which should be an advantage over semiconductor quantum dots. Several in vitro studies, based on cell cultures, show only marginal effects of GQDs on the viability of human cells. An in-depth look at the gene expression changes caused by GQDs with a size of 3 nm revealed that only one out of 20,800 gene expressions, namely that of selenoprotein W, was affected significantly in primary human hematopoietic stem cells. On the contrary, other in vitro studies observe a distinct decrease of cell viability and the induction of autophagy after exposure of the cells to GQDs, and one in vivo study in zebrafish larvae observed the alteration of 2116 gene expressions. These inconsistent findings may be attributed to the diversity of the GQDs used, as the related toxicity depends on particle size, surface functional groups, oxygen content, surface charges and impurities. Currently, the literature is insufficient to draw conclusions about the potential hazards of GQDs.
== Preparation ==
Presently, a range of techniques has been developed to prepare GQDs. These methods are normally classified into two groups: top-down and bottom-up. Top-down approaches apply different techniques to cut bulk graphitic materials, including graphite, graphene, carbon nanotubes, coal, carbon black and carbon fibres, into GQDs. These techniques mainly include electron beam lithography, chemical synthesis, electrochemical preparation, graphene oxide (GO) reduction, C60 catalytic transformation, the microwave-assisted hydrothermal method (MAH), the soft-template method, the hydrothermal method, and the ultrasonic exfoliation method. Top-down methods usually need intense purification as strong mixed acids are used. On the other hand, bottom-up methods assemble GQDs from small organic molecules such as citric acid and glucose. These GQDs have better biocompatibility.
== Application ==
Graphene quantum dots are studied as an advanced multifunctional material due to their unique optical, electronic, spin, and photoelectric properties induced by the quantum confinement effect and edge effect. They have possible applications in the treatment of Alzheimer's disease, bioimaging, photothermal therapy, temperature sensing, drug delivery, light converters for LEDs, photodetectors, OPV solar cells, photoluminescent materials, and biosensor fabrication.
== See also ==
Cadmium-free quantum dot
Carbon quantum dot
Carbon nanotube quantum dot
== References == | Wikipedia/Graphene_quantum_dot |
Levidian Nanosystems Limited (also known just as Levidian, formerly Cambridge Nanosystems) is a manufacturing company that specialises in the production of graphene.
== Background ==
The company has developed a process to produce graphene at ultra-high quality and on a larger scale than has previously been possible, using biogas waste products such as methane.
Cambridge Nanosystems was spun out of Cambridge University in 2012, and in 2014 began partnering with Malaysia's Felda Global Ventures Holdings Berhad (FGV), a global agricultural and commodities business. FGV has abundant supplies of methane as a by-product of its large-scale palm oil production, and the partnership aims to use Cambridge Nanosystems' technology to turn that waste material into valuable graphene.
In December 2014, Cambridge Nanosystems was awarded £500,000 from the UK's Technology Strategy Board in order to increase its production capacity. The funds were used to develop a manufacturing facility in Cambridge capable of producing up to 100 tonnes of graphene a year for the European market. FGV and Cambridge Nanosystems are planning to build another plant in Malaysia to supply the Asian market.
The company's founders are Dr Krzysztof Koziol, Jerome Joaug, Lukasz Kurzepa and Catharina Paukner. Chief scientist Catharina Paukner has been described in the United Kingdom media as "The First Lady of Graphene" and as one of eight UK business leaders to watch in 2015. In January 2015, Dr Anna Mieczakowski joined the management team as the first non-co-founder, in the role of Chief Operating Officer.
On 19 March 2015, the company won the Hewitsons Award for Business Innovation 2015 at a business award gala organised by Barclays and Cambridge News.
In September 2015, Cambridge Nanosystems achieved the internationally recognised ISO 9001 certification, establishing it as one of the leaders in its field.
In April 2021, the company was acquired and changed its name to Levidian Nanosystems.
In May 2022, the company announced that its LOOP technology will be developed across the UAE.
== LOOP Technology ==
Levidian's LOOP technology uses plasma to crack methane into its constituent elements: carbon, which is locked into high-quality green graphene, and hydrogen, which can either be used immediately or stored for future use.
== References == | Wikipedia/Cambridge_Nanosystems |
In linguistics, a grapheme is the smallest functional unit of a writing system.
The word grapheme is derived from Ancient Greek gráphō ('write'), and the suffix -eme by analogy with phoneme and other emic units. The study of graphemes is called graphemics. The concept of graphemes is abstract and similar to the notion in computing of a character. (A specific geometric shape that represents any particular grapheme in a given typeface is called a glyph.)
== Conceptualization ==
There are two main opposing grapheme concepts.
In the so-called referential conception, graphemes are interpreted as the smallest units of writing that correspond with sounds (more accurately phonemes). In this concept, the sh in the written English word shake would be a grapheme because it represents the phoneme /ʃ/. This referential concept is linked to the dependency hypothesis that claims that writing merely depicts speech.
By contrast, the analogical concept defines graphemes analogously to phonemes, i.e. via written minimal pairs such as shake vs. snake. In this example, h and n are graphemes because they distinguish two words. This analogical concept is associated with the autonomy hypothesis which holds that writing is a system in its own right and should be studied independently from speech. Both concepts have weaknesses.
Some models adhere to both concepts simultaneously by including two individual units, which are given names such as graphemic grapheme for the grapheme according to the analogical conception (h in shake), and phonological-fit grapheme for the grapheme according to the referential concept (sh in shake).
In newer concepts, in which the grapheme is interpreted semiotically as a dyadic linguistic sign, it is defined as a minimal unit of writing that is both lexically distinctive and corresponds with a linguistic unit (phoneme, syllable, or morpheme).
== Notation ==
Graphemes are often notated within angle brackets: e.g. ⟨a⟩. This is analogous to the slash notation /a/ used for phonemes. Analogous to the square bracket notation [a] used for phones, glyphs are sometimes denoted with vertical lines, e.g. |ɑ|.
== Glyphs ==
In the same way that the surface forms of phonemes are speech sounds or phones (and different phones representing the same phoneme are called allophones), the surface forms of graphemes are glyphs (sometimes graphs), namely concrete written representations of symbols (and different glyphs representing the same grapheme are called allographs).
Thus, a grapheme can be regarded as an abstraction of a collection of glyphs that are all functionally equivalent.
For example, in written English (or other languages using the Latin alphabet), there are two different physical representations of the lowercase Latin letter "a": "a" and "ɑ". Since, however, the substitution of either of them for the other cannot change the meaning of a word, they are considered to be allographs of the same grapheme, which can be written ⟨a⟩. Similarly, the grapheme corresponding to "Arabic numeral zero" has a unique semantic identity and Unicode value U+0030 but exhibits variation in the form of slashed zero. Italic and bold face forms are also allographic, as is the variation seen in serif (as in Times New Roman) versus sans-serif (as in Helvetica) forms.
There is some disagreement as to whether capital and lower case letters are allographs or distinct graphemes. Capitals are generally found in certain triggering contexts that do not change the meaning of a word: a proper name, for example, or at the beginning of a sentence, or all caps in a newspaper headline. In other contexts, capitalization can determine meaning: compare, for example, Polish and polish: the former is a language, the latter is for shining shoes.
Some linguists consider digraphs like the ⟨sh⟩ in ship to be distinct graphemes, but these are generally analyzed as sequences of graphemes. Non-stylistic ligatures, however, such as ⟨æ⟩, are distinct graphemes, as are various letters with distinctive diacritics, such as ⟨ç⟩.
Identical glyphs may not always represent the same grapheme. For example, the three letters ⟨A⟩, ⟨А⟩ and ⟨Α⟩ appear identical but each has a different meaning: in order, they are the Latin letter A, the Cyrillic letter Azǔ/Азъ and the Greek letter Alpha. Each has its own code point in Unicode: U+0041 A LATIN CAPITAL LETTER A, U+0410 А CYRILLIC CAPITAL LETTER A and U+0391 Α GREEK CAPITAL LETTER ALPHA.
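Because each of these look-alike letters has its own code point, the distinction is easy to demonstrate programmatically. The following minimal Python sketch (standard library only) builds the three characters from the code points given above and prints their Unicode names:

```python
import unicodedata

# The three look-alike capital letters discussed above, built from their
# code points so that the distinction is explicit.
for cp in (0x0041, 0x0410, 0x0391):        # Latin, Cyrillic, Greek
    ch = chr(cp)
    print(f"U+{cp:04X}  {ch}  {unicodedata.name(ch)}")

# Expected output:
# U+0041  A  LATIN CAPITAL LETTER A
# U+0410  А  CYRILLIC CAPITAL LETTER A
# U+0391  Α  GREEK CAPITAL LETTER ALPHA
```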
== Types of grapheme ==
The principal types of graphemes are logograms (more accurately termed morphograms), which represent words or morphemes (for example Chinese characters, the ampersand "&" representing the word and, Arabic numerals); syllabic characters, representing syllables (as in Japanese kana); and alphabetic letters, corresponding roughly to phonemes (see next section). For a full discussion of the different types, see Writing system § Functional classification.
There are additional graphemic components used in writing, such as punctuation marks, mathematical symbols, word dividers such as the space, and other typographic symbols. Ancient logographic scripts often used silent determinatives to disambiguate the meaning of a neighboring (non-silent) word.
== Relationship with phonemes ==
As mentioned in the previous section, in languages that use alphabetic writing systems, many of the graphemes stand in principle for the phonemes (significant sounds) of the language. In practice, however, the orthographies of such languages entail at least a certain amount of deviation from the ideal of exact grapheme–phoneme correspondence. A phoneme may be represented by a multigraph (sequence of more than one grapheme), as the digraph sh represents a single sound in English (and sometimes a single grapheme may represent more than one phoneme, as with the Russian letter я or the Spanish c). Some graphemes may not represent any sound at all (like the b in English debt or the h in all Spanish words containing the said letter), and often the rules of correspondence between graphemes and phonemes become complex or irregular, particularly as a result of historical sound changes that are not necessarily reflected in spelling. "Shallow" orthographies such as those of standard Spanish and Finnish have relatively regular (though not always one-to-one) correspondence between graphemes and phonemes, while those of French and English have much less regular correspondence, and are known as deep orthographies.
Multigraphs representing a single phoneme are normally treated as combinations of separate letters, not as graphemes in their own right. However, in some languages a multigraph may be treated as a single unit for the purposes of collation; for example, in a Czech dictionary, the section for words that start with ⟨ch⟩ comes after that for ⟨h⟩. For more examples, see Alphabetical order § Language-specific conventions.
== See also ==
Character (computing) – Symbols encoded in computers to make text
Grapheme–color synesthesia – Synesthesia that associates numbers or letters with colors
Sign (semiotics) – Something that communicates meaning
== References == | Wikipedia/Grapheme |
Graphene nanoribbons (GNRs, also called nano-graphene ribbons or nano-graphite ribbons) are strips of graphene with width less than 100 nm. Graphene ribbons were introduced as a theoretical model by Mitsutaka Fujita and coauthors to examine the edge and nanoscale size effect in graphene. Some earlier studies of graphitic ribbons within the area of conductive polymers in the field of synthetic metals include works by Kazuyoshi Tanaka, Tokio Yamabe and co-authors, Steven Kivelson and Douglas J. Klein. While Tanaka, Yamabe and Kivelson studied so-called zigzag and armchair edges of graphite, Klein introduced a different edge geometry that is frequently referred to as a bearded edge.
== Production ==
=== Nanotomy ===
Large quantities of width-controlled GNRs can be produced via graphite nanotomy, where applying a sharp diamond knife on graphite produces graphite nanoblocks, which can then be exfoliated to produce GNRs as shown by Vikas Berry. GNRs can also be produced by "unzipping" or axially cutting nanotubes. In one such method multi-walled carbon nanotubes were unzipped in solution by action of potassium permanganate and sulfuric acid. In another method GNRs were produced by plasma etching of nanotubes partly embedded in a polymer film. More recently, graphene nanoribbons were grown onto silicon carbide (SiC) substrates using ion implantation followed by vacuum or laser annealing. The latter technique allows any pattern to be written on SiC substrates with 5 nm precision.
=== Epitaxy ===
GNRs were grown on the edges of three-dimensional structures etched into silicon carbide wafers. When the wafers are heated to approximately 1,000 °C (1,270 K; 1,830 °F), silicon is preferentially driven off along the edges, forming nanoribbons whose structure is determined by the pattern of the three-dimensional surface. The ribbons had perfectly smooth edges, annealed by the fabrication process. Electron mobility measurements surpassing one million cm2 V−1 s−1 correspond to a sheet resistance of one ohm per square — two orders of magnitude lower than in two-dimensional graphene.
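The quoted figures are mutually consistent through the Drude relation Rs = 1/(neμ). The carrier density used in the sketch below (n ≈ 6×10^12 cm−2, a typical value for epitaxial graphene on SiC) is an assumption introduced only for this illustration:

```python
# Sheet resistance from carrier mobility, R_s = 1 / (n * e * mu).
# The carrier density n is an assumed typical value for epitaxial graphene
# on SiC, used only to show the orders of magnitude involved.
E_CHARGE = 1.602e-19               # elementary charge (C)
n = 6e12 * 1e4                     # 6e12 cm^-2 converted to m^-2

for mu_cm2 in (1e4, 1e6):          # mobility in cm^2 V^-1 s^-1
    mu = mu_cm2 * 1e-4             # converted to m^2 V^-1 s^-1
    r_sheet = 1.0 / (n * E_CHARGE * mu)
    print(f"mu = {mu_cm2:.0e} cm2/Vs  ->  R_s = {r_sheet:6.1f} ohm/sq")
```

With these assumptions, a mobility of 10⁶ cm2 V−1 s−1 indeed gives about one ohm per square, roughly a hundred times less than the ~100 ohm per square obtained for a typical two-dimensional-graphene mobility of 10⁴ cm2 V−1 s−1.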
=== Chemical vapor deposition ===
Nanoribbons narrower than 10 nm grown on a germanium wafer act like semiconductors, exhibiting a band gap. Inside a reaction chamber, using chemical vapor deposition, methane is used to deposit hydrocarbons on the wafer surface, where they react with each other to produce long, smooth-edged ribbons. The ribbons were used to create prototype transistors. At a very slow growth rate, the graphene crystals naturally grow into long nanoribbons on a specific germanium crystal facet. By controlling the growth rate and growth time, the researchers achieved control over the nanoribbon width.
Recently, researchers from SIMIT (Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences) reported a strategy to grow graphene nanoribbons with controlled widths and smooth edges directly onto dielectric hexagonal boron nitride (h-BN) substrates. The team used nickel nanoparticles to etch monolayer-deep, nanometre-wide trenches into h-BN, and subsequently filled them with graphene using chemical vapour deposition. Modifying the etching parameters allows the width of the trench to be tuned to less than 10 nm, and the resulting sub-10-nm ribbons display bandgaps of almost 0.5 eV. Integrating these nanoribbons into field effect transistor devices reveals on–off ratios of greater than 10⁴ at room temperature, as well as high carrier mobilities of ~750 cm2 V−1 s−1.
=== Multistep nanoribbon synthesis ===
A bottom-up approach was investigated. In 2017 dry contact transfer was used to press a fiberglass applicator coated with a powder of atomically precise graphene nanoribbons on a hydrogen-passivated Si(100) surface under vacuum. 80 of 115 GNRs visibly obscured the substrate lattice with an average apparent height of 0.30 nm. The GNRs do not align to the Si lattice, indicating a weak coupling. The average bandgap over 21 GNRs was 2.85 eV with a standard deviation of 0.13 eV.
The method unintentionally overlapped some nanoribbons, allowing the study of multilayer GNRs. Such overlaps could be formed deliberately by manipulation with a scanning tunneling microscope. Hydrogen depassivation left no band-gap. Covalent bonds between the Si surface and the GNR leads to metallic behavior. The Si surface atoms move outward, and the GNR changes from flat to distorted, with some C atoms moving in toward the Si surface.
== Electronic structure ==
The electronic states of GNRs largely depend on the edge structures (armchair or zigzag). In zigzag edges each successive edge segment is at the opposite angle to the previous; in armchair edges, each pair of segments is a 120/−120 degree rotation of the prior pair. Zigzag edges provide edge-localized states with non-bonding molecular orbitals near the Fermi energy, and GNRs are expected to show large changes in optical and electronic properties arising from quantization.
Calculations based on tight-binding theory predict that zigzag GNRs are always metallic while armchair GNRs can be either metallic or semiconducting, depending on their width. However, density functional theory (DFT) calculations show that armchair nanoribbons are semiconducting with an energy gap scaling with the inverse of the GNR width. Experiments verified that energy gaps increase with decreasing GNR width. Graphene nanoribbons with controlled edge orientation have been fabricated by scanning tunneling microscope (STM) lithography. Energy gaps up to 0.5 eV in a 2.5 nm wide armchair ribbon were reported.
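The width dependence predicted by tight-binding theory can be reproduced with a short numerical sketch. The script below builds the Bloch Hamiltonian of an armchair ribbon with N dimer lines from its atomic positions and prints the band gap at half filling; the bond length (1.42 Å) and nearest-neighbour hopping (2.7 eV) are assumed typical values, and the code illustrates the generic nearest-neighbour model rather than any specific published calculation.

```python
import numpy as np

# Nearest-neighbour tight-binding sketch for N-AGNRs (armchair graphene
# nanoribbons). Assumed parameters: C-C bond length 1.42 Angstrom and hopping
# t = 2.7 eV (typical literature values); they only set the scales.
A_CC, T_HOP = 1.42, 2.7

def agnr_unit_cell(n_lines):
    """Positions of the 2*n_lines atoms in one period (3*a_cc) of an N-AGNR."""
    a = A_CC
    a1 = np.array([1.5 * a, np.sqrt(3) * a / 2])
    a2 = np.array([1.5 * a, -np.sqrt(3) * a / 2])
    period, y_max = 3 * a, (n_lines - 1) * np.sqrt(3) * a / 2
    atoms = []
    for n1 in range(-2 * n_lines - 4, 2 * n_lines + 5):
        for n2 in range(-2 * n_lines - 4, 2 * n_lines + 5):
            for d in (np.array([0.0, 0.0]), np.array([a, 0.0])):
                r = n1 * a1 + n2 * a2 + d
                if -1e-6 <= r[0] < period - 1e-6 and -1e-6 <= r[1] <= y_max + 1e-6:
                    atoms.append(r)
    return np.array(atoms), period

def neighbour_list(atoms, period):
    """Pairs (i, j, m): atom i bonds to atom j translated by m periods along x."""
    pairs = []
    for m in (-1, 0, 1):
        shift = np.array([m * period, 0.0])
        for i in range(len(atoms)):
            for j in range(len(atoms)):
                if abs(np.linalg.norm(atoms[i] - atoms[j] - shift) - A_CC) < 1e-3:
                    pairs.append((i, j, m))
    return pairs

def band_gap(n_lines, n_k=201):
    atoms, period = agnr_unit_cell(n_lines)
    pairs = neighbour_list(atoms, period)
    ks = np.linspace(-np.pi / period, np.pi / period, n_k)
    n_at, n_occ = len(atoms), len(atoms) // 2       # half filling
    cond_min, val_max = np.inf, -np.inf
    for k in ks:
        h = np.zeros((n_at, n_at), dtype=complex)
        for i, j, m in pairs:
            h[i, j] += -T_HOP * np.exp(1j * k * m * period)
        e = np.linalg.eigvalsh(h)
        cond_min, val_max = min(cond_min, e[n_occ]), max(val_max, e[n_occ - 1])
    return cond_min - val_max

if __name__ == "__main__":
    for n in range(4, 13):
        print(f"{n:2d}-AGNR: gap = {band_gap(n):.3f} eV")
```

Running it recovers the familiar rule that ribbons with N = 3m + 2 dimer lines come out (nearly) metallic in this approximation, while the gaps of the semiconducting families shrink as the ribbon widens.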
Zigzag nanoribbons are semiconducting and present spin-polarized edges. Their gap opens thanks to an unusual antiferromagnetic coupling between the magnetic moments at opposite edge carbon atoms. This gap size is inversely proportional to the ribbon width and its behavior can be traced back to the spatial distribution properties of edge-state wave functions, and the mostly local character of the exchange interaction that originates the spin polarization. Therefore, the quantum confinement, inter-edge superexchange, and intra-edge direct exchange interactions in zigzag GNRs are important for their magnetism and band gap. The edge magnetic moment and band gap of zigzag GNRs are inversely proportional to the electron/hole concentration, and they can be controlled by alkaline adatoms.
Their 2D structure, high electrical and thermal conductivity and low noise also make GNRs a possible alternative to copper for integrated circuit interconnects. Research is exploring the creation of quantum dots by changing the width of GNRs at select points along the ribbon, creating quantum confinement. Heterojunctions inside single graphene nanoribbons have been realized, among which structures that have been shown to function as tunnel barriers.
Graphene nanoribbons possess semiconducting properties and may be a technological alternative to silicon semiconductors, capable of sustaining microprocessor clock speeds in the vicinity of 1 THz. Field-effect transistors less than 10 nm wide ("GNRFETs") have been created with GNRs, with an Ion/Ioff ratio >10⁶ at room temperature.
=== Electronic structure in external fields ===
The electronic properties of GNRs in external fields, such as static electric or magnetic fields, have been extensively studied. Various levels of the tight-binding model as well as first-principles calculations have been employed for such studies.
For zigzag nanoribbons, the most interesting effect of an external electric field is the induction of half-metallicity. In a simple tight-binding model, the effect of an external in-plane field applied across the ribbon width is a band gap opening between the edge states. However, first-principles spin-polarized calculations demonstrate that the spin-up and spin-down species behave differently: the gap closes for one spin projection while it widens for the other. As a result, at some critical value of the field, the ribbon becomes metallic for one spin projection (up or down) and insulating for the other. In this way, half-metallicity, which may be useful for spintronics applications, is induced.
Armchair ribbons behave differently from their zigzag siblings. They usually feature a band gap that closes under an external in-plane electric field; at some critical value of the field the gap fully closes, forming a linear Dirac-cone crossing. This intriguing result has been corroborated by density functional theory calculations and explained in a simplified tight-binding model. It does not depend on the chemical composition of the ribbon edges; for example, both fluorine and chlorine atoms can be used for the ribbon edge passivation instead of the usual hydrogen. The effect can also be induced by chemical co-doping, i.e. by placing nitrogen and boron atoms atop the ribbon at its opposite sides. In model terms, the effect can be explained by a pair of cis-polyacetylene chains placed at a distance corresponding to the ribbon width and subjected to different gate potentials.
Bearded ribbons with Klein-type edges behave, in the tight-binding approximation, similarly to zigzag ribbons: namely, a band gap opens between the edge states. Due to the chemical instability of this edge configuration, such ribbons are normally excluded from publications. Whether they can, at least hypothetically, exhibit half-metallicity in external in-plane fields similar to zigzag nanoribbons is not yet clear.
A vast family of cousins of the above ribbons with both similar edges is the class of ribbons combining non-equivalent edge geometries in a single ribbon. One of the simplest examples can be a half-bearded nanoribbon. Such ribbons, in principle, could be more stable than nanoribbons with two bearded edges because they could be realized via asymmetric hydrogenation of zigzag ribbons. In the nearest-neighbor tight-binding model and in non-spin-polarized density functional theory calculations such ribbons exhibit a chiral anomaly structure. The fully flat band of a pristine half-bearded nanoribbon subjected to the in-plane external electric field demonstrates unidirectional linear dispersions with group velocities of opposite directions around each of the two Dirac points. At high fields, the linear bands around the Dirac points transform into wiggly cubic-like dispersions. This nontrivial behavior is favorable for field-tunable dissipationless transport. The drastic transformation from a fully flat to a linear and then cubic-like band allows for a continuum k·p model description based on the Dirac equation. The Dirac equation supplemented with suitable boundary conditions breaking the inversion/mirror symmetry and a single field-strength parameter admits an analytic solution in terms of Airy-like special functions.
== Mechanical properties ==
While it is difficult to prepare graphene nanoribbons with precise enough geometry for real tensile tests, owing to the limited resolution at the nanometer scale, the mechanical properties of the two most common graphene nanoribbons (zigzag and armchair) have been investigated by computational modeling using density functional theory, molecular dynamics, and the finite element method. Since the two-dimensional graphene sheet, with its strong bonding, is known to be one of the stiffest materials, the Young's modulus of graphene nanoribbons also exceeds 1 TPa.
The Young's modulus, shear modulus and Poisson's ratio of graphene nanoribbons vary with size (length and width) and shape. These mechanical properties are anisotropic and are usually discussed for two in-plane directions, parallel and perpendicular to the one-dimensional periodic direction. The values differ somewhat from those of two-dimensional graphene sheets because of the distinct geometry, bond length, and bond strength, particularly at the edges of the nanoribbons. It is possible to tune these nanomechanical properties by further chemical doping, which changes the bonding environment at the ribbon edge. As the width of a graphene nanoribbon increases, its mechanical properties converge to the values measured on graphene sheets.
One molecular dynamics analysis predicted the Young's modulus of armchair graphene nanoribbons to be around 1.24 TPa. It also showed nonlinear elastic behavior, with higher-order terms appearing in the stress-strain curve; in the higher-strain region, terms of order greater than three are needed to fully describe the nonlinear behavior. Other studies, using the finite element method, also reported nonlinear elasticity and found that the Young's modulus, tensile strength, and ductility of armchair graphene nanoribbons are all greater than those of zigzag graphene nanoribbons. Another report, based on a density functional theory model, predicted linear elasticity for zigzag graphene nanoribbons at strains between -0.02 and 0.02. Within this linear region the electronic properties are relatively stable under the slightly changing geometry: the energy gap changes from -0.02 eV to 0.02 eV as the strain goes from -0.02 to 0.02, which is convenient for future engineering applications.
The tensile strength of armchair graphene nanoribbons is 175 GPa, with a large ductility of 30.26% fracture strain, exceeding the values of 130 GPa and 25% measured experimentally on monolayer graphene. As expected, narrower graphene nanoribbons break down sooner, since the fraction of weaker edge bonds is larger. When the tensile strain on a graphene nanoribbon reaches its maximum, C-C bonds begin to break and form much larger rings, weakening the material until fracture.
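The Young's modulus and fracture point quoted above allow a crude consistency check of the nonlinear picture: fitting the minimal two-parameter form σ = Eε + Dε² through the reported fracture point yields a negative effective third-order modulus, i.e. softening at large strain. The quadratic form itself is an illustrative assumption, not the constitutive law used in the cited studies.

```python
# Crude two-parameter nonlinear elasticity check for an armchair GNR, using
# sigma = E*eps + D*eps**2 (an assumed minimal form, not the published model).
E_YOUNG = 1.24e3        # Young's modulus in GPa (value quoted above)
EPS_F   = 0.3026        # fracture strain (quoted above)
SIGMA_F = 175.0         # tensile strength at fracture in GPa (quoted above)

# Solve sigma_f = E*eps_f + D*eps_f**2 for the third-order coefficient D.
D = (SIGMA_F - E_YOUNG * EPS_F) / EPS_F**2
print(f"effective D ~ {D:.0f} GPa (negative => softening at large strain)")

# A purely linear extrapolation would overshoot the measured strength:
print(f"linear estimate at fracture: {E_YOUNG * EPS_F:.0f} GPa vs {SIGMA_F} GPa")
```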
== Optical properties ==
The earliest numerical results on the optical properties of graphene nanoribbons were obtained by Lin and Shyu in 2000. The different selection rules for optical transitions in graphene nanoribbons with armchair and zigzag edges were reported. These results were supplemented by a comparative study of zigzag nanoribbons with single wall armchair carbon nanotubes by Hsu and Reichl in 2007. It was demonstrated that selection rules in zigzag ribbons are different from those in carbon nanotubes and that the eigenstates in zigzag ribbons can be classified as either symmetric or antisymmetric. Also, it was predicted that edge states should play an important role in the optical absorption of zigzag nanoribbons. Optical transitions between the edge and bulk states should enrich the low-energy region (< 3 eV) of the absorption spectrum by strong absorption peaks. Analytical derivation of the numerically obtained selection rules was presented in 2011. The selection rule for incident light polarized parallel (longitudinally) to the zigzag ribbon axis is that ΔJ = J2 − J1 is odd, where J1 and J2 enumerate the energy bands, while for perpendicular polarization ΔJ is even. Intraband (intersubband) transitions between the conduction or valence sub-bands are also allowed in parallel polarization if ΔJ is even. For perpendicular polarization the intraband transitions between the conduction or valence sub-bands are allowed when ΔJ is odd.
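These parity rules are compact enough to encode directly. The helper below is an illustrative sketch (with band indices J counted as in the text, not taken from any particular published code); it returns whether a transition between bands J1 and J2 in a zigzag ribbon is allowed for a given polarization:

```python
# Optical selection rules for zigzag graphene nanoribbons as stated above:
# parallel polarization requires odd delta-J (even for intraband transitions),
# perpendicular polarization requires even delta-J (odd for intraband).
def zigzag_allowed(j1: int, j2: int, polarization: str, intraband: bool = False) -> bool:
    odd = (j2 - j1) % 2 != 0
    if polarization == "parallel":
        return not odd if intraband else odd
    if polarization == "perpendicular":
        return odd if intraband else not odd
    raise ValueError("polarization must be 'parallel' or 'perpendicular'")

# Examples: the 1 -> 2 interband transition is allowed in parallel polarization.
print(zigzag_allowed(1, 2, "parallel"))        # True
print(zigzag_allowed(1, 3, "parallel"))        # False (delta-J even)
print(zigzag_allowed(1, 3, "perpendicular"))   # True
```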
For graphene nanoribbons with armchair edges the selection rules are that ΔJ is odd for perpendicular polarization and ΔJ = 0 for parallel polarization of the incident light. Similar to carbon tubes, the intersubband transitions in parallel polarization are forbidden for armchair graphene nanoribbons, though they are allowed for perpendicular polarization with ΔJ being odd. Since energy bands in armchair nanoribbons and zigzag carbon nanotubes can be aligned when Nt = 2Nr + 4, where Nt and Nr are the numbers of atoms in the unit cell of the tube and ribbon, respectively, the selection rules for parallel polarization give rise to an exact correlation between optical absorption peaks of these two types of nanostructures.
Despite different selection rules in single wall armchair carbon nanotubes and zigzag graphene nanoribbons, a hidden correlation of the absorption peaks originating from the bulk states is predicted. The correlation of the absorption peaks in armchair tubes and zigzag ribbons takes place when the matching condition Nt = 2Nr + 4 holds, even though the energy bands of such a tube and ribbon do not align precisely. A similar correlation between bulk absorption peaks can be obtained for armchair nanotubes and nanoribbons with bearded edges, but in this case the matching condition alters to Nt = 2Nr + 2. These results obtained within the nearest-neighbor approximation of the tight-binding model have been corroborated with first principles density functional theory calculations for zigzag nanoribbons and armchair tubes taking into account exchange and correlation effects.
First-principles calculations with quasiparticle corrections and many-body effects have been used to explore the electronic and optical properties of graphene-based materials. With GW calculations, these properties have been investigated accurately, including for graphene nanoribbons, edge- and surface-functionalized armchair graphene nanoribbons, and the scaling properties of armchair graphene nanoribbons.
== Analyses ==
Graphene nanoribbons can be analyzed by scanning tunneling microscopy, Raman spectroscopy, infrared spectroscopy, and X-ray photoelectron spectroscopy. For example, according to calculated IR spectra, the out-of-plane bending vibration of one C-H on one benzene ring (called SOLO, analogous to a zigzag edge) has been reported to appear at 899 cm−1 for zigzag GNRs, whereas that of two C-H on one benzene ring (called DUO, analogous to an armchair edge) has been reported to appear at 814 cm−1 for armchair GNRs. However, analyses of graphene nanoribbons on substrates are difficult using infrared spectroscopy, even with a reflection absorption spectrometry method. Thus, a large quantity of graphene nanoribbons is necessary for infrared spectroscopy analyses.
== Reactivity ==
Zigzag edges are known to be more reactive than armchair edges, as observed in the dehydrogenation reactivities of a compound with zigzag edges (tetracene) and one with armchair edges (chrysene). Zigzag edges also tend to be oxidized more readily than armchair edges, without gasification. Longer zigzag edges can be more reactive, as can be seen from the dependence of the reactivity of acenes on their length.
== Applications ==
=== Polymeric nanocomposites ===
Graphene nanoribbons and their oxidized counterparts called graphene oxide nanoribbons have been investigated as nano-fillers to improve the mechanical properties of polymeric nanocomposites. Increases in the mechanical properties of epoxy composites on loading of graphene nanoribbons were observed. An increase in the mechanical properties of biodegradable polymeric nanocomposites of poly(propylene fumarate) at low weight percentage was achieved by loading of oxidized graphene nanoribbons, fabricated for bone tissue engineering applications.
=== Contrast agent for bioimaging ===
Hybrid imaging modalities, such as photoacoustic (PA) tomography (PAT) and thermoacoustic (TA) tomography (TAT) have been developed for bioimaging applications. PAT/TAT combines advantages of pure ultrasound and pure optical imaging/radio frequency (RF), providing good spatial resolution, great penetration depth and high soft-tissue contrast. GNR synthesized by unzipping single- and multi-walled carbon nanotubes have been reported as contrast agents for photoacoustic and thermoacoustic imaging and tomography.
=== Catalysis ===
In catalysis, GNRs offer several advantageous features that make them attractive as catalysts or catalyst supports. Firstly, their high surface-to-volume ratio provides abundant active sites for catalytic reactions. This enhanced surface area enables efficient interaction with reactant molecules, leading to improved catalytic performance.
Secondly, the edge structure of GNRs plays a crucial role in catalysis. The zigzag and armchair edges of GNRs possess distinctive electronic properties, making them suitable for specific reactions. For instance, the presence of unsaturated carbon atoms at the edges can serve as active sites for adsorption and reaction of various molecules.
Moreover, GNRs can be functionalized or doped with heteroatoms to tailor their catalytic properties further. Functionalization with specific groups or doping with elements like silicon, nitrogen, boron, or transition metals can introduce additional active sites or modify the electronic structure, allowing for selective catalytic transformations.
== See also ==
== References ==
== External links ==
WOLFRAM Demonstrations Project: Electronic Band Structure of Armchair and Zigzag Graphene Nanoribbons
Graphene nanoribbons on arxiv.org | Wikipedia/Graphene_nanoribbons |
Hummers' method is a chemical process that can be used to generate graphite oxide through the addition of potassium permanganate to a solution of graphite, sodium nitrate, and sulfuric acid. It is commonly used by engineers and laboratory technicians as a reliable method of producing quantities of graphite oxide. It can also be used to produce a one-atom-thick version of the substance, known as graphene oxide.
== Graphite oxide ==
Graphite oxide is a compound of carbon, oxygen, and hydrogen with a carbon-to-oxygen ratio between 2.1 and 2.9. Graphite oxide is typically a yellowish solid. It is also known as graphene oxide when used to form unimolecular sheets.
== Method ==
Hummers' method was developed in 1958 as a safer, faster and more efficient method of producing graphite oxide. Before the method was developed, the production of graphite oxide was slow and hazardous to make because of the use of concentrated sulfuric and nitric acid. The Staudenmeier–Hoffman–Hamdi method introduced the addition of potassium chlorate. However, this method had more hazards and produced one gram of graphite oxide to ten grams of potassium chlorate.
William S. Hummers and Richard E. Offeman created their method as an alternative to the above methods after noting the hazards they posed to workers at the National Lead Company. Their approach was similar in that it involved adding graphite to a solution of concentrated acid. However, they simplified it to just graphite, concentrated sulfuric acid, sodium nitrate, and potassium permanganate. They also did not have to use temperatures above 98 °C and avoided most of the explosive risk of the Staudenmeier–Hoffman–Hamdi method.
The procedure starts with 100 g of graphite and 50 g of sodium nitrate in 2.3 liters of sulfuric acid at 66 °C, which is then cooled to 0 °C. 300 g of potassium permanganate is then added to the solution and stirred. Water is then added in increments until the solution is approximately 32 liters.
The final solution contains about 0.5% of solids to then be cleaned of impurities and dehydrated with phosphorus pentoxide.
== Chemical equations and efficiency ==
The basic chemical reaction involved in Hummers' method is the oxidation of graphite, introducing oxygen into the pure-carbon graphene layers. The reaction occurs between the graphene and the concentrated sulfuric acid, with the potassium permanganate and sodium nitrate providing the oxidizing species. The process is capable of yielding approximately 188 g of graphite oxide per 100 g of graphite used. The carbon-to-oxygen ratio of the product falls within the range of 2.1–2.9 to 1 that is characteristic of graphite oxide. The contaminants are determined to be mostly ash and water. Toxic gases such as dinitrogen tetraoxide and nitrogen dioxide are evolved in the process. The final product is typically 47.06% carbon, 27.97% oxygen, 22.99% water, and 1.98% ash with a carbon-to-oxygen ratio of 2.25. All of these results are comparable to the methods that preceded them.
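The quoted elemental composition can be checked against the stated carbon-to-oxygen ratio with a short mole calculation (standard atomic masses assumed):

```python
# Verify that the quoted mass fractions reproduce the quoted C:O atomic ratio.
MASS_C, MASS_O = 12.011, 15.999         # atomic masses, g/mol
carbon_pct, oxygen_pct = 47.06, 27.97   # wt% of the final product quoted above

ratio = (carbon_pct / MASS_C) / (oxygen_pct / MASS_O)
print(f"C:O atomic ratio = {ratio:.2f}")   # ~2.24, close to the quoted 2.25
```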
== Significance ==
The method has been taken up by many researchers and chemists interested in using graphite oxide for other purposes, because it is the fastest conventional method of producing graphite oxide while maintaining a relatively high C/O ratio. When researchers need to produce large quantities of graphite oxide under time constraints, Hummers' method is usually referenced in some form.
== Modern variations ==
Graphite oxide captured the attention of the scientific community after the discovery of graphene in 2004. Many teams are looking into ways of using graphite oxide as a shortcut to mass production of graphene. So far, the materials produced by these methods have been shown to have more defects than those produced directly from graphite. Hummers' method remains a key point of interest because it is an easy method of producing large quantities of graphite oxide.
Other groups have been focused on making improvements to Hummers' method to make it more efficient and environmentally friendly. One such modification eliminates the use of NaNO3. The addition of persulfate (S2O82−) ensures the complete oxidation and exfoliation of graphite to yield suspensions of individual graphite oxide sheets. The elimination of nitrate is also advantageous as it prevents the production of gases such as nitrogen dioxide and dinitrogen tetraoxide.
== Future uses ==
Besides graphene production, Hummers' method has become a point of interest in photocatalysis. After the discovery that graphite oxide responds to many of the wavelengths of light found in sunlight, teams have been looking into ways of using it to increase the reaction speed of the decomposition of water and organic matter. The most common method for producing the graphite oxide in these experiments has been Hummers' method.
== See also ==
Graphite oxide
== References == | Wikipedia/Hummers'_method |
In the area of solid state chemistry, graphite intercalation compounds are a family of materials prepared from graphite. In particular, the sheets of carbon that comprise graphite can be pried apart by the insertion (intercalation) of ions. The graphite is viewed as a host and the inserted ions as guests. The materials have the formula (guest)Cn where n ≥ 6. The insertion of the guests increases the distance between the carbon sheets. Common guests are reducing agents such as alkali metals. Strong oxidants also intercalate into graphite. Intercalation involves electron transfer into or out of the carbon sheets. So, in some sense, graphite intercalation compounds are salts. Intercalation is often reversible: the inserted ions can be removed and the sheets of carbon collapse to a graphite-like structure.
The properties of graphite intercalation compounds differ from those of the parent graphite.
== Preparation and structure ==
These materials are prepared by treating graphite with a strong oxidant or a strong reducing agent:
C + m X → CXm
The reaction is reversible.
The host (graphite) and the guest X interact by charge transfer. An analogous process is the basis of commercial lithium-ion batteries.
In a graphite intercalation compound not every layer is necessarily occupied by guests. In so-called stage 1 compounds, graphite layers and intercalated layers alternate and in stage 2 compounds, two graphite layers with no guest material in between alternate with an intercalated layer. The actual composition may vary and therefore these compounds are an example of non-stoichiometric compounds. It is customary to specify the composition together with the stage. The layers are pushed apart upon incorporation of the guest ions.
== Examples ==
=== Alkali and alkaline earth derivatives ===
One of the best studied graphite intercalation compounds, KC8, is prepared by melting potassium over graphite powder. The potassium is absorbed into the graphite and the material changes color from black to bronze. The resulting solid is pyrophoric. The composition is explained by assuming that the potassium to potassium distance is twice the distance between hexagons in the carbon framework. The bond between anionic graphite layers and potassium cations is ionic. The electrical conductivity of the material is greater than that of α-graphite. KC8 is a superconductor with a very low critical temperature Tc = 0.14 K. Heating KC8 leads to the formation of a series of decomposition products as the K atoms are eliminated:
3 KC8 → KC24 + 2 K
Via the intermediates KC24 (blue in color), KC36, KC48, ultimately the compound KC60 results.
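The KC8 stoichiometry follows directly from the geometric picture given above: if the potassium ions sit on a 2 x 2 superlattice of the hexagon centres of each carbon sheet (a K-K spacing of twice the hexagon spacing) and every gallery is occupied (stage 1), each ion claims four graphene unit cells, i.e. eight carbon atoms. A short check, using the assumed in-plane lattice constant of graphite (2.46 Å):

```python
# Stoichiometry check for stage-1 KC8: potassium ions on a 2x2 superlattice of
# graphite hexagon centres, with every interlayer gallery occupied (stage 1).
A_GRAPHITE = 2.46                     # in-plane hexagon spacing (Angstrom), assumed
k_k_distance = 2 * A_GRAPHITE         # twice the hexagon spacing, as stated above
cells_per_k = (k_k_distance / A_GRAPHITE) ** 2   # graphene unit cells per K ion
carbons_per_k = 2 * cells_per_k                  # two C atoms per graphene unit cell
print(f"K-K distance: {k_k_distance:.2f} A, carbons per K: {carbons_per_k:.0f}")
```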
The stoichiometry MC8 is observed for M = K, Rb and Cs. For smaller ions M = Li+, Sr2+, Ba2+, Eu2+, Yb3+, and Ca2+, the limiting stoichiometry is MC6. Calcium graphite CaC6 is obtained by immersing highly oriented pyrolytic graphite in liquid Li–Ca alloy for 10 days at 350 °C. The crystal structure of CaC6 belongs to the R3m space group. The graphite interlayer distance increases upon Ca intercalation from 3.35 to 4.524 Å, and the carbon-carbon distance increases from 1.42 to 1.444 Å.
With barium and ammonia, the cations are solvated, giving the stoichiometry Ba(NH3)2.5C10.9 (stage 1); analogous compounds exist with caesium, hydrogen and potassium, e.g. CsC8·K2H4/3C8 (stage 1).
In situ adsorption on free-standing graphene and intercalation in bilayer graphene of the alkali metals K, Cs, and Li was observed by means of low-energy electron microscopy.
Different from other alkali metals, the amount of Na intercalation is very small. Quantum-mechanical calculations show that this originates from a quite general phenomenon: among the alkali and alkaline earth metals, Na and Mg generally have the weakest chemical binding to a given substrate, compared with the other elements in the same group of the periodic table. The phenomenon arises from the competition between trends in the ionization energy and the ion–substrate coupling, down the columns of the periodic table. However, considerable Na intercalation into graphite can occur in cases when the ion is wrapped in a solvent shell through the process of co-intercalation. A complex magnesium(I) species has also been intercalated into graphite.
=== Graphite bisulfate, perchlorate, hexafluoroarsenate: oxidized carbons ===
The intercalation compounds graphite bisulfate and graphite perchlorate can be prepared by treating graphite with strong oxidizing agents in the presence of strong acids. In contrast to the potassium and calcium graphites, the carbon layers are oxidized in this process:
24 C + 0.5 [O] + 3 H2SO4 → [C24]+[HSO4]−·2H2SO4 + 0.5 H2O
In graphite perchlorate, planar layers of carbon atoms are 794 picometers apart, separated by ClO4− ions. Cathodic reduction of graphite perchlorate is analogous to heating KC8, which leads to a sequential elimination of HClO4.
Both graphite bisulfate and graphite perchlorate are better conductors as compared to graphite, as predicted by using a positive-hole mechanism.
Reaction of graphite with [O2]+[AsF6]− affords the salt [C8]+[AsF6]−.
=== Metal halide derivatives ===
A number of metal halides intercalate into graphite. The chloride derivatives have been most extensively studied. Examples include MCl2 (M = Zn, Ni, Cu, Mn), MCl3 (M = Al, Fe, Ga), MCl4 (M = Zr, Pt), etc. The materials consist of close-packed metal halide layers between sheets of carbon. The derivative C~8FeCl3 exhibits spin glass behavior. It proved to be a particularly fertile system on which to study phase transitions. A stage-n magnetic graphite intercalation compound has n graphite layers separating successive magnetic layers. As the stage number increases, the interaction between spins in successive magnetic layers becomes weaker and 2D magnetic behaviour may arise.
=== Halogen- and oxide-graphite compounds ===
Chlorine and bromine reversibly intercalate into graphite. Iodine does not. Fluorine reacts irreversibly. In the case of bromine, the following stoichiometries are known: CnBr for n = 8, 12, 14, 16, 20, and 28.
Because it forms irreversibly, carbon monofluoride is often not classified as an intercalation compound. It has the formula (CF)x. It is prepared by reaction of gaseous fluorine with graphitic carbon at 215–230 °C. The color is greyish, white, or yellow. The bond between the carbon and fluorine atoms is covalent. Tetracarbon monofluoride (C4F) is prepared by treating graphite with a mixture of fluorine and hydrogen fluoride at room temperature. The compound has a blackish-blue color. Carbon monofluoride is not electrically conductive. It has been studied as a cathode material in one type of primary (non-rechargeable) lithium batteries.
Graphite oxide is an unstable yellow solid.
== Properties and applications ==
Graphite intercalation compounds have fascinated materials scientists for many years owing to their diverse electronic and electrical properties.
=== Superconductivity ===
Among the superconducting graphite intercalation compounds, CaC6 exhibits the highest critical temperature Tc = 11.5 K, which further increases under applied pressure (15.1 K at 8 GPa). Superconductivity in these compounds is thought to be related to the role of an interlayer state, a free-electron-like band lying roughly 2 eV (0.32 aJ) above the Fermi level; superconductivity only occurs if the interlayer state is occupied. Pure CaC6 has been analyzed by angle-resolved photoemission spectroscopy using high-quality ultraviolet light. The opening of a superconducting gap in the π* band revealed a substantial contribution to the total electron–phonon-coupling strength from the π*-interlayer interband interaction.
=== Reagents in chemical synthesis: KC8 ===
The bronze-colored material KC8 is one of the strongest reducing agents known. It has also been used as a catalyst in polymerizations and as a coupling reagent for aryl halides to biphenyls. In one study, freshly prepared KC8 was treated with 1-iodododecane delivering a modification (micrometre scale carbon platelets with long alkyl chains sticking out providing solubility) that is soluble in chloroform. Another potassium graphite compound, KC24, has been used as a neutron monochromator. A new essential application for potassium graphite was introduced by the invention of the potassium-ion battery. Like the lithium-ion battery, the potassium-ion battery should use a carbon-based anode instead of a metallic anode. In this circumstance, the stable structure of potassium graphite is an important advantage.
== See also ==
Buckminsterfullerene intercalates
Covalent superconductors
Magnesium diboride, which uses hexagonal planar boron sheets instead of carbon
Pyrolytic graphite
== References ==
== Further reading == | Wikipedia/Graphite_intercalation_compound |
A lamella (pl.: lamellae) is a small plate or flake, from the Latin, and may also refer to collections of fine sheets of material held adjacent to one another in a gill-shaped structure, often with fluid in between, though sometimes simply a set of "welded" plates. The term is used in biological contexts for thin membranes or plates of tissue. In the context of materials science, the microscopic structures in bone and nacre are called lamellae. Moreover, the term lamella is often used to describe the crystal structure of some materials.
== Uses of the term ==
In surface chemistry (especially mineralogy and materials science), lamellar structures are fine layers, alternating between different materials. They can be produced by chemical effects (as in eutectic solidification), biological means, or a deliberate process of lamination, such as pattern welding. Lamellae can also describe the layers of atoms in the crystal lattices of materials such as metals.
In surface anatomy, a lamella is a thin plate-like structure, often one amongst many lamellae very close to one another, with open space between.
In chemical engineering, the term is used for devices such as filters and heat exchangers.
In mycology, a lamella (or gill) is a papery hymenophore rib under the cap of some mushroom species, most often agarics.
The term has been used to describe the construction of lamellar armour, as well as the layered structures that can be described by a lamellar vector field.
In medical professions, especially orthopedic surgery, the term is used to refer to 3D printed titanium technology which is used to create implantable medical devices (in this case, orthopedic implants).
In the context of water treatment, lamellar filters may be referred to as plate filters or tube filters.
This term is used to describe a certain type of ichthyosis, a congenital skin condition. Lamellar Ichthyosis often presents with a "colloidal" membrane at birth. It is characterized by generalized dark scaling.
The term lamella(e) is used in the flooring industry to describe the finished top-layer of an engineered wooden floor. For example, an engineered walnut floor will have several layers of wood and a top walnut lamella.
In archaeology, the term is used for a variety of small flat and thin objects, such as Amulet MS 5236, a very thin gold plate with a stamped text from Ancient Greece in the 6th century BC.
In crystallography, the term was first used by Christopher Chantler and refers to a very thin layer of a perfect crystal, from which curved crystal physics may be derived.
In textile industry, a lamella is a thin metallic strip used alone or wound around a core thread for goldwork embroidery and tapestry weaving.
In September 2010, the U.S. Food and Drug Administration (FDA) announced a recall of two medications which contained "extremely thin glass flakes (lamellae) that are barely visible in most cases. The lamellae result from the interaction of the formulation with glass vials over the shelf life of the product."
== See also ==
Lamella (cell biology)
Middle lamella
Annulate lamella
Lamella (structure)
== References == | Wikipedia/Lamella_(materials) |
Graphane is a two-dimensional polymer of carbon and hydrogen with the formula unit (CH)n where n is large. Partial hydrogenation results in hydrogenated graphene, which was reported by Elias et al. in 2009 in a TEM study to be "direct evidence for a new graphene-based derivative". The authors viewed the panorama as "a whole range of new two-dimensional crystals with designed electronic and other properties", with band gaps ranging from 0 to 0.8 eV.
== Synthesis ==
Its preparation was reported in 2009. Graphane can be formed by electrolytic hydrogenation of graphene, few-layer graphene or highly oriented pyrolytic graphite. In the last case, mechanical exfoliation of the hydrogenated top layers can be used.
== Structure ==
The first theoretical description of graphane was reported in 2003. The structure was found, using a cluster expansion method, to be the most stable of all the possible hydrogenation ratios of graphene. In 2007, researchers found that the compound is more stable than other compounds containing carbon and hydrogen, such as benzene, cyclohexane and polyethylene. This group named the predicted compound graphane, because it is the fully saturated version of graphene.
Graphane is effectively made up of cyclohexane units, and, in parallel to cyclohexane, the most stable structural conformation is not planar, but an out-of-plane structure, including the chair and boat conformers, in order to minimize ring strain and allow for the ideal tetrahedral bond angle of 109.5° for sp3-bonded atoms. However, in contrast to cyclohexane, graphane cannot interconvert between these different conformers because not only are they topologically different, but they are also different structural isomers with different configurations. The chair conformer has the hydrogens alternating above or below the plane from carbon to neighboring carbon, while the boat conformer has the hydrogen atoms alternating in pairs above and below the plane. There are also other possible conformational isomers, including the twist-boat and twist-boat-chair. As with cyclohexane, the most stable conformer for graphane is the chair, followed by the twist-boat structure. While the buckling of the chair conformer would imply lattice shrinkage, calculations show the lattice actually expands by approximately 3% due to the opposing effect on the lattice spacing of the longer carbon-carbon (C-C) bonds, as the sp3-bonding of graphane yields longer C-C bonds of 1.52 Å compared to the sp2-bonding of graphene which yields shorter C-C bonds of 1.42 Å. As just established, theoretically if graphane were perfect and everywhere in its stable chair conformer, the lattice would expand; however, the existence of domains where the locally stable twist-boat conformer dominates "contribute to the experimentally observed lattice contraction." When experimentalists have characterized graphane, they have found a distribution of lattice spacings, corresponding to different domains exhibiting different conformers. Any disorder in hydrogenation conformation tends to contract the lattice constant by about 2.0%.
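The sign and rough size of the predicted lattice change follow from a simple geometric estimate. Assuming an ideal chair conformer with perfectly tetrahedral angles, every C-C bond is tilted 19.47° out of the mean plane, so the in-plane lattice constant is √3 × 1.52 Å × cos(19.47°). This idealization is for illustration only and ignores the angle relaxation captured by full calculations:

```python
import math

# In-plane lattice constant of ideal chair graphane vs. flat graphene.
# Assumes perfectly tetrahedral sp3 angles (bond tilt = 109.47 - 90 degrees);
# relaxed first-principles structures have slightly flatter angles.
BOND_SP2, BOND_SP3 = 1.42, 1.52              # C-C bond lengths (Angstrom)
TILT = math.radians(109.47 - 90.0)           # out-of-plane tilt of each C-C bond

a_graphene = math.sqrt(3) * BOND_SP2
a_graphane = math.sqrt(3) * BOND_SP3 * math.cos(TILT)
print(f"graphene a = {a_graphene:.3f} A")
print(f"graphane a = {a_graphane:.3f} A ({100 * (a_graphane / a_graphene - 1):+.1f} %)")
```

The ideal-tetrahedral estimate already gives a modest expansion of about one percent; relaxed structures, with slightly flatter bond angles, expand somewhat more, while conformational disorder pulls the measured spacing the other way, as described above.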
Graphane is an insulator. Chemical functionalization of graphene with hydrogen may be a suitable method to open a band gap in graphene. P-doped graphane is proposed to be a high-temperature BCS theory superconductor with a Tc above 90 K.
== Variants ==
Partial hydrogenation leads to hydrogenated graphene rather than (fully hydrogenated) graphane. Such compounds are usually referred to as "graphane-like" structures. Graphane and graphane-like structures can be formed by electrolytic hydrogenation of graphene, few-layer graphene or highly oriented pyrolytic graphite. In the last case, mechanical exfoliation of the hydrogenated top layers can be used.
Hydrogenation of graphene on substrate affects only one side, preserving hexagonal symmetry. One-sided hydrogenation of graphene is possible due to the existence of ripplings. Because the latter are distributed randomly, the obtained material is disordered in contrast to two-sided graphane. Annealing allows the hydrogen to disperse, reverting to graphene. Simulations revealed the underlying kinetic mechanism.
== Potential applications ==
p-Doped graphane is postulated to be a high-temperature BCS theory superconductor with a Tc above 90 K.
Graphane has been proposed for hydrogen storage. Hydrogenation decreases the dependence of the lattice constant on temperature, which indicates a possible application in precision instruments.
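The attraction for hydrogen storage can be quantified from the formula unit alone: fully hydrogenated graphane, (CH)n, carries one hydrogen atom per carbon atom, which corresponds to roughly 7.7% hydrogen by mass.

```python
# Gravimetric hydrogen content of ideal graphane (CH)n: one H per C.
MASS_C, MASS_H = 12.011, 1.008          # atomic masses, g/mol
wt_fraction = MASS_H / (MASS_C + MASS_H)
print(f"hydrogen content of (CH)n: {100 * wt_fraction:.1f} wt%")   # ~7.7 wt%
```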
== References ==
== External links ==
Sep 14, 2010 Hydrogen vacancies induce stable ferromagnetism in graphane Archived November 27, 2010, at the Wayback Machine
May 25, 2010 Graphane yields new potential
May 02 2010 Doped Graphane Should Superconduct at 90K | Wikipedia/Graphane |
Graphite oxide (GO), formerly called graphitic oxide or graphitic acid, is a compound of carbon, oxygen, and hydrogen in variable ratios, obtained by treating graphite with strong oxidizers and acids, which also dissolve residual metal impurities. The maximally oxidized bulk product is a yellow solid with a C:O ratio between 2.1 and 2.9 that retains the layer structure of graphite but with a much larger and irregular spacing.
The bulk material spontaneously disperses in basic solutions or can be dispersed by sonication in polar solvents to yield monomolecular sheets, known as graphene oxide by analogy to graphene, the single-layer form of graphite. Graphene oxide sheets have been used to prepare strong paper-like materials, membranes, thin films, and composite materials. Initially, graphene oxide attracted substantial interest as a possible intermediate for the manufacture of graphene. The graphene obtained by reduction of graphene oxide still has many chemical and structural defects which is a problem for some applications but an advantage for some others.
== History and preparation ==
Graphite oxide was first prepared by Oxford chemist Benjamin C. Brodie in 1859 by treating graphite with a mixture of potassium chlorate and fuming nitric acid. He reported synthesis of "paper-like foils" with 0.05 mm thickness. In 1957 Hummers and Offeman developed a safer, quicker, and more efficient process called Hummers' method, using a mixture of sulfuric acid H2SO4, sodium nitrate NaNO3, and potassium permanganate KMnO4, which is still widely used, often with some modifications. The largest monolayer GO flakes, with a highly intact carbon framework and minimal residual impurity concentrations, can be synthesized in inert containers using highly pure reactants and solvents.
Graphite oxides demonstrate considerable variation of properties depending on the degree of oxidation and the synthesis method. For example, the temperature point of explosive exfoliation is generally higher for graphite oxide prepared by the Brodie method compared to Hummers graphite oxide, the difference is up to 100 degrees with the same heating rates. Hydration and solvation properties of Brodie and Hummers graphite oxides are also remarkably different.
Recently a mixture of H2SO4 and KMnO4 has been used to cut open carbon nanotubes lengthwise, resulting in microscopic flat ribbons of graphene, a few atoms wide, with the edges "capped" by oxygen atoms (=O) or hydroxyl groups (-OH).
Graphite (graphene) oxide has also been prepared by using a "bottom-up" synthesis method (the Tang-Lau method) in which the sole source is glucose; the process is safer, simpler, and more environmentally friendly compared to the traditional "top-down" methods, in which strong oxidizers are involved. Another important advantage of the Tang-Lau method is control of thickness, ranging from monolayer to multilayers, by adjusting growth parameters.
== Structure ==
The structure and properties of graphite oxide depend on the particular synthesis method and degree of oxidation. It typically preserves the layer structure of the parent graphite, but the layers are buckled and the interlayer spacing is about two times larger (~0.7 nm) than that of graphite. Strictly speaking, "oxide" is an incorrect but historically established name. Besides epoxide groups (bridging oxygen atoms), other functional groups found experimentally are carbonyl (C=O), hydroxyl (-OH) and phenol groups; for graphite oxides prepared using sulphuric acid (e.g. the Hummers method), some sulphur impurity is often found, for example in the form of organosulfate groups. The detailed structure is still not understood due to the strong disorder and irregular packing of the layers.
Graphene oxide layers are about 1.1 ± 0.2 nm thick. Scanning tunneling microscopy shows the presence of local regions where oxygen atoms are arranged in a rectangular pattern with lattice constant 0.27 nm × 0.41 nm. The edges of each layer are terminated with carboxyl and carbonyl groups. X-ray photoelectron spectroscopy shows the presence of several C1s peaks, their number and relative intensity depending on the particular oxidation method used. Assignment of these peaks to certain carbon functionalization types is somewhat uncertain and still under debate. For example, one interpretation goes as follows: non-oxygenated ring contexts (284.8 eV), C-O (286.2 eV), C=O (287.8 eV) and O-C=O (289.0 eV). Another interpretation, using density functional theory calculation, goes as follows: C=C with defects such as functional groups and pentagons (283.6 eV), C=C (non-oxygenated ring contexts) (284.3 eV), sp3C-H in the basal plane and C=C with functional groups (285.0 eV), C=O and C=C with functional groups, C-O (286.5 eV), and O-C=O (288.3 eV).
Graphite oxide is hydrophilic and easily hydrated when exposed to water vapor or immersed in liquid water, resulting in a distinct increase of the inter-planar distance (up to 1.2 nm in saturated state). Additional water is also incorporated into the interlayer space due to high pressure induced effects. The maximal hydration state of graphite oxide in liquid water corresponds to insertion of 2-3 water monolayers. Cooling the graphite oxide/H2O samples results in "pseudo-negative thermal expansion" and cooling below the freezing point of water results in de-insertion of one water monolayer and lattice contraction. Complete removal of water from the structure seems difficult since heating at 60–80 °C results in partial decomposition and degradation of the material.
Similar to water, graphite oxide easily incorporates other polar solvents, e.g. alcohols. However, intercalation of polar solvents proceeds significantly differently in Brodie and Hummers graphite oxides. Brodie graphite oxide is intercalated at ambient conditions by one monolayer of alcohols and several other solvents (e.g. dimethylformamide and acetone) when liquid solvent is available in excess. The separation of graphite oxide layers is proportional to the size of the alcohol molecule. Cooling of Brodie graphite oxide immersed in an excess of liquid methanol, ethanol, acetone or dimethylformamide results in step-like insertion of an additional solvent monolayer and lattice expansion. The phase transition detected by X-ray diffraction and differential scanning calorimetry (DSC) is reversible; de-insertion of the solvent monolayer is observed when the sample is heated back from low temperatures. An additional methanol or ethanol monolayer is reversibly inserted into the structure of Brodie graphite oxide under high-pressure conditions.
Hummers graphite oxide is intercalated with two methanol or ethanol monolayers at ambient temperature. The interlayer distance of Hummers graphite oxide in an excess of liquid alcohols increases gradually upon temperature decrease, reaching 19.4 and 20.6 Å at 140 K for methanol and ethanol, respectively. The gradual expansion of the Hummers graphite oxide lattice upon cooling corresponds to insertion of at least two additional solvent monolayers.
Graphite oxide exfoliates and decomposes when rapidly heated at moderately high temperatures (~280–300 °C) with formation of finely dispersed amorphous carbon, somewhat similar to activated carbon.
== Characterization ==
XRD, FTIR, Raman, XPS, AFM, TEM, SEM/EDX, thermogravimetric analysis, etc. are some common techniques used to characterize GO samples. Experimental results on graphite/graphene oxide have also been analyzed in detail by calculation. Since the distribution of oxygen functionalities on GO sheets is polydisperse, fractionation methods can be used to characterize and separate GO sheets on the basis of their degree of oxidation. Different synthesis methods give rise to different types of graphene oxide. Even different batches from similar oxidation methods can have differences in their properties due to variations in purification or quenching processes.
== Surface properties ==
It is also possible to modify the surface of graphene oxide to change its properties. Graphene oxide has unique surface properties which make it a very good surfactant material for stabilizing various emulsion systems. Graphene oxide remains at the interface of the emulsion systems due to the difference in surface energy of the two phases separated by the interface.
== Relation to water ==
Graphite oxides absorb moisture in proportion to humidity and swell in liquid water. The amount of water absorbed by graphite oxides depends on the particular synthesis method and shows a strong temperature dependence.
Brodie graphite oxide selectively absorbs methanol from water/methanol mixtures in a certain range of methanol concentrations.
Membranes prepared from graphite oxides (recently more often called "graphene oxide" membranes) are vacuum tight and impermeable to nitrogen and oxygen, but are permeable to water vapors. The membranes are also impermeable to "substances of lower molecular weight". Permeation of graphite and graphene oxide membranes by polar solvents is possible due to swelling of the graphite oxide structure. The membranes in the swelled state are also permeable to gases, e.g. helium. Graphene oxide sheets are chemically reactive in liquid water, leading them to acquire a small negative charge.
The interlayer distance of dried graphite oxides was reported as ~6–7 Å, but in liquid water it increases up to 11–13 Å at room temperature. The lattice expansion becomes stronger at lower temperatures. The inter-layer distance in diluted NaOH reached infinity, resulting in dispersion of graphite oxide into single-layered graphene oxide sheets in solution. Graphite oxide can be used as a cation exchange membrane for materials such as KCl, HCl, CaCl2, MgCl2, and BaCl2 solutions. The membranes were permeable to large alkali ions, which are able to penetrate between graphene oxide layers.
== Applications ==
=== Optical nonlinearity ===
Nonlinear optical materials are of great importance for ultrafast photonics and optoelectronics. Recently, the giant optical nonlinearities of graphene oxide (GO) have proven useful for a number of applications. For example, the optical limiting of GO is indispensable for protecting sensitive instruments from laser-induced damage. The saturable absorption can be used for pulse compression, mode-locking and Q-switching, and the nonlinear refraction (Kerr effect) is crucial for applications including all-optical switching, signal regeneration, and fast optical communications.
One of the most intriguing and unique properties of GO is that its electrical and optical properties can be tuned dynamically by manipulating the content of oxygen-containing groups through either chemical or physical reduction methods. The tuning of the optical nonlinearities has been demonstrated during the laser-induced reduction process through a continuous increase of the laser irradiance, and four stages of different nonlinear activity have been identified, which may make GO a promising solid-state material for novel nonlinear functional devices. In addition, metal nanoparticles can greatly enhance the optical nonlinearity and fluorescence of graphene oxide.
=== Graphene manufacture ===
Graphite oxide has attracted much interest as a possible route for the large-scale production and manipulation of graphene, a material with extraordinary electronic properties. Graphite oxide itself is an insulator, almost a semiconductor, with differential conductivity between 1 and 5×10−3 S/cm at a bias voltage of 10 V. However, being hydrophilic, graphite oxide disperses readily in water, breaking up into macroscopic flakes, mostly one layer thick. Chemical reduction of these flakes would yield a suspension of graphene flakes. It was argued that the first experimental observation of graphene was reported by Hanns-Peter Boehm in 1962. In this early work the existence of monolayer reduced graphene oxide flakes was demonstrated. The contribution of Boehm was recently acknowledged by Andre Geim, the Nobel Prize winner for graphene research.
Partial reduction can be achieved by treating the suspended graphene oxide with hydrazine hydrate at 100 °C for 24 hours, by exposing graphene oxide to hydrogen plasma for a few seconds, or by exposure to a strong pulse of light, such as that of a xenon flash. Due to the oxidation protocol, manifold defects already present in graphene oxide hamper the effectiveness of the reduction. Thus, the graphene quality obtained after reduction is limited by the precursor quality (graphene oxide) and the efficiency of the reducing agent. However, the conductivity of the graphene obtained by this route is below 10 S/cm, and the charge mobility is between 0.1 and 10 cm2/Vs. These values are much greater than the oxide's, but still a few orders of magnitude lower than those of pristine graphene. Recently, the synthetic protocol for graphite oxide was optimized and almost intact graphene oxide with a preserved carbon framework was obtained. Reduction of this almost intact graphene oxide performs much better, and the charge-carrier mobility exceeds 1000 cm2/Vs for the best-quality flakes. Inspection with the atomic force microscope shows that the oxygen bonds distort the carbon layer, creating a pronounced intrinsic roughness in the oxide layers which persists after reduction. These defects also show up in Raman spectra of graphene oxide.
Large amounts of graphene sheets may also be produced through thermal methods. For example, in 2006 a method was discovered that simultaneously exfoliates and reduces graphite oxide by rapid heating (>2000 °C/min) to 1050 °C. At this temperature, carbon dioxide is released as the oxygen functionalities are removed and it explosively separates the sheets as it comes out. The reduction temperature determines the oxygen content of the final product, with a higher degree of reduction at higher temperatures.
Exposing a film of graphite oxide to the laser of a LightScribe DVD drive has also been shown to produce quality graphene at a low cost.
Graphene oxide has also been reduced to graphene in situ, using a 3D-printed pattern of engineered E. coli bacteria. Coupling of graphene oxide with biomolecules such as peptides, proteins and enzymes enhances its biomedical applications. Currently, researchers are focused on reducing graphene oxide using non-toxic substances; tea and coffee powder, lemon extract and various plant-based antioxidants are widely used.
=== Water purification ===
Graphite oxides were studied for desalination of water using reverse osmosis beginning in the 1960s. In 2011 additional research was released.
In 2013 Lockheed Martin announced their Perforene graphene filter. Lockheed claimed the filter reduced the energy costs of reverse osmosis desalination by 99%. The company also claimed that the filter was 500 times thinner than the best filter then on the market, one thousand times stronger, and required 1% of the pressure. The product was not expected to be released until 2020.
Another study showed that graphite oxide could be engineered to allow water to pass, but retain some larger ions. Narrow capillaries allow rapid permeation by mono- or bilayer water. Multilayer laminates have a structure similar to nacre, which provides mechanical strength in water-free conditions. Helium cannot pass through the membranes in humidity-free conditions, but penetrates easily when exposed to humidity, whereas water vapor passes with no resistance. Dry laminates are vacuum-tight, but immersed in water, they act as molecular sieves, blocking some solutes.
A third project produced graphene sheets with subnanoscale (0.40 ± 0.24 nm) pores. The graphene was bombarded with gallium ions, which disrupt carbon bonds. Etching the result with an oxidizing solution produces a hole at each spot struck by a gallium ion. The length of time spent in the oxidizing solution determined average pore size. Pore density reached 5 trillion pores per square centimeter, while retaining structural integrity. The pores permitted cation transport after short oxidation periods, consistent with electrostatic repulsion from negatively charged functional groups at pore edges. After longer oxidation periods, sheets were permeable to salt but not larger organic molecules.
In 2015 a team created a graphene oxide tea that over the course of a day removed 95% of heavy metals in a water solution.
A composite consisting of small ferrimagnetic NiFe2O4 nanoparticles and partially reduced graphene oxide functionalized with nitrogen atoms was successfully used to remove Cr(III) ions from water. The advantage of this nanocomposite is that it can be separated from the water magnetically.
One project layered carbon atoms in a honeycomb structure, forming a hexagon-shaped crystal that measured about 0.1 millimeters in width and length, with subnanometer holes. Later work increased the membrane size to on the order of several millimeters.
Graphene attached to a polycarbonate support structure was initially effective at removing salt. However, defects formed in the graphene. Filling larger defects with nylon and small defects with hafnium metal followed by a layer of oxide restored the filtration effect.
In 2016 engineers developed graphene-based films powered by the sun that can filter dirty or salty water. Bacteria were used to produce a material consisting of two nanocellulose layers. The lower layer contains pristine cellulose, while the top layer contains cellulose and graphene oxide, which absorbs sunlight and produces heat. The system draws water from below into the material. The water diffuses into the higher layer, where it evaporates and leaves behind any contaminants. The vapor condenses on top, where it can be captured. The film is produced by repeatedly adding a fluid coating that hardens. Bacteria produce nanocellulose fibers with interspersed graphene oxide flakes. The film is light and easily manufactured at scale.
=== Coating ===
Optically transparent, multilayer films made from graphene oxide are impermeable under dry conditions. Exposed to water (or water vapor), they allow passage of molecules below a certain size. The films consist of millions of randomly stacked flakes, leaving nano-sized capillaries between them. Closing these nanocapillaries by chemical reduction with hydroiodic acid creates "reduced graphene oxide" (r-GO) films that, at thicknesses greater than 100 nanometers, are completely impermeable to gases, liquids and strong chemicals. Glassware or copper plates covered with such a graphene "paint" can be used as containers for corrosive acids. Graphene-coated plastic films could be used in medical packaging to improve shelf life. Layer-by-layer coatings based on amine-modified graphene oxide and Nafion show excellent antimicrobial performance that is not compromised when heated for 2 hours at 200 °C.
=== Related materials ===
Dispersed graphene oxide flakes can also be sifted out of the dispersion (as in paper manufacture) and pressed to make an exceedingly strong graphene oxide paper.
Graphene oxide has been used in DNA analysis applications. The large planar surface of graphene oxide allows simultaneous quenching of multiple DNA probes labeled with different dyes, enabling the detection of multiple DNA targets in the same solution. Further advances in graphene oxide based DNA sensors could result in very inexpensive rapid DNA analysis. Recently a group of researchers from the University of L'Aquila (Italy) discovered new wetting properties of graphene oxide thermally reduced in ultra-high vacuum up to 900 °C. They found a correlation between the surface chemical composition, the surface free energy and its polar and dispersive components, giving a rationale for the wetting properties of graphene oxide and reduced graphene oxide.
=== Flexible rechargeable battery electrode ===
Graphene oxide has been demonstrated as a flexible free-standing battery anode material for room temperature lithium-ion and sodium-ion batteries. It is also being studied as a high surface area conducting agent in lithium-sulfur battery cathodes. The functional groups on graphene oxide can serve as sites for chemical modification and immobilization of active species. This approach allows for the creation of hybrid architectures for electrode materials. Recent examples of this have been implemented in lithium-ion batteries, which are known for being rechargeable at the cost of low capacity limits. Graphene oxide-based composites functionalized with metal oxides and sulfides have been shown in recent research to induce enhanced battery performance. This has similarly been adapted into applications in supercapacitors, since the electronic properties of graphene oxide allow it to bypass some of the more prevalent restrictions of typical transition metal oxide electrodes. Research in this field is developing, with additional exploration into methods involving nitrogen doping and pH adjustment to improve capacitance. Additionally, research into reduced graphene oxide sheets, which display superior electronic properties akin to pure graphene, is currently being explored. Reduced graphene oxide greatly increases the conductivity and efficiency, while sacrificing some flexibility and structural integrity.
=== Graphene oxide lens ===
The optical lens has played a critical role in almost all areas of science and technology since its invention about 3000 years ago. With the advances in micro- and nanofabrication techniques, continued miniaturization of conventional optical lenses has long been sought for applications such as communications, sensors, data storage and a wide range of other technology-driven and consumer-driven industries. Specifically, ever smaller and thinner micro lenses are needed for subwavelength optics or nano-optics with extremely small structures, particularly for visible and near-IR applications. Also, as the distance scale for optical communications shrinks, the required feature sizes of micro lenses are rapidly pushed down.
Recently, the excellent properties of newly discovered graphene oxide have provided novel solutions to overcome the challenges of current planar focusing devices. Specifically, a giant refractive index modification (as large as 10^-1, one order of magnitude larger than in current materials) between graphene oxide (GO) and reduced graphene oxide (rGO) has been demonstrated by dynamically manipulating the oxygen content using the direct laser writing (DLW) method. As a result, the overall lens thickness can potentially be reduced by more than ten times. Also, the linear optical absorption of GO is found to increase as the reduction of GO deepens, which results in a transmission contrast between GO and rGO and therefore provides an amplitude modulation mechanism. Moreover, both the refractive index and the optical absorption are found to be dispersionless over a broad wavelength range from visible to near infrared. Finally, GO film offers flexible patterning capability by using the maskless DLW method, which reduces the manufacturing complexity and requirements.
As a result, a novel ultrathin planar lens on a GO thin film has recently been realized using the DLW method. The distinct advantage of the GO flat lens is that phase modulation and amplitude modulation can be achieved simultaneously, attributed to the giant refractive index modulation and the variable linear optical absorption of GO during its reduction process, respectively. Due to the enhanced wavefront shaping capability, the lens thickness is pushed down to subwavelength scale (~200 nm), which is thinner than all current dielectric lenses (~μm scale). The focusing intensity and the focal length can be controlled effectively by varying the laser power and the lens size, respectively. By using an oil immersion high numerical aperture (NA) objective during the DLW process, a 300 nm fabrication feature size on GO film has been realized, and therefore the minimum lens size has been shrunk down to 4.6 μm in diameter, the smallest planar micro lens reported and otherwise realizable only with metasurfaces fabricated by FIB. Thereafter, the focal length can be reduced to as small as 0.8 μm, which would potentially increase the numerical aperture (NA) and the focusing resolution.
A full-width at half-maximum (FWHM) of 320 nm at the minimum focal spot using a 650 nm input beam has been demonstrated experimentally, corresponding to an effective NA of 1.24 (n=1.5), the largest NA of current micro lenses. Furthermore, ultra-broadband focusing capability from 500 nm to as far as 2 μm has been realized with the same planar lens; focusing in the infrared range remains a major challenge due to the limited availability of suitable materials and fabrication technology. Most importantly, the synthesized high quality GO thin films can be flexibly integrated on various substrates and easily manufactured by using the one-step DLW method over a large area at comparably low cost and power (~nJ/pulse), which eventually makes GO flat lenses promising for various practical applications.
=== Energy conversion ===
Photocatalytic water splitting is an artificial photosynthesis process in which water is dissociated into hydrogen (H2) and oxygen (O2), using artificial or natural light. Methods such as photocatalytic water splitting are currently being investigated to produce hydrogen as a clean source of energy. The superior electron mobility and high surface area of graphene oxide sheets suggest it may be implemented as a catalyst that meets the requirements for this process. Specifically, graphene oxide's compositional functional groups of epoxide (-O-) and hydroxide (-OH) allow for more flexible control in the water splitting process. This flexibility can be used to tailor the band gap and band positions that are targeted in photocatalytic water splitting. Recent research experiments have demonstrated that the photocatalytic activity of graphene oxide containing a band gap within the required limits has produced effective splitting results, particularly when used with 40-50% coverage at a 2:1 hydroxide:epoxide ratio. When used in composite materials with CdS (a typical catalyst used in photocatalytic water splitting), graphene oxide nanocomposites have been shown to exhibit increased hydrogen production and quantum efficiency.
=== Hydrogen storage ===
Graphene oxide is also being explored for its applications in hydrogen storage. Hydrogen molecules can be stored among the oxygen-based functional groups found throughout the sheet. This hydrogen storage capability can be further manipulated by modulating the interlayer distance between sheets, as well as making changes to the pore sizes. Research in transition metal decoration on carbon sorbents to enhance hydrogen binding energy has led to experiments with titanium and magnesium anchored to hydroxyl groups, allowing for the binding of multiple hydrogen molecules.
=== Precision medicine ===
Graphene oxide has been studied for its promising uses in a wide variety of nanomedical applications including tissue engineering, cancer treatment, medical imaging, and drug delivery. Its physicochemical properties allow it to serve as a structure that regulates the behaviour of stem cells, with the potential to assist in the intracellular delivery of DNA, growth factors, and synthetic proteins that could allow for the repair and regeneration of muscle tissue. Due to its unique behaviour in biological environments, GO has also been proposed as a novel material in early cancer diagnosis.
It has also been explored for its uses in vaccines and immunotherapy, including as a dual-use adjuvant and carrier of biomedical materials. In September 2020, researchers at the Shanghai National Engineering Research Center for Nanotechnology in China filed a patent for use of graphene oxide in a recombinant vaccine under development against SARS-CoV-2.
== Toxicity ==
Several typical mechanisms underlying the toxicity of graphene (oxide) nanomaterials have been revealed, for instance physical destruction, oxidative stress, DNA damage, inflammatory response, apoptosis, autophagy, and necrosis. In these mechanisms, toll-like receptor (TLR), transforming growth factor-beta (TGF-β) and tumor necrosis factor-alpha (TNF-α) dependent pathways are involved in the signalling pathway network, and oxidative stress plays a crucial role in these pathways. Many experiments have shown that graphene (oxide) nanomaterials have toxic side effects in many biological applications, but more in-depth study of toxicity mechanisms is needed. According to the US FDA, graphene, graphene oxide, and reduced graphene oxide elicit toxic effects both in vitro and in vivo. Graphene-family nanomaterials (GFN) are not approved by the US FDA for human consumption.
== See also ==
Oxocarbon
== References == | Wikipedia/Graphite_oxide |
Graphene oxide paper or graphite oxide paper is a material fabricated from graphite oxide. Micrometer-thick films of graphene oxide paper are also known as graphite oxide membranes (a term used in the 1960s) or, more recently, graphene oxide membranes. The membranes are typically obtained by slow evaporation of a graphene oxide solution or by the filtration method.
The material has exceptional stiffness and strength, due to the intrinsic strength of the two-dimensional graphene backbone and to its interwoven layer structure which distributes loads.
== Preparation ==
The starting material is water-dispersed graphene oxide flakes. The aqueous dispersion is vacuum filtrated to produce free-standing foils. The thickness of these foils is typically in the range of 0.1-50 micrometers. Depending on the application, the graphene oxide laminates are called either papers or membranes. Alternative methods to prepare free-standing graphene oxide multilayers/laminates are repeated drop casting or spin coating. The flakes may be chemically bonded, leading to the development of additional new materials. Like the starting material, graphene oxide paper is an electrical insulator; however, it may be possible to tune this property, making the paper a conductor or semiconductor, without sacrificing its mechanical properties.
== Properties ==
Detailed studies of graphite oxide paper by V. Kohlschütter and P. Haenni date back to 1918. Studies of graphite oxide membranes were performed in 1960 by Hanns-Peter Boehm, the German scientist who coined the term "graphene". The paper, titled "Graphite Oxide and its membrane properties", reported synthesis of "paper-like foils" with 0.05 mm thickness. The membranes were reported to be impermeable to gases (nitrogen and oxygen) but easily permeable to water vapors and, presumably, to any other solvents able to intercalate graphite oxide. It was also reported that the membranes are not permeable to "substances of lower molecular weight".
Permeation of water through the membrane was attributed to swelling of the graphite oxide structure, which opens a water penetration path between individual graphene oxide layers. The interlayer distance of dried graphite oxide was reported as 6.35 Å, but in liquid water it increased to 11.6 Å. Remarkably, the paper also cited the inter-layer distance in diluted NaOH as infinite, thus reporting dispersion of graphite oxide into single-layered graphene oxide sheets in solution. The study also reported a water permeation rate through the membranes of 0.1 mg per minute per square centimeter. The diffusion rate of water was evaluated as 1 cm/hour. Boehm's paper also showed that graphite oxide can be used as a cation exchange membrane and reported measurements of osmotic pressures and membrane potentials in KCl, HCl, CaCl2, MgCl2, and BaCl2 solutions. The membranes were also reported to be permeable to large alkali ions, which are able to penetrate between graphene oxide layers.
In 2012 some of the properties of graphite oxide membranes discovered by Boehm were re-discovered: the membranes were reported to be impermeable to helium but permeable to water vapors. This study was later expanded to demonstrate that several salts (for example KCl, MgCl2) diffuse through the graphene oxide membrane if it is immersed in water solution.
Graphene oxide membranes are actively being studied for their applications to water desalination. Retention rates over 90% were reported in a 1960 study for NaCl solutions using stabilized graphene oxide membranes in a reverse osmosis setup.
== See also ==
Nanotechnology
Buckypaper
Carbon nanotube
Graphene oxide
== References ==
Graphene Oxide Paper Could Spawn a New Class of Materials (Northwestern University Press Release)
Graphene oxide weaved into 'paper' (physicsworld.com)
It's Super Paper! (ScienceNOW Daily News)
Ultrastrong Paper from Graphene (Technology Review)
Carbon makes super-tough paper (Nature)
== External links ==
Graphene Oxide Paper Fabrication Technology Abstract
United States Patent Application for Fabrication Method
Preparation and characterization of graphene oxide paper (Nature) | Wikipedia/Graphene_oxide_paper |
A model is an informative representation of an object, person, or system. The term originally denoted the plans of a building in late 16th-century English, and derived via French and Italian ultimately from Latin modulus, 'a measure'.
Models can be divided into physical models (e.g. a ship model or a fashion model) and abstract models (e.g. a set of mathematical equations describing the workings of the atmosphere for the purpose of weather forecasting). Abstract or conceptual models are central to philosophy of science.
In scholarly research and applied science, a model should not be confused with a theory: while a model seeks only to represent reality with the purpose of better understanding or predicting the world, a theory is more ambitious in that it claims to be an explanation of reality.
== Types of model ==
=== Model in specific contexts ===
As a noun, model has specific meanings in certain fields, derived from its original meaning of "structural design or layout":
Model (art), a person posing for an artist, e.g. a 15th-century criminal representing the biblical Judas in Leonardo da Vinci's painting The Last Supper
Model (person), a person who serves as a template for others to copy, as in a role model, often in the context of advertising commercial products; e.g. the first fashion model, Marie Vernet Worth in 1853, wife of designer Charles Frederick Worth.
Model (product), a particular design of a product as displayed in a catalogue or show room (e.g. Ford Model T, an early car model)
Model (organism) a non-human species that is studied to understand biological phenomena in other organisms, e.g. a guinea pig starved of vitamin C to study scurvy, an experiment that would be immoral to conduct on a person
Model (mimicry), a species that is mimicked by another species
Model (logic), a structure (a set of items, such as natural numbers 1, 2, 3,..., along with mathematical operations such as addition and multiplication, and relations, such as <) that satisfies a given system of axioms (basic truisms), i.e. that satisfies the statements of a given theory
Model (CGI), a mathematical representation of any surface of an object in three dimensions via specialized software
Model (MVC), the information-representing internal component of a software, as distinct from its user interface
=== Physical model ===
A physical model (most commonly referred to simply as a model but in this context distinguished from a conceptual model) is a smaller or larger physical representation of an object, person or system. The object being modelled may be small (e.g., an atom) or large (e.g., the Solar System) or life-size (e.g., a fashion model displaying clothes for similarly-built potential customers).
The geometry of the model and the object it represents are often similar in the sense that one is a rescaling of the other. However, in many cases the similarity is only approximate or even intentionally distorted. Sometimes the distortion is systematic, e.g., a fixed scale horizontally and a larger fixed scale vertically when modelling topography to enhance a region's mountains.
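For example (with illustrative numbers, not drawn from a particular model), a terrain model built at a horizontal scale of 1:50,000 but a vertical scale of 1:10,000 has a vertical exaggeration of 50,000 / 10,000 = 5, so a 200 m hill stands proportionally five times taller in the model than it would at true scale.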
An architectural model permits visualization of internal relationships within the structure or external relationships of the structure to the environment. Another use is as a toy.
Instrumented physical models are an effective way of investigating fluid flows for engineering design. Physical models are often coupled with computational fluid dynamics models to optimize the design of equipment and processes. This includes external flow such as around buildings, vehicles, people, or hydraulic structures. Wind tunnel and water tunnel testing is often used for these design efforts. Instrumented physical models can also examine internal flows, for the design of ductwork systems, pollution control equipment, food processing machines, and mixing vessels. Transparent flow models are used in this case to observe the detailed flow phenomenon. These models are scaled in terms of both geometry and important forces, for example, using Froude number or Reynolds number scaling (see Similitude). In the pre-computer era, the UK economy was modelled with the hydraulic model MONIAC, to predict for example the effect of tax rises on employment.
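As an illustration of such force-based scaling (again with illustrative numbers): keeping the Froude number V/√(gL) equal between a 1:25 scale model of a spillway and the full-size structure requires the model flow velocity to be reduced by a factor of √25 = 5, and the volumetric flow rate by a factor of 25^(5/2) = 3,125, since discharge scales as velocity times cross-sectional area.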
=== Conceptual model ===
A conceptual model is a theoretical representation of a system, e.g. a set of mathematical equations attempting to describe the workings of the atmosphere for the purpose of weather forecasting. It consists of concepts used to help understand or simulate a subject the model represents.
Abstract or conceptual models are central to philosophy of science, as almost every scientific theory effectively embeds some kind of model of the physical or human sphere. In some sense, a physical model "is always the reification of some conceptual model; the conceptual model is conceived ahead as the blueprint of the physical one", which is then constructed as conceived. Thus, the term refers to models that are formed after a conceptualization or generalization process.
=== Examples ===
Conceptual model (computer science), an agreed representation of entities and their relationships, to assist in developing software
Economic model, a theoretical construct representing economic processes
Language model, a probabilistic model of a natural language, used for speech recognition, language generation, and information retrieval
Large language models are artificial neural networks used for generative artificial intelligence (AI), e.g. ChatGPT
Mathematical model, a description of a system using mathematical concepts and language
Statistical model, a mathematical model that usually specifies the relationship between one or more random variables and other non-random variables
Model (CGI), a mathematical representation of any surface of an object in three dimensions via specialized software
Medical model, a proposed "set of procedures in which all doctors are trained"
Mental model, in psychology, an internal representation of external reality
Model (logic), a set along with a collection of finitary operations, and relations that are defined on it, satisfying a given collection of axioms
Model (MVC), information-representing component of a software, distinct from the user interface (the "view"), both linked by the "controller" component, in the context of the model–view–controller software design
Model act, a law drafted centrally to be disseminated and proposed for enactment in multiple independent legislatures
Standard model (disambiguation)
== Properties of models, according to general model theory ==
According to Herbert Stachowiak, a model is characterized by at least three properties:
1. Mapping
A model always is a model of something—it is an image or representation of some natural or artificial, existing or imagined original, where this original itself could be a model.
2. Reduction
In general, a model will not include all attributes that describe the original but only those that appear relevant to the model's creator or user.
3. Pragmatism
A model does not relate unambiguously to its original. It is intended to work as a replacement for the original
a) for certain subjects (for whom?)
b) within a certain time range (when?)
c) restricted to certain conceptual or physical actions (what for?).
For example, a street map is a model of the actual streets in a city (mapping), showing the course of the streets while leaving out, say, traffic signs and road markings (reduction), made for pedestrians and vehicle drivers for the purpose of finding one's way in the city (pragmatism).
Additional properties have been proposed, like extension and distortion as well as validity. The American philosopher Michael Weisberg differentiates between concrete and mathematical models and proposes computer simulations (computational models) as their own class of models.
== Uses of models ==
According to Bruce Edmonds, there are at least 5 general uses for models:
Prediction: reliably anticipating unknown data, including data within the domain of the training data (interpolation), and outside the domain (extrapolation)
Explanation: establishing plausible chains of causality by proposing mechanisms that can explain patterns seen in data
Theoretical exposition: discovering or proposing new hypotheses, or refuting existing hypotheses about the behaviour of the system being modelled
Description: representing important aspects of the system being modelled
Illustration: communicating an idea or explanation
== See also ==
== References ==
== External links ==
Media related to Physical models at Wikimedia Commons | Wikipedia/model |
The Standardmodell rifle (also known as Mauser Model 1924 or Mauser Model 1933) is a bolt-action rifle designed to chamber the 7.92×57mm Mauser cartridge. The rifle was developed in 1924 but entered full-scale production in 1933. Officially designed for export and German security guards, it was used by the paramilitary Sturmabteilung (SA) and Schutzstaffel (SS). Export variants were used in South America, Ethiopia, China and the Iberian Peninsula. The carbine version of this rifle was almost identical with the Karabiner 98k that became the standard German service rifle during World War II.
== Design ==
It was a derivative of the Gewehr 98 or Mauser Model 1898, produced in violation of the Treaty of Versailles. It combined features of the Karabiner 98AZ and Gewehr 98 versions. The barrel was only 600 mm (23.6 in) long, comparable to the barrel of the Karabiner 98AZ. The rifle had a new iron sight line, with a tangent rear sight graduated from 100 m (109 yd) to 2,000 m (2,187 yd) in 50 m (55 yd) increments. The rear sight element could be modified to match the trajectory of the standard 7.92×57mm Mauser S Patrone spitzer bullet or the heavier s.S. Patrone boat-tail spitzer bullet originally designed for aerial combat and long range machine gun use.
The first version of the gun was designed in 1924. It used the straight bolt handle and the bottom-mounted sling of the Gewehr 98. The rifle entered full-scale production in 1933 with a turned-down bolt and a Karabiner 98k type slot in the butt to attach the sling. The rifle was exported in 7×57mm Mauser, 7.65×53mm Mauser and 7.92×57mm Mauser. A carbine version, identical to the Karabiner 98k, was also produced.
== Service ==
The Standardmodell of 1924 was used by the SA and the SS and was exported to China and South America.
According to the manufacturer, the Model 1933 rifle was only sold to the Deutsche Reichspost, the German post office. The rifle was named Gewehr für Deutsches Reichspost (rifle of the German Post Office). Part of this production was actually purchased by Nazi organisations or by the Reichswehr. The Wehrmacht, through requisitions, might have used it during World War II.
Bolivia purchased the Standardmodell in the 1920s and used it in combat during the Chaco War. Its enemy, Paraguay, fielded Standardmodell rifles bought during the 1930s. The rifle was also ordered by Honduras.
The Standardmodell saw service in China. In the Chinese National Armament Standards Conference of 1932 it was decided that the Standardmodell was to be the standard-issue rifle of the National Revolutionary Army. Imports from Germany began in 1934, and production in Chinese arsenals began in 1935. The first 10,000 rifles were bought for the Chinese Tax Police. The rifle was first produced under the name "Type 24 Rifle", but was soon renamed to the "Chiang Kai-Shek rifle" after the Generalissimo. It was used during the Chinese Civil War and the Second Sino-Japanese War.
The Imperial Japanese Navy used the Standardmodell in the form of Chiang Kai-Shek rifles captured in China. The Japanese military also procured rifles from the producer in three contracts (many ended up in the IJN, perhaps due to ammunition supply difficulties or to the unwillingness of Imperial Japanese Army arsenals to supply the Navy with domestic rifles): 8,000 in 1938, 20,000 in 1939 and an unclear number in 1940.
The Ethiopian Empire bought 25,000 Model 1924 and Model 1933 rifles and carbines, and fielded them during the Second Italo-Ethiopian War.
The Buenos Aires Police also bought the Mauser Model 1933 in rifle and carbine configurations, the latter with a 550 millimetres (21.65 in) barrel. The Argentinean rifles and carbines differ from other Standardmodells by having an extended arm on the bolt release.
Both before and after the Spanish coup of July 1936, Spain bought Standardmodell rifles and carbines. The German Condor Legion fighting in the Spanish Civil War also used this rifle. Some of the Spanish rifles were rebarreled for the Spanish 7×57mm round. At the same time, Portugal ordered Model 1933s to modernize its military forces.
== Users ==
Argentina: 7.65mm cartridge
Bolivia: 7.65mm cartridge
Republic of China: 7.92mm and 7mm cartridges
Ethiopian Empire: 7.92mm cartridge
Weimar Republic: 7.92mm cartridge
Nazi Germany: 7.92mm cartridge
Honduras: 7mm cartridge
Japan: ex-Chinese 7.92mm cartridge
Paraguay: 7.65mm cartridge
Portugal: 7.92mm cartridge
Spain: 7.92mm and 7mm cartridges
== References ==
Ball, Robert W. D. (2011). Mauser Military Rifles of the World. Iola: Gun Digest Books. ISBN 9781440228926.
Ness, Leland; Shih, Bin (July 2016). Kangzhan: Guide to Chinese Ground Forces 1937–45. Helion & Company. ISBN 9781910294420.
Grant, Neil (20 Mar 2015). Mauser Military Rifles. Weapon 39. Osprey Publishing. ISBN 9781472805942.
Guillou, Luc (October 2011). "Le Mauser 98 DRP, précurseur du KAR.98K". Gazette des armes (in French). No. 435. pp. 34–38.
Shih, Bin (2018). China's Small Arms of the Second Sino-Japanese War (1937–1945). | Wikipedia/Standardmodell_rifle |
The standard solar model (SSM) is a mathematical model of the Sun as a spherical ball of gas (in varying states of ionisation, with the hydrogen in the deep interior being a completely ionised plasma). This stellar model, technically the spherically symmetric quasi-static model of a star, has stellar structure described by several differential equations derived from basic physical principles. The model is constrained by boundary conditions, namely the luminosity, radius, age and composition of the Sun, which are well determined. The age of the Sun cannot be measured directly; one way to estimate it is from the age of the oldest meteorites, and models of the evolution of the Solar System. The composition in the photosphere of the modern-day Sun, by mass, is 74.9% hydrogen and 23.8% helium. All heavier elements, called metals in astronomy, account for less than 2 percent of the mass. The SSM is used to test the validity of stellar evolution theory. In fact, the only way to determine the two free parameters of the stellar evolution model, the helium abundance and the mixing length parameter (used to model convection in the Sun), is to adjust the SSM to "fit" the observed Sun.
== A calibrated solar model ==
A star is considered to be at zero age (protostellar) when it is assumed to have a homogeneous composition and to be just beginning to derive most of its luminosity from nuclear reactions (so neglecting the period of contraction from a cloud of gas and dust). To obtain the SSM, a one solar mass (M☉) stellar model at zero age is evolved numerically to the age of the Sun. The abundance of elements in the zero age solar model is estimated from primordial meteorites. Along with this abundance information, a reasonable guess at the zero-age luminosity (such as the present-day Sun's luminosity) is then converted by an iterative procedure into the correct value for the model, and the temperature, pressure and density throughout the model are calculated by solving the equations of stellar structure numerically, assuming the star to be in a steady state. The model is then evolved numerically up to the age of the Sun. Any discrepancy from the measured values of the Sun's luminosity, surface abundances, etc. can then be used to refine the model. For example, since the Sun formed, some of the helium and heavy elements have settled out of the photosphere by diffusion. As a result, the Solar photosphere now contains about 87% as much helium and heavy elements as the protostellar photosphere had; the protostellar Solar photosphere was 71.1% hydrogen, 27.4% helium, and 1.5% metals. Modelling this settling of heavy elements by diffusion is required for a more accurate model.
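The calibration just described is, in effect, a two-parameter fitting problem: find the initial helium abundance Y and mixing-length parameter α for which the evolved model reproduces the present-day solar luminosity and radius. A minimal sketch of that outer loop is given below; evolve_to_solar_age() is a hypothetical stand-in for a full stellar-evolution code (here replaced by a crude toy response purely so the loop runs), and all numerical values are illustrative rather than taken from any published SSM calibration.

```python
# Sketch of SSM calibration: adjust the initial helium mass fraction Y and the
# mixing-length parameter alpha so that a 1-solar-mass model evolved to the
# age of the Sun reproduces the observed solar luminosity and radius.
from scipy.optimize import fsolve

L_SUN = 3.828e26   # W, observed solar luminosity
R_SUN = 6.957e8    # m, observed solar radius

def evolve_to_solar_age(Y, alpha):
    """Hypothetical stand-in for a full stellar-evolution code.
    Replaced here by a crude toy response (luminosity rises with Y,
    radius falls with alpha) so the calibration loop can be exercised."""
    L_model = L_SUN * (0.95 + 2.0 * (Y - 0.26))
    R_model = R_SUN * (1.05 - 0.03 * (alpha - 1.5))
    return L_model, R_model

def mismatch(params):
    Y, alpha = params
    L_model, R_model = evolve_to_solar_age(Y, alpha)
    # Relative differences from the observed Sun; calibration drives both to zero.
    return [L_model / L_SUN - 1.0, R_model / R_SUN - 1.0]

# Typical starting guesses: Y ~ 0.27, alpha ~ 1.8
Y_cal, alpha_cal = fsolve(mismatch, x0=[0.27, 1.8])
```

A production calibration would also match the surface metallicity and would wrap a full evolutionary sequence inside each function evaluation.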
== Numerical modelling of the stellar structure equations ==
The differential equations of stellar structure, such as the equation of hydrostatic equilibrium, are integrated numerically. The differential equations are approximated by difference equations. The star is imagined to be made up of spherically symmetric shells and the numerical integration carried out in finite steps making use of the equations of state, giving relationships for the pressure, the opacity and the energy generation rate in terms of the density, temperature and composition.
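As an illustration of the shell-by-shell finite-difference approach, the toy sketch below integrates just two of the structure equations (mass continuity and hydrostatic equilibrium) outward from the centre with a fixed step, using an ideal-gas equation of state at a fixed temperature. A real solar-structure code solves the full coupled set implicitly with adaptive meshing, so this shows only the discretisation, not the physics; all parameter values are illustrative.

```python
import numpy as np

G = 6.674e-11     # gravitational constant, SI
k_B = 1.381e-23   # Boltzmann constant
m_u = 1.661e-27   # atomic mass unit

def integrate_structure(rho_c, T_c, mu=0.61, dr=1.0e4, r_max=7.0e8):
    """Integrate dm/dr = 4*pi*r^2*rho and dP/dr = -G*m*rho/r^2 outward in
    steps dr, closing the system with an ideal-gas equation of state
    P = rho*k_B*T/(mu*m_u) at fixed temperature T_c (a crude isothermal toy)."""
    r, m = dr, 0.0
    rho = rho_c
    P = rho * k_B * T_c / (mu * m_u)
    while r < r_max and P > 0.0:
        m += 4.0 * np.pi * r**2 * rho * dr          # mass continuity
        P += -G * m * rho / r**2 * dr               # hydrostatic equilibrium
        rho = max(P * mu * m_u / (k_B * T_c), 0.0)  # equation of state
        r += dr
    return r, m   # radius and mass where the pressure drops to zero or r_max is reached
```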
== Evolution of the Sun ==
Nuclear reactions in the core of the Sun change its composition, by converting hydrogen nuclei into helium nuclei by the proton–proton chain and (to a lesser extent in the Sun than in more massive stars) the CNO cycle. This increases the mean molecular weight in the core of the Sun, which should lead to a decrease in pressure. This does not happen; instead, the core contracts. By the virial theorem half of the gravitational potential energy released by this contraction goes towards raising the temperature of the core, and the other half is radiated away. This increase in temperature also increases the pressure and restores the balance of hydrostatic equilibrium. The luminosity of the Sun is increased by the temperature rise, increasing the rate of nuclear reactions. The outer layers expand to compensate for the increased temperature and pressure gradients, so the radius also increases.
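The energy bookkeeping in the preceding paragraph is the standard virial-theorem result for an ideal monatomic gas, stated here explicitly for clarity:

{\displaystyle 2U+\Omega =0\quad \Rightarrow \quad \Delta U=-{\tfrac {1}{2}}\,\Delta \Omega ,\qquad \Delta E_{\text{radiated}}=-{\tfrac {1}{2}}\,\Delta \Omega ,}

where U is the internal (thermal) energy and Ω (negative) is the gravitational potential energy, so a contraction that releases gravitational energy |ΔΩ| heats the core by |ΔΩ|/2 and radiates the remaining |ΔΩ|/2.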
No star is completely static, but stars stay on the main sequence (burning hydrogen in the core) for long periods. In the case of the Sun, it has been on the main sequence for roughly 4.6 billion years, and will become a red giant in roughly 6.5 billion years for a total main sequence lifetime of roughly 11 billion (10^10) years. Thus the assumption of steady state is a very good approximation. For simplicity, the stellar structure equations are written without explicit time dependence, with the exception of the luminosity gradient equation:
{\displaystyle {\frac {dL}{dr}}=4\pi r^{2}\rho \left(\varepsilon -\varepsilon _{\nu }\right)}
Here L is the luminosity, ε is the nuclear energy generation rate per unit mass and εν is the luminosity due to neutrino emission (see below for the other quantities). The slow evolution of the Sun on the main sequence is then determined by the change in the nuclear species (principally hydrogen being consumed and helium being produced). The rates of the various nuclear reactions are estimated from particle physics experiments at high energies, which are extrapolated back to the lower energies of stellar interiors (the Sun burns hydrogen rather slowly). Historically, errors in the nuclear reaction rates have been one of the biggest sources of error in stellar modelling. Computers are employed to calculate the varying abundances (usually by mass fraction) of the nuclear species. A particular species will have a rate of production and a rate of destruction, so both are needed to calculate its abundance over time, at varying conditions of temperature and density. Since there are many nuclear species, a computerised reaction network is needed to keep track of how all the abundances vary together.
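For the simplest possible bookkeeping — hydrogen being converted to helium at a rate tied to the local energy generation — the abundance update in one shell might look like the sketch below. This is a toy two-species "network"; the constant E_H ≈ 6.3×10^14 J per kilogram of hydrogen fused is the standard ~0.7% mass-energy yield, and a real reaction network tracks many more species with temperature- and density-dependent rates.

```python
E_H = 6.3e14  # J released per kg of hydrogen fused into helium (~0.7% of m c^2)

def update_abundances(X, Y, eps, dt):
    """Advance hydrogen (X) and helium (Y) mass fractions in one shell over a
    time step dt [s], given the local nuclear energy generation rate eps [W/kg].
    Every joule per kilogram of nuclear energy corresponds to eps/E_H kilograms
    of hydrogen fused per kilogram of material per second."""
    dX = -eps / E_H * dt       # hydrogen consumed this step
    X_new = max(X + dX, 0.0)
    Y_new = Y + (X - X_new)    # mass conservation: what leaves X enters Y
    return X_new, Y_new
```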
According to the Vogt–Russell theorem, the mass and the composition structure throughout a star uniquely determine its radius, luminosity, and internal structure, as well as its subsequent evolution (though this "theorem" was only intended to apply to the slow, stable phases of stellar evolution and certainly does not apply to the transitions between stages and rapid evolutionary stages).
The information about the varying abundances of nuclear species over time, along with the equations of state, is sufficient for a numerical solution by taking sufficiently small time increments and using iteration to find the unique internal structure of the star at each stage.
== Purpose of the standard solar model ==
The SSM serves two purposes:
it provides estimates for the helium abundance and mixing length parameter by forcing the stellar model to have the correct luminosity and radius at the Sun's age,
it provides a way to evaluate more complex models with additional physics, such as rotation, magnetic fields and diffusion or improvements to the treatment of convection, such as modelling turbulence, and convective overshooting.
Like the Standard Model of particle physics and the standard cosmology model the SSM changes over time in response to relevant new theoretical or experimental physics discoveries.
== Energy transport in the Sun ==
The Sun has a radiative core and a convective outer envelope. In the core, the luminosity due to nuclear reactions is transmitted to outer layers principally by radiation. However, in the outer layers the temperature gradient is so great that radiation cannot transport enough energy. As a result, thermal convection occurs as thermal columns carry hot material to the surface (photosphere) of the Sun. Once the material cools off at the surface, it plunges back downward to the base of the convection zone, to receive more heat from the top of the radiative zone.
In a solar model, as described in stellar structure, one considers the density ρ(r), temperature T(r), total pressure (matter plus radiation) P(r), luminosity l(r) and energy generation rate per unit mass ε(r) in a spherical shell of a thickness dr at a distance r from the center of the star.
Radiative transport of energy is described by the radiative temperature gradient equation:
{\displaystyle {dT \over dr}=-{3\kappa \rho l \over 16\pi r^{2}\sigma T^{3}},}
where κ is the opacity of the matter, σ is the Stefan–Boltzmann constant, and the Boltzmann constant is set to one.
Convection is described using mixing length theory and the corresponding temperature gradient equation (for adiabatic convection) is:
{\displaystyle {dT \over dr}=\left(1-{1 \over \gamma }\right){T \over P}{dP \over dr},}
where γ = cp / cv is the adiabatic index, the ratio of specific heats in the gas. (For a fully ionized ideal gas, γ = 5/3.)
Near the base of the Sun's convection zone, the convection is adiabatic, but near the surface of the Sun, convection is not adiabatic.
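In practice, a stellar-structure code evaluates both temperature-gradient expressions above in every shell and adopts the convective (adiabatic) one wherever radiation alone would require a steeper gradient, which is the usual Schwarzschild-type stability test. The sketch below shows only that switch; the function and its arguments are illustrative and not taken from any specific solar-model code.

```python
import math

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def temperature_gradient(r, rho, T, P, dP_dr, l, kappa, gamma=5.0 / 3.0):
    """Return dT/dr for one shell: the radiative expression where it is
    shallower in magnitude than the adiabatic one, otherwise the adiabatic
    (convective) expression, i.e. a Schwarzschild-type switch between the
    two gradient equations given above. SI units throughout."""
    dTdr_rad = -3.0 * kappa * rho * l / (16.0 * math.pi * r**2 * SIGMA * T**3)
    dTdr_ad = (1.0 - 1.0 / gamma) * (T / P) * dP_dr  # dP_dr < 0 from hydrostatic equilibrium
    # Convection carries the energy wherever radiation alone would need a steeper gradient.
    return dTdr_rad if abs(dTdr_rad) < abs(dTdr_ad) else dTdr_ad
```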
== Simulations of near-surface convection ==
A more realistic description of the uppermost part of the convection zone is possible through detailed three-dimensional and time-dependent hydrodynamical simulations, taking into account radiative transfer in the atmosphere. Such simulations successfully reproduce the observed surface structure of solar granulation, as well as detailed profiles of lines in the solar radiative spectrum, without the use of parametrized models of turbulence. The simulations only cover a very small fraction of the solar radius, and are evidently far too time-consuming to be included in general solar modeling. Extrapolation of an averaged simulation through the adiabatic part of the convection zone by means of a model based on the mixing-length description, demonstrated that the adiabat predicted by the simulation was essentially consistent with the depth of the solar convection zone as determined from helioseismology. An extension of mixing-length theory, including effects of turbulent pressure and kinetic energy, based on numerical simulations of near-surface convection, has been developed.
This section is adapted from the Christensen-Dalsgaard review of helioseismology, Chapter IV.
== Equations of state ==
The numerical solution of the differential equations of stellar structure requires equations of state for the pressure, opacity and energy generation rate, as described in stellar structure, which relate these variables to the density, temperature and composition.
== Helioseismology ==
Helioseismology is the study of the wave oscillations in the Sun. Changes in the propagation of these waves through the Sun reveal inner structures and allow astrophysicists to develop extremely detailed profiles of the interior conditions of the Sun. In particular, the location of the convection zone in the outer layers of the Sun can be measured, and information about the core of the Sun provides a method, using the SSM, to calculate the age of the Sun, independently of the method of inferring the age of the Sun from that of the oldest meteorites. This is another example of how the SSM can be refined.
== Neutrino production ==
Hydrogen is fused into helium through several different interactions in the Sun. The vast majority of neutrinos are produced through the pp chain, a process in which four protons are combined to produce two protons, two neutrons, two positrons, and two electron neutrinos. Neutrinos are also produced by the CNO cycle, but that process is considerably less important in the Sun than in other stars.
Most of the neutrinos produced in the Sun come from the first step of the pp chain but their energy is so low (<0.425 MeV) they are very difficult to detect. A rare side branch of the pp chain produces the "boron-8" neutrinos with a maximum energy of roughly 15 MeV, and these are the easiest neutrinos to detect. A very rare interaction in the pp chain produces the "hep" neutrinos, the highest energy neutrinos predicted to be produced by the Sun. They are predicted to have a maximum energy of about 18 MeV.
All of the interactions described above produce neutrinos with a spectrum of energies. The electron capture of 7Be produces neutrinos at either roughly 0.862 MeV (~90%) or 0.384 MeV (~10%).
== Neutrino detection ==
The weakness of the neutrino's interactions with other particles means that most neutrinos produced in the core of the Sun can pass all the way through the Sun without being absorbed. It is possible, therefore, to observe the core of the Sun directly by detecting these neutrinos.
=== History ===
The first experiment to successfully detect cosmic neutrinos was Ray Davis's chlorine experiment, in which neutrinos were detected by observing the conversion of chlorine nuclei to radioactive argon in a large tank of perchloroethylene. This was a reaction channel expected for neutrinos, but since only the number of argon decays was counted, it did not give any directional information, such as where the neutrinos came from. The experiment found about 1/3 as many neutrinos as were predicted by the standard solar model of the time, and this problem became known as the solar neutrino problem.
While it is now known that the chlorine experiment detected neutrinos, some physicists at the time were suspicious of the experiment, mainly because they did not trust such radiochemical techniques. Unambiguous detection of solar neutrinos was provided by the Kamiokande-II experiment, a water Cherenkov detector with a low enough energy threshold to detect neutrinos through neutrino-electron elastic scattering. In the elastic scattering interaction the electrons coming out of the point of reaction strongly point in the direction that the neutrino was travelling, away from the Sun. This ability to "point back" at the Sun was the first conclusive evidence that the Sun is powered by nuclear interactions in the core. While the neutrinos observed in Kamiokande-II were clearly from the Sun, the rate of neutrino interactions was again suppressed compared to theory at the time. Even worse, the Kamiokande-II experiment measured about 1/2 the predicted flux, rather than the chlorine experiment's 1/3.
The solution to the solar neutrino problem was finally experimentally determined by the Sudbury Neutrino Observatory (SNO). The radiochemical experiments were only sensitive to electron neutrinos, and the signal in the water Cherenkov experiments was dominated by the electron neutrino signal. The SNO experiment, by contrast, had sensitivity to all three neutrino flavours. By simultaneously measuring the electron neutrino and total neutrino fluxes, the experiment demonstrated that the suppression was due to the MSW effect, the conversion of electron neutrinos from their pure flavour state into the second neutrino mass eigenstate as they passed through a resonance due to the changing density of the Sun. The resonance is energy dependent, and "turns on" near 2 MeV. The water Cherenkov detectors only detect neutrinos above about 5 MeV, while the radiochemical experiments were sensitive to lower energies (0.8 MeV for chlorine, 0.2 MeV for gallium), and this turned out to be the source of the difference in the observed neutrino rates at the two types of experiments.
=== Proton–proton chain ===
All neutrinos from the proton–proton chain reaction (PP neutrinos) have been detected except hep neutrinos (see the next section). Three techniques have been adopted: the radiochemical technique, used by Homestake, GALLEX, GNO and SAGE, allowed measurement of the neutrino flux above a minimum energy. The detector SNO used scattering on deuterium, which allowed the energy of the events to be measured, thereby identifying the single components of the predicted SSM neutrino emission. Finally, Kamiokande, Super-Kamiokande, SNO, Borexino and KamLAND used elastic scattering on electrons, which allows the measurement of the neutrino energy. Boron-8 neutrinos have been seen by Kamiokande, Super-Kamiokande, SNO, Borexino and KamLAND. Beryllium-7, pep, and PP neutrinos have been seen only by Borexino to date.
=== HEP neutrinos ===
The highest energy neutrinos have not yet been observed due to their small flux compared to the boron-8 neutrinos, so thus far only limits have been placed on the flux. No experiment yet has had enough sensitivity to observe the flux predicted by the SSM.
=== CNO cycle ===
Neutrinos from the CNO cycle of solar energy generation – i.e., the CNO neutrinos – are also expected to provide observable events below 1 MeV. They long went unobserved because of experimental noise (background), but ultra-pure scintillator detectors have the potential to probe the flux predicted by the SSM. Such a detection was first achieved by Borexino; the next scientific opportunities will come from SNO+ and, in the longer term, from LENA and JUNO, three detectors that will be larger but will use the same principles as Borexino.
The Borexino Collaboration has confirmed that the CNO cycle accounts for 1% of the energy generation within the Sun's core.
=== Future experiments ===
While radiochemical experiments have in some sense observed the pp and Be7 neutrinos they have measured only integral fluxes. The "holy grail" of solar neutrino experiments would detect the Be7 neutrinos with a detector that is sensitive to the individual neutrino energies. This experiment would test the MSW hypothesis by searching for the turn-on of the MSW effect. Some exotic models are still capable of explaining the solar neutrino deficit, so the observation of the MSW turn on would, in effect, finally solve the solar neutrino problem.
== Core temperature prediction ==
The flux of boron-8 neutrinos is highly sensitive to the temperature of the core of the Sun, \phi(\mathrm{^8B}) \propto T^{25}. For this reason, a precise measurement of the boron-8 neutrino flux can be used in the framework of the standard solar model as a measurement of the temperature of the core of the Sun. This estimate was performed by Fiorentini and Ricci after the first SNO results were published, and they obtained a temperature of T_{\text{sun}} = 15.7\times 10^{6}\ \mathrm{K} \pm 1\% from a determined neutrino flux of 5.2×10⁶ cm⁻² s⁻¹.
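Because of the steep T²⁵ dependence, a moderately precise flux measurement translates into a very tight constraint on the core temperature: the relative uncertainty in T is roughly 1/25 of the relative uncertainty in the flux. The short Python sketch below illustrates this arithmetic; the reference flux, reference temperature, and uncertainties are assumed values for illustration, not the published SNO or Fiorentini–Ricci numbers.

```python
# Illustrative sketch of the phi(8B) ∝ T^25 power law: inverting an assumed
# flux measurement into a core-temperature estimate and propagating its
# uncertainty. All reference values below are assumptions for illustration.
exponent = 25            # phi ∝ T^exponent
T_ref = 15.6e6           # assumed reference core temperature (K)
phi_ref = 5.0e6          # assumed 8B flux at T_ref (cm^-2 s^-1)

phi_measured = 5.2e6     # hypothetical measured flux (cm^-2 s^-1)
flux_rel_error = 0.09    # hypothetical 9% relative flux uncertainty

# Invert the power law: T = T_ref * (phi / phi_ref)^(1/exponent)
T_core = T_ref * (phi_measured / phi_ref) ** (1 / exponent)
# A relative flux error maps into a ~25x smaller relative temperature error
T_rel_error = flux_rel_error / exponent

print(f"inferred core temperature: {T_core:.3e} K (+/- {T_rel_error:.1%})")
```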
== Lithium depletion at the solar surface ==
Stellar models of the Sun's evolution predict the solar surface chemical abundances fairly well except for lithium (Li).
The surface abundance of Li on the Sun is 140 times less than the protosolar value (i.e. the primordial abundance at the Sun's birth), yet the temperature at the base of the surface convective zone is not hot enough to burn – and hence deplete – Li. This is known as the solar lithium problem. A large range of Li abundances is observed in solar-type stars of the same age, mass, and metallicity as the Sun. Observations of an unbiased sample of stars of this type, with or without observed planets (exoplanets), showed that the known planet-bearing stars have less than one per cent of the primordial Li abundance, while of the remainder half had ten times as much Li. It is hypothesised that the presence of planets may increase the amount of mixing and deepen the convective zone to such an extent that the Li can be burned. A possible mechanism for this is the idea that the planets affect the angular momentum evolution of the star, thus changing the rotation of the star relative to similar stars without planets; in the case of the Sun, slowing its rotation. More research is needed to discover where and when the fault in the modelling lies. Given the precision of helioseismic probes of the interior of the modern-day Sun, it is likely that the modelling of the protostellar Sun needs to be adjusted.
== See also ==
Protostar
== References ==
== External links ==
Description of the SSM by David Guenther
Solar Models: An Historical Overview by John N. Bahcall | Wikipedia/Standard_Solar_Model |
Physical cosmology is a branch of cosmology concerned with the study of cosmological models. A cosmological model, or simply cosmology, provides a description of the largest-scale structures and dynamics of the universe and allows study of fundamental questions about its origin, structure, evolution, and ultimate fate. Cosmology as a science originated with the Copernican principle, which implies that celestial bodies obey identical physical laws to those on Earth, and Newtonian mechanics, which first allowed those physical laws to be understood.
Physical cosmology, as it is now understood, began in 1915 with the development of Albert Einstein's general theory of relativity, followed by major observational discoveries in the 1920s: first, Edwin Hubble discovered that the universe contains a huge number of external galaxies beyond the Milky Way; then, work by Vesto Slipher and others showed that the universe is expanding. These advances made it possible to speculate about the origin of the universe, and allowed the establishment of the Big Bang theory, by Georges Lemaître, as the leading cosmological model. A few researchers still advocate a handful of alternative cosmologies; however, most cosmologists agree that the Big Bang theory best explains the observations.
Dramatic advances in observational cosmology since the 1990s, including the cosmic microwave background, distant supernovae and galaxy redshift surveys, have led to the development of a standard model of cosmology. This model requires the universe to contain large amounts of dark matter and dark energy whose nature is currently not well understood, but the model gives detailed predictions that are in excellent agreement with many diverse observations.
Cosmology draws heavily on the work of many disparate areas of research in theoretical and applied physics. Areas relevant to cosmology include particle physics experiments and theory, theoretical and observational astrophysics, general relativity, quantum mechanics, and plasma physics.
== Subject history ==
Modern cosmology developed along tandem tracks of theory and observation. In 1916, Albert Einstein published his theory of general relativity, which provided a unified description of gravity as a geometric property of space and time. At the time, Einstein believed in a static universe, but found that his original formulation of the theory did not permit it. This is because masses distributed throughout the universe gravitationally attract, and move toward each other over time. However, he realized that his equations permitted the introduction of a constant term which could counteract the attractive force of gravity on the cosmic scale. Einstein published his first paper on relativistic cosmology in 1917, in which he added this cosmological constant to his field equations in order to force them to model a static universe. The Einstein model describes a static universe; space is finite and unbounded (analogous to the surface of a sphere, which has a finite area but no edges). However, this so-called Einstein model is unstable to small perturbations—it will eventually start to expand or contract. It was later realized that Einstein's model was just one of a larger set of possibilities, all of which were consistent with general relativity and the cosmological principle. The cosmological solutions of general relativity were found by Alexander Friedmann in the early 1920s. His equations describe the Friedmann–Lemaître–Robertson–Walker universe, which may expand or contract, and whose geometry may be open, flat, or closed.
In the 1910s, Vesto Slipher (and later Carl Wilhelm Wirtz) interpreted the red shift of spiral nebulae as a Doppler shift that indicated they were receding from Earth. However, it is difficult to determine the distance to astronomical objects. One way is to compare the physical size of an object to its angular size, but a physical size must be assumed in order to do this. Another method is to measure the brightness of an object and assume an intrinsic luminosity, from which the distance may be determined using the inverse-square law. Due to the difficulty of using these methods, they did not realize that the nebulae were actually galaxies outside our own Milky Way, nor did they speculate about the cosmological implications. In 1927, the Belgian Roman Catholic priest Georges Lemaître independently derived the Friedmann–Lemaître–Robertson–Walker equations and proposed, on the basis of the recession of spiral nebulae, that the universe began with the "explosion" of a "primeval atom"—which was later called the Big Bang. In 1929, Edwin Hubble provided an observational basis for Lemaître's theory. Hubble showed that the spiral nebulae were galaxies by determining their distances using measurements of the brightness of Cepheid variable stars. He discovered a relationship between the redshift of a galaxy and its distance. He interpreted this as evidence that the galaxies are receding from Earth in every direction at speeds proportional to their distance from Earth. This fact is now known as Hubble's law, though the numerical factor Hubble found relating recessional velocity and distance was off by a factor of ten, due to not knowing about the types of Cepheid variables.
Given the cosmological principle, Hubble's law suggested that the universe was expanding. Two primary explanations were proposed for the expansion. One was Lemaître's Big Bang theory, advocated and developed by George Gamow. The other explanation was Fred Hoyle's steady state model in which new matter is created as the galaxies move away from each other. In this model, the universe is roughly the same at any point in time.
For a number of years, support for these theories was evenly divided. However, the observational evidence began to support the idea that the universe evolved from a hot dense state. The discovery of the cosmic microwave background in 1965 lent strong support to the Big Bang model, and since the precise measurements of the cosmic microwave background by the Cosmic Background Explorer in the early 1990s, few cosmologists have seriously proposed other theories of the origin and evolution of the cosmos. One consequence of this is that in standard general relativity, the universe began with a singularity, as demonstrated by Roger Penrose and Stephen Hawking in the 1960s.
An alternative view to extend the Big Bang model, suggesting the universe had no beginning or singularity and the age of the universe is infinite, has been presented.
In September 2023, astrophysicists questioned the overall current view of the universe, in the form of the Standard Model of Cosmology, based on the latest James Webb Space Telescope studies.
== Energy of the cosmos ==
The lightest chemical elements, primarily hydrogen and helium, were created during the Big Bang through the process of nucleosynthesis. In a sequence of stellar nucleosynthesis reactions, smaller atomic nuclei are then combined into larger atomic nuclei, ultimately forming stable iron group elements such as iron and nickel, which have the highest nuclear binding energies. The net process results in a later energy release, i.e. one occurring after the Big Bang. Such reactions of nuclear particles can lead to sudden energy releases from cataclysmic variable stars such as novae. Gravitational collapse of matter into black holes also powers the most energetic processes, generally seen in the nuclear regions of galaxies, forming quasars and active galaxies.
Cosmologists cannot explain all cosmic phenomena exactly, such as those related to the accelerating expansion of the universe, using conventional forms of energy. Instead, cosmologists propose a new form of energy called dark energy that permeates all space. One hypothesis is that dark energy is just the vacuum energy, a component of empty space that is associated with the virtual particles that exist due to the uncertainty principle.
There is no clear way to define the total energy in the universe using the most widely accepted theory of gravity, general relativity. Therefore, it remains controversial whether the total energy is conserved in an expanding universe. For instance, each photon that travels through intergalactic space loses energy due to the redshift effect. This energy is not transferred to any other system, so seems to be permanently lost. On the other hand, some cosmologists insist that energy is conserved in some sense, in accordance with the law of conservation of energy.
Different forms of energy may dominate the cosmos—relativistic particles which are referred to as radiation, or non-relativistic particles referred to as matter. Relativistic particles are particles whose rest mass is zero or negligible compared to their kinetic energy, and so move at the speed of light or very close to it; non-relativistic particles have much higher rest mass than their energy and so move much slower than the speed of light.
As the universe expands, both matter and radiation become diluted. However, the energy densities of radiation and matter dilute at different rates. As a particular volume expands, mass-energy density is changed only by the increase in volume, but the energy density of radiation is changed both by the increase in volume and by the increase in the wavelength of the photons that make it up. Thus the energy of radiation becomes a smaller part of the universe's total energy than that of matter as it expands. The very early universe is said to have been 'radiation dominated' and radiation controlled the deceleration of expansion. Later, as the average energy per photon becomes roughly 10 eV and lower, matter dictates the rate of deceleration and the universe is said to be 'matter dominated'. The intermediate case is not treated well analytically. As the expansion of the universe continues, matter dilutes even further and the cosmological constant becomes dominant, leading to an acceleration in the universe's expansion.
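The dilution argument can be made concrete with a minimal sketch (in Python) that scales each component with the scale factor a (a = 1 today): matter as a⁻³, radiation as a⁻⁴, and a cosmological constant as constant. The present-day density fractions used below are assumed, round Lambda-CDM-like values rather than measured figures.

```python
# Sketch of how the energy densities of radiation, matter, and a cosmological
# constant scale with the scale factor a (a = 1 today). The present-day
# density fractions are assumed, round Lambda-CDM-like values.
omega_radiation = 1e-4   # assumed
omega_matter = 0.3       # assumed
omega_lambda = 0.7       # assumed

def densities(a: float) -> dict:
    """Relative energy densities at scale factor a."""
    return {
        "radiation": omega_radiation / a ** 4,  # diluted by volume and redshift
        "matter": omega_matter / a ** 3,        # diluted by volume only
        "cosmological constant": omega_lambda,  # constant energy density
    }

for a in (1e-4, 1e-2, 1.0, 10.0):
    rho = densities(a)
    dominant = max(rho, key=rho.get)
    print(f"a = {a:g}: dominant component is {dominant}")
```

With these assumed fractions the early universe comes out radiation dominated, later epochs matter dominated, and the present and future epochs dominated by the cosmological constant, matching the qualitative sequence described above.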
== History of the universe ==
The history of the universe is a central issue in cosmology. The history of the universe is divided into different periods called epochs, according to the dominant forces and processes in each period. The standard cosmological model is known as the Lambda-CDM model.
=== Equations of motion ===
Within the standard cosmological model, the equations of motion governing the universe as a whole are derived from general relativity with a small, positive cosmological constant. The solution is an expanding universe; due to this expansion, the radiation and matter in the universe cool and become diluted. At first, the expansion is slowed down by gravitation attracting the radiation and matter in the universe. However, as these become diluted, the cosmological constant becomes more dominant and the expansion of the universe starts to accelerate rather than decelerate. In our universe this happened billions of years ago.
=== Particle physics in cosmology ===
During the earliest moments of the universe, the average energy density was very high, making knowledge of particle physics critical to understanding this environment. Hence, scattering processes and decay of unstable elementary particles are important for cosmological models of this period.
As a rule of thumb, a scattering or a decay process is cosmologically important in a certain epoch if the time scale describing that process is smaller than, or comparable to, the time scale of the expansion of the universe. The time scale that describes the expansion of the universe is 1/H, with H being the Hubble parameter, which varies with time. The expansion timescale 1/H is roughly equal to the age of the universe at each point in time.
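As a rough numerical illustration of this timescale, the sketch below (in Python) converts an assumed present-day Hubble parameter into the corresponding expansion timescale 1/H₀; the value of H₀ used here is an illustrative assumption, not a quoted measurement.

```python
# Sketch: the expansion timescale 1/H for an assumed present-day Hubble
# parameter, expressed in gigayears.
KM_PER_MPC = 3.0857e19       # kilometres per megaparsec
SECONDS_PER_GYR = 3.156e16   # seconds per gigayear

H0_km_s_Mpc = 70.0                        # assumed Hubble parameter (km/s/Mpc)
H0_per_second = H0_km_s_Mpc / KM_PER_MPC  # convert to s^-1

hubble_time_gyr = 1.0 / H0_per_second / SECONDS_PER_GYR
print(f"1/H0 ≈ {hubble_time_gyr:.1f} Gyr")  # of order the age of the universe
```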
=== Timeline of the Big Bang ===
Observations suggest that the universe began around 13.8 billion years ago. Since then, the evolution of the universe has passed through three phases. The very early universe, which is still poorly understood, was the split second in which the universe was so hot that particles had energies higher than those currently accessible in particle accelerators on Earth. Therefore, while the basic features of this epoch have been worked out in the Big Bang theory, the details are largely based on educated guesses.
Following this, in the early universe, the evolution of the universe proceeded according to known high energy physics. This is when the first protons, electrons and neutrons formed, then nuclei and finally atoms. With the formation of neutral hydrogen, the cosmic microwave background was emitted. Finally, the epoch of structure formation began, when matter started to aggregate into the first stars and quasars, and ultimately galaxies, clusters of galaxies and superclusters formed. The future of the universe is not yet firmly known, but according to the ΛCDM model it will continue expanding forever.
== Areas of study ==
Below, some of the most active areas of inquiry in cosmology are described, in roughly chronological order. This does not include all of the Big Bang cosmology, which is presented in Timeline of the Big Bang.
=== Very early universe ===
The early, hot universe appears to be well explained by the Big Bang from roughly 10−33 seconds onwards, but there are several problems. One is that there is no compelling reason, using current particle physics, for the universe to be flat, homogeneous, and isotropic (see the cosmological principle). Moreover, grand unified theories of particle physics suggest that there should be magnetic monopoles in the universe, which have not been found. These problems are resolved by a brief period of cosmic inflation, which drives the universe to flatness, smooths out anisotropies and inhomogeneities to the observed level, and exponentially dilutes the monopoles. The physical model behind cosmic inflation is extremely simple, but it has not yet been confirmed by particle physics, and there are difficult problems reconciling inflation and quantum field theory. Some cosmologists think that string theory and brane cosmology will provide an alternative to inflation.
Another major problem in cosmology is what caused the universe to contain far more matter than antimatter. Cosmologists can observationally deduce that the universe is not split into regions of matter and antimatter. If it were, there would be X-rays and gamma rays produced as a result of annihilation, but this is not observed. Therefore, some process in the early universe must have created a small excess of matter over antimatter, and this (currently not understood) process is called baryogenesis. Three required conditions for baryogenesis were derived by Andrei Sakharov in 1967; they require a violation of the particle physics symmetry called CP-symmetry between matter and antimatter. However, particle accelerators measure too small a violation of CP-symmetry to account for the baryon asymmetry. Cosmologists and particle physicists look for additional violations of the CP-symmetry in the early universe that might account for the baryon asymmetry.
Both the problems of baryogenesis and cosmic inflation are very closely related to particle physics, and their resolution might come from high energy theory and experiment, rather than through observations of the universe.
=== Big Bang nucleosynthesis ===
Big Bang nucleosynthesis is the theory of the formation of the elements in the early universe. It finished when the universe was about three minutes old and its temperature dropped below that at which nuclear fusion could occur. Big Bang nucleosynthesis had a brief period during which it could operate, so only the very lightest elements were produced. Starting from hydrogen ions (protons), it principally produced deuterium, helium-4, and lithium. Other elements were produced in only trace abundances. The basic theory of nucleosynthesis was developed in 1948 by George Gamow, Ralph Asher Alpher, and Robert Herman. It was used for many years as a probe of physics at the time of the Big Bang, as the theory of Big Bang nucleosynthesis connects the abundances of primordial light elements with the features of the early universe. Specifically, it can be used to test the equivalence principle, to probe dark matter, and test neutrino physics. Some cosmologists have proposed that Big Bang nucleosynthesis suggests there is a fourth "sterile" species of neutrino.
==== Standard model of Big Bang cosmology ====
The ΛCDM (Lambda cold dark matter) or Lambda-CDM model is a parametrization of the Big Bang cosmological model in which the universe contains a cosmological constant, denoted by Lambda (Greek Λ), associated with dark energy, and cold dark matter (abbreviated CDM). It is frequently referred to as the standard model of Big Bang cosmology.
=== Cosmic microwave background ===
The cosmic microwave background is radiation left over from decoupling after the epoch of recombination when neutral atoms first formed. At this point, radiation produced in the Big Bang stopped Thomson scattering from charged ions. The radiation, first observed in 1965 by Arno Penzias and Robert Woodrow Wilson, has a perfect thermal black-body spectrum. It has a temperature of 2.7 kelvins today and is isotropic to one part in 10⁵. Cosmological perturbation theory, which describes the evolution of slight inhomogeneities in the early universe, has allowed cosmologists to precisely calculate the angular power spectrum of the radiation, and it has been measured by the recent satellite experiments (COBE and WMAP) and many ground and balloon-based experiments (such as Degree Angular Scale Interferometer, Cosmic Background Imager, and Boomerang). One of the goals of these efforts is to measure the basic parameters of the Lambda-CDM model with increasing accuracy, as well as to test the predictions of the Big Bang model and look for new physics. The results of measurements made by WMAP, for example, have placed limits on the neutrino masses.
Newer experiments, such as QUIET and the Atacama Cosmology Telescope, are trying to measure the polarization of the cosmic microwave background. These measurements are expected to provide further confirmation of the theory as well as information about cosmic inflation, and the so-called secondary anisotropies, such as the Sunyaev-Zel'dovich effect and Sachs-Wolfe effect, which are caused by interaction between galaxies and clusters with the cosmic microwave background.
On 17 March 2014, astronomers of the BICEP2 Collaboration announced the apparent detection of B-mode polarization of the CMB, considered to be evidence of primordial gravitational waves that are predicted by the theory of inflation to occur during the earliest phase of the Big Bang. However, later that year the Planck collaboration provided a more accurate measurement of cosmic dust, concluding that the B-mode signal from dust is the same strength as that reported from BICEP2. On 30 January 2015, a joint analysis of BICEP2 and Planck data was published and the European Space Agency announced that the signal can be entirely attributed to interstellar dust in the Milky Way.
=== Formation and evolution of large-scale structure ===
Understanding the formation and evolution of the largest and earliest structures (i.e., quasars, galaxies, clusters and superclusters) is one of the largest efforts in cosmology. Cosmologists study a model of hierarchical structure formation in which structures form from the bottom up, with smaller objects forming first, while the largest objects, such as superclusters, are still assembling. One way to study structure in the universe is to survey the visible galaxies, in order to construct a three-dimensional picture of the galaxies in the universe and measure the matter power spectrum. This is the approach of the Sloan Digital Sky Survey and the 2dF Galaxy Redshift Survey.
Another tool for understanding structure formation is simulations, which cosmologists use to study the gravitational aggregation of matter in the universe, as it clusters into filaments, superclusters and voids. Most simulations contain only non-baryonic cold dark matter, which should suffice to understand the universe on the largest scales, as there is much more dark matter in the universe than visible, baryonic matter. More advanced simulations are starting to include baryons and study the formation of individual galaxies. Cosmologists study these simulations to see if they agree with the galaxy surveys, and to understand any discrepancy.
Other, complementary observations to measure the distribution of matter in the distant universe and to probe reionization include:
The Lyman-alpha forest, which allows cosmologists to measure the distribution of neutral atomic hydrogen gas in the early universe, by measuring the absorption of light from distant quasars by the gas.
The 21-centimeter absorption line of neutral atomic hydrogen also provides a sensitive test of cosmology.
Weak lensing, the distortion of a distant image by gravitational lensing due to dark matter.
These will help cosmologists settle the question of when and how structure formed in the universe.
=== Dark matter ===
Evidence from Big Bang nucleosynthesis, the cosmic microwave background, structure formation, and galaxy rotation curves suggests that about 23% of the mass of the universe consists of non-baryonic dark matter, whereas only 4% consists of visible, baryonic matter. The gravitational effects of dark matter are well understood, as it behaves like a cold, non-radiative fluid that forms haloes around galaxies. Dark matter has never been detected in the laboratory, and the particle physics nature of dark matter remains completely unknown. Without observational constraints, there are a number of candidates, such as a stable supersymmetric particle, a weakly interacting massive particle, a gravitationally-interacting massive particle, an axion, and a massive compact halo object. Alternatives to the dark matter hypothesis include a modification of gravity at small accelerations (MOND) or an effect from brane cosmology. TeVeS is a version of MOND that can explain gravitational lensing.
=== Dark energy ===
If the universe is flat, there must be an additional component making up 73% (in addition to the 23% dark matter and 4% baryons) of the energy density of the universe. This is called dark energy. In order not to interfere with Big Bang nucleosynthesis and the cosmic microwave background, it must not cluster in haloes like baryons and dark matter. There is strong observational evidence for dark energy, as the total energy density of the universe is known through constraints on the flatness of the universe, but the amount of clustering matter is tightly measured, and is much less than this. The case for dark energy was strengthened in 1999, when measurements demonstrated that the expansion of the universe has begun to gradually accelerate.
Apart from its density and its clustering properties, nothing is known about dark energy. Quantum field theory predicts a cosmological constant (CC) much like dark energy, but 120 orders of magnitude larger than that observed. Steven Weinberg and a number of string theorists (see string landscape) have invoked the 'weak anthropic principle': i.e. the reason that physicists observe a universe with such a small cosmological constant is that no physicists (or any life) could exist in a universe with a larger cosmological constant. Many cosmologists find this an unsatisfying explanation: perhaps because while the weak anthropic principle is self-evident (given that living observers exist, there must be at least one universe with a cosmological constant (CC) which allows for life to exist) it does not attempt to explain the context of that universe. For example, the weak anthropic principle alone does not distinguish between:
Only one universe will ever exist and there is some underlying principle that constrains the CC to the value we observe.
Only one universe will ever exist and although there is no underlying principle fixing the CC, we got lucky.
Lots of universes exist (simultaneously or serially) with a range of CC values, and of course ours is one of the life-supporting ones.
Other possible explanations for dark energy include quintessence or a modification of gravity on the largest scales. The effect on cosmology of the dark energy that these models describe is given by the dark energy's equation of state, which varies depending upon the theory. The nature of dark energy is one of the most challenging problems in cosmology.
A better understanding of dark energy is likely to solve the problem of the ultimate fate of the universe. In the current cosmological epoch, the accelerated expansion due to dark energy is preventing structures larger than superclusters from forming. It is not known whether the acceleration will continue indefinitely, perhaps even increasing until a big rip, or whether it will eventually reverse, lead to a Big Freeze, or follow some other scenario.
=== Gravitational waves ===
Gravitational waves are ripples in the curvature of spacetime that propagate as waves at the speed of light, generated in certain gravitational interactions that propagate outward from their source. Gravitational-wave astronomy is an emerging branch of observational astronomy which aims to use gravitational waves to collect observational data about sources of detectable gravitational waves such as binary star systems composed of white dwarfs, neutron stars, and black holes; and events such as supernovae, and the formation of the early universe shortly after the Big Bang.
In 2016, the LIGO Scientific Collaboration and Virgo Collaboration teams announced that they had made the first observation of gravitational waves, originating from a pair of merging black holes using the Advanced LIGO detectors. On 15 June 2016, a second detection of gravitational waves from coalescing black holes was announced. Besides LIGO, many other gravitational-wave observatories (detectors) are under construction.
=== Other areas of inquiry ===
Cosmologists also study:
Whether primordial black holes were formed in our universe, and what happened to them.
Detection of cosmic rays with energies above the GZK cutoff, and whether it signals a failure of special relativity at high energies.
The equivalence principle, whether or not Einstein's general theory of relativity is the correct theory of gravitation, and if the fundamental laws of physics are the same everywhere in the universe.
Biophysical cosmology: a type of physical cosmology that studies life as an inherent part of the physical universe, stressing that life is inherent to the universe and therefore expected to be frequent.
== See also ==
== References ==
== Further reading ==
=== Popular ===
Greene, Brian (2005). The Fabric of the Cosmos. Penguin Books Ltd. ISBN 978-0-14-101111-0.
Guth, Alan (1997). The Inflationary Universe: The Quest for a New Theory of Cosmic Origins. Random House. ISBN 978-0-224-04448-6.
Hawking, Stephen W. (1988). A Brief History of Time: From the Big Bang to Black Holes. Bantam Books, Inc. ISBN 978-0-553-38016-3.
Hawking, Stephen W. (2001). The Universe in a Nutshell. Bantam Books, Inc. ISBN 978-0-553-80202-3.
Ostriker, Jeremiah P.; Mitton, Simon (2013). Heart of Darkness: Unraveling the mysteries of the invisible Universe. Princeton, NJ: Princeton University Press. ISBN 978-0-691-13430-7.
Singh, Simon (2005). Big Bang: The Origin of the Universe. Fourth Estate. Bibcode:2004biba.book.....S. ISBN 978-0-00-716221-5.
Weinberg, Steven (1993) [1978]. The First Three Minutes. Basic Books. ISBN 978-0-465-02437-7.
=== Textbooks ===
Cheng, Ta-Pei (2005). Relativity, Gravitation and Cosmology: a Basic Introduction. Oxford and New York: Oxford University Press. ISBN 978-0-19-852957-6. Introductory cosmology and general relativity without the full tensor apparatus, deferred until the last part of the book.
Baumann, Daniel (2022). Cosmology. Cambridge: Cambridge University Press. ISBN 978-0-19-852957-6. Modern introduction to cosmology covering the homogeneous and inhomogeneous universe as well as inflation and the CMB.
Dodelson, Scott (2003). Modern Cosmology. Academic Press. ISBN 978-0-12-219141-1. An introductory text, released slightly before the WMAP results.
Gal-Or, Benjamin (1987) [1981]. Cosmology, Physics and Philosophy. Springer Verlag. ISBN 0-387-90581-2.
Grøn, Øyvind; Hervik, Sigbjørn (2007). Einstein's General Theory of Relativity with Modern Applications in Cosmology. New York: Springer. ISBN 978-0-387-69199-2.
Harrison, Edward (2000). Cosmology: the science of the universe. Cambridge University Press. ISBN 978-0-521-66148-5. For undergraduates; mathematically gentle with a strong historical focus.
Kutner, Marc (2003). Astronomy: A Physical Perspective. Cambridge University Press. ISBN 978-0-521-52927-3. An introductory astronomy text.
Kolb, Edward; Michael Turner (1988). The Early Universe. Addison-Wesley. ISBN 978-0-201-11604-5. The classic reference for researchers.
Liddle, Andrew (2003). An Introduction to Modern Cosmology. John Wiley. ISBN 978-0-470-84835-7. Cosmology without general relativity.
Liddle, Andrew; David Lyth (2000). Cosmological Inflation and Large-Scale Structure. Cambridge. ISBN 978-0-521-57598-0. An introduction to cosmology with a thorough discussion of inflation.
Mukhanov, Viatcheslav (2005). Physical Foundations of Cosmology. Cambridge University Press. ISBN 978-0-521-56398-7.
Padmanabhan, T. (1993). Structure formation in the universe. Cambridge University Press. ISBN 978-0-521-42486-8. Discusses the formation of large-scale structures in detail.
Peacock, John (1998). Cosmological Physics. Cambridge University Press. ISBN 978-0-521-42270-3. An introduction including more on general relativity and quantum field theory than most.
Peebles, P. J. E. (1993). Principles of Physical Cosmology. Princeton University Press. ISBN 978-0-691-01933-8. Strong historical focus.
Peebles, P. J. E. (1980). The Large-Scale Structure of the Universe. Princeton University Press. ISBN 978-0-691-08240-0. The classic work on large-scale structure and correlation functions.
Rees, Martin (2002). New Perspectives in Astrophysical Cosmology. Cambridge University Press. ISBN 978-0-521-64544-7.
Weinberg, Steven (1971). Gravitation and Cosmology. John Wiley. ISBN 978-0-471-92567-5. A standard reference for the mathematical formalism.
Weinberg, Steven (2008). Cosmology. Oxford University Press. ISBN 978-0-19-852682-7.
== External links ==
=== From groups ===
Cambridge Cosmology – from Cambridge University (public home page)
Cosmology 101 – from the NASA WMAP group
Center for Cosmological Physics. University of Chicago, Chicago, Illinois
Origins, Nova Online – Provided by PBS
=== From individuals ===
Gale, George, "Cosmology: Methodological Debates in the 1930s and 1940s", The Stanford Encyclopedia of Philosophy, Edward N. Zalta (ed.)
Madore, Barry F., "Level 5 : A Knowledgebase for Extragalactic Astronomy and Cosmology". Caltech and Carnegie. Pasadena, California.
Tyler, Pat, and Newman, Phil, "Beyond Einstein". Laboratory for High Energy Astrophysics (LHEA) NASA Goddard Space Flight Center.
Wright, Ned. "Cosmology tutorial and FAQ". Division of Astronomy & Astrophysics, UCLA.
Musser, George (February 2004). "Four Keys to Cosmology". Scientific American. Retrieved 22 March 2015.
Burgess, Cliff; Quevedo, Fernando (November 2007). "The Great Cosmic Roller-Coaster Ride". Scientific American (print). pp. 52–59. (subtitle) Could cosmic inflation be a sign that our universe is embedded in a far vaster realm? | Wikipedia/Cosmological_model |
In cryptography the standard model is the model of computation in which the adversary is only limited by the amount of time and computational power available. Other names used are bare model and plain model.
Cryptographic schemes are usually based on complexity assumptions, which state that some problems, such as factorization, cannot be solved in polynomial time. Schemes that can be proven secure using only complexity assumptions are said to be secure in the standard model. Security proofs are notoriously difficult to achieve in the standard model, so in many proofs, cryptographic primitives are replaced by idealized versions. The most common example of this technique, known as the random oracle model, involves replacing a cryptographic hash function with a genuinely random function. Another example is the generic group model, where the adversary is given access to a randomly chosen encoding of a group, instead of the finite field or elliptic curve groups used in practice.
Other models used invoke trusted third parties to perform some task without cheating; for example, the public key infrastructure (PKI) model requires a certificate authority, which if it were dishonest, could produce fake certificates and use them to forge signatures, or mount a man in the middle attack to read encrypted messages. Other examples of this type are the common random string model, where it is assumed that all parties have access to some string chosen uniformly at random, and its generalization, the common reference string model, where a string is chosen according to some other probability distribution. These models are often used for non-interactive zero-knowledge proofs (NIZK). In some applications, such as the Dolev–Dwork–Naor encryption scheme, it makes sense for a particular party to generate the common reference string, while in other applications, the common reference string must be generated by a trusted third party. Collectively, these models are referred to as models with special setup assumptions.
== References == | Wikipedia/Standard_model_(cryptography) |
In set theory, a standard model for a theory T is a model M for T where the membership relation ∈M is the same as the membership relation ∈ of a set theoretical universe V (restricted to the domain of M). In other words, M is a substructure of V. A standard model M that satisfies the additional transitivity condition that x ∈ y ∈ M implies x ∈ M is a standard transitive model (or simply a transitive model).
Usually, when one talks about a model M of set theory, it is assumed that M is a set model, i.e. the domain of M is a set in V. If the domain of M is a proper class, then M is a class model. An inner model is necessarily a class model.
== References == | Wikipedia/Standard_model_(set_theory) |
The Solow–Swan model or exogenous growth model is an economic model of long-run economic growth. It attempts to explain long-run economic growth by looking at capital accumulation, labor or population growth, and increases in productivity largely driven by technological progress. At its core, it is an aggregate production function, often specified to be of Cobb–Douglas type, which enables the model "to make contact with microeconomics". The model was developed independently by Robert Solow and Trevor Swan in 1956, and superseded the Keynesian Harrod–Domar model.
Mathematically, the Solow–Swan model is a nonlinear system consisting of a single ordinary differential equation that models the evolution of the per capita stock of capital. Due to its particularly attractive mathematical characteristics, Solow–Swan proved to be a convenient starting point for various extensions. For instance, in 1965, David Cass and Tjalling Koopmans integrated Frank Ramsey's analysis of consumer optimization, thereby endogenizing the saving rate, to create what is now known as the Ramsey–Cass–Koopmans model.
== Background ==
The Solow–Swan model was an extension to the 1946 Harrod–Domar model that dropped the restrictive assumption that only capital contributes to growth (so long as there is sufficient labor to use all capital). Important contributions to the model came from the work done by Solow and by Swan in 1956, who independently developed relatively simple growth models. Solow's model fitted available data on US economic growth with some success. In 1987 Solow was awarded the Nobel Prize in Economics for his work. Today, economists use Solow's sources-of-growth accounting to estimate the separate effects on economic growth of technological change, capital, and labor.
The Solow model is also one of the most widely used models in economics to explain economic growth. Basically, it asserts that only growth in total factor productivity (TFP) can lead to limitless increases in the standard of living in a country.
=== Extension to the Harrod–Domar model ===
Solow extended the Harrod–Domar model by adding labor as a factor of production and capital-output ratios that are not fixed as they are in the Harrod–Domar model. These refinements allow increasing capital intensity to be distinguished from technological progress. Solow sees the fixed proportions production function as a "crucial assumption" behind the instability results in the Harrod–Domar model. His own work expands upon this by exploring the implications of alternative specifications, namely the Cobb–Douglas and the more general constant elasticity of substitution (CES). Although this has become the canonical and celebrated story in the history of economics, featured in many economic textbooks, a recent reappraisal of Harrod's work has contested it. One central criticism is that Harrod's original piece was neither mainly concerned with economic growth nor did he explicitly use a fixed proportions production function.
=== Long-run implications ===
A standard Solow model predicts that in the long run, economies converge to their balanced growth equilibrium and that permanent growth of per capita income is achievable only through technological progress. Both shifts in saving and in population growth cause only level effects in the long-run (i.e. in the absolute value of real income per capita).
An interesting implication of Solow's model is that poor countries should grow faster and eventually catch-up to richer countries. This convergence could be explained by:
Lags in the diffusion of knowledge. Differences in real income might shrink as poor countries receive better technology and information;
Efficient allocation of international capital flows, since the rate of return on capital should be higher in poorer countries. In practice, this is seldom observed and is known as Lucas' paradox;
A mathematical implication of the model (assuming poor countries have not yet reached their steady state).
Baumol attempted to verify this empirically and found a very strong correlation between a country's output growth over a long period of time (1870 to 1979) and its initial wealth. His findings were later contested by DeLong, who claimed that both the non-randomness of the sampled countries, and potential for significant measurement errors for estimates of real income per capita in 1870, biased Baumol's findings. DeLong concludes that there is little evidence to support the convergence theory.
=== Assumptions ===
The key assumption of the Solow–Swan growth model is that capital is subject to diminishing returns in a closed economy.
Given a fixed stock of labor, the impact on output of the last unit of capital accumulated will always be less than the one before.
Assuming for simplicity no technological progress or labor force growth, diminishing returns implies that at some point the amount of new capital produced is only just enough to make up for the amount of existing capital lost due to depreciation.[1] At this point, because of the assumptions of no technological progress or labor force growth, the economy ceases to grow.
Assuming non-zero rates of labor growth complicates matters somewhat, but the basic logic still applies[2] – in the short-run, the rate of growth slows as diminishing returns take effect and the economy converges to a constant "steady-state" rate of growth (that is, no economic growth per-capita).
Including non-zero technological progress is very similar to the assumption of non-zero workforce growth, in terms of "effective labor": a new steady state is reached with constant output per worker-hour required for a unit of output. However, in this case, per-capita output grows at the rate of technological progress in the "steady-state"[3] (that is, the rate of productivity growth).
=== Variations in the effects of productivity ===
In the Solow–Swan model the unexplained change in the growth of output after accounting for the effect of capital accumulation is called the Solow residual. This residual measures the exogenous increase in total factor productivity (TFP) during a particular time period. The increase in TFP is often attributed entirely to technological progress, but it also includes any permanent improvement in the efficiency with which factors of production are combined over time. Implicitly TFP growth includes any permanent productivity improvements that result from improved management practices in the private or public sectors of the economy. Paradoxically, even though TFP growth is exogenous in the model, it cannot be observed, so it can only be estimated in conjunction with the simultaneous estimate of the effect of capital accumulation on growth during a particular time period.
The model can be reformulated in slightly different ways using different productivity assumptions, or different measurement metrics:
Average Labor Productivity (ALP) is economic output per labor hour.
Multifactor productivity (MFP) is output divided by a weighted average of capital and labor inputs. The weights used are usually based on the aggregate input shares either factor earns. This ratio is often quoted as: 33% return to capital and 67% return to labor (in Western nations).
In a growing economy, capital is accumulated faster than people are born, so the denominator in the growth function under the MFP calculation is growing faster than in the ALP calculation. Hence, MFP growth is almost always lower than ALP growth. (Therefore, measuring in ALP terms increases the apparent capital deepening effect.) MFP is measured by the "Solow residual", not ALP.
== Mathematics of the model ==
The textbook Solow–Swan model is set in a continuous-time world with no government or international trade. A single good (output) is produced using two factors of production, labor (L) and capital (K), in an aggregate production function that satisfies the Inada conditions, which imply that the elasticity of substitution must be asymptotically equal to one:

Y(t) = K(t)^{\alpha} \, (A(t)L(t))^{1-\alpha}
where t denotes time, 0 < α < 1 is the elasticity of output with respect to capital, and Y(t) represents total production. A refers to labor-augmenting technology or "knowledge", thus AL represents effective labor. All factors of production are fully employed, and initial values A(0), K(0), and L(0) are given. The number of workers, i.e. labor, as well as the level of technology grow exogenously at rates n and g, respectively:

L(t) = L(0)e^{nt}
A(t) = A(0)e^{gt}
The number of effective units of labor, A(t)L(t), therefore grows at rate (n + g). Meanwhile, the stock of capital depreciates over time at a constant rate δ. However, only a fraction of the output (cY(t), with 0 < c < 1) is consumed, leaving a saved share s = 1 − c for investment. This dynamic is expressed through the following differential equation:

\dot{K}(t) = s \cdot Y(t) - \delta \cdot K(t)
where K̇ is shorthand for dK(t)/dt, the derivative with respect to time. The time derivative captures the change in the capital stock: output that is neither consumed nor used to replace worn-out old capital goods is net investment.
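The two relations above (the Cobb–Douglas production function and the capital-accumulation equation) can be stepped forward numerically. The following is a minimal sketch in Python with assumed parameter values and a simple Euler discretisation of the continuous-time model; it is illustrative only, not a calibrated implementation.

```python
# Minimal sketch of the Solow-Swan building blocks in levels, with assumed
# parameter values and a simple Euler time step (an approximation of the
# continuous-time model).
alpha = 0.3        # assumed capital share
s = 0.25           # assumed saving rate
delta = 0.05       # assumed depreciation rate
n, g = 0.01, 0.02  # assumed labor-force and technology growth rates

K, L, A = 100.0, 10.0, 1.0  # assumed initial capital, labor, technology
dt = 0.1                    # time step in "years"

for _ in range(int(50 / dt)):                 # simulate 50 years
    Y = K ** alpha * (A * L) ** (1 - alpha)   # Cobb-Douglas output
    K += (s * Y - delta * K) * dt             # dK/dt = s*Y - delta*K
    L *= 1 + n * dt                           # exogenous population growth
    A *= 1 + g * dt                           # exogenous technological progress

Y = K ** alpha * (A * L) ** (1 - alpha)
print(f"after 50 years: K = {K:.1f}, Y = {Y:.1f}, Y/L = {Y / L:.2f}")
```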
Since the production function Y(K, AL) has constant returns to scale, it can be written as output per effective unit of labour y, which is a measure for wealth creation:

y(t) = \frac{Y(t)}{A(t)L(t)} = k(t)^{\alpha}
The main interest of the model is the dynamics of capital intensity k, the capital stock per unit of effective labour. Its behaviour over time is given by the key equation of the Solow–Swan model:

\dot{k}(t) = s k(t)^{\alpha} - (n + g + \delta) k(t)
The first term, s k(t)^{\alpha} = s y(t), is the actual investment per unit of effective labour: the fraction s of the output per unit of effective labour y(t) that is saved and invested. The second term, (n + g + δ)k(t), is the "break-even investment": the amount of investment that must be made to prevent k from falling. The equation implies that k(t) converges to a steady-state value k*, defined by s k(t)^{\alpha} = (n + g + \delta) k(t), at which there is neither an increase nor a decrease of capital intensity:
k^{*} = \left( \frac{s}{n + g + \delta} \right)^{1/(1-\alpha)}
at which the stock of capital K and effective labour AL are growing at rate (n + g). Likewise, it is possible to calculate the steady-state level of created wealth y* that corresponds to k*:
y^{*} = \left( \frac{s}{n + g + \delta} \right)^{\alpha/(1-\alpha)}
By assumption of constant returns, output Y is also growing at that rate. In essence, the Solow–Swan model predicts that an economy will converge to a balanced-growth equilibrium, regardless of its starting point. In this situation, the growth of output per worker is determined solely by the rate of technological progress.
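A short numerical sketch (Python, with assumed parameter values) evaluates the closed-form steady state k* and y* given above and checks that iterating the key equation drives the capital intensity toward k* from an arbitrary starting point.

```python
# Minimal sketch: steady state of the Solow-Swan model in intensive form,
# with assumed parameter values.
alpha = 0.3                     # assumed capital share
s = 0.25                        # assumed saving rate
n, g, delta = 0.01, 0.02, 0.05  # assumed growth and depreciation rates

k_star = (s / (n + g + delta)) ** (1 / (1 - alpha))  # closed-form steady state
y_star = k_star ** alpha
print(f"steady state: k* = {k_star:.2f}, y* = {y_star:.2f}")

# Convergence check: iterate dk/dt = s*k^alpha - (n+g+delta)*k with Euler steps
# from an arbitrary starting point.
k, dt = 1.0, 0.1
for _ in range(5000):
    k += (s * k ** alpha - (n + g + delta) * k) * dt

print(f"k after iteration: {k:.2f} (approaches k* = {k_star:.2f})")
```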
Since, by definition, K(t)/Y(t) = k(t)^{1-\alpha}, at the equilibrium k* we have

\frac{K(t)}{Y(t)} = \frac{s}{n + g + \delta}
Therefore, at the equilibrium, the capital/output ratio depends only on the saving, growth, and depreciation rates. This is the Solow–Swan model's version of the golden rule saving rate.
Since α < 1, at any time t the marginal product of capital K(t) in the Solow–Swan model is inversely related to the capital/labor ratio:

MPK = \frac{\partial Y}{\partial K} = \frac{\alpha A^{1-\alpha}}{(K/L)^{1-\alpha}}
If productivity A is the same across countries, then countries with less capital per worker K/L have a higher marginal product, which would provide a higher return on capital investment. As a consequence, the model predicts that in a world of open market economies and global financial capital, investment will flow from rich countries to poor countries, until capital per worker K/L and income per worker Y/L equalize across countries.
Since the marginal product of physical capital is not higher in poor countries than in rich countries, the implication is that productivity is lower in poor countries. The basic Solow model cannot explain why productivity is lower in these countries. Lucas suggested that lower levels of human capital in poor countries could explain the lower productivity.
If the rate of return r equals the marginal product of capital ∂Y/∂K, then

\frac{rK}{Y} = \frac{K \, \partial Y / \partial K}{Y} = \alpha

so that α is the fraction of income appropriated by capital. Thus, the Solow–Swan model assumes from the beginning that the labor–capital split of income is constant.
== Mankiw–Romer–Weil version of model ==
=== Addition of human capital ===
In 1992, N. Gregory Mankiw, David Romer, and David N. Weil theorised a version of the Solow-Swan model, augmented to include a role for human capital, that can explain the failure of international investment to flow to poor countries. In this model output and the marginal product of capital (K) are lower in poor countries because they have less human capital than rich countries.
Similar to the textbook Solow–Swan model, the production function is of Cobb–Douglas type:
Y(t) = K(t)^{\alpha} H(t)^{\beta} (A(t)L(t))^{1-\alpha-\beta},
where H(t) is the stock of human capital, which depreciates at the same rate δ as physical capital. For simplicity, they assume the same function of accumulation for both types of capital. As in Solow–Swan, a fraction of output, sY(t), is saved each period, but in this case it is split up and invested partly in physical and partly in human capital, such that s = s_K + s_H. Therefore, there are two fundamental dynamic equations in this model:

\dot{k} = s_{K} k^{\alpha} h^{\beta} - (n + g + \delta) k
\dot{h} = s_{H} k^{\alpha} h^{\beta} - (n + g + \delta) h
The balanced (or steady-state) equilibrium growth path is determined by \dot{k} = \dot{h} = 0, which means s_{K} k^{\alpha} h^{\beta} - (n + g + \delta) k = 0 and s_{H} k^{\alpha} h^{\beta} - (n + g + \delta) h = 0. Solving for the steady-state levels of k and h yields:
k^{*} = \left( \frac{s_{K}^{1-\beta} s_{H}^{\beta}}{n + g + \delta} \right)^{\frac{1}{1-\alpha-\beta}}
h^{*} = \left( \frac{s_{K}^{\alpha} s_{H}^{1-\alpha}}{n + g + \delta} \right)^{\frac{1}{1-\alpha-\beta}}
In the steady state, y^{*} = (k^{*})^{\alpha} (h^{*})^{\beta}.
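The steady-state expressions of the augmented model can be evaluated directly. The sketch below (Python) uses assumed saving rates for physical and human capital and assumed values of α, β, n, g, and δ; the numbers are illustrative, not the Mankiw–Romer–Weil estimates.

```python
# Minimal sketch: steady state of the Mankiw-Romer-Weil augmented model,
# evaluated with assumed (illustrative) parameter values.
alpha, beta = 0.3, 0.3          # assumed output elasticities of K and H
s_K, s_H = 0.2, 0.1             # assumed saving rates for physical/human capital
n, g, delta = 0.01, 0.02, 0.05  # assumed growth and depreciation rates

denom = n + g + delta
power = 1 / (1 - alpha - beta)

k_star = (s_K ** (1 - beta) * s_H ** beta / denom) ** power
h_star = (s_K ** alpha * s_H ** (1 - alpha) / denom) ** power
y_star = k_star ** alpha * h_star ** beta

print(f"k* = {k_star:.2f}, h* = {h_star:.2f}, y* = {y_star:.2f}")
```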
=== Econometric estimates ===
Klenow and Rodriguez-Clare cast doubt on the validity of the augmented model because Mankiw, Romer, and Weil's estimates of β did not seem consistent with accepted estimates of the effect of increases in schooling on workers' salaries. Though the estimated model explained 78% of variation in income across countries, the estimates of β implied that human capital's external effects on national income are greater than its direct effect on workers' salaries.
=== Accounting for external effects ===
Theodore Breton provided an insight that reconciled the large effect of human capital from schooling in the Mankiw, Romer and Weil model with the smaller effect of schooling on workers' salaries. He demonstrated that the mathematical properties of the model include significant external effects between the factors of production, because human capital and physical capital are multiplicative factors of production. The external effect of human capital on the productivity of physical capital is evident in the marginal product of physical capital:
{\displaystyle MPK={\frac {\partial Y}{\partial K}}={\frac {\alpha A^{1-\alpha }(H/L)^{\beta }}{(K/L)^{1-\alpha }}}}
He showed that the large estimates of the effect of human capital in cross-country estimates of the model are consistent with the smaller effect typically found on workers' salaries when the external effects of human capital on physical capital and labor are taken into account. This insight significantly strengthens the case for the Mankiw, Romer, and Weil version of the Solow–Swan model. Most analyses criticizing this model fail to account for the pecuniary external effects of both types of capital inherent in the model.
=== Total factor productivity ===
The exogenous rate of TFP (total factor productivity) growth in the Solow–Swan model is the residual after accounting for capital accumulation. The Mankiw, Romer, and Weil model provides a lower estimate of the TFP (residual) than the basic Solow–Swan model because the addition of human capital to the model enables capital accumulation to explain more of the variation in income across countries. In the basic model, the TFP residual includes the effect of human capital because human capital is not included as a factor of production.
== Conditional convergence ==
The Solow–Swan model augmented with human capital predicts that the income levels of poor countries will tend to catch up with or converge towards the income levels of rich countries if the poor countries have similar savings rates for both physical capital and human capital as a share of output, a process known as conditional convergence. However, savings rates vary widely across countries. In particular, since considerable financing constraints exist for investment in schooling, savings rates for human capital are likely to vary as a function of cultural and ideological characteristics in each country.
Since the 1950s, output/worker in rich and poor countries generally has not converged, but those poor countries that have greatly raised their savings rates have experienced the income convergence predicted by the Solow–Swan model. As an example, output/worker in Japan, a country which was once relatively poor, has converged to the level of the rich countries. Japan experienced high growth rates after it raised its savings rates in the 1950s and 1960s, and it has experienced slowing growth of output/worker since its savings rates stabilized around 1970, as predicted by the model.
The per-capita income levels of the southern states of the United States have tended to converge to the levels in the Northern states. The observed convergence in these states is also consistent with the conditional convergence concept. Whether absolute convergence between countries or regions occurs depends on whether they have similar characteristics, such as:
Education policy
Institutional arrangements
Free markets internally, and trade policy with other countries.
Additional evidence for conditional convergence comes from multivariate, cross-country regressions.
Econometric analysis on Singapore and the other "East Asian Tigers" has produced the surprising result that although output per worker has been rising, almost none of their rapid growth had been due to rising per-capita productivity (they have a low "Solow residual").
== See also ==
Economic growth
Endogenous growth theory
== Notes ==
== References ==
== Further reading ==
Agénor, Pierre-Richard (2004). "Growth and Technological Progress: The Solow–Swan Model". The Economics of Adjustment and Growth (Second ed.). Cambridge: Harvard University Press. pp. 439–462. ISBN 978-0-674-01578-4.
Barro, Robert J.; Sala-i-Martin, Xavier (2004). "Growth Models with Exogenous Saving Rates". Economic Growth (Second ed.). New York: McGraw-Hill. pp. 23–84. ISBN 978-0-262-02553-9.
Burmeister, Edwin; Dobell, A. Rodney (1970). "One-Sector Growth Models". Mathematical Theories of Economic Growth. New York: Macmillan. pp. 20–64.
Dornbusch, Rüdiger; Fischer, Stanley; Startz, Richard (2004). "Growth Theory: The Neoclassical Model". Macroeconomics (Ninth ed.). New York: McGraw-Hill Irwin. pp. 61–75. ISBN 978-0-07-282340-0.
Farmer, Roger E. A. (1999). "Neoclassical Growth Theory". Macroeconomics (Second ed.). Cincinnati: South-Western. pp. 333–355. ISBN 978-0-324-12058-5.
Ferguson, Brian S.; Lim, G. C. (1998). Introduction to Dynamic Economic Models. Manchester: Manchester University Press. pp. 42–48. ISBN 978-0-7190-4996-5.
Gandolfo, Giancarlo (1996). "The Neoclassical Growth Model". Economic Dynamics (Third ed.). Berlin: Springer. pp. 175–189. ISBN 978-3-540-60988-9.
Halsmayer, Verena (2014). "From Exploratory Modeling to Technical Expertise: Solow's Growth Model as a Multipurpose Design". History of Political Economy. 46 (Supplement 1, MIT and the Transformation of American Economics): 229–251. doi:10.1215/00182702-2716181. Retrieved 2017-11-29.
Intriligator, Michael D. (1971). Mathematical Optimalization and Economic Theory. Englewood Cliffs: Prentice-Hall. pp. 398–416. ISBN 978-0-13-561753-3.
van Rijckeghem, Willy (1963). "The Structure of Some Macro-Economic Growth Models: A Comparison". Weltwirtschaftliches Archiv. 91: 84–100.
== External links ==
Solow Model Videos - 20+ videos walking through derivation of the Solow Growth Model's Conclusions
Video explanation by Marginal Revolution University
Java applet where you can experiment with parameters and learn about Solow model
Solow Growth Model by Fiona Maclachlan, The Wolfram Demonstrations Project.
A step-by-step explanation of how to understand the Solow Model
Professor José-Víctor Ríos-Rull's course at University of Minnesota | Wikipedia/Solow–Swan_model |
Financial modeling is the task of building an abstract representation (a model) of a real world financial situation. This is a mathematical model designed to represent (a simplified version of) the performance of a financial asset or portfolio of a business, project, or any other investment.
Typically, then, financial modeling is understood to mean an exercise in either asset pricing or corporate finance, of a quantitative nature. It is about translating a set of hypotheses about the behavior of markets or agents into numerical predictions. At the same time, "financial modeling" is a general term that means different things to different users; the reference usually relates either to accounting and corporate finance applications or to quantitative finance applications.
== Accounting ==
In corporate finance and the accounting profession, financial modeling typically entails financial statement forecasting; usually the preparation of detailed company-specific models used for decision making purposes, valuation and financial analysis.
Applications include:
Business valuation, stock valuation, and project valuation - especially via discounted cash flow, but including other valuation approaches
Scenario planning, FP&A and management decision making ("what is"; "what if"; "what has to be done")
Budgeting: revenue forecasting and analytics; production budgeting; operations budgeting
Capital budgeting, including cost of capital (i.e. WACC) calculations
Cash flow forecasting; working capital- and treasury management; asset and liability management
Financial statement analysis / ratio analysis (including of operating- and finance leases, and R&D)
Transaction analytics: M&A, PE, VC, LBO, IPO, Project finance, P3
Credit decisioning: Credit analysis, Consumer credit risk; impairment- and provision-modeling
Management accounting: Activity-based costing, Profitability analysis, Cost analysis, Whole-life cost, Managerial risk accounting
Public sector procurement
To generalize as to the nature of these models:
firstly, as they are built around financial statements, calculations and outputs are monthly, quarterly or annual;
secondly, the inputs take the form of "assumptions", where the analyst specifies the values that will apply in each period for external / global variables (exchange rates, tax percentage, etc....; may be thought of as the model parameters), and for internal / company specific variables (wages, unit costs, etc....). Correspondingly, both characteristics are reflected (at least implicitly) in the mathematical form of these models:
firstly, the models are in discrete time;
secondly, they are deterministic.
For discussion of the issues that may arise, see below; for discussion as to more sophisticated approaches sometimes employed, see Corporate finance § Quantifying uncertainty and Financial economics § Corporate finance theory.
Modelers are often designated "financial analyst" (and are sometimes referred to, tongue in cheek, as "number crunchers"). Typically, the modeler will have completed an MBA or MSF with (optional) coursework in "financial modeling". Accounting qualifications and finance certifications such as the CIIA and CFA generally do not provide direct or explicit training in modeling. At the same time, numerous commercial training courses are offered, both through universities and privately.
For the components and steps of business modeling here, see Outline of finance § Financial modeling; see also Valuation using discounted cash flows § Determine cash flow for each forecast period for further discussion and considerations.
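As a concrete illustration of the discounted cash flow logic referenced above, the following minimal sketch (not from the source; the cash flows, discount rate, and terminal-growth figures are illustrative assumptions) discounts an explicit forecast period plus a Gordon-growth terminal value:

```python
# Minimal DCF sketch: discount explicit-period free cash flows plus a terminal value.
# All figures are illustrative assumptions, not data from the source.

cash_flows = [100.0, 110.0, 121.0, 133.0, 146.0]  # forecast FCF for years 1..5
wacc = 0.09              # assumed discount rate (cost of capital)
terminal_growth = 0.02   # assumed perpetual growth beyond year 5

# Present value of the explicit forecast period
pv_explicit = sum(cf / (1 + wacc) ** t for t, cf in enumerate(cash_flows, start=1))

# Gordon-growth terminal value, discounted back from the end of year 5
terminal_value = cash_flows[-1] * (1 + terminal_growth) / (wacc - terminal_growth)
pv_terminal = terminal_value / (1 + wacc) ** len(cash_flows)

enterprise_value = pv_explicit + pv_terminal
print(f"PV(explicit)={pv_explicit:.1f}  PV(terminal)={pv_terminal:.1f}  EV={enterprise_value:.1f}")
```

In practice such a calculation sits at the end of a much larger, linked set of forecast statements; the point here is only the discounting mechanics.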
Although purpose-built business software does exist, the vast proportion of the market is spreadsheet-based; this is largely since the models are almost always company-specific. Also, analysts will each have their own criteria and methods for financial modeling. Microsoft Excel now has by far the dominant position, having overtaken Lotus 1-2-3 in the 1990s. Spreadsheet-based modelling can have its own problems, and several standardizations and "best practices" have been proposed. "Spreadsheet risk" is increasingly studied and managed; see model audit.
One critique here is that model outputs, i.e. line items, often embody "unrealistic implicit assumptions" and "internal inconsistencies". (For example, a forecast for growth in revenue but without corresponding increases in working capital, fixed assets and the associated financing, may embed unrealistic assumptions about asset turnover, debt level and/or equity financing. See Sustainable growth rate § From a financial perspective.) What is required, but often lacking, is that all key elements are explicitly and consistently forecasted.
Related to this is that modellers often additionally "fail to identify crucial assumptions" relating to inputs, "and to explore what can go wrong". Here, in general, modellers "use point values and simple arithmetic instead of probability distributions and statistical measures" — i.e., as mentioned, the problems are treated as deterministic in nature — and thus calculate a single value for the asset or project, without providing information on the range, variance and sensitivity of outcomes; see Valuation using discounted cash flows § Determine equity value.
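One way to address the point-value critique is to let the key assumptions vary and report a distribution of outcomes. The sketch below is a minimal Monte Carlo illustration (the growth-rate distribution and other figures are assumptions for demonstration only):

```python
# Sketch: report a distribution of NPVs instead of a single point estimate.
# The growth-rate distribution and other figures are illustrative assumptions.
import random
import statistics

def simulated_npv(rate: float = 0.09, years: int = 5) -> float:
    base_cf, growth_mu, growth_sigma = 100.0, 0.08, 0.05
    npv, cf = 0.0, base_cf
    for t in range(1, years + 1):
        cf *= 1 + random.gauss(growth_mu, growth_sigma)  # uncertain growth each year
        npv += cf / (1 + rate) ** t
    return npv

samples = sorted(simulated_npv() for _ in range(10_000))
print(f"mean NPV = {statistics.mean(samples):.1f}")
print(f"5th / 95th percentile = {samples[500]:.1f} / {samples[9500]:.1f}")
```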
A further, more general critique relates to the lack of basic computer programming concepts amongst modelers, with the result that their models are often poorly structured, and difficult to maintain. Serious criticism is also directed at the nature of budgeting, and its impact on the organization.
== Quantitative finance ==
In quantitative finance, financial modeling entails the development of a sophisticated mathematical model. Models here deal with asset prices, market movements, portfolio returns and the like.
Relatedly, applications include:
Option pricing and calculation of their "Greeks" (accommodating volatility surfaces - via local / stochastic volatility models - and multi-curves)
Other derivatives, especially interest rate derivatives, credit derivatives and exotic derivatives
Credit valuation adjustment, CVA, as well as the various XVA
Modeling the term structure of interest rates (bootstrapping / multi-curves, short-rate models, HJM framework) and any related credit spread
Credit risk, counterparty credit risk, and regulatory capital: EAD, PD, LGD, PFE, EE; Jarrow–Turnbull model, Merton model, KMV model
Portfolio optimization and Quantitative investing more generally; see further re optimization methods employed.
Credit scoring and provisioning; Credit scorecards and IFRS 9 § Impairment
Structured product design and manufacture
Financial risk modeling: value at risk (parametric- and / or historical, CVaR, EVT), stress testing, "sensitivities" analysis (Greeks, duration, convexity, DV01, KRD, CS01, JTD)
Corporate finance applications: cash flow analytics, corporate financing activity prediction problems, and risk analysis in capital investment
Real options
Actuarial applications: Dynamic financial analysis (DFA), UIBFM, investment modeling
These problems are generally stochastic and continuous in nature, and models here thus require complex algorithms, entailing computer simulation, advanced numerical methods (such as numerical differential equations, numerical linear algebra, dynamic programming) and/or the development of optimization models. The general nature of these problems is discussed under Mathematical finance § History: Q versus P, while specific techniques are listed under Outline of finance § Mathematical tools.
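As one small example of the simulation-based techniques referred to above, the sketch below (all parameters are illustrative assumptions) prices a European call option by Monte Carlo under geometric Brownian motion and checks the result against the Black–Scholes closed form:

```python
# Monte Carlo pricing of a European call under geometric Brownian motion,
# checked against the Black-Scholes closed form. Parameters are illustrative.
import math
import random

S0, K, r, sigma, T = 100.0, 105.0, 0.03, 0.20, 1.0
n_paths = 200_000

def bs_call(S0, K, r, sigma, T):
    d1 = (math.log(S0 / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF
    return S0 * N(d1) - K * math.exp(-r * T) * N(d2)

payoff_sum = 0.0
for _ in range(n_paths):
    z = random.gauss(0.0, 1.0)
    ST = S0 * math.exp((r - 0.5 * sigma ** 2) * T + sigma * math.sqrt(T) * z)
    payoff_sum += max(ST - K, 0.0)

mc_price = math.exp(-r * T) * payoff_sum / n_paths
print(f"Monte Carlo: {mc_price:.3f}   Black-Scholes: {bs_call(S0, K, r, sigma, T):.3f}")
```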
For further discussion here see also: Brownian model of financial markets; Martingale pricing; Financial models with long-tailed distributions and volatility clustering; Extreme value theory; Historical simulation (finance).
Modellers are generally referred to as "quants", i.e. quantitative analysts (or "rocket scientists") and typically have advanced (Ph.D. level) backgrounds in quantitative disciplines such as statistics, physics, engineering, computer science, mathematics or operations research.
Alternatively, or in addition to their quantitative background, they complete a finance masters with a quantitative orientation, such as the Master of Quantitative Finance, or the more specialized Master of Computational Finance or Master of Financial Engineering; the CQF certificate is increasingly common.
Although spreadsheets are widely used here also (almost always requiring extensive VBA), custom C++, Fortran or Python, or numerical-analysis software such as MATLAB, are often preferred, particularly where stability or speed is a concern. MATLAB is often used at the research or prototyping stage because of its intuitive programming, graphical and debugging tools, but C++/Fortran are preferred for conceptually simple but high computational-cost applications where MATLAB is too slow; Python is increasingly used due to its simplicity and large standard library / available applications, including QuantLib.
Additionally, for many (of the standard) derivative and portfolio applications, commercial software is available, and the choice as to whether the model is to be developed in-house, or whether existing products are to be deployed, will depend on the problem in question.
See Quantitative analysis (finance) § Library quantitative analysis.
The complexity of these models may result in incorrect pricing or hedging or both. This Model risk is the subject of ongoing research by finance academics, and is a topic of great, and growing, interest in the risk management arena.
Criticism of the discipline (often preceding the 2008 financial crisis by several years) emphasizes the differences between finance and the mathematical / physical sciences, and stresses the resultant caution to be applied by modelers, and by traders and risk managers using their models. Notable here are Emanuel Derman and Paul Wilmott, authors of the Financial Modelers' Manifesto. Some go further and question whether the mathematical and statistical modeling techniques usually applied to finance are at all appropriate (see the assumptions made for options and for portfolios); in fact, such critics may go so far as to question the "empirical and scientific validity... of modern financial theory". Notable here are Nassim Taleb and Benoit Mandelbrot. See also Mathematical finance § Criticism, Financial economics § Challenges and criticism and Financial engineering § Criticisms.
== Competitive modeling ==
Several financial modeling competitions exist, emphasizing speed and accuracy in modeling. The Microsoft-sponsored ModelOff Financial Modeling World Championships were held annually from 2012 to 2019, with competitions throughout the year and a finals championship in New York or London. After its end in 2020, several other modeling championships have been started, including the Financial Modeling World Cup and Microsoft Excel Collegiate Challenge, also sponsored by Microsoft.
== Philosophy of financial modeling ==
Philosophy of financial modeling is a branch of philosophy concerned with the foundations, methods, and implications of financial modeling.
In the philosophy of financial modeling, scholars have more recently begun to question the generally-held assumption that financial modelers seek to represent any "real-world" or actually ongoing investment situation. Instead, it has been suggested that the task of the financial modeler resides in demonstrating the possibility of a transaction in a prospective investment scenario, from a limited base of possibility conditions initially assumed in the model.
== See also ==
== References ==
== Bibliography == | Wikipedia/Financial_model |
Pierre-Simon, Marquis de Laplace (; French: [pjɛʁ simɔ̃ laplas]; 23 March 1749 – 5 March 1827) was a French polymath, a scholar whose work has been instrumental in the fields of physics, astronomy, mathematics, engineering, statistics, and philosophy. He summarized and extended the work of his predecessors in his five-volume Mécanique céleste (Celestial Mechanics) (1799–1825). This work translated the geometric study of classical mechanics to one based on calculus, opening up a broader range of problems. Laplace also popularized and further confirmed Sir Isaac Newton's work. In statistics, the Bayesian interpretation of probability was developed mainly by Laplace.
Laplace formulated Laplace's equation, and pioneered the Laplace transform which appears in many branches of mathematical physics, a field that he took a leading role in forming. The Laplacian differential operator, widely used in mathematics, is also named after him. He restated and developed the nebular hypothesis of the origin of the Solar System and was one of the first scientists to suggest an idea similar to that of a black hole, with Stephen Hawking stating that "Laplace essentially predicted the existence of black holes". He originated Laplace's demon, which is a hypothetical all-predicting intellect. He also refined Newton's calculation of the speed of sound to derive a more accurate measurement.
Laplace is regarded as one of the greatest scientists of all time. Sometimes referred to as the French Newton or Newton of France, he has been described as possessing a phenomenal natural mathematical faculty superior to that of almost all of his contemporaries. He was Napoleon's examiner when Napoleon graduated from the École Militaire in Paris in 1785. Laplace became a count of the Empire in 1806 and was named a marquis in 1817, after the Bourbon Restoration.
== Early years ==
Some details of Laplace's life are not known, as records of it were burned in 1925 with the family château in Saint Julien de Mailloc, near Lisieux, the home of his great-great-grandson the Comte de Colbert-Laplace. Others had been destroyed earlier, when his house at Arcueil near Paris was looted in 1871.
Laplace was born in Beaumont-en-Auge, Normandy on 23 March 1749, a village four miles west of Pont l'Évêque. According to W. W. Rouse Ball, his father, Pierre de Laplace, owned and farmed the small estates of Maarquis. His great-uncle, Maitre Oliver de Laplace, had held the title of Chirurgien Royal. It would seem that from a pupil he became an usher in the school at Beaumont; but, having procured a letter of introduction to d'Alembert, he went to Paris to advance his fortune. However, Karl Pearson is scathing about the inaccuracies in Rouse Ball's account and states:
Indeed Caen was probably in Laplace's day the most intellectually active of all the towns of Normandy. It was here that Laplace was educated and was provisionally a professor. It was here he wrote his first paper published in the Mélanges of the Royal Society of Turin, Tome iv. 1766–1769, at least two years before he went at 22 or 23 to Paris in 1771. Thus before he was 20 he was in touch with Lagrange in Turin. He did not go to Paris a raw self-taught country lad with only a peasant background! In 1765 at the age of sixteen Laplace left the "School of the Duke of Orleans" in Beaumont and went to the University of Caen, where he appears to have studied for five years and was a member of the Sphinx. The École Militaire of Beaumont did not replace the old school until 1776.
His parents, Pierre Laplace and Marie-Anne Sochon, were from comfortable families. The Laplace family was involved in agriculture until at least 1750, but Pierre Laplace senior was also a cider merchant and syndic of the town of Beaumont.
Pierre Simon Laplace attended a school in the village run at a Benedictine priory, his father intending that he be ordained in the Roman Catholic Church. At sixteen, to further his father's intention, he was sent to the University of Caen to read theology.
At the university, he was mentored by two enthusiastic teachers of mathematics, Christophe Gadbled and Pierre Le Canu, who awoke his zeal for the subject. Here Laplace's brilliance as a mathematician was quickly recognised and while still at Caen he wrote a memoir Sur le Calcul integral aux differences infiniment petites et aux differences finies. This provided the first correspondence between Laplace and Lagrange. Lagrange was the senior by thirteen years, and had recently founded in his native city Turin a journal named Miscellanea Taurinensia, in which many of his early works were printed and it was in the fourth volume of this series that Laplace's paper appeared. About this time, recognising that he had no vocation for the priesthood, he resolved to become a professional mathematician. Some sources state that he then broke with the church and became an atheist. Laplace did not graduate in theology but left for Paris with a letter of introduction from Le Canu to Jean le Rond d'Alembert who at that time was supreme in scientific circles.
According to his great-great-grandson, d'Alembert received him rather poorly, and to get rid of him gave him a thick mathematics book, saying to come back when he had read it. When Laplace came back a few days later, d'Alembert was even less friendly and did not hide his opinion that it was impossible that Laplace could have read and understood the book. But upon questioning him, he realised that it was true, and from that time he took Laplace under his care.
Another account is that Laplace solved overnight a problem that d'Alembert set him for submission the following week, then solved a harder problem the following night. D'Alembert was impressed and recommended him for a teaching place in the École Militaire.
With a secure income and undemanding teaching, Laplace now threw himself into original research and for the next seventeen years, 1771–1787, he produced much of his original work in astronomy.
From 1780 to 1784, Laplace and French chemist Antoine Lavoisier collaborated on several experimental investigations, designing their own equipment for the task.
In 1783 they published their joint paper, Memoir on Heat, in which they discussed the kinetic theory of molecular motion.
In their experiments they measured the specific heat of various bodies, and the expansion of metals with increasing temperature. They also measured the boiling points of ethanol and ether under pressure.
Laplace further impressed the Marquis de Condorcet, and already by 1771 Laplace felt entitled to membership in the French Academy of Sciences. However, that year admission went to Alexandre-Théophile Vandermonde and in 1772 to Jacques Antoine Joseph Cousin. Laplace was disgruntled, and early in 1773 d'Alembert wrote to Lagrange in Berlin to ask if a position could be found for Laplace there. However, Condorcet became permanent secretary of the Académie in February and Laplace was elected associate member on 31 March, at age 24. In 1773 Laplace read his paper on the invariability of planetary motion in front of the Academy des Sciences. That March he was elected to the academy, a place where he conducted the majority of his science.
On 15 March 1788, at the age of thirty-nine, Laplace married Marie-Charlotte de Courty de Romanges, an eighteen-year-old girl from a "good" family in Besançon. The wedding was celebrated at Saint-Sulpice, Paris. The couple had a son, Charles-Émile (1789–1874), and a daughter, Sophie-Suzanne (1792–1813).
== Analysis, probability, and astronomical stability ==
Laplace's early published work in 1771 started with differential equations and finite differences but he was already starting to think about the mathematical and philosophical concepts of probability and statistics. However, before his election to the Académie in 1773, he had already drafted two papers that would establish his reputation. The first, Mémoire sur la probabilité des causes par les événements was ultimately published in 1774 while the second paper, published in 1776, further elaborated his statistical thinking and also began his systematic work on celestial mechanics and the stability of the Solar System. The two disciplines would always be interlinked in his mind. "Laplace took probability as an instrument for repairing defects in knowledge." Laplace's work on probability and statistics is discussed below with his mature work on the analytic theory of probabilities.
=== Stability of the Solar System ===
Sir Isaac Newton had published his Philosophiæ Naturalis Principia Mathematica in 1687 in which he gave a derivation of Kepler's laws, which describe the motion of the planets, from his laws of motion and his law of universal gravitation. However, though Newton had privately developed the methods of calculus, all his published work used cumbersome geometric reasoning, unsuitable to account for the more subtle higher-order effects of interactions between the planets. Newton himself had doubted the possibility of a mathematical solution to the whole, even concluding that periodic divine intervention was necessary to guarantee the stability of the Solar System. Dispensing with the hypothesis of divine intervention would be a major activity of Laplace's scientific life. It is now generally regarded that Laplace's methods on their own, though vital to the development of the theory, are not sufficiently precise to demonstrate the stability of the Solar System; today the Solar System is understood to be generally chaotic at fine scales, although currently fairly stable on coarse scale.: 83, 93
One particular problem from observational astronomy was the apparent instability whereby Jupiter's orbit appeared to be shrinking while that of Saturn was expanding. The problem had been tackled by Leonhard Euler in 1748, and Joseph Louis Lagrange in 1763, but without success. In 1776, Laplace published a memoir in which he first explored the possible influences of a purported luminiferous ether or of a law of gravitation that did not act instantaneously. He ultimately returned to an intellectual investment in Newtonian gravity. Euler and Lagrange had made a practical approximation by ignoring small terms in the equations of motion. Laplace noted that though the terms themselves were small, when integrated over time they could become important. Laplace carried his analysis into the higher-order terms, up to and including the cubic. Using this more exact analysis, Laplace concluded that any two planets and the Sun must be in mutual equilibrium and thereby launched his work on the stability of the Solar System. Gerald James Whitrow described the achievement as "the most important advance in physical astronomy since Newton".
Laplace had a wide knowledge of all sciences and dominated all discussions in the Académie. Laplace seems to have regarded analysis merely as a means of attacking physical problems, though the ability with which he invented the necessary analysis is almost phenomenal. As long as his results were true he took but little trouble to explain the steps by which he arrived at them; he never studied elegance or symmetry in his processes, and it was sufficient for him if he could by any means solve the particular question he was discussing.
== Tidal dynamics ==
=== Dynamic theory of tides ===
While Newton explained the tides by describing the tide-generating forces and Bernoulli gave a description of the static reaction of the waters on Earth to the tidal potential, the dynamic theory of tides, developed by Laplace in 1775, describes the ocean's real reaction to tidal forces. Laplace's theory of ocean tides took into account friction, resonance and natural periods of ocean basins. It predicted the large amphidromic systems in the world's ocean basins and explains the oceanic tides that are actually observed.
The equilibrium theory, based on the gravitational gradient from the Sun and Moon but ignoring the Earth's rotation, the effects of continents, and other important effects, could not explain the real ocean tides.
Since measurements have confirmed the theory, many phenomena now have possible explanations, such as how tides interacting with deep-sea ridges and chains of seamounts give rise to deep eddies that transport nutrients from the deep to the surface. The equilibrium tide theory predicts a tide-wave height of less than half a meter, while the dynamic theory explains why tides are up to 15 meters. Satellite observations confirm the accuracy of the dynamic theory, and the tides worldwide are now measured to within a few centimeters. Measurements from the CHAMP satellite closely match the models based on the TOPEX data. Accurate models of tides worldwide are essential for research since the variations due to tides must be removed from measurements when calculating gravity and changes in sea levels.
=== Laplace's tidal equations ===
In 1776, Laplace formulated a single set of linear partial differential equations, for tidal flow described as a barotropic two-dimensional sheet flow. Coriolis effects are introduced as well as lateral forcing by gravity. Laplace obtained these equations by simplifying the fluid dynamic equations. But they can also be derived from energy integrals via Lagrange's equation.
For a fluid sheet of average thickness D, the vertical tidal elevation ζ, as well as the horizontal velocity components u and v (in the latitude φ and longitude λ directions, respectively) satisfy Laplace's tidal equations:
{\displaystyle {\begin{aligned}{\frac {\partial \zeta }{\partial t}}&+{\frac {1}{a\cos(\varphi )}}\left[{\frac {\partial }{\partial \lambda }}(uD)+{\frac {\partial }{\partial \varphi }}\left(vD\cos(\varphi )\right)\right]=0,\\[2ex]{\frac {\partial u}{\partial t}}&-v\left(2\Omega \sin(\varphi )\right)+{\frac {1}{a\cos(\varphi )}}{\frac {\partial }{\partial \lambda }}\left(g\zeta +U\right)=0\qquad {\text{and}}\\[2ex]{\frac {\partial v}{\partial t}}&+u\left(2\Omega \sin(\varphi )\right)+{\frac {1}{a}}{\frac {\partial }{\partial \varphi }}\left(g\zeta +U\right)=0,\end{aligned}}}
where Ω is the angular frequency of the planet's rotation, g is the planet's gravitational acceleration at the mean ocean surface, a is the planetary radius, and U is the external gravitational tidal-forcing potential.
William Thomson (Lord Kelvin) rewrote Laplace's momentum terms using the curl to find an equation for vorticity. Under certain conditions this can be further rewritten as a conservation of vorticity.
== On the figure of the Earth ==
During the years 1784–1787 he published some papers of exceptional power. Prominent among these is one read in 1783, reprinted as Part II of Théorie du Mouvement et de la figure elliptique des planètes in 1784, and in the third volume of the Mécanique céleste. In this work, Laplace completely determined the attraction of a spheroid on a particle outside it. This is memorable for the introduction into analysis of spherical harmonics or Laplace's coefficients, and also for the development of the use of what we would now call the gravitational potential in celestial mechanics.
=== Spherical harmonics ===
In 1783, in a paper sent to the Académie, Adrien-Marie Legendre had introduced what are now known as associated Legendre functions. If two points in a plane have polar coordinates (r, θ) and (r ', θ'), where r ' ≥ r, then, by elementary manipulation, the reciprocal of the distance between the points, d, can be written as:
{\displaystyle {\frac {1}{d}}={\frac {1}{r'}}\left[1-2\cos(\theta '-\theta ){\frac {r}{r'}}+\left({\frac {r}{r'}}\right)^{2}\right]^{-{\tfrac {1}{2}}}.}
This expression can be expanded in powers of r/r ' using Newton's generalised binomial theorem to give:
{\displaystyle {\frac {1}{d}}={\frac {1}{r'}}\sum _{k=0}^{\infty }P_{k}^{0}(\cos(\theta '-\theta ))\left({\frac {r}{r'}}\right)^{k}.}
The sequence of functions P_k^0(cos φ) is the set of so-called "associated Legendre functions" and their usefulness arises from the fact that every function of the points on a circle can be expanded as a series of them.
Laplace, with scant regard for credit to Legendre, made the non-trivial extension of the result to three dimensions to yield a more general set of functions, the spherical harmonics or Laplace coefficients. The latter term is not in common use now.
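As a quick numerical check of the expansion above (an illustration, not from the source; the superscript-zero functions are the ordinary Legendre polynomials), the sketch below sums the first thirty terms of the series and compares the result with the exact reciprocal distance:

```python
# Numerical check of 1/d = (1/r') * sum_k P_k(cos(gamma)) * (r/r')**k for r < r',
# where gamma = theta' - theta and P_k are ordinary Legendre polynomials.
import numpy as np
from scipy.special import eval_legendre

r, r_prime = 1.0, 3.0
gamma = 0.7  # illustrative angle in radians

# Exact reciprocal distance from the law of cosines
d = np.sqrt(r ** 2 + r_prime ** 2 - 2 * r * r_prime * np.cos(gamma))
exact = 1.0 / d

# Truncated Legendre series
series = sum(eval_legendre(k, np.cos(gamma)) * (r / r_prime) ** k
             for k in range(30)) / r_prime

print(f"exact: {exact:.12f}   series (30 terms): {series:.12f}")
```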
=== Potential theory ===
This paper is also remarkable for the development of the idea of the scalar potential. The gravitational force acting on a body is, in modern language, a vector, having magnitude and direction. A potential function is a scalar function that defines how the vectors will behave. A scalar function is computationally and conceptually easier to deal with than a vector function.
Alexis Clairaut had first suggested the idea in 1743 while working on a similar problem though he was using Newtonian-type geometric reasoning. Laplace described Clairaut's work as being "in the class of the most beautiful mathematical productions". However, Rouse Ball alleges that the idea "was appropriated from Joseph Louis Lagrange, who had used it in his memoirs of 1773, 1777 and 1780". The term "potential" itself was due to Daniel Bernoulli, who introduced it in his 1738 memoire Hydrodynamica. However, according to Rouse Ball, the term "potential function" was not actually used (to refer to a function V of the coordinates of space in Laplace's sense) until George Green's 1828 An Essay on the Application of Mathematical Analysis to the Theories of Electricity and Magnetism.
Laplace applied the language of calculus to the potential function and showed that it always satisfies the differential equation:
{\displaystyle \nabla ^{2}V={\partial ^{2}V \over \partial x^{2}}+{\partial ^{2}V \over \partial y^{2}}+{\partial ^{2}V \over \partial z^{2}}=0.}
An analogous result for the velocity potential of a fluid had been obtained some years previously by Leonhard Euler.
Laplace's subsequent work on gravitational attraction was based on this result. The quantity ∇2V has been termed the concentration of V and its value at any point indicates the "excess" of the value of V there over its mean value in the neighbourhood of the point. Laplace's equation, a special case of Poisson's equation, appears ubiquitously in mathematical physics. The concept of a potential occurs in fluid dynamics, electromagnetism and other areas. Rouse Ball speculated that it might be seen as "the outward sign" of one of the a priori forms in Kant's theory of perception.
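The remark that ∇²V measures the excess of V over its mean value in a neighbourhood suggests a simple numerical illustration. The sketch below (illustrative, not from the source) solves Laplace's equation on a square grid by repeatedly replacing each interior value with the average of its four neighbours (Jacobi relaxation); at convergence the discrete version of the "concentration" vanishes everywhere in the interior:

```python
# Jacobi relaxation for Laplace's equation on a square grid with fixed boundary values.
# At convergence each interior point equals the mean of its four neighbours,
# i.e. the discrete Laplacian (the "concentration") is zero.
import numpy as np

n = 50
V = np.zeros((n, n))
V[0, :] = 1.0  # hold the top edge at potential 1, the other edges at 0

for _ in range(5000):
    V[1:-1, 1:-1] = 0.25 * (V[:-2, 1:-1] + V[2:, 1:-1] + V[1:-1, :-2] + V[1:-1, 2:])

# Check the mean-value property on the interior points
residual = np.max(np.abs(
    V[1:-1, 1:-1] - 0.25 * (V[:-2, 1:-1] + V[2:, 1:-1] + V[1:-1, :-2] + V[1:-1, 2:])
))
print(f"max |V - neighbour mean| in the interior: {residual:.2e}")
```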
The spherical harmonics turn out to be critical to practical solutions of Laplace's equation. Laplace's equation in spherical coordinates, such as are used for mapping the sky, can be simplified, using the method of separation of variables into a radial part, depending solely on distance from the centre point, and an angular or spherical part. The solution to the spherical part of the equation can be expressed as a series of Laplace's spherical harmonics, simplifying practical computation.
== Planetary and lunar inequalities ==
=== Jupiter–Saturn great inequality ===
Laplace presented a memoir on planetary inequalities in three sections, in 1784, 1785, and 1786. This dealt mainly with the identification and explanation of the perturbations now known as the "great Jupiter–Saturn inequality". Laplace solved a longstanding problem in the study and prediction of the movements of these planets. He showed by general considerations, first, that the mutual action of two planets could never cause large changes in the eccentricities and inclinations of their orbits; but then, even more importantly, that peculiarities arose in the Jupiter–Saturn system because of the near approach to commensurability of the mean motions of Jupiter and Saturn.
In this context commensurability means that the ratio of the two planets' mean motions is very nearly equal to a ratio between a pair of small whole numbers. Two periods of Saturn's orbit around the Sun almost equal five of Jupiter's. The corresponding difference between multiples of the mean motions, (2nJ − 5nS), corresponds to a period of nearly 900 years, and it occurs as a small divisor in the integration of a very small perturbing force with this same period. As a result, the integrated perturbations with this period are disproportionately large, about 0.8 degrees of arc in orbital longitude for Saturn and about 0.3 degrees for Jupiter.
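To see where the period of nearly 900 years comes from, the following sketch (not from the source; the orbital periods are rounded reference values assumed for illustration) computes the beat period associated with the near 5:2 commensurability:

```python
# Beat period of the 2:5 near-commensurability between Jupiter and Saturn.
# Orbital periods are rounded reference values, assumed for illustration.
import math

T_jupiter = 11.862  # years
T_saturn = 29.457   # years

n_J = 2 * math.pi / T_jupiter  # mean motions in radians per year
n_S = 2 * math.pi / T_saturn

# The small combination 2*n_J - 5*n_S acts as a slow "beat" frequency
beat = 2 * n_J - 5 * n_S
period = abs(2 * math.pi / beat)
print(f"period of the great inequality: {period:.0f} years")  # close to 900
```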
Further developments of these theorems on planetary motion were given in his two memoirs of 1788 and 1789, but with the aid of Laplace's discoveries, the tables of the motions of Jupiter and Saturn could at last be made much more accurate. It was on the basis of Laplace's theory that Delambre computed his astronomical tables.
=== Books ===
Laplace now set himself the task to write a work which should "offer a complete solution of the great mechanical problem presented by the Solar System, and bring theory to coincide so closely with observation that empirical equations should no longer find a place in astronomical tables." The result is embodied in the Exposition du système du monde and the Mécanique céleste.
The former was published in 1796, and gives a general explanation of the phenomena, but omits all details. It contains a summary of the history of astronomy. This summary procured for its author the honour of admission to the forty of the French Academy and is commonly esteemed one of the masterpieces of French literature, though it is not altogether reliable for the later periods of which it treats.
Laplace developed the nebular hypothesis of the formation of the Solar System, first suggested by Emanuel Swedenborg and expanded by Immanuel Kant. This hypothesis remains the most widely accepted model in the study of the origin of planetary systems. According to Laplace's description of the hypothesis, the Solar System evolved from a globular mass of incandescent gas rotating around an axis through its centre of mass. As it cooled, this mass contracted, and successive rings broke off from its outer edge. These rings in their turn cooled, and finally condensed into the planets, while the Sun represented the central core which was still left. On this view, Laplace predicted that the more distant planets would be older than those nearer the Sun.
As mentioned, the idea of the nebular hypothesis had been outlined by Immanuel Kant in 1755, who had also suggested "meteoric aggregations" and tidal friction as causes affecting the formation of the Solar System. Laplace was probably aware of this, but, like many writers of his time, he generally did not reference the work of others.
Laplace's analytical discussion of the Solar System is given in his Mécanique céleste published in five volumes. The first two volumes, published in 1799, contain methods for calculating the motions of the planets, determining their figures, and resolving tidal problems. The third and fourth volumes, published in 1802 and 1805, contain applications of these methods, and several astronomical tables. The fifth volume, published in 1825, is mainly historical, but it gives as appendices the results of Laplace's latest researches. The Mécanique céleste contains numerous of Laplace's own investigations but many results are appropriated from other writers with little or no acknowledgement. The volume's conclusions, which are described by historians as the organised result of a century of work by other writers as well as Laplace, are presented by Laplace as if they were his discoveries alone.
Jean-Baptiste Biot, who assisted Laplace in revising it for the press, says that Laplace himself was frequently unable to recover the details in the chain of reasoning, and, if satisfied that the conclusions were correct, he was content to insert the phrase, "Il est aisé à voir que..." ("It is easy to see that..."). The Mécanique céleste is not only the translation of Newton's Principia Mathematica into the language of differential calculus, but it completes parts of which Newton had been unable to fill in the details. The work was carried forward in a more finely tuned form in Félix Tisserand's Traité de mécanique céleste (1889–1896), but Laplace's treatise remains a standard authority.
In the years 1784–1787, Laplace produced some memoirs of exceptional power. The significant among these was one issued in 1784, and reprinted in the third volume of the Mécanique céleste. In this work he completely determined the attraction of a spheroid on a particle outside it. This is known for the introduction into analysis of the potential, a useful mathematical concept of broad applicability to the physical sciences.
== Optics ==
Laplace was a supporter of the corpuscle theory of light of Newton. In the fourth edition of Mécanique Céleste, Laplace assumed that short-ranged molecular forces were responsible for refraction of the corpuscles of light. Laplace and Étienne-Louis Malus also showed that Huygens principle of double refraction could be recovered from the principle of least action on light particles.
However in 1815, Augustin-Jean Fresnel presented a new wave theory for diffraction to a commission of the French Academy with the help of François Arago. Laplace was one of the commission members and they ultimately awarded a prize to Fresnel for his new approach.
=== Influence of gravity on light ===
Using corpuscular theory, Laplace also came close to propounding the concept of the black hole. He suggested that gravity could influence light and that there could be massive stars whose gravity is so great that not even light could escape from their surface (see escape velocity). However, this insight was so far ahead of its time that it played no role in the history of scientific development.
== Arcueil ==
In 1806, Laplace bought a house in Arcueil, then a village and not yet absorbed into the Paris conurbation. The chemist Claude Louis Berthollet was a neighbour – their gardens were not separated – and the pair formed the nucleus of an informal scientific circle, latterly known as the Society of Arcueil. Because of their closeness to Napoleon, Laplace and Berthollet effectively controlled advancement in the scientific establishment and admission to the more prestigious offices. The Society built up a complex pyramid of patronage. In 1806, Laplace was also elected a foreign member of the Royal Swedish Academy of Sciences.
== Analytic theory of probabilities ==
In 1812, Laplace issued his Théorie analytique des probabilités in which he laid down many fundamental results in statistics. The first half of this treatise was concerned with probability methods and problems, the second half with statistical methods and applications. Laplace's proofs are not always rigorous according to the standards of a later day, and his perspective slides back and forth between the Bayesian and non-Bayesian views with an ease that makes some of his investigations difficult to follow, but his conclusions remain basically sound even in those few situations where his analysis goes astray. In 1819, he published a popular account of his work on probability. This book bears the same relation to the Théorie des probabilités that the Système du monde does to the Mécanique céleste. In its emphasis on the analytical importance of probabilistic problems, especially in the context of the "approximation of formula functions of large numbers," Laplace's work goes beyond the contemporary view which almost exclusively considered aspects of practical applicability. Laplace's Théorie analytique remained the most influential book of mathematical probability theory to the end of the 19th century. The general relevance for statistics of Laplacian error theory was appreciated only by the end of the 19th century. However, it influenced the further development of a largely analytically oriented probability theory.
=== Inductive probability ===
In his Essai philosophique sur les probabilités (1814), Laplace set out a mathematical system of inductive reasoning based on probability, which we would today recognise as Bayesian. He begins the text with a series of principles of probability, the first seven being:
Probability is the ratio of the "favored events" to the total possible events.
The first principle assumes equal probabilities for all events. When this is not true, we must first determine the probabilities of each event. Then, the probability is the sum of the probabilities of all possible favoured events.
For independent events, the probability of the occurrence of all is the probability of each multiplied together.
When two events A and B depend on each other, the probability of compound event is the probability of A multiplied by the probability that, given A, B will occur.
The probability that A will occur, given that B has occurred, is the probability of A and B occurring divided by the probability of B.
Three corollaries are given for the sixth principle, which amount to Bayesian rule. Where event Ai ∈ {A1, A2, ... An} exhausts the list of possible causes for event B, Pr(B) = Pr(A1, A2, ..., An). Then
{\displaystyle \Pr(A_{i}\mid B)=\Pr(A_{i}){\frac {\Pr(B\mid A_{i})}{\sum _{j}\Pr(A_{j})\Pr(B\mid A_{j})}}.}
The probability of a future event C is the sum of the products of the probability of each cause Bi, drawn from the event observed A, by the probability that, this cause existing, the future event will occur. Symbolically,
{\displaystyle \Pr(C|A)=\sum _{i}\Pr(C|B_{i})\Pr(B_{i}|A).}
One well-known formula arising from his system is the rule of succession, given as principle seven. Suppose that some trial has only two possible outcomes, labelled "success" and "failure". Under the assumption that little or nothing is known a priori about the relative plausibilities of the outcomes, Laplace derived a formula for the probability that the next trial will be a success.
{\displaystyle \Pr({\text{next outcome is success}})={\frac {s+1}{n+2}}}
where s is the number of previously observed successes and n is the total number of observed trials. It is still used as an estimator for the probability of an event if we know the event space, but have only a small number of samples.
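As a minimal illustration (this use of the rule as an estimator is often called Laplace smoothing; the counts below are arbitrary examples), the sketch computes (s + 1)/(n + 2) for a few observation histories:

```python
# Laplace's rule of succession: estimated probability of success on the next trial
# after observing s successes in n trials. The counts below are arbitrary examples.
def rule_of_succession(s: int, n: int) -> float:
    return (s + 1) / (n + 2)

for s, n in [(0, 0), (1, 1), (9, 10), (90, 100)]:
    print(f"s={s:>3}, n={n:>3}  ->  Pr(next success) = {rule_of_succession(s, n):.3f}")
```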
The rule of succession has been subject to much criticism, partly due to the example which Laplace chose to illustrate it. He calculated that the probability that the sun will rise tomorrow, given that it has never failed to in the past, was
{\displaystyle \Pr({\text{sun will rise tomorrow}})={\frac {d+1}{d+2}}}
where d is the number of times the sun has risen in the past. This result has been derided as absurd, and some authors have concluded that all applications of the Rule of Succession are absurd by extension. However, Laplace was fully aware of the absurdity of the result; immediately following the example, he wrote, "But this number [i.e., the probability that the sun will rise tomorrow] is far greater for him who, seeing in the totality of phenomena the principle regulating the days and seasons, realizes that nothing at the present moment can arrest the course of it."
=== Probability-generating function ===
The method of estimating the ratio of the number of favourable cases to the whole number of possible cases had been previously indicated by Laplace in a paper written in 1779. It consists of treating the successive values of any function as the coefficients in the expansion of another function, with reference to a different variable. The latter is therefore called the probability-generating function of the former. Laplace then shows how, by means of interpolation, these coefficients may be determined from the generating function. Next he attacks the converse problem, and from the coefficients he finds the generating function; this is effected by the solution of a finite difference equation.
=== Least squares and central limit theorem ===
The fourth chapter of this treatise includes an exposition of the method of least squares, a remarkable testimony to Laplace's command over the processes of analysis. In 1805 Legendre had published the method of least squares, making no attempt to tie it to the theory of probability. In 1809 Gauss had derived the normal distribution from the principle that the arithmetic mean of observations gives the most probable value for the quantity measured; then, turning this argument back upon itself, he showed that, if the errors of observation are normally distributed, the least squares estimates give the most probable values for the coefficients in regression situations. These two works seem to have spurred Laplace to complete work toward a treatise on probability he had contemplated as early as 1783.
In two important papers in 1810 and 1811, Laplace first developed the characteristic function as a tool for large-sample theory and proved the first general central limit theorem. Then in a supplement to his 1810 paper written after he had seen Gauss's work, he showed that the central limit theorem provided a Bayesian justification for least squares: if one were combining observations, each one of which was itself the mean of a large number of independent observations, then the least squares estimates would not only maximise the likelihood function, considered as a posterior distribution, but also minimise the expected posterior error, all this without any assumption as to the error distribution or a circular appeal to the principle of the arithmetic mean. In 1811 Laplace took a different non-Bayesian tack. Considering a linear regression problem, he restricted his attention to linear unbiased estimators of the linear coefficients. After showing that members of this class were approximately normally distributed if the number of observations was large, he argued that least squares provided the "best" linear estimators. Here it is "best" in the sense that it minimised the asymptotic variance and thus both minimised the expected absolute value of the error, and maximised the probability that the estimate would lie in any symmetric interval about the unknown coefficient, no matter what the error distribution. His derivation included the joint limiting distribution of the least squares estimators of two parameters.
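A small simulation (illustrative, not from the source) shows the central limit behaviour that Laplace proved: means of many independent draws from a decidedly non-normal distribution are themselves approximately normally distributed.

```python
# Empirical illustration of the central limit theorem: sample means of a skewed
# (exponential) distribution become approximately normal as the sample size grows.
import numpy as np

rng = np.random.default_rng(0)
sample_size, n_samples = 100, 20_000

means = rng.exponential(scale=1.0, size=(n_samples, sample_size)).mean(axis=1)

# For Exp(1) the CLT predicts mean ~ 1 and standard deviation ~ 1/sqrt(sample_size)
print(f"mean of sample means: {means.mean():.3f}  (theory: 1.000)")
print(f"std  of sample means: {means.std():.3f}  (theory: {1 / np.sqrt(sample_size):.3f})")
within = np.mean(np.abs(means - 1.0) < 1 / np.sqrt(sample_size))
print(f"fraction within one predicted sd: {within:.3f}  (normal: about 0.683)")
```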
== Laplace's demon ==
In 1814, Laplace published what may have been the first scientific articulation of causal determinism:
We may regard the present state of the universe as the effect of its past and the cause of its future. An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be the present to it.
This intellect is often referred to as Laplace's demon (in the same vein as Maxwell's demon) and sometimes Laplace's Superman (after Hans Reichenbach). Laplace, himself, did not use the word "demon", which was a later embellishment. As translated into English above, he simply referred to: "Une intelligence ... Rien ne serait incertain pour elle, et l'avenir comme le passé, serait présent à ses yeux."
Even though Laplace is generally credited with having first formulated the concept of causal determinism, in a philosophical context the idea was actually widespread at the time, and can be found as early as 1756 in Maupertuis' 'Sur la Divination'. As well, Jesuit scientist Boscovich first proposed a version of scientific determinism very similar to Laplace's in his 1758 book Theoria philosophiae naturalis.
== Laplace transforms ==
As early as 1744, Euler, followed by Lagrange, had started looking for solutions of differential equations in the form:
{\displaystyle z=\int X(x)e^{ax}\,dx{\text{ and }}z=\int X(x)x^{a}\,dx.}
The Laplace transform has the form:
{\displaystyle F(s)=\int f(t)e^{-st}\,dt}
This integral operator transforms a function of time (t) into a function of a complex variable (s), usually interpreted as complex frequency.
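As a quick numerical check of the definition (taking the usual one-sided convention with limits 0 to infinity, which is an assumption here since the formula above omits them), the sketch below transforms f(t) = e^(-2t) and compares the result with the known transform 1/(s + 2):

```python
# Numerical one-sided Laplace transform of f(t) = exp(-2 t), compared with 1/(s + 2).
# The 0-to-infinity limits are the usual convention, assumed here.
import numpy as np
from scipy.integrate import quad

def laplace_transform(f, s, upper=50.0):
    value, _ = quad(lambda t: f(t) * np.exp(-s * t), 0.0, upper)
    return value

f = lambda t: np.exp(-2.0 * t)
for s in [0.5, 1.0, 3.0]:
    print(f"s={s}:  numerical={laplace_transform(f, s):.6f}   exact={1 / (s + 2):.6f}")
```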
== Other discoveries and accomplishments ==
=== Mathematics ===
Among the other discoveries of Laplace in pure and applied mathematics are:
Discussion, contemporaneously with Alexandre-Théophile Vandermonde, of the general theory of determinants, (1772);
Proof that every equation of an even degree must have at least one real quadratic factor;
Laplace's method for approximating integrals
Solution of the linear partial differential equation of the second order;
He was the first to consider the difficult problems involved in equations of mixed differences, and to prove that the solution of an equation in finite differences of the first degree and the second order might always be obtained in the form of a continued fraction;
In his theory of probabilities:
de Moivre–Laplace theorem that approximates binomial distribution with a normal distribution
Evaluation of several common definite integrals;
General proof of the Lagrange reversion theorem.
=== Surface tension ===
Laplace built upon the qualitative work of Thomas Young to develop the theory of capillary action and the Young–Laplace equation.
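For a spherical interface the Young–Laplace equation reduces to Δp = 2γ/R. The sketch below (illustrative room-temperature values for water, not from the source) gives the excess pressure inside small droplets:

```python
# Excess pressure inside a spherical water droplet from the Young-Laplace relation
# delta_p = 2 * gamma / R. The surface tension is an illustrative room-temperature value.
gamma = 0.072  # N/m, water-air surface tension near 25 degrees C

for radius_mm in [10.0, 1.0, 0.01]:
    R = radius_mm * 1e-3  # metres
    delta_p = 2 * gamma / R
    print(f"R = {radius_mm:6.2f} mm  ->  excess pressure = {delta_p:9.1f} Pa")
```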
=== Speed of sound ===
Laplace in 1816 was the first to point out that the speed of sound in air depends on the heat capacity ratio. Newton's original theory gave too low a value, because it does not take account of the adiabatic compression of the air which results in a local rise in temperature and pressure. Laplace's investigations in practical physics were confined to those carried on by him jointly with Lavoisier in the years 1782 to 1784 on the specific heat of various bodies.
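A back-of-the-envelope comparison (standard sea-level values for air assumed here, not taken from the source) shows the size of Laplace's adiabatic correction:

```python
# Newton's isothermal estimate of the speed of sound versus Laplace's adiabatic
# correction, using rounded sea-level values for dry air (assumed for illustration).
import math

p = 101_325.0  # pressure, Pa
rho = 1.225    # density, kg/m^3
gamma = 1.4    # heat capacity ratio of dry air

newton = math.sqrt(p / rho)            # about 288 m/s, noticeably too low
laplace = math.sqrt(gamma * p / rho)   # about 340 m/s, close to measured values
print(f"Newton: {newton:.0f} m/s   Laplace: {laplace:.0f} m/s")
```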
== Politics ==
=== Minister of the Interior ===
In his early years, Laplace was careful never to become involved in politics, or indeed in life outside the Académie des sciences. He prudently withdrew from Paris during the most violent part of the Revolution.
In November 1799, immediately after seizing power in the coup of 18 Brumaire, Napoleon appointed Laplace to the post of Minister of the Interior. The appointment, however, lasted only six weeks, after which Lucien Bonaparte, Napoleon's brother, was given the post. Evidently, once Napoleon's grip on power was secure, there was no need for a prestigious but inexperienced scientist in the government. Napoleon later (in his Mémoires de Sainte Hélène) wrote of Laplace's dismissal as follows:
Geometrician of the first rank, Laplace was not long in showing himself a worse than average administrator; from his first actions in office we recognized our mistake. Laplace did not consider any question from the right angle: he sought subtleties everywhere, conceived only problems, and finally carried the spirit of "infinitesimals" into the administration.
Grattan-Guinness, however, describes these remarks as "tendentious", since there seems to be no doubt that Laplace "was only appointed as a short-term figurehead, a place-holder while Napoleon consolidated power".
=== From Bonaparte to the Bourbons ===
Although Laplace was removed from office, it was desirable to retain his allegiance. He was accordingly raised to the senate, and to the third volume of the Mécanique céleste he prefixed a note that of all the truths therein contained the most precious to the author was the declaration he thus made of his devotion towards the peacemaker of Europe. In copies sold after the Bourbon Restoration this was struck out. (Pearson points out that the censor would not have allowed it anyway.) In 1814 it was evident that the empire was falling; Laplace hastened to tender his services to the Bourbons, and in 1817 during the Restoration he was rewarded with the title of marquis.
According to Rouse Ball, the contempt that his more honest colleagues felt for his conduct in the matter may be read in the pages of Paul Louis Courier. His knowledge was useful on the numerous scientific commissions on which he served, and, says Rouse Ball, probably accounts for the manner in which his political insincerity was overlooked.
Roger Hahn in his 2005 biography disputes this portrayal of Laplace as an opportunist and turncoat, pointing out that, like many in France, he had followed the debacle of Napoleon's Russian campaign with serious misgivings. The Laplaces, whose only daughter Sophie had died in childbirth in September 1813, were in fear for the safety of their son Émile, who was on the eastern front with the emperor. Napoleon had originally come to power promising stability, but it was clear that he had overextended himself, putting the nation at peril. It was at this point that Laplace's loyalty began to weaken. Although he still had easy access to Napoleon, his personal relations with the emperor cooled considerably. As a grieving father, he was particularly cut to the quick by Napoleon's insensitivity in an exchange related by Jean-Antoine Chaptal: "On his return from the rout in Leipzig, he [Napoleon] accosted Mr Laplace: 'Oh! I see that you have grown thin—Sire, I have lost my daughter—Oh! that's not a reason for losing weight. You are a mathematician; put this event in an equation, and you will find that it adds up to zero.'"
=== Political philosophy ===
In the second edition (1814) of the Essai philosophique, Laplace added some revealing comments on politics and governance. Since it is, he says, "the practice of the eternal principles of reason, justice and humanity that produce and preserve societies, there is a great advantage to adhere to these principles, and a great inadvisability to deviate from them". Noting "the depths of misery into which peoples have been cast" when ambitious leaders disregard these principles, Laplace makes a veiled criticism of Napoleon's conduct: "Every time a great power intoxicated by the love of conquest aspires to universal domination, the sense of liberty among the unjustly threatened nations breeds a coalition to which it always succumbs." Laplace argues that "in the midst of the multiple causes that direct and restrain various states, natural limits" operate, within which it is "important for the stability as well as the prosperity of empires to remain". States that transgress these limits cannot avoid being "reverted" to them, "just as is the case when the waters of the seas whose floor has been lifted by violent tempests sink back to their level by the action of gravity".
About the political upheavals he had witnessed, Laplace formulated a set of principles derived from physics to favour evolutionary over revolutionary change:
Let us apply to the political and moral sciences the method founded upon observation and calculation, which has served us so well in the natural sciences. Let us not offer fruitless and often injurious resistance to the inevitable benefits derived from the progress of enlightenment; but let us change our institutions and the usages that we have for a long time adopted only with extreme caution. We know from past experience the drawbacks they can cause, but we are unaware of the extent of ills that change may produce. In the face of this ignorance, the theory of probability instructs us to avoid all change, especially to avoid sudden changes which in the moral as well as the physical world never occur without a considerable loss of vital force.
In these lines, Laplace expressed the views he had arrived at after experiencing the Revolution and the Empire. He believed that the stability of nature, as revealed through scientific findings, provided the model that best helped to preserve the human species. "Such views," Hahn comments, "were also of a piece with his steadfast character."
In the Essai philosophique, Laplace also illustrates the potential of probabilities in political studies by applying the law of large numbers to justify the candidates’ integer-valued ranks used in the Borda method of voting, with which the new members of the Academy of Sciences were elected. Laplace’s verbal argument is so rigorous that it can easily be converted into a formal proof.
== Death ==
Laplace died in Paris on 5 March 1827, which was the same day Alessandro Volta died. His brain was removed by his physician, François Magendie, and kept for many years, eventually being displayed in a roving anatomical museum in Britain. It was reportedly smaller than the average brain. Laplace was buried at Père Lachaise in Paris but in 1888 his remains were moved to Saint Julien de Mailloc in the canton of Orbec and reinterred on the family estate. The tomb is situated on a hill overlooking the village of St Julien de Mailloc, Normandy, France.
== Religious opinions ==
=== I had no need of that hypothesis ===
A frequently cited but potentially apocryphal interaction between Laplace and Napoleon purportedly concerns the existence of God. Although the conversation in question did occur, the exact words Laplace used and his intended meaning are not known. A typical version is provided by Rouse Ball:
Laplace went in state to Napoleon to present a copy of his work, and the following account of the interview is well authenticated, and so characteristic of all the parties concerned that I quote it in full. Someone had told Napoleon that the book contained no mention of the name of God; Napoleon, who was fond of putting embarrassing questions, received it with the remark, 'M. Laplace, they tell me you have written this large book on the system of the universe, and have never even mentioned its Creator.' Laplace, who, though the most supple of politicians, was as stiff as a martyr on every point of his philosophy, drew himself up and answered bluntly, Je n'avais pas besoin de cette hypothèse-là. ("I had no need of that hypothesis.") Napoleon, greatly amused, told this reply to Lagrange, who exclaimed, Ah! c'est une belle hypothèse; ça explique beaucoup de choses. ("Ah, it is a fine hypothesis; it explains many things.")
An earlier report, although without the mention of Laplace's name, is found in Antommarchi's The Last Moments of Napoleon (1825):
Je m'entretenais avec L ..... je le félicitais d'un ouvrage qu'il venait de publier et lui demandais comment le nom de Dieu, qui se reproduisait sans cesse sous la plume de Lagrange, ne s'était pas présenté une seule fois sous la sienne. C'est, me répondit-il, que je n'ai pas eu besoin de cette hypothèse. ("While speaking with L ..... I congratulated him on a work which he had just published and asked him how the name of God, which appeared endlessly in the works of Lagrange, didn't occur even once in his. He replied that he had no need of that hypothesis.")
In 1884, however, the astronomer Hervé Faye affirmed that this account of Laplace's exchange with Napoleon presented a "strangely transformed" (étrangement transformée) or garbled version of what had actually happened. It was not God that Laplace had treated as a hypothesis, but merely his intervention at a determinate point:
In fact Laplace never said that. Here, I believe, is what truly happened. Newton, believing that the secular perturbations which he had sketched out in his theory would in the long run end up destroying the Solar System, says somewhere that God was obliged to intervene from time to time to remedy the evil and somehow keep the system working properly. This, however, was a pure supposition suggested to Newton by an incomplete view of the conditions of the stability of our little world. Science was not yet advanced enough at that time to bring these conditions into full view. But Laplace, who had discovered them by a deep analysis, would have replied to the First Consul that Newton had wrongly invoked the intervention of God to adjust from time to time the machine of the world (la machine du monde) and that he, Laplace, had no need of such an assumption. It was not God, therefore, that Laplace treated as a hypothesis, but his intervention in a certain place.
Laplace's younger colleague, the astronomer François Arago, who gave his eulogy before the French Academy in 1827, told Faye of an attempt by Laplace to keep the garbled version of his interaction with Napoleon out of circulation. Faye writes:
I have it on the authority of M. Arago that Laplace, warned shortly before his death that that anecdote was about to be published in a biographical collection, had requested him [Arago] to demand its deletion by the publisher. It was necessary to either explain or delete it, and the second way was the easiest. But, unfortunately, it was neither deleted nor explained.
The Swiss-American historian of mathematics Florian Cajori appears to have been unaware of Faye's research, but in 1893 he came to a similar conclusion. Stephen Hawking said in 1999, "I don't think that Laplace was claiming that God does not exist. It's just that he doesn't intervene, to break the laws of Science."
The only eyewitness account of Laplace's interaction with Napoleon is from the entry for 8 August 1802 in the diary of the British astronomer Sir William Herschel:
The first Consul then asked a few questions relating to Astronomy and the construction of the heavens to which I made such answers as seemed to give him great satisfaction. He also addressed himself to Mr Laplace on the same subject, and held a considerable argument with him in which he differed from that eminent mathematician. The difference was occasioned by an exclamation of the first Consul, who asked in a tone of exclamation or admiration (when we were speaking of the extent of the sidereal heavens): 'And who is the author of all this!' Mons. De la Place wished to shew that a chain of natural causes would account for the construction and preservation of the wonderful system. This the first Consul rather opposed. Much may be said on the subject; by joining the arguments of both we shall be led to 'Nature and nature's God'.
Since this makes no mention of Laplace's saying, "I had no need of that hypothesis," Daniel Johnson argues that "Laplace never used the words attributed to him." Arago's testimony, however, appears to imply that he did, only not in reference to the existence of God.
=== Views on God ===
Raised a Catholic, Laplace appears in adult life to have inclined to deism (presumably his considered position, since it is the only one found in his writings). However, some of his contemporaries thought he was an atheist, while a number of recent scholars have described him as agnostic.
Faye thought that Laplace "did not profess atheism", but Napoleon, on Saint Helena, told General Gaspard Gourgaud, "I often asked Laplace what he thought of God. He owned that he was an atheist." Roger Hahn, in his biography of Laplace, mentions a dinner party at which "the geologist Jean-Étienne Guettard was staggered by Laplace's bold denunciation of the existence of God." It appeared to Guettard that Laplace's atheism "was supported by a thoroughgoing materialism." But the chemist Jean-Baptiste Dumas, who knew Laplace well in the 1820s, wrote that Laplace "provided materialists with their specious arguments, without sharing their convictions."
Hahn states: "Nowhere in his writings, either public or private, does Laplace deny God's existence." Expressions occur in his private letters that appear inconsistent with atheism. On 17 June 1809, for instance, he wrote to his son, "Je prie Dieu qu'il veille sur tes jours. Aie-Le toujours présent à ta pensée, ainsi que ton père et ta mère [I pray that God watches over your days. Let Him be always present to your mind, as also your father and your mother]." Ian S. Glass, quoting Herschel's account of the celebrated exchange with Napoleon, writes that Laplace was "evidently a deist like Herschel".
In Exposition du système du monde, Laplace quotes Newton's assertion that "the wondrous disposition of the Sun, the planets and the comets, can only be the work of an all-powerful and intelligent Being." This, says Laplace, is a "thought in which he [Newton] would be even more confirmed, if he had known what we have shown, namely that the conditions of the arrangement of the planets and their satellites are precisely those which ensure its stability." By showing that the "remarkable" arrangement of the planets could be entirely explained by the laws of motion, Laplace had eliminated the need for the "supreme intelligence" to intervene, as Newton had "made" it do. Laplace cites with approval Leibniz's criticism of Newton's invocation of divine intervention to restore order to the Solar System: "This is to have very narrow ideas about the wisdom and the power of God." He evidently shared Leibniz's astonishment at Newton's belief "that God has made his machine so badly that unless he affects it by some extraordinary means, the watch will very soon cease to go."
In a group of manuscripts, preserved in relative secrecy in a black envelope in the library of the Académie des sciences and published for the first time by Hahn, Laplace mounted a deist critique of Christianity. It is, he writes, the "first and most infallible of principles ... to reject miraculous facts as untrue." As for the doctrine of transubstantiation, it "offends at the same time reason, experience, the testimony of all our senses, the eternal laws of nature, and the sublime ideas that we ought to form of the Supreme Being." It is the sheerest absurdity to suppose that "the sovereign lawgiver of the universe would suspend the laws that he has established, and which he seems to have maintained invariably."
Laplace also ridiculed the use of probability in theology. Even following Pascal's reasoning presented in Pascal's wager, it is not worth making a bet, for the hope of profit – equal to the product of the value of the testimonies (infinitely small) and the value of the happiness they promise (which is significant but finite) – must necessarily be infinitely small.
In old age, Laplace remained curious about the question of God and frequently discussed Christianity with the Swiss astronomer Jean-Frédéric-Théodore Maurice. He told Maurice that "Christianity is quite a beautiful thing" and praised its civilising influence. Maurice thought that the basis of Laplace's beliefs was, little by little, being modified, but that he held fast to his conviction that the invariability of the laws of nature did not permit of supernatural events. After Laplace's death, Poisson told Maurice, "You know that I do not share your [religious] opinions, but my conscience forces me to recount something that will surely please you." When Poisson had complimented Laplace about his "brilliant discoveries", the dying man had fixed him with a pensive look and replied, "Ah! We chase after phantoms [chimères]." These were his last words, interpreted by Maurice as a realisation of the ultimate "vanity" of earthly pursuits. Laplace received the last rites from the curé of the Missions Étrangères (in whose parish he was to be buried) and the curé of Arcueil.
According to his biographer, Roger Hahn, it is "not credible" that Laplace "had a proper Catholic end", and he "remained a skeptic" to the very end of his life. Laplace in his last years has been described as an agnostic.
=== Excommunication of a comet ===
In 1470 the humanist scholar Bartolomeo Platina wrote that Pope Callixtus III had asked for prayers for deliverance from the Turks during a 1456 appearance of Halley's Comet. Platina's account does not accord with Church records, which do not mention the comet. Laplace is alleged to have embellished the story by claiming the Pope had "excommunicated" Halley's comet. What Laplace actually said, in Exposition du système du monde (1796), was that the Pope had ordered the comet to be "exorcised" (conjuré). It was Arago, in Des Comètes en général (1832), who first spoke of an excommunication.
== Honors ==
Correspondent of the Royal Institute of the Netherlands in 1809.
Foreign Honorary Member of the American Academy of Arts and Sciences in 1822.
The asteroid 4628 Laplace is named for Laplace.
A spur of the Montes Jura on the Moon is known as Promontorium Laplace.
His name is one of the 72 names inscribed on the Eiffel Tower.
The tentative working name of the European Space Agency Europa Jupiter System Mission is the "Laplace" space probe.
A train station in the RER B in Arcueil bears his name.
A street in Verkhnetemernitsky (near Rostov-on-Don, Russia).
The Institute of Electrical and Electronics Engineers (IEEE) Signal Processing Society's Early Career Technical Achievement Award is named in his honor.
== Quotations ==
I had no need of that hypothesis. ("Je n'avais pas besoin de cette hypothèse-là", allegedly as a reply to Napoleon, who had asked why he hadn't mentioned God in his book on astronomy.)
It is therefore obvious that ... (Frequently used in the Celestial Mechanics when he had proved something and mislaid the proof, or found it clumsy. Notorious as a signal for something true, but hard to prove.)
If we seek a cause wherever we perceive symmetry, it is not that we regard a symmetrical event as less possible than the others, but, since this event ought to be the effect of a regular cause or that of chance, the first of these suppositions is more probable than the second.
The more extraordinary the event, the greater the need of its being supported by strong proofs.
"We are so far from knowing all the agents of nature and their diverse modes of action that it would not be philosophical to deny phenomena solely because they are inexplicable in the actual state of our knowledge. But we ought to examine them with an attention all the more scrupulous as it appears more difficult to admit them."
This is restated in Theodore Flournoy's work From India to the Planet Mars as the Principle of Laplace or, "The weight of the evidence should be proportioned to the strangeness of the facts."
Most often repeated as "The weight of evidence for an extraordinary claim must be proportioned to its strangeness." (see also: Sagan standard)
This simplicity of ratios will not appear astonishing if we consider that all the effects of nature are only mathematical results of a small number of immutable laws.
Infinitely varied in her effects, nature is only simple in her causes.
What we know is little, and what we are ignorant of is immense. (Fourier comments: "This was at least the meaning of his last words, which were articulated with difficulty.")
One sees in this essay that the theory of probabilities is basically only common sense reduced to a calculus. It makes one estimate accurately what right-minded people feel by a sort of instinct, often without being able to give a reason for it.
== List of works ==
Traité de mécanique céleste (in French). Vol. 1. Paris: Charles Crapelet. 1799.
Traité de mécanique céleste (in French). Vol. 2. Paris: Charles Crapelet. 1799.
Traité de mécanique céleste (in French). Vol. 3. Paris: Charles Crapelet. 1802.
Traité de mécanique céleste (in French). Vol. 4. Paris: Charles Crapelet. 1805.
Traité de mécanique céleste (in French). Vol. 5. Paris: Charles Louis Étienne Bachelier. 1852.
Précis de l'histoire de l'astronomie (in Italian). Milano: Angelo Stanislao Brambilla. 1823.
Exposition du système du monde (in French). Paris: Charles Louis Étienne Bachelier. 1824.
== Bibliography ==
Œuvres complètes de Laplace, 14 vol. (1878–1912), Paris: Gauthier-Villars (copy from Gallica in French)
Théorie du mouvement et de la figure elliptique des planètes (1784) Paris (not in Œuvres complètes)
Précis de l'histoire de l'astronomie
Alphonse Rebière, Mathématiques et mathématiciens, 3rd edition Paris, Nony & Cie, 1898.
=== English translations ===
Bowditch, N. (trans.) (1829–1839) Mécanique céleste, 4 vols, Boston
New edition by Reprint Services ISBN 0-7812-2022-X
– [1829–1839] (1966–1969) Celestial Mechanics, 5 vols, including the original French
Pound, J. (trans.) (1809) The System of the World, 2 vols, London: Richard Phillips
– The System of the World (v.1)
– The System of the World (v.2)
– [1809] (2007) The System of the World, vol.1, Kessinger, ISBN 1-4326-5367-9
Toplis, J. (trans.) (1814) A treatise upon analytical mechanics Nottingham: H. Barnett
Laplace, Pierre Simon Marquis De (2007) [1902]. A Philosophical Essay on Probabilities. Translated by Truscott, F.W. & Emory, F.L. Cosimo. ISBN 978-1-60206-328-0., translated from the French 6th ed. (1840)
A Philosophical Essay on Probabilities (1902) at the Internet Archive
Dale, Andrew I.; Laplace, Pierre-Simon (1995). Philosophical Essay on Probabilities. Sources in the History of Mathematics and Physical Sciences. Vol. 13. Translated by Andrew I. Dale. Springer. doi:10.1007/978-1-4612-4184-3. hdl:2027/coo1.ark:/13960/t3126f008. ISBN 978-1-4612-8689-9., translated from the French 5th ed. (1825)
== See also ==
History of the metre
Laplace–Bayes estimator
Ratio estimator
Seconds pendulum
List of things named after Pierre-Simon Laplace
Pascal's wager
== References ==
=== Citations ===
=== General sources ===
== External links ==
"Laplace, Pierre (1749–1827)". Eric Weisstein's World of Scientific Biography. Wolfram Research. Retrieved 24 August 2007.
"Pierre-Simon Laplace" in the MacTutor History of Mathematics archive.
"Bowditch's English translation of Laplace's preface". Mécanique Céleste. The MacTutor History of Mathematics archive. Retrieved 4 September 2007.
Guide to the Pierre Simon Laplace Papers at The Bancroft Library
Pierre-Simon Laplace at the Mathematics Genealogy Project
English translation Archived 27 December 2012 at the Wayback Machine of a large part of Laplace's work in probability and statistics, provided by Richard Pulskamp Archived 29 October 2012 at the Wayback Machine
Pierre-Simon Laplace – Œuvres complètes (last 7 volumes only) Gallica-Math
"Sur le mouvement d'un corps qui tombe d'une grande hauteur" (Laplace 1803), online and analysed on BibNum Archived 2 April 2015 at the Wayback Machine (English). | Wikipedia/Analytical_Theory_of_Probabilities |
An economic model is a theoretical construct representing economic processes by a set of variables and a set of logical and/or quantitative relationships between them. The economic model is a simplified, often mathematical, framework designed to illustrate complex processes. Frequently, economic models posit structural parameters. A model may have various exogenous variables, and those variables may change to create various responses by economic variables. Methodological uses of models include investigation, theorizing, and fitting theories to the world.
== Overview ==
In general terms, economic models have two functions: first as a simplification of and abstraction from observed data, and second as a means of selection of data based on a paradigm of econometric study.
Simplification is particularly important for economics given the enormous complexity of economic processes. This complexity can be attributed to the diversity of factors that determine economic activity; these factors include: individual and cooperative decision processes, resource limitations, environmental and geographical constraints, institutional and legal requirements and purely random fluctuations. Economists therefore must make a reasoned choice of which variables and which relationships between these variables are relevant and which ways of analyzing and presenting this information are useful.
Selection is important because the nature of an economic model will often determine what facts will be looked at and how they will be compiled. For example, inflation is a general economic concept, but to measure inflation requires a model of behavior, so that an economist can differentiate between changes in relative prices and changes in price that are to be attributed to inflation.
In addition to their professional academic interest, uses of models include:
Forecasting economic activity in a way in which conclusions are logically related to assumptions;
Proposing economic policy to modify future economic activity;
Presenting reasoned arguments to politically justify economic policy at the national level, to explain and influence company strategy at the level of the firm, or to provide intelligent advice for household economic decisions at the level of households.
Planning and allocation, in the case of centrally planned economies, and on a smaller scale in logistics and management of businesses.
In finance, predictive models have been used since the 1980s for trading (investment and speculation). For example, emerging market bonds were often traded based on economic models predicting the growth of the developing nation issuing them. Since the 1990s many long-term risk management models have incorporated economic relationships between simulated variables in an attempt to detect high-exposure future scenarios (often through a Monte Carlo method).
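A minimal sketch of this kind of Monte Carlo scenario generation is given below; the stochastic growth process, parameter values, and the 5th-percentile "high-exposure" statistic are illustrative assumptions, not part of the article:

```python
import numpy as np

rng = np.random.default_rng(0)

n_scenarios, horizon = 10_000, 10   # number of simulated paths and years
mu, sigma = 0.03, 0.08              # assumed mean log-growth and volatility

# Simulate log-growth shocks and accumulate them into index paths
shocks = rng.normal(mu, sigma, size=(n_scenarios, horizon))
paths = 100 * np.exp(np.cumsum(shocks, axis=1))   # index starts at 100

# A simple "high-exposure" statistic: the 5th percentile of the final level
print("5% worst-case level after 10 years:", np.percentile(paths[:, -1], 5))
```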
A model establishes an argumentative framework for applying logic and mathematics that can be independently discussed and tested and that can be applied in various instances. Policies and arguments that rely on economic models have a clear basis for soundness, namely the validity of the supporting model.
Economic models in current use do not pretend to be theories of everything economic; any such pretensions would immediately be thwarted by computational infeasibility and the incompleteness or lack of theories for various types of economic behavior. Therefore, conclusions drawn from models will be approximate representations of economic facts. However, properly constructed models can remove extraneous information and isolate useful approximations of key relationships. In this way more can be understood about the relationships in question than by trying to understand the entire economic process.
The details of model construction vary with type of model and its application, but a generic process can be identified. Generally, any modelling process has two steps: generating a model, then checking the model for accuracy (sometimes called diagnostics). The diagnostic step is important because a model is only useful to the extent that it accurately mirrors the relationships that it purports to describe. Creating and diagnosing a model is frequently an iterative process in which the model is modified (and hopefully improved) with each iteration of diagnosis and respecification. Once a satisfactory model is found, it should be double checked by applying it to a different data set.
== Types of models ==
According to whether all the model variables are deterministic, economic models can be classified as stochastic or non-stochastic models; according to whether all the variables are quantitative, economic models are classified as discrete or continuous choice models; according to the model's intended purpose/function, it can be classified as quantitative or qualitative; according to the model's ambit, it can be classified as a general equilibrium model, a partial equilibrium model, or even a non-equilibrium model; according to the economic agent's characteristics, models can be classified as rational agent models, representative agent models, etc.
Stochastic models are formulated using stochastic processes. They model economically observable values over time. Most of econometrics is based on statistics to formulate and test hypotheses about these processes or estimate parameters for them. A widely used class of simple econometric models, popularized by Tinbergen and later Wold, are autoregressive models, in which the stochastic process satisfies some relation between current and past values. Examples of these are autoregressive moving average models and related ones such as autoregressive conditional heteroskedasticity (ARCH) and GARCH models for the modelling of heteroskedasticity.
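A minimal sketch of the simplest member of this class, an AR(1) process in which each value depends linearly on the previous value plus a random shock, is shown below; the persistence coefficient and sample size are illustrative assumptions rather than estimated values:

```python
import numpy as np

rng = np.random.default_rng(42)

phi, n_obs = 0.8, 200                 # hypothetical persistence parameter and sample size
eps = rng.normal(0.0, 1.0, n_obs)     # i.i.d. shocks

x = np.zeros(n_obs)
for t in range(1, n_obs):
    # current value depends on the previous value plus a random shock
    x[t] = phi * x[t - 1] + eps[t]

print(x[:5])
```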
Non-stochastic models may be purely qualitative (for example, relating to social choice theory) or quantitative (involving rationalization of financial variables, for example with hyperbolic coordinates, and/or specific forms of functional relationships between variables). In some cases economic predictions of a model merely assert the direction of movement of economic variables, and so the functional relationships are used only in a qualitative sense: for example, if the price of an item increases, then the demand for that item will decrease. For such models, economists often use two-dimensional graphs instead of functions.
Qualitative models – although almost all economic models involve some form of mathematical or quantitative analysis, qualitative models are occasionally used. One example is qualitative scenario planning in which possible future events are played out. Another example is non-numerical decision tree analysis. Qualitative models often suffer from lack of precision.
At a more practical level, quantitative modelling is applied to many areas of economics and several methodologies have evolved more or less independently of each other. As a result, no overall model taxonomy is naturally available. We can nonetheless provide a few examples that illustrate some particularly relevant points of model construction.
An accounting model is one based on the premise that for every credit there is a debit. More symbolically, an accounting model expresses some principle of conservation in the form
algebraic sum of inflows = sinks − sources
This principle is certainly true for money and it is the basis for national income accounting. Accounting models are true by convention; that is, any experimental failure to confirm them would be attributed to fraud, arithmetic error, or an extraneous injection (or destruction) of cash, which we would interpret as showing the experiment was conducted improperly.
Optimality and constrained optimization models – Other examples of quantitative models are based on principles such as profit or utility maximization. An example of such a model is given by the comparative statics of taxation on the profit-maximizing firm. The profit of a firm is given by
{\displaystyle \pi (x,t)=xp(x)-C(x)-tx}
where p(x) is the price that a product commands in the market if it is supplied at the rate x, xp(x) is the revenue obtained from selling the product, C(x) is the cost of bringing the product to market at the rate x, and t is the tax that the firm must pay per unit of the product sold.
The profit maximization assumption states that a firm will produce at the output rate x if that rate maximizes the firm's profit. Using differential calculus we can obtain conditions on x under which this holds. The first order maximization condition for x is
{\displaystyle {\frac {\partial \pi (x,t)}{\partial x}}={\frac {\partial (xp(x)-C(x))}{\partial x}}-t=0}
Regarding x as an implicitly defined function of t by this equation (see implicit function theorem), one concludes that the derivative of x with respect to t has the same sign as
{\displaystyle {\frac {\partial ^{2}(xp(x)-C(x))}{\partial x^{2}}}={\frac {\partial ^{2}\pi (x,t)}{\partial x^{2}}},}
which is negative if the second order conditions for a local maximum are satisfied.
Thus the profit maximization model predicts something about the effect of taxation on output, namely that output decreases with increased taxation. If the predictions of the model fail, we conclude that the profit maximization hypothesis was false; this should lead to alternate theories of the firm, for example based on bounded rationality.
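The comparative statics above can be checked symbolically for particular functional forms. The sketch below, using SymPy, assumes a linear inverse demand p(x) = a − bx and a quadratic cost C(x) = cx²; these functional forms and the use of SymPy are illustrative assumptions, not part of the article:

```python
import sympy as sp

x, t = sp.symbols('x t', positive=True)
a, b, c = sp.symbols('a b c', positive=True)

# Profit with linear inverse demand and quadratic cost (illustrative forms)
p = a - b * x
profit = x * p - c * x**2 - t * x

# First-order condition and the profit-maximizing output x(t)
foc = sp.diff(profit, x)
x_star = sp.solve(sp.Eq(foc, 0), x)[0]     # (a - t) / (2*(b + c))

# The model's prediction: output falls as the tax rate rises
print(sp.diff(x_star, t))                  # -1/(2*(b + c)) < 0
```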
Borrowing a notion apparently first used in economics by Paul Samuelson, this model of taxation, with its predicted dependency of output on the tax rate, illustrates an operationally meaningful theorem; that is, one requiring some economically meaningful assumption that is falsifiable under certain conditions.
Aggregate models. Macroeconomics needs to deal with aggregate quantities such as output, the price level, the interest rate and so on. Now real output is actually a vector of goods and services, such as cars, passenger airplanes, computers, food items, secretarial services, home repair services etc. Similarly price is the vector of individual prices of goods and services. Models in which the vector nature of the quantities is maintained are used in practice, for example Leontief input–output models are of this kind. However, for the most part, these models are computationally much harder to deal with and harder to use as tools for qualitative analysis. For this reason, macroeconomic models usually lump together different variables into a single quantity such as output or price. Moreover, quantitative relationships between these aggregate variables are often parts of important macroeconomic theories. This process of aggregation and functional dependency between various aggregates usually is interpreted statistically and validated by econometrics. For instance, one ingredient of the Keynesian model is a functional relationship between consumption and national income: C = C(Y). This relationship plays an important role in Keynesian analysis.
== Problems with economic models ==
Most economic models rest on a number of assumptions that are not entirely realistic. For example, agents are often assumed to have perfect information, and markets are often assumed to clear without friction. Or, the model may omit issues that are important to the question being considered, such as externalities. Any analysis of the results of an economic model must therefore consider the extent to which these results may be compromised by inaccuracies in these assumptions, and a large literature has grown up discussing problems with economic models, or at least asserting that their results are unreliable.
== History ==
One of the major problems addressed by economic models has been understanding economic growth. An early attempt to provide a technique to approach this came from the French physiocratic school in the eighteenth century. Among these economists, François Quesnay was known particularly for his development and use of tables he called Tableaux économiques. These tables have in fact been interpreted in more modern terminology as a Leontief model, see the Phillips reference below.
All through the 18th century (that is, well before the founding of modern political economy, conventionally marked by Adam Smith's 1776 Wealth of Nations), simple probabilistic models were used to understand the economics of insurance. This was a natural extrapolation of the theory of gambling, and played an important role both in the development of probability theory itself and in the development of actuarial science. Many of the giants of 18th century mathematics contributed to this field. Around 1730, De Moivre addressed some of these problems in the 3rd edition of The Doctrine of Chances. Even earlier (1709), Nicolas Bernoulli studied problems related to savings and interest in the Ars Conjectandi. In 1730, Daniel Bernoulli studied "moral probability" in his book Mensura Sortis, where he introduced what would today be called "logarithmic utility of money" and applied it to gambling and insurance problems, including a solution of the paradoxical Saint Petersburg problem. All of these developments were summarized by Laplace in his Analytical Theory of Probabilities (1812). Thus, by the time David Ricardo came along he had a well-established mathematical basis to draw from.
== Tests of macroeconomic predictions ==
In the late 1980s, the Brookings Institution compared 12 leading macroeconomic models available at the time. They compared the models' predictions for how the economy would respond to specific economic shocks (allowing the models to control for all the variability in the real world; this was a test of model vs. model, not a test against the actual outcome). Although the models simplified the world and started from stable, known common parameters, the various models gave significantly different answers. For instance, in calculating the impact of a monetary loosening on output some models estimated a 3% change in GDP after one year, and one gave almost no change, with the rest spread between.
Partly as a result of such experiments, modern central bankers no longer have as much confidence that it is possible to 'fine-tune' the economy as they had in the 1960s and early 1970s. Modern policy makers tend to use a less activist approach, explicitly because they lack confidence that their models will actually predict where the economy is going, or the effect of any shock upon it. The new, more humble, approach sees danger in dramatic policy changes based on model predictions, because of several practical and theoretical limitations in current macroeconomic models; in addition to the theoretical pitfalls listed above, some problems specific to aggregate modelling are:
Limitations in model construction caused by difficulties in understanding the underlying mechanisms of the real economy. (Hence the profusion of separate models.)
The law of unintended consequences, on elements of the real economy not yet included in the model.
The time lag in both receiving data and the reaction of economic variables to policy makers attempts to 'steer' them (mostly through monetary policy) in the direction that central bankers want them to move. Milton Friedman has vigorously argued that these lags are so long and unpredictably variable that effective management of the macroeconomy is impossible.
The difficulty in correctly specifying all of the parameters (through econometric measurements) even if the structural model and data were perfect.
The fact that all the model's relationships and coefficients are stochastic, so that the error term becomes very large quickly, and the available snapshot of the input parameters is already out of date.
Modern economic models incorporate the reaction of the public and market to the policy maker's actions (through game theory), and this feedback is included in modern models (following the rational expectations revolution and Robert Lucas, Jr.'s Lucas critique of non-microfounded models). If the response to the decision maker's actions (and their credibility) must be included in the model then it becomes much harder to influence some of the variables simulated.
=== Comparison with models in other sciences ===
Complex systems specialist and mathematician David Orrell wrote on this issue in his book Apollo's Arrow and explained that the weather, human health and economics use similar methods of prediction (mathematical models). Their systems—the atmosphere, the human body and the economy—also have similar levels of complexity. He found that forecasts fail because the models suffer from two problems: (i) they cannot capture the full detail of the underlying system, so rely on approximate equations; (ii) they are sensitive to small changes in the exact form of these equations. This is because complex systems like the economy or the climate consist of a delicate balance of opposing forces, so a slight imbalance in their representation has big effects. Thus, predictions of things like economic recessions are still highly inaccurate, despite the use of enormous models running on fast computers.
See Unreasonable ineffectiveness of mathematics § Economics and finance.
=== Effects of deterministic chaos on economic models ===
Economic and meteorological simulations may share a fundamental limit to their predictive powers: chaos. Although the modern mathematical work on chaotic systems began in the 1970s the danger of chaos had been identified and defined in Econometrica as early as 1958:
"Good theorising consists to a large extent in avoiding assumptions ... [with the property that] a small change in what is posited will seriously affect the conclusions."
(William Baumol, Econometrica, 26 see: Economics on the Edge of Chaos).
It is straightforward to design economic models susceptible to butterfly effects of initial-condition sensitivity.
However, the econometric research program to identify which variables are chaotic (if any) has largely concluded that aggregate macroeconomic variables probably do not behave chaotically. This would mean that refinements to the models could ultimately produce reliable long-term forecasts. However, the validity of this conclusion has generated two challenges:
In 2004 Philip Mirowski challenged this view and those who hold it, saying that chaos in economics is suffering from a biased "crusade" against it by neo-classical economics in order to preserve their mathematical models.
The variables in finance may well be subject to chaos. Also in 2004, the University of Canterbury study Economics on the Edge of Chaos concludes that after noise is removed from S&P 500 returns, evidence of deterministic chaos is found.
More recently, chaos (or the butterfly effect) has been identified as less significant than previously thought to explain prediction errors. Rather, the predictive power of economics and meteorology would mostly be limited by the models themselves and the nature of their underlying systems (see Comparison with models in other sciences above).
=== Critique of hubris in planning ===
A key strand of free market economic thinking is that the market's invisible hand guides an economy to prosperity more efficiently than central planning using an economic model. One reason, emphasized by Friedrich Hayek, is the claim that many of the true forces shaping the economy can never be captured in a single plan. This is an argument that cannot be made through a conventional (mathematical) economic model because it says that there are critical systemic elements that will always be omitted from any top-down analysis of the economy.
== Examples of economic models ==
Cobb–Douglas model of production
Solow–Swan model of economic growth
Lucas islands model of money supply
Heckscher–Ohlin model of international trade
Black–Scholes model of option pricing
AD–AS model of aggregate demand and aggregate supply
IS–LM model of the relationship between interest rates and asset markets
Ramsey–Cass–Koopmans model of economic growth
Gordon–Loeb model for cyber security investments
== See also ==
Economic methodology
Computational economics
Agent-based computational economics
Endogeneity
Financial model
== Notes ==
== References ==
Baumol, William & Blinder, Alan (1982), Economics: Principles and Policy (2nd ed.), New York: Harcourt Brace Jovanovich, ISBN 0-15-518839-9.
Caldwell, Bruce (1994), Beyond Positivism: Economic Methodology in the Twentieth Century (Revised ed.), New York: Routledge, ISBN 0-415-10911-6.
Holcombe, R. (1989), Economic Models and Methodology, New York: Greenwood Press, ISBN 0-313-26679-4. Defines model by analogy with maps, an idea borrowed from Baumol and Blinder. Discusses deduction within models, and logical derivation of one model from another. Chapter 9 compares the neoclassical school and the Austrian School, in particular in relation to falsifiability.
Lange, Oskar (1945), "The Scope and Method of Economics", Review of Economic Studies, 13 (1), The Review of Economic Studies Ltd.: 19–32, doi:10.2307/2296113, JSTOR 2296113, S2CID 4140287. One of the earliest studies on methodology of economics, analysing the postulate of rationality.
de Marchi, N. B. & Blaug, M. (1991), Appraising Economic Theories: Studies in the Methodology of Research Programs, Brookfield, VT: Edward Elgar, ISBN 1-85278-515-2. A series of essays and papers analysing questions about how (and whether) models and theories in economics are empirically verified and the current status of positivism in economics.
Morishima, Michio (1976), The Economic Theory of Modern Society, New York: Cambridge University Press, ISBN 0-521-21088-7. A thorough discussion of many quantitative models used in modern economic theory. Also a careful discussion of aggregation.
Orrell, David (2007), Apollo's Arrow: The Science of Prediction and the Future of Everything, Toronto: Harper Collins Canada, ISBN 978-0-00-200740-5.
Phillips, Almarin (1955), "The Tableau Économique as a Simple Leontief Model", Quarterly Journal of Economics, 69 (1), The MIT Press: 137–44, doi:10.2307/1884854, JSTOR 1884854.
Samuelson, Paul A. (1948), "The Simple Mathematics of Income Determination", in Metzler, Lloyd A. (ed.), Income, Employment and Public Policy; essays in honor of Alvin Hansen, New York: W. W. Norton.
Samuelson, Paul A. (1983), Foundations of Economic Analysis (Enlarged ed.), Cambridge: Harvard University Press, ISBN 0-674-31301-1. This is a classic book carefully discussing comparative statics in microeconomics, though some dynamics is studied as well as some macroeconomic theory. This should not be confused with Samuelson's popular textbook.
Tinbergen, Jan (1939), Statistical Testing of Business Cycle Theories, Geneva: League of Nations.
Walsh, Vivian (1987), "Models and theory", The New Palgrave: A Dictionary of Economics, vol. 3, New York: Stockton Press, pp. 482–83, ISBN 0-935859-10-1.
Wold, H. (1938), A Study in the Analysis of Stationary Time Series, Stockholm: Almqvist and Wicksell.
Wold, H. & Jureen, L. (1953), Demand Analysis: A Study in Econometrics, New York: Wiley.
Gordon, Lawrence A.; Loeb, Martin P. (November 2002). "The Economics of Information Security Investment". ACM Transactions on Information and System Security. 5 (4): 438–457. doi:10.1145/581271.581274. S2CID 1500788.
== External links ==
R. Frigg and S. Hartmann, Models in Science. Entry in the Stanford Encyclopedia of Philosophy.
H. Varian How to build a model in your spare time The author makes several unexpected suggestions: Look for a model in the real world, not in journals. Look at the literature later, not sooner.
Elmer G. Wiens: Classical & Keynesian AD-AS Model – An on-line, interactive model of the Canadian Economy.
IFs Economic Sub-Model [1]: Online Global Model
Economic attractor
The Ramsey–Cass–Koopmans model (also known as the Ramsey growth model or the neoclassical growth model) is a foundational model in neoclassical economics that describes the dynamics of economic growth over time. It builds upon the pioneering work of Frank P. Ramsey (1928), with later extensions by David Cass and Tjalling Koopmans in the 1960s.
The model extends the Solow–Swan model by endogenizing the savings rate through explicit microfoundations of consumption behavior: rather than assuming a constant saving rate, the model derives it from the intertemporal optimization of a representative agent who chooses consumption to maximize utility over an infinite horizon. This approach leads to a richer dynamic structure in the transition to the long-run steady state, and yields a Pareto efficient outcome.
Ramsey originally formulated the model as a social planner’s problem—maximizing aggregate consumption across generations—before it was reformulated by Cass and Koopmans as a decentralized economy with a representative agent and competitive markets. The model is designed to explain long-run growth trends rather than short-term business cycle fluctuations and does not incorporate elements like market imperfections, heterogeneous agents, or exogenous shocks. Later developments, such as real business cycle theory, extended the model’s structure, allowing for government purchases, employment variations, and other shocks.
== Mathematical description ==
=== Model setup ===
In the usual setup, time is continuous, starting, for simplicity, at t = 0 and continuing forever. By assumption, the only productive factors are capital K and labour L, both required to be nonnegative. The labour force, which makes up the entire population, is assumed to grow at a constant rate n, i.e. {\displaystyle {\dot {L}}={\tfrac {\mathrm {d} L}{\mathrm {d} t}}=nL}, implying that {\displaystyle L=L_{0}e^{nt}} with initial level {\displaystyle L_{0}>0} at t = 0. Finally, let Y denote aggregate production and C denote aggregate consumption.
The variables that the Ramsey–Cass–Koopmans model ultimately aims to describe are the per capita (or more accurately, per labour) consumption:
{\displaystyle c={\frac {C}{L}}}
and capital intensity:
{\displaystyle k={\frac {K}{L}}}
It does so by connecting capital accumulation, written {\displaystyle {\dot {K}}={\tfrac {\mathrm {d} K}{\mathrm {d} t}}} in Newton's notation, with consumption C, describing a consumption–investment trade-off. More specifically, since the existing capital stock decays by the depreciation rate δ (assumed to be constant), it requires investment of current-period production output Y. Thus,
{\displaystyle {\dot {K}}=Y-\delta K-cL}
The relationship between the productive factors and aggregate output is described by the aggregate production function, {\displaystyle Y=F(K,L)}. A common choice is the Cobb–Douglas production function {\displaystyle F(K,L)=AK^{1-\alpha }L^{\alpha }}, but generally any production function satisfying the Inada conditions is permissible. Importantly, though, F is required to be homogeneous of degree 1, which economically implies constant returns to scale. With this assumption, we can re-express aggregate output in per capita terms:
{\displaystyle F(K,L)=L\cdot F\left({\frac {K}{L}},1\right)=L\cdot f(k)}
For example, if we use the Cobb–Douglas production function with A = 1 and α = 0.5, then {\displaystyle f(k)=k^{0.5}}.
To obtain the first key equation of the Ramsey–Cass–Koopmans model, the dynamic equation for the capital stock needs to be expressed in per capita terms. Noting the quotient rule for {\displaystyle {\tfrac {\mathrm {d} }{\mathrm {d} t}}\left({\tfrac {K}{L}}\right)}, we have
{\displaystyle {\dot {k}}=f(k)-(n+\delta )k-c}
a non-linear differential equation akin to that of the Solow–Swan model, but incorporating endogenous consumption c, which reflects the model's microfoundations.
=== Maximizing welfare ===
If we ignore the problem of how consumption is distributed, then the rate of utility U is a function of aggregate consumption, that is, {\displaystyle U=U(C,t)}. To avoid the problem of infinity, we exponentially discount future utility at a discount rate {\displaystyle \rho \in (0,\infty )}. A high ρ reflects high impatience.
The social planner's problem is maximizing the social welfare function
{\displaystyle U_{0}=\int _{0}^{\infty }e^{-\rho t}U(C,t)\,\mathrm {d} t}
Assume that the economy is populated by identical immortal individuals with unchanging utility functions u(c) (a representative agent), such that the total utility is:
{\displaystyle U(C,t)=Lu(c)=L_{0}e^{nt}u(c)}
The utility function is assumed to be strictly increasing (i.e., there is no bliss point) and concave in c, with {\displaystyle \lim _{c\to 0}u_{c}=\infty }, where {\displaystyle u_{c}={\tfrac {\partial u}{\partial c}}} is the marginal utility of consumption. Thus, we have the social planner's problem:
{\displaystyle \max _{c}\int _{0}^{\infty }e^{-(\rho -n)t}u(c)\,\mathrm {d} t}
{\displaystyle {\text{subject to}}\quad c=f(k)-(n+\delta )k-{\dot {k}}}
where an initial non-zero capital stock {\displaystyle k(0)=k_{0}>0} is given. To ensure that the integral is well-defined, we impose ρ > n.
=== Solution ===
The solution, usually found by using a Hamiltonian function, is a differential equation that describes the optimal evolution of consumption, the Keynes–Ramsey rule:
{\displaystyle {\frac {\dot {c}}{c}}=\sigma (c)\left[f_{k}(k)-\delta -\rho \right]}
The term {\displaystyle f_{k}(k)-\delta -\rho }, where {\displaystyle f_{k}=\partial _{k}f} is the marginal product of capital, reflects the marginal return on net investment, accounting for capital depreciation and time discounting.
Here {\displaystyle \sigma (c)} is the elasticity of intertemporal substitution (EIS), defined by
{\displaystyle \sigma (c)=-{\frac {u_{c}(c)}{c\cdot u_{cc}(c)}}=-{\frac {d\ln c}{d\ln(u'(c))}}}
It is formally equivalent to the inverse of relative risk aversion. The quantity reflects the curvature of the utility function and indicates how much the representative agent wishes to smooth consumption over time. If the agent has high relative risk aversion, it has low EIS and thus would be more willing to smooth consumption over time.
It is often assumed that u is strictly monotonically increasing and concave, thus σ > 0. In particular, if utility is logarithmic, then it is constant:
{\displaystyle u(c)=u_{0}\ln c\implies \sigma (c)=1}
We can rewrite the Ramsey rule as
{\displaystyle \underbrace {{\frac {d}{dt}}\ln c} _{\text{consumption delay rate}}=\underbrace {\sigma (c)} _{\text{EIS at current consumption level}}\;\underbrace {[f_{k}(k)-\delta -\rho ]} _{\text{marginal return on net investment}}}
where we interpret {\displaystyle {\tfrac {d}{dt}}\ln c} as the "consumption delay rate", indicating the rate at which current consumption is being postponed in favor of future consumption. A higher value implies that the agent prioritizes saving over consuming today, thereby deferring consumption to later periods.
=== Graphical analysis in phase space ===
The two coupled differential equations for k and c form the Ramsey–Cass–Koopmans dynamical system.
A steady state {\displaystyle (k^{\ast },c^{\ast })} for the system is found by setting {\displaystyle {\dot {k}}} and {\displaystyle {\dot {c}}} equal to zero. There are three solutions:
{\displaystyle f_{k}\left(k^{\ast }\right)=\delta +\rho \quad {\text{and}}\quad c^{\ast }=f\left(k^{\ast }\right)-(n+\delta )k^{\ast }}
{\displaystyle (0,0)}
{\displaystyle f(k^{*})=(n+\delta )k^{*}{\text{ with }}k^{*}>0,\;c^{*}=0}
The first is the only solution in the interior of the upper quadrant. It is a saddle point (as shown below). The second is a repelling point. The third is a degenerate stable equilibrium. The first solution is meant by default, although the other two are important to keep track of.
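For a concrete example, with Cobb–Douglas technology f(k) = k^{\alpha} the interior steady state follows directly from f_{k}(k^{\ast}) = \delta + \rho, giving k^{\ast} = (\alpha/(\delta + \rho))^{1/(1-\alpha)}. The short sketch below evaluates this; the functional form and parameter values are illustrative assumptions, not taken from the article.

```python
# Sketch: interior steady state with Cobb-Douglas technology f(k) = k**alpha.
# The parameter values below are illustrative assumptions, not taken from the article.
alpha, delta, rho, n = 0.33, 0.05, 0.02, 0.01

k_star = (alpha / (delta + rho)) ** (1 / (1 - alpha))  # from f_k(k*) = alpha*k**(alpha-1) = delta + rho
c_star = k_star ** alpha - (n + delta) * k_star        # from c* = f(k*) - (n + delta)*k*

print(f"k* = {k_star:.2f}, c* = {c_star:.2f}")         # roughly k* = 10.1, c* = 1.5 here
```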
Any optimal trajectory must follow the dynamical system. However, since c is a control variable, at each capital intensity k we still need to find the corresponding starting consumption rate c(0) to pin down the optimal trajectory. As it turns out, the optimal trajectory is the unique one that converges to the interior equilibrium point. Any other trajectory either converges to the all-saving equilibrium with k^{\ast} > 0, c^{\ast} = 0, or diverges to k \to 0, c \to \infty, which means that the economy expends all its capital in finite time. Both alternatives achieve a lower overall utility than the trajectory toward the interior equilibrium point.
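One way to see this numerically is a simple shooting experiment: fix k(0), integrate the two differential equations forward for different guesses of c(0), and bisect on the guess. Guesses that are too high exhaust the capital stock, guesses that are too low over-accumulate capital while consumption collapses, and only guesses near the saddle path approach the interior steady state. The sketch below assumes Cobb–Douglas production, log utility (\sigma = 1), and illustrative parameters, none of which are given in the article.

```python
# Sketch: shooting experiment for the saddle path. Cobb-Douglas f(k) = k**alpha, log utility
# (sigma = 1); the parameter values and numerical tolerances are illustrative assumptions.
import numpy as np

alpha, delta, rho, n = 0.33, 0.05, 0.02, 0.01
f = lambda k: k ** alpha
f_k = lambda k: alpha * k ** (alpha - 1)
k_star = (alpha / (delta + rho)) ** (1 / (1 - alpha))

def simulate(k0, c0, T=200.0, dt=0.01):
    """Euler-integrate (k, c) forward until a boundary is hit or the horizon T is reached."""
    k, c = k0, c0
    for _ in range(int(T / dt)):
        if k < 1e-6 or c < 1e-6:
            break
        k_dot = f(k) - (n + delta) * k - c
        c_dot = c * (f_k(k) - delta - rho)   # Keynes-Ramsey rule with sigma = 1
        k, c = k + dt * k_dot, c + dt * c_dot
    return k, c

k0 = 0.5 * k_star          # start below the steady-state capital intensity
lo, hi = 1e-3, f(k0)       # bracket for the initial consumption guess c(0)
for _ in range(60):        # bisection on c(0)
    mid = 0.5 * (lo + hi)
    k_end, _ = simulate(k0, mid)
    if k_end > k_star:     # capital over-accumulates: the consumption guess was too low
        lo = mid
    else:                  # capital (eventually) collapses: the guess was too high
        hi = mid

print(f"saddle-path consumption at k(0) = {k0:.2f} is approximately c(0) = {0.5 * (lo + hi):.3f}")
```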
A qualitative statement about the stability of the solution (k^{\ast}, c^{\ast}) requires a linearization by a first-order Taylor polynomial

\begin{bmatrix} \dot{k} \\ \dot{c} \end{bmatrix} \approx \mathbf{J}(k^{\ast}, c^{\ast}) \begin{bmatrix} k - k^{\ast} \\ c - c^{\ast} \end{bmatrix}
where \mathbf{J}(k^{\ast}, c^{\ast}) is the Jacobian matrix evaluated at steady state, given by

\mathbf{J}(k^{\ast}, c^{\ast}) = \begin{bmatrix} \rho - n & -1 \\ \frac{1}{\sigma} f_{kk}(k)\cdot c^{\ast} & 0 \end{bmatrix}
which has determinant

\left|\mathbf{J}(k^{\ast}, c^{\ast})\right| = \frac{1}{\sigma} f_{kk}(k)\cdot c^{\ast} < 0
since c^{\ast} > 0, \sigma is positive by assumption, and f_{kk} < 0 since f is concave (Inada condition). Since the determinant equals the product of the eigenvalues, the eigenvalues must be real and opposite in sign.
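Under the same illustrative Cobb–Douglas, log-utility parameterization used above (an assumption, not part of the article), the Jacobian at the interior steady state can be evaluated numerically, and its eigenvalues are indeed real and of opposite sign.

```python
# Sketch: eigenvalues of the Jacobian at the interior steady state (saddle-point check).
# Cobb-Douglas f(k) = k**alpha and log utility (sigma = 1); parameter values are illustrative.
import numpy as np

alpha, delta, rho, n, sigma = 0.33, 0.05, 0.02, 0.01, 1.0
k_star = (alpha / (delta + rho)) ** (1 / (1 - alpha))
c_star = k_star ** alpha - (n + delta) * k_star
f_kk = alpha * (alpha - 1) * k_star ** (alpha - 2)      # f''(k) evaluated at k*

J = np.array([[rho - n,                       -1.0],
              [(1.0 / sigma) * f_kk * c_star,  0.0]])
print(np.linalg.eigvals(J))   # one positive and one negative real eigenvalue: a saddle point
```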
Hence, by the stable manifold theorem, the equilibrium is a saddle point, and there exists a unique stable arm, or "saddle path," that converges on the equilibrium, indicated by the blue curve in the phase diagram.
The system is called "saddle path stable" since all unstable trajectories are ruled out by the "no Ponzi scheme" condition:
\lim_{t\to\infty} k\cdot e^{-\int_{0}^{t}\left(f_{k} - n - \delta\right)\mathrm{d}s} \geq 0
implying that the present value of the capital stock cannot be negative.
== History ==
Spear and Young re-examine the history of optimal growth during the 1950s and 1960s, focusing in part on the veracity of the claimed simultaneous and independent development of Cass' "Optimum growth in an aggregative model of capital accumulation" (published in 1965 in the Review of Economic Studies), and Tjalling Koopmans' "On the concept of optimal economic growth" (published in Study Week on the Econometric Approach to Development Planning, 1965, Rome: Pontifical Academy of Science).
Over their lifetimes, neither Cass nor Koopmans ever suggested that their results characterizing optimal growth in the one-sector, continuous-time growth model were anything other than "simultaneous and independent". The priority issue became a discussion point because, in the published version of Koopmans' work, he cited the chapter from Cass' thesis that later became the RES paper. In his paper, Koopmans states in a footnote that Cass independently obtained conditions similar to what he finds. Cass also considers the limiting case where the discount rate goes to zero in his paper. For his part, Cass notes that "after the original version of this paper was completed, a very similar analysis by Koopmans came to our attention. We draw on his results in discussing the limiting case, where the effective social discount rate goes to zero". In the interview that Cass gave to Macroeconomic Dynamics, he credits Koopmans with pointing him to Frank Ramsey's previous work, claiming to have been embarrassed not to have known of it, but says nothing to dispel the basic claim that his work and Koopmans' were independent.
Spear and Young dispute this history, based upon a previously overlooked working paper version of Koopmans' paper, which was the basis for Koopmans' oft-cited presentation at a conference held by the Pontifical Academy of Sciences in October 1963. In this Cowles Discussion paper, there is an error. Koopmans claims in his main result that the Euler equations are both necessary and sufficient to characterize optimal trajectories in the model because any solutions to the Euler equations that do not converge to the optimal steady-state would hit either a zero consumption or zero capital boundary in finite time. This error was presented at the Vatican conference, although no participant commented on the problem at the time of Koopmans' presentation. This can be inferred because the discussion after each paper presentation at the Vatican conference is verbatim in the conference volume.
In the Vatican volume discussion following the presentation of a paper by Edmond Malinvaud, the issue does arise because of Malinvaud's explicit inclusion of a so-called "transversality condition" (which Malinvaud calls Condition I) in his paper. At the end of the presentation, Koopmans asks Malinvaud whether it is not the case that Condition I guarantees that solutions to the Euler equations that do not converge to the optimal steady-state hit a boundary in finite time. Malinvaud replies that this is not the case and suggests that Koopmans look at the example with log utility functions and Cobb-Douglas production functions.
At this point, Koopmans recognizes he has a problem. However, based on a confusing appendix to a later version of the paper produced after the Vatican conference, he seems unable to decide how to deal with the issue raised by Malinvaud's Condition I.
From the Macroeconomic Dynamics interview with Cass, it is clear that Koopmans met with Cass' thesis advisor, Hirofumi Uzawa, at the winter meetings of the Econometric Society in January 1964, where Uzawa advised him that his student [Cass] had solved this problem already. Uzawa must have then provided Koopmans with the copy of Cass' thesis chapter, which he sent along in the guise of the IMSSS Technical Report that Koopmans cited in the published version of his paper. The word "guise" is appropriate here because the TR number listed in Koopmans' citation would have put the issue date of the report in the early 1950s, which it was not.
In the published version of Koopmans' paper, he imposes a new Condition Alpha in addition to the Euler equations, stating that the only admissible trajectory among those satisfying the Euler equations is the one that converges to the optimal steady-state equilibrium of the model. This result is derived in Cass' paper via the imposition of a transversality condition that Cass deduced from relevant sections of a book by Lev Pontryagin. Spear and Young conjecture that Koopmans took this route because he did not want to appear to be "borrowing" either Malinvaud's or Cass' transversality technology.
Based on this and other examination of Malinvaud's contributions in the 1950s (specifically his intuition of the importance of the transversality condition), Spear and Young suggest that the neo-classical growth model might better be called the Ramsey–Malinvaud–Cass model than the established Ramsey–Cass–Koopmans honorific.
== Notes ==
== References ==
== Further reading ==
Acemoglu, Daron (2009). "The Neoclassical Growth Model". Introduction to Modern Economic Growth. Princeton: Princeton University Press. pp. 287–326. ISBN 978-0-691-13292-1.
Barro, Robert J.; Sala-i-Martin, Xavier (2004). "Growth Models with Consumer Optimization". Economic Growth (Second ed.). New York: McGraw-Hill. pp. 85–142. ISBN 978-0-262-02553-9.
Bénassy, Jean-Pascal (2011). "The Ramsey Model". Macroeconomic Theory. New York: Oxford University Press. pp. 145–160. ISBN 978-0-19-538771-1.
Blanchard, Olivier Jean; Fischer, Stanley (1989). "Consumption and Investment: Basic Infinite Horizon Models". Lectures on Macroeconomics. Cambridge: MIT Press. pp. 37–89. ISBN 978-0-262-02283-5.
Miao, Jianjun (2014). "Neoclassical Growth Models". Economic Dynamics in Discrete Time. Cambridge: MIT Press. pp. 353–364. ISBN 978-0-262-02761-8.
Novales, Alfonso; Fernández, Esther; Ruíz, Jesús (2009). "Optimal Growth: Continuous Time Analysis". Economic Growth: Theory and Numerical Solution Methods. Berlin: Springer. pp. 101–154. ISBN 978-3-540-68665-1.
Romer, David (2011). "Infinite-Horizon and Overlapping-Generations Models". Advanced Macroeconomics (Fourth ed.). New York: McGraw-Hill. pp. 49–77. ISBN 978-0-07-351137-5.
== External links ==
Discussion of Ramsey's original paper by Orazio Attanasio on YouTube | Wikipedia/Ramsey–Cass–Koopmans_model |
An economic model is a theoretical construct representing economic processes by a set of variables and a set of logical and/or quantitative relationships between them. The economic model is a simplified, often mathematical, framework designed to illustrate complex processes. Frequently, economic models posit structural parameters. A model may have various exogenous variables, and those variables may change to create various responses by economic variables. Methodological uses of models include investigation, theorizing, and fitting theories to the world.
== Overview ==
In general terms, economic models have two functions: first as a simplification of and abstraction from observed data, and second as a means of selection of data based on a paradigm of econometric study.
Simplification is particularly important for economics given the enormous complexity of economic processes. This complexity can be attributed to the diversity of factors that determine economic activity; these factors include: individual and cooperative decision processes, resource limitations, environmental and geographical constraints, institutional and legal requirements and purely random fluctuations. Economists therefore must make a reasoned choice of which variables and which relationships between these variables are relevant and which ways of analyzing and presenting this information are useful.
Selection is important because the nature of an economic model will often determine what facts will be looked at and how they will be compiled. For example, inflation is a general economic concept, but to measure inflation requires a model of behavior, so that an economist can differentiate between changes in relative prices and changes in price that are to be attributed to inflation.
In addition to their professional academic interest, uses of models include:
Forecasting economic activity in a way in which conclusions are logically related to assumptions;
Proposing economic policy to modify future economic activity;
Presenting reasoned arguments to politically justify economic policy at the national level, to explain and influence company strategy at the level of the firm, or to provide intelligent advice for household economic decisions at the level of households.
Planning and allocation, in the case of centrally planned economies, and on a smaller scale in logistics and management of businesses.
In finance, predictive models have been used since the 1980s for trading (investment and speculation). For example, emerging market bonds were often traded based on economic models predicting the growth of the developing nation issuing them. Since the 1990s many long-term risk management models have incorporated economic relationships between simulated variables in an attempt to detect high-exposure future scenarios (often through a Monte Carlo method).
A model establishes an argumentative framework for applying logic and mathematics that can be independently discussed and tested and that can be applied in various instances. Policies and arguments that rely on economic models have a clear basis for soundness, namely the validity of the supporting model.
Economic models in current use do not pretend to be theories of everything economic; any such pretensions would immediately be thwarted by computational infeasibility and the incompleteness or lack of theories for various types of economic behavior. Therefore, conclusions drawn from models will be approximate representations of economic facts. However, properly constructed models can remove extraneous information and isolate useful approximations of key relationships. In this way more can be understood about the relationships in question than by trying to understand the entire economic process.
The details of model construction vary with type of model and its application, but a generic process can be identified. Generally, any modelling process has two steps: generating a model, then checking the model for accuracy (sometimes called diagnostics). The diagnostic step is important because a model is only useful to the extent that it accurately mirrors the relationships that it purports to describe. Creating and diagnosing a model is frequently an iterative process in which the model is modified (and hopefully improved) with each iteration of diagnosis and respecification. Once a satisfactory model is found, it should be double checked by applying it to a different data set.
== Types of models ==
According to whether all the model variables are deterministic, economic models can be classified as stochastic or non-stochastic models; according to whether all the variables are quantitative, economic models are classified as discrete or continuous choice models; according to the model's intended purpose/function, it can be classified as quantitative or qualitative; according to the model's ambit, it can be classified as a general equilibrium model, a partial equilibrium model, or even a non-equilibrium model; according to the economic agent's characteristics, models can be classified as rational agent models, representative agent models, etc.
Stochastic models are formulated using stochastic processes. They model economically observable values over time. Most of econometrics is based on statistics to formulate and test hypotheses about these processes or estimate parameters for them. A widely used class of simple econometric models, popularized by Tinbergen and later Wold, are autoregressive models, in which the stochastic process satisfies some relation between current and past values. Examples of these are autoregressive moving average models and related ones such as autoregressive conditional heteroskedasticity (ARCH) and GARCH models for the modelling of heteroskedasticity.
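As a minimal illustration of this class (an assumed toy example, not a model discussed in this article), the sketch below simulates a first-order autoregressive process, in which each value depends on the previous value plus a random shock, and recovers the autoregressive coefficient by least squares.

```python
# Sketch: simulate a stationary AR(1) process y_t = phi*y_{t-1} + eps_t and estimate phi.
# The coefficient, sample size and seed are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
phi, T = 0.8, 200                # |phi| < 1 keeps the process stationary
y = np.zeros(T)
for t in range(1, T):
    y[t] = phi * y[t - 1] + rng.standard_normal()

phi_hat = y[:-1] @ y[1:] / (y[:-1] @ y[:-1])   # least-squares estimate of phi
print(f"estimated phi is approximately {phi_hat:.2f}")
```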
Non-stochastic models may be purely qualitative (for example, relating to social choice theory) or quantitative (involving rationalization of financial variables, for example with hyperbolic coordinates, and/or specific forms of functional relationships between variables). In some cases economic predictions of a model merely assert the direction of movement of economic variables, and so the functional relationships are used only in a qualitative sense: for example, if the price of an item increases, then the demand for that item will decrease. For such models, economists often use two-dimensional graphs instead of functions.
Qualitative models – although almost all economic models involve some form of mathematical or quantitative analysis, qualitative models are occasionally used. One example is qualitative scenario planning in which possible future events are played out. Another example is non-numerical decision tree analysis. Qualitative models often suffer from lack of precision.
At a more practical level, quantitative modelling is applied to many areas of economics and several methodologies have evolved more or less independently of each other. As a result, no overall model taxonomy is naturally available. We can nonetheless provide a few examples that illustrate some particularly relevant points of model construction.
An accounting model is one based on the premise that for every credit there is a debit. More symbolically, an accounting model expresses some principle of conservation in the form
algebraic sum of inflows = sinks − sources
This principle is certainly true for money and it is the basis for national income accounting. Accounting models are true by convention; that is, any experimental failure to confirm them would be attributed to fraud, arithmetic error or an extraneous injection (or destruction) of cash, which we would interpret as showing the experiment was conducted improperly.
Optimality and constrained optimization models – Other examples of quantitative models are based on principles such as profit or utility maximization. An example of such a model is given by the comparative statics of taxation on the profit-maximizing firm. The profit of a firm is given by
\pi(x,t) = x\,p(x) - C(x) - t\,x
where p(x) is the price that a product commands in the market if it is supplied at the rate x, x\,p(x) is the revenue obtained from selling the product, C(x) is the cost of bringing the product to market at the rate x, and t is the tax that the firm must pay per unit of the product sold.
The profit maximization assumption states that a firm will produce at the output rate x if that rate maximizes the firm's profit. Using differential calculus we can obtain conditions on x under which this holds. The first order maximization condition for x is
\frac{\partial \pi(x,t)}{\partial x} = \frac{\partial\bigl(x\,p(x) - C(x)\bigr)}{\partial x} - t = 0
Regarding x as an implicitly defined function of t by this equation (see implicit function theorem), one concludes that the derivative of x with respect to t has the same sign as
\frac{\partial^{2}\bigl(x\,p(x) - C(x)\bigr)}{\partial x^{2}} = \frac{\partial^{2}\pi(x,t)}{\partial x^{2}},
which is negative if the second order conditions for a local maximum are satisfied.
Thus the profit maximization model predicts something about the effect of taxation on output, namely that output decreases with increased taxation. If the predictions of the model fail, we conclude that the profit maximization hypothesis was false; this should lead to alternate theories of the firm, for example based on bounded rationality.
Borrowing a notion apparently first used in economics by Paul Samuelson, this model of taxation and the predicted dependency of output on the tax rate illustrates an operationally meaningful theorem; that is, one requiring some economically meaningful assumption that is falsifiable under certain conditions.
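The same comparative statics can be worked out symbolically for a particular specification; the linear inverse demand and quadratic cost functions below are hypothetical choices (assuming the sympy library) made only to make the sign of dx/dt concrete.

```python
# Sketch (assumes sympy): comparative statics of a per-unit tax t with hypothetical
# linear inverse demand p(x) = a - b*x and quadratic cost C(x) = c*x**2.
import sympy as sp

x, t, a, b, c = sp.symbols('x t a b c', positive=True)

profit = x * (a - b * x) - c * x**2 - t * x
foc = sp.Eq(sp.diff(profit, x), 0)          # first-order condition for profit maximization
x_star = sp.solve(foc, x)[0]                # optimal output as a function of the tax rate
print(x_star)                               # equals (a - t)/(2*(b + c))
print(sp.simplify(sp.diff(x_star, t)))      # -1/(2*(b + c)) < 0: output falls as the tax rises
```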
Aggregate models. Macroeconomics needs to deal with aggregate quantities such as output, the price level, the interest rate and so on. Now real output is actually a vector of goods and services, such as cars, passenger airplanes, computers, food items, secretarial services, home repair services etc. Similarly price is the vector of individual prices of goods and services. Models in which the vector nature of the quantities is maintained are used in practice, for example Leontief input–output models are of this kind. However, for the most part, these models are computationally much harder to deal with and harder to use as tools for qualitative analysis. For this reason, macroeconomic models usually lump together different variables into a single quantity such as output or price. Moreover, quantitative relationships between these aggregate variables are often parts of important macroeconomic theories. This process of aggregation and functional dependency between various aggregates usually is interpreted statistically and validated by econometrics. For instance, one ingredient of the Keynesian model is a functional relationship between consumption and national income: C = C(Y). This relationship plays an important role in Keynesian analysis.
== Problems with economic models ==
Most economic models rest on a number of assumptions that are not entirely realistic. For example, agents are often assumed to have perfect information, and markets are often assumed to clear without friction. Or, the model may omit issues that are important to the question being considered, such as externalities. Any analysis of the results of an economic model must therefore consider the extent to which these results may be compromised by inaccuracies in these assumptions, and a large literature has grown up discussing problems with economic models, or at least asserting that their results are unreliable.
== History ==
One of the major problems addressed by economic models has been understanding economic growth. An early attempt to provide a technique to approach this came from the French physiocratic school in the eighteenth century. Among these economists, François Quesnay was known particularly for his development and use of tables he called Tableaux économiques. These tables have in fact been interpreted in more modern terminology as a Leontief model; see the Phillips reference below.
All through the 18th century (that is, well before the founding of modern political economy, conventionally marked by Adam Smith's 1776 Wealth of Nations), simple probabilistic models were used to understand the economics of insurance. This was a natural extrapolation of the theory of gambling, and played an important role both in the development of probability theory itself and in the development of actuarial science. Many of the giants of 18th century mathematics contributed to this field. Around 1730, De Moivre addressed some of these problems in the 3rd edition of The Doctrine of Chances. Even earlier (1709), Nicolas Bernoulli studied problems related to savings and interest in the Ars Conjectandi. In 1730, Daniel Bernoulli studied "moral probability" in his book Mensura Sortis, where he introduced what would today be called "logarithmic utility of money" and applied it to gambling and insurance problems, including a solution of the paradoxical Saint Petersburg problem. All of these developments were summarized by Laplace in his Analytical Theory of Probabilities (1812). Thus, by the time David Ricardo came along he had a well-established mathematical basis to draw from.
== Tests of macroeconomic predictions ==
In the late 1980s, the Brookings Institution compared 12 leading macroeconomic models available at the time. They compared the models' predictions for how the economy would respond to specific economic shocks (allowing the models to control for all the variability in the real world; this was a test of model vs. model, not a test against the actual outcome). Although the models simplified the world and started from a stable set of known, common parameters, the various models gave significantly different answers. For instance, in calculating the impact of a monetary loosening on output, some models estimated a 3% change in GDP after one year, one gave almost no change, and the rest were spread in between.
Partly as a result of such experiments, modern central bankers no longer have as much confidence that it is possible to 'fine-tune' the economy as they had in the 1960s and early 1970s. Modern policy makers tend to use a less activist approach, explicitly because they lack confidence that their models will actually predict where the economy is going, or the effect of any shock upon it. The new, more humble, approach sees danger in dramatic policy changes based on model predictions, because of several practical and theoretical limitations in current macroeconomic models; in addition to the theoretical pitfalls, (listed above) some problems specific to aggregate modelling are:
Limitations in model construction caused by difficulties in understanding the underlying mechanisms of the real economy. (Hence the profusion of separate models.)
The law of unintended consequences, on elements of the real economy not yet included in the model.
The time lag in both receiving data and the reaction of economic variables to policy makers' attempts to 'steer' them (mostly through monetary policy) in the direction that central bankers want them to move. Milton Friedman has vigorously argued that these lags are so long and unpredictably variable that effective management of the macroeconomy is impossible.
The difficulty in correctly specifying all of the parameters (through econometric measurements) even if the structural model and data were perfect.
The fact that all the model's relationships and coefficients are stochastic, so that the error term becomes very large quickly, and the available snapshot of the input parameters is already out of date.
Modern economic models incorporate the reaction of the public and market to the policy maker's actions (through game theory), and this feedback is included in modern models (following the rational expectations revolution and Robert Lucas, Jr.'s Lucas critique of non-microfounded models). If the response to the decision maker's actions (and their credibility) must be included in the model then it becomes much harder to influence some of the variables simulated.
=== Comparison with models in other sciences ===
Complex systems specialist and mathematician David Orrell wrote on this issue in his book Apollo's Arrow and explained that the weather, human health and economics use similar methods of prediction (mathematical models). Their systems—the atmosphere, the human body and the economy—also have similar levels of complexity. He found that forecasts fail because the models suffer from two problems: (i) they cannot capture the full detail of the underlying system, so rely on approximate equations; (ii) they are sensitive to small changes in the exact form of these equations. This is because complex systems like the economy or the climate consist of a delicate balance of opposing forces, so a slight imbalance in their representation has big effects. Thus, predictions of things like economic recessions are still highly inaccurate, despite the use of enormous models running on fast computers.
See Unreasonable ineffectiveness of mathematics § Economics and finance.
=== Effects of deterministic chaos on economic models ===
Economic and meteorological simulations may share a fundamental limit to their predictive powers: chaos. Although the modern mathematical work on chaotic systems began in the 1970s the danger of chaos had been identified and defined in Econometrica as early as 1958:
"Good theorising consists to a large extent in avoiding assumptions ... [with the property that] a small change in what is posited will seriously affect the conclusions."
(William Baumol, Econometrica, 26; see: Economics on the Edge of Chaos).
It is straightforward to design economic models susceptible to butterfly effects of initial-condition sensitivity.
However, the econometric research program to identify which variables are chaotic (if any) has largely concluded that aggregate macroeconomic variables probably do not behave chaotically. This would mean that refinements to the models could ultimately produce reliable long-term forecasts. However, the validity of this conclusion has generated two challenges:
In 2004 Philip Mirowski challenged this view and those who hold it, saying that chaos in economics is suffering from a biased "crusade" against it by neo-classical economics in order to preserve their mathematical models.
The variables in finance may well be subject to chaos. Also in 2004, the University of Canterbury study Economics on the Edge of Chaos concludes that after noise is removed from S&P 500 returns, evidence of deterministic chaos is found.
More recently, chaos (or the butterfly effect) has been identified as less significant than previously thought to explain prediction errors. Rather, the predictive power of economics and meteorology would mostly be limited by the models themselves and the nature of their underlying systems (see Comparison with models in other sciences above).
=== Critique of hubris in planning ===
A key strand of free market economic thinking is that the market's invisible hand guides an economy to prosperity more efficiently than central planning using an economic model. One reason, emphasized by Friedrich Hayek, is the claim that many of the true forces shaping the economy can never be captured in a single plan. This is an argument that cannot be made through a conventional (mathematical) economic model because it says that there are critical systemic-elements that will always be omitted from any top-down analysis of the economy.
== Examples of economic models ==
Cobb–Douglas model of production
Solow–Swan model of economic growth
Lucas islands model of money supply
Heckscher–Ohlin model of international trade
Black–Scholes model of option pricing
AD–AS model, a macroeconomic model of aggregate demand and aggregate supply
IS–LM model of the relationship between interest rates and asset markets
Ramsey–Cass–Koopmans model of economic growth
Gordon–Loeb model for cyber security investments
== See also ==
Economic methodology
Computational economics
Agent-based computational economics
Endogeneity
Financial model
== Notes ==
== References ==
Baumol, William & Blinder, Alan (1982), Economics: Principles and Policy (2nd ed.), New York: Harcourt Brace Jovanovich, ISBN 0-15-518839-9.
Caldwell, Bruce (1994), Beyond Positivism: Economic Methodology in the Twentieth Century (Revised ed.), New York: Routledge, ISBN 0-415-10911-6.
Holcombe, R. (1989), Economic Models and Methodology, New York: Greenwood Press, ISBN 0-313-26679-4. Defines model by analogy with maps, an idea borrowed from Baumol and Blinder. Discusses deduction within models, and logical derivation of one model from another. Chapter 9 compares the neoclassical school and the Austrian School, in particular in relation to falsifiability.
Lange, Oskar (1945), "The Scope and Method of Economics", Review of Economic Studies, 13 (1), The Review of Economic Studies Ltd.: 19–32, doi:10.2307/2296113, JSTOR 2296113, S2CID 4140287. One of the earliest studies on methodology of economics, analysing the postulate of rationality.
de Marchi, N. B. & Blaug, M. (1991), Appraising Economic Theories: Studies in the Methodology of Research Programs, Brookfield, VT: Edward Elgar, ISBN 1-85278-515-2. A series of essays and papers analysing questions about how (and whether) models and theories in economics are empirically verified and the current status of positivism in economics.
Morishima, Michio (1976), The Economic Theory of Modern Society, New York: Cambridge University Press, ISBN 0-521-21088-7. A thorough discussion of many quantitative models used in modern economic theory. Also a careful discussion of aggregation.
Orrell, David (2007), Apollo's Arrow: The Science of Prediction and the Future of Everything, Toronto: Harper Collins Canada, ISBN 978-0-00-200740-5.
Phillips, Almarin (1955), "The Tableau Économique as a Simple Leontief Model", Quarterly Journal of Economics, 69 (1), The MIT Press: 137–44, doi:10.2307/1884854, JSTOR 1884854.
Samuelson, Paul A. (1948), "The Simple Mathematics of Income Determination", in Metzler, Lloyd A. (ed.), Income, Employment and Public Policy; essays in honor of Alvin Hansen, New York: W. W. Norton.
Samuelson, Paul A. (1983), Foundations of Economic Analysis (Enlarged ed.), Cambridge: Harvard University Press, ISBN 0-674-31301-1. This is a classic book carefully discussing comparative statics in microeconomics, though some dynamics is studied as well as some macroeconomic theory. This should not be confused with Samuelson's popular textbook.
Tinbergen, Jan (1939), Statistical Testing of Business Cycle Theories, Geneva: League of Nations.
Walsh, Vivian (1987), "Models and theory", The New Palgrave: A Dictionary of Economics, vol. 3, New York: Stockton Press, pp. 482–83, ISBN 0-935859-10-1.
Wold, H. (1938), A Study in the Analysis of Stationary Time Series, Stockholm: Almqvist and Wicksell.
Wold, H. & Jureen, L. (1953), Demand Analysis: A Study in Econometrics, New York: Wiley.
Gordon, Lawrence A.; Loeb, Martin P. (November 2002). "The Economics of Information Security Investment". ACM Transactions on Information and System Security. 5 (4): 438–457. doi:10.1145/581271.581274. S2CID 1500788.
== External links ==
R. Frigg and S. Hartmann, Models in Science. Entry in the Stanford Encyclopedia of Philosophy.
H. Varian How to build a model in your spare time The author makes several unexpected suggestions: Look for a model in the real world, not in journals. Look at the literature later, not sooner.
Elmer G. Wiens: Classical & Keynesian AD-AS Model – An on-line, interactive model of the Canadian Economy.
IFs Economic Sub-Model [1]: Online Global Model
Economic attractor | Wikipedia/Economic_models |
A macroeconomic model is an analytical tool designed to describe the operation of the economy of a country or a region. These models are usually designed to examine the comparative statics and dynamics of aggregate quantities such as the total amount of goods and services produced, total income earned, the level of employment of productive resources, and the level of prices.
Macroeconomic models may be logical, mathematical, and/or computational; the different types of macroeconomic models serve different purposes and have different advantages and disadvantages. Macroeconomic models may be used to clarify and illustrate basic theoretical principles; they may be used to test, compare, and quantify different macroeconomic theories; they may be used to produce "what if" scenarios (usually to predict the effects of changes in monetary, fiscal, or other macroeconomic policies); and they may be used to generate economic forecasts. Thus, macroeconomic models are widely used in academia in teaching and research, and are also widely used by international organizations, national governments and larger corporations, as well as by economic consultants and think tanks.
== Types ==
=== Simple theoretical models ===
Simple textbook descriptions of the macroeconomy involving a small number of equations or diagrams are often called ‘models’. Examples include the IS-LM model and Mundell–Fleming model of Keynesian macroeconomics, and the Solow model of neoclassical growth theory. These models share several features. They are based on a few equations involving a few variables, which can often be explained with simple diagrams. Many of these models are static, but some are dynamic, describing the economy over many time periods. The variables that appear in these models often represent macroeconomic aggregates (such as GDP or total employment) rather than individual choice variables, and while the equations relating these variables are intended to describe economic decisions, they are not usually derived directly by aggregating models of individual choices. They are simple enough to be used as illustrations of theoretical points in introductory explanations of macroeconomic ideas; but for the same reason, quantitative application to forecasting, testing, or policy evaluation is usually impossible without substantially augmenting the structure of the model.
=== Empirical forecasting models ===
In the 1940s and 1950s, as governments began accumulating national income and product accounting data, economists set out to construct quantitative models to describe the dynamics observed in the data. These models estimated the relations between different macroeconomic variables using (mostly linear) time series analysis. Like the simpler theoretical models, these empirical models described relations between aggregate quantities, but many addressed a much finer level of detail (for example, studying the relations between output, employment, investment, and other variables in many different industries). Thus, these models grew to include hundreds or thousands of equations describing the evolution of hundreds or thousands of prices and quantities over time, making computers essential for their solution. While the choice of which variables to include in each equation was partly guided by economic theory (for example, including past income as a determinant of consumption, as suggested by the theory of adaptive expectations), variable inclusion was mostly determined on purely empirical grounds.
Dutch economist Jan Tinbergen developed the first comprehensive national model, which he built for the Netherlands in 1936. He later applied the same modeling structure to the economies of the United States and the United Kingdom. The first global macroeconomic model, Wharton Econometric Forecasting Associates' LINK project, was initiated by Lawrence Klein. The model was cited in 1980 when Klein, like Tinbergen before him, won the Nobel Prize. Large-scale empirical models of this type, including the Wharton model, are still in use today, especially for forecasting purposes.
==== The Lucas critique of empirical forecasting models ====
Econometric studies in the first part of the 20th century showed a negative correlation between inflation and unemployment called the Phillips curve. Empirical macroeconomic forecasting models, being based on roughly the same data, had similar implications: they suggested that unemployment could be permanently lowered by permanently increasing inflation. However, in 1968, Milton Friedman and Edmund Phelps argued that this apparent tradeoff was illusory. They claimed that the historical relation between inflation and unemployment was due to the fact that past inflationary episodes had been largely unexpected. They argued that if monetary authorities permanently raised the inflation rate, workers and firms would eventually come to understand this, at which point the economy would return to its previous, higher level of unemployment, but now with higher inflation too. The stagflation of the 1970s appeared to bear out their prediction.
In 1976, Robert Lucas Jr., published an influential paper arguing that the failure of the Phillips curve in the 1970s was just one example of a general problem with empirical forecasting models. He pointed out that such models are derived from observed relationships between various macroeconomic quantities over time, and that these relations differ depending on what macroeconomic policy regime is in place. In the context of the Phillips curve, this means that the relation between inflation and unemployment observed in an economy where inflation has usually been low in the past would differ from the relation observed in an economy where inflation has been high. Furthermore, this means one cannot predict the effects of a new policy regime using an empirical forecasting model based on data from previous periods when that policy regime was not in place. Lucas argued that economists would remain unable to predict the effects of new policies unless they built models based on economic fundamentals (like preferences, technology, and budget constraints) that should be unaffected by policy changes.
=== Dynamic stochastic general equilibrium models ===
Partly as a response to the Lucas critique, economists of the 1980s and 1990s began to construct microfounded macroeconomic models based on rational choice, which have come to be called dynamic stochastic general equilibrium (DSGE) models. These models begin by specifying the set of agents active in the economy, such as households, firms, and governments in one or more countries, as well as the preferences, technology, and budget constraint of each one. Each agent is assumed to make an optimal choice, taking into account prices and the strategies of other agents, both in the current period and in the future. Summing up the decisions of the different types of agents, it is possible to find the prices that equate supply with demand in every market. Thus these models embody a type of equilibrium self-consistency: agents choose optimally given the prices, while prices must be consistent with agents’ supplies and demands.
DSGE models often assume that all agents of a given type are identical (i.e. there is a ‘representative household’ and a ‘representative firm’) and can perform perfect calculations that forecast the future correctly on average (which is called rational expectations). However, these are only simplifying assumptions, and are not essential for the DSGE methodology; many DSGE studies aim for greater realism by considering heterogeneous agents or various types of adaptive expectations. Compared with empirical forecasting models, DSGE models typically have fewer variables and equations, mainly because DSGE models are harder to solve, even with the help of computers. Simple theoretical DSGE models, involving only a few variables, have been used to analyze the forces that drive business cycles; this empirical work has given rise to two main competing frameworks called the real business cycle model and the New Keynesian DSGE model. More elaborate DSGE models are used to predict the effects of changes in economic policy and evaluate their impact on social welfare. However, economic forecasting is still largely based on more traditional empirical models, which are still widely believed to achieve greater accuracy in predicting the impact of economic disturbances over time.
==== DSGE versus CGE models ====
A methodology that pre-dates DSGE modeling is computable general equilibrium (CGE) modeling. Like DSGE models, CGE models are often microfounded on assumptions about preferences, technology, and budget constraints. However, CGE models focus mostly on long-run relationships, making them most suited to studying the long-run impact of permanent policies like the tax system or the openness of the economy to international trade. DSGE models instead emphasize the dynamics of the economy over time (often at a quarterly frequency), making them suited for studying business cycles and the cyclical effects of monetary and fiscal policy.
=== Agent-based computational macroeconomic models ===
Another modeling methodology is Agent-based computational economics (ACE), which is a variety of Agent-based modeling. Like the DSGE methodology, ACE seeks to break down aggregate macroeconomic relationships into microeconomic decisions of individual agents. ACE models also begin by defining the set of agents that make up the economy, and specify the types of interactions individual agents can have with each other or with the market as a whole. Instead of defining the preferences of those agents, ACE models often jump directly to specifying their strategies. Or sometimes, preferences are specified, together with an initial strategy and a learning rule whereby the strategy is adjusted according to its past success. Given these strategies, the interaction of large numbers of individual agents (who may be very heterogeneous) can be simulated on a computer, and then the aggregate, macroeconomic relationships that arise from those individual actions can be studied.
==== Strengths and weaknesses of DSGE and ACE models ====
DSGE and ACE models have different advantages and disadvantages due to their different underlying structures. DSGE models may exaggerate individual rationality and foresight, and understate the importance of heterogeneity, since the rational expectations, representative agent case remains the simplest and thus the most common type of DSGE model to solve. Also, unlike ACE models, it may be difficult to study local interactions between individual agents in DSGE models, which instead focus mostly on the way agents interact through aggregate prices. On the other hand, ACE models may exaggerate errors in individual decision-making, since the strategies assumed in ACE models may be very far from optimal choices unless the modeler is very careful. A related issue is that ACE models which start from strategies instead of preferences may remain vulnerable to the Lucas critique: a changed policy regime should generally give rise to changed strategies.
== See also ==
== References ==
== External links ==
Macroeconomic Modeling: The Cowles Commission Approach by Ray Fair
FAIRMODEL - US models to download
JAMEL - An on-line, interactive agent-based macroeconomic model | Wikipedia/Macroeconomic_model |
The IS–LM model, or Hicks–Hansen model, is a two-dimensional macroeconomic model which is used as a pedagogical tool in macroeconomic teaching. The IS–LM model shows the relationship between interest rates and output in the short run in a closed economy. The intersection of the "investment–saving" (IS) and "liquidity preference–money supply" (LM) curves illustrates a "general equilibrium" where supposed simultaneous equilibria occur in both the goods and the money markets. The IS–LM model shows the importance of various demand shocks (including the effects of monetary policy and fiscal policy) on output and consequently offers an explanation of changes in national income in the short run when prices are fixed or sticky. Hence, the model can be used as a tool to suggest potential levels for appropriate stabilisation policies. It is also used as a building block for the demand side of the economy in more comprehensive models like the AD–AS model.
The model was developed by John Hicks in 1937 and was later extended by Alvin Hansen as a mathematical representation of Keynesian macroeconomic theory. Between the 1940s and mid-1970s, it was the leading framework of macroeconomic analysis. Today, it is generally accepted as being imperfect and is largely absent from teaching at advanced economic levels and from macroeconomic research, but it is still an important pedagogical introductory tool in most undergraduate macroeconomics textbooks.
As monetary policy since the 1980s and 1990s generally does not try to target money supply as assumed in the original IS–LM model, but instead targets interest rate levels directly, some modern versions of the model have changed the interpretation (and in some cases even the name) of the LM curve, presenting it instead simply as a horizontal line showing the central bank's choice of interest rate. This allows for a simpler dynamic adjustment and supposedly reflects the behaviour of actual contemporary central banks more closely.
== History ==
The IS–LM model was introduced at a conference of the Econometric Society held in Oxford during September 1936. Roy Harrod, John R. Hicks, and James Meade all presented papers describing mathematical models attempting to summarize John Maynard Keynes' General Theory of Employment, Interest, and Money. Hicks, who had seen a draft of Harrod's paper, invented the IS–LM model (originally using the abbreviation "LL", not "LM"). He later presented it in "Mr. Keynes and the Classics: A Suggested Interpretation". Hicks and Alvin Hansen developed the model further in the 1930s and early 1940s, Hansen extending the earlier contribution.
The model became a central tool of macroeconomic teaching for many decades. Between the 1940s and mid-1970s, it was the leading framework of macroeconomic analysis. It was particularly suited to illustrate the debate of the 1960s and 1970s between Keynesians and monetarists as to whether fiscal or monetary policy was most effective to stabilize the economy. Later, this issue faded from focus and came to play only a modest role in discussions of short-run fluctuations.
The IS-LM model assumes a fixed price level and consequently cannot in itself be used to analyze inflation. This was of little importance in the 1950s and early 1960s when inflation was not an important issue, but became problematic with the rising inflation levels in the late 1960s and 1970s, which led to extensions of the model to also incorporate aggregate supply in some form, e.g. in the form of the AD–AS model, which can be regarded as an IS-LM model with an added supply side explaining rises in the price level.
One of the basic assumptions of the IS-LM model is that the central bank targets the money supply. However, a fundamental rethinking in central bank policy took place from the early 1990s when central banks generally changed strategies towards targeting inflation rather than money growth and using an interest rate rule to achieve their goal.: 507 As central banks started paying little attention to the money supply when deciding on their policy, this model feature became increasingly unrealistic and sometimes confusing to students. David Romer in 2000 suggested replacing the traditional IS-LM framework with an IS-MP model, replacing the positively sloped LM curve with a horizontal MP curve (where MP stands for "monetary policy"). He advocated that it had several advantages compared to the traditional IS-LM model. John B. Taylor independently made a similar recommendation in the same year. After 2000, this has led to various modifications to the model in many textbooks, replacing the traditional LM curve and story of the central bank influencing the interest rate level indirectly via controlling the supply of money in the money market to a more realistic one of the central bank determining the policy interest rate as an exogenous variable directly.: 113
Today, the IS-LM model is largely absent from macroeconomic research, but it is still a backbone conceptual introductory tool in many macroeconomics textbooks.
== Formation ==
The point where the IS and LM schedules intersect represents a short-run equilibrium in the real and monetary sectors (though not necessarily in other sectors, such as labor markets): both the product market and the money market are in equilibrium. This equilibrium yields a unique combination of the interest rate and real GDP.
=== IS (investment–saving) curve ===
The IS curve shows the causation from interest rates to planned investment to national income and output.
For the investment–saving curve, the independent variable is the interest rate and the dependent variable is the level of income. The IS curve is drawn as downward-sloping with the interest rate r on the vertical axis and GDP (gross domestic product: Y) on the horizontal axis. The IS curve represents the locus where total spending (consumer spending + planned private investment + government purchases + net exports) equals total output (real income, Y, or GDP).
The IS curve also represents the equilibria where total private investment equals total saving, with saving equal to consumer saving plus government saving (the budget surplus) plus foreign saving (the trade surplus). The level of real GDP (Y) is determined along this line for each interest rate. Every level of the real interest rate will generate a certain level of investment and spending: lower interest rates encourage higher investment and more spending. The multiplier effect of an increase in fixed investment resulting from a lower interest rate raises real GDP. This explains the downward slope of the IS curve. In summary, the IS curve shows the causation from interest rates to planned fixed investment to rising national income and output.
The IS curve is defined by the equation
Y
=
C
(
Y
−
T
(
Y
)
)
+
I
(
r
)
+
G
+
N
X
(
Y
)
,
{\displaystyle Y=C\left({Y}-{T(Y)}\right)+I\left({r}\right)+G+NX(Y),}
where Y represents income, C(Y - T(Y)) represents consumer spending increasing as a function of disposable income (income, Y, minus taxes, T(Y), which themselves depend positively on income), I(r) represents business investment decreasing as a function of the real interest rate, G represents government spending, and NX(Y) represents net exports (exports minus imports) decreasing as a function of income (decreasing because imports are an increasing function of income).
=== LM (liquidity-money) curve ===
The LM curve shows the combinations of interest rates and levels of real income for which the money market is in equilibrium. It shows where money demand equals money supply. For the LM curve, the independent variable is income and the dependent variable is the interest rate.
In the money market equilibrium diagram, the liquidity preference function is the willingness to hold cash. The liquidity preference function is downward sloping (i.e. the willingness to hold cash increases as the interest rate decreases). Two basic elements determine the quantity of cash balances demanded:
Transactions demand for money: this includes both (a) the willingness to hold cash for everyday transactions and (b) a precautionary measure (money demand in case of emergencies). Transactions demand is positively related to real GDP. As GDP is considered exogenous to the liquidity preference function, changes in GDP shift the curve.
Speculative demand for money: this is the willingness to hold cash instead of securities as an asset for investment purposes. Speculative demand is inversely related to the interest rate. As the interest rate rises, the opportunity cost of holding money rather than investing in securities increases. So, as interest rates rise, speculative demand for money falls.
Money supply is determined by central bank decisions and willingness of commercial banks to loan money. Money supply in effect is perfectly inelastic with respect to nominal interest rates. Thus the money supply function is represented as a vertical line – money supply is a constant, independent of the interest rate, GDP, and other factors. Mathematically, the LM curve is defined by the equation
M/P = L(i, Y)
where the supply of money is represented as the real amount M/P (as opposed to the nominal amount M), with P representing the price level, and L being the real demand for money, which is some function of the interest rate and the level of real income.
An increase in GDP shifts the liquidity preference function rightward and hence increases the interest rate. Thus the LM function is positively sloped.
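For a concrete illustration, the minimal sketch below assumes simple linear behavioural equations (a closed economy with no net exports) and illustrative parameter values that are not part of the article, and solves the IS and LM relations simultaneously for output and the interest rate.

```python
# Sketch: equilibrium of a linear IS-LM system. Behavioural equations and parameter values
# are illustrative assumptions (closed economy, no net exports):
#   consumption  C = c0 + c1*(Y - tax*Y)
#   investment   I = i0 - i1*r
#   money demand L = k*Y - h*r, set equal to the real money supply M/P
import numpy as np

c0, c1, tax = 200.0, 0.6, 0.25
i0, i1, G = 300.0, 40.0, 350.0
M, P, k, h = 500.0, 1.0, 0.5, 100.0

# IS:  (1 - c1*(1 - tax))*Y + i1*r = c0 + i0 + G
# LM:  k*Y - h*r = M/P
A = np.array([[1 - c1 * (1 - tax), i1],
              [k, -h]])
b = np.array([c0 + i0 + G, M / P])
Y, r = np.linalg.solve(A, b)
print(f"equilibrium output Y = {Y:.0f}, interest rate r = {r:.1f}")   # Y = 1400, r = 2.0 here
```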
== Shifts ==
One hypothesis is that a government's deficit spending ("fiscal policy") has an effect similar to that of a lower saving rate or increased private fixed investment, increasing the amount of demand for goods at each individual interest rate. An increased deficit by the national government shifts the IS curve to the right. This raises the equilibrium interest rate (from i1 to i2) and national income (from Y1 to Y2), as shown in the graph above. The equilibrium level of national income in the IS–LM diagram is referred to as aggregate demand.
Keynesians argue spending may actually "crowd in" (encourage) private fixed investment via the accelerator effect, which helps long-term growth. Further, if government deficits are spent on productive public investment (e.g., infrastructure or public health) that spending directly and eventually raises potential output, although not necessarily more (or less) than the lost private investment might have. The extent of any crowding out depends on the shape of the LM curve. A shift in the IS curve along a relatively flat LM curve can increase output substantially with little change in the interest rate. On the other hand, a rightward shift in the IS curve along a vertical LM curve will lead to higher interest rates, but no change in output (this case represents the "Treasury view").
Rightward shifts of the IS curve also result from exogenous increases in investment spending (i.e., for reasons other than interest rates or income), in consumer spending, and in export spending by people outside the economy being modelled, as well as by exogenous decreases in spending on imports. Thus these too raise both equilibrium income and the equilibrium interest rate. Of course, changes in these variables in the opposite direction shift the IS curve in the opposite direction.
The IS–LM model also allows for the role of monetary policy. If the money supply is increased, that shifts the LM curve downward or to the right, lowering interest rates and raising equilibrium national income. Further, exogenous decreases in liquidity preference, perhaps due to improved transactions technologies, lead to downward shifts of the LM curve and thus increases in income and decreases in interest rates. Changes in these variables in the opposite direction shift the LM curve in the opposite direction.
== IS–LM model with interest targeting central bank ==
The fact that contemporary central banks normally do not target the money supply, as assumed by the original IS–LM model, but instead conduct their monetary policy by steering the interest rate directly, has led to increasing criticism of the traditional IS–LM setup since 2000 for being outdated and confusing to students. In some textbooks, the traditional LM curve derived from an explicit money market equilibrium story consequently has been replaced by an LM curve simply showing the interest rate level determined by the central bank. Notably this is the case in Olivier Blanchard's widely-used intermediate-level textbook "Macroeconomics" since its 7th edition in 2017.
In this case, the LM curve becomes horizontal at the interest rate level chosen by the central bank, allowing a simpler kind of dynamics. Also, the interest rate level measured along the vertical axis may be interpreted as either the nominal or the real interest rate, in the latter case allowing inflation to enter the IS–LM model in a simple way. The output level is still determined by the intersection of the IS and LM curves. The LM curve may shift because of a change in monetary policy or possibly a change in inflation expectations, whereas the IS curve as in the traditional model may shift either because of a change in fiscal policy affecting government consumption or taxation, or because of shocks affecting private consumption or investment (or, in the open-economy version, net exports). Additionally, the model distinguishes between the policy interest rate determined by the central bank and the market interest rate which is decisive for firms' investment decisions, and which is equal to the policy interest rate plus a premium which may be interpreted as a risk premium or a measure of the market power or other factors influencing the business strategies of commercial banks. This premium allows for shocks in the financial sector being transmitted to the goods market and consequently affecting aggregate demand.: 195–201
Similar models, though called slightly different names, appear in the textbooks by Charles Jones and by Wendy Carlin and David Soskice and the CORE Econ project. In parallel, texts by Akira Weerapana and Stephen Williamson have outlined approaches in which the LM curve is replaced with a real interest rate rule.
== Incorporation into larger models ==
By itself, the traditional IS–LM model is used to study the short run when prices are fixed or sticky, and no inflation is taken into consideration. In addition, the model is often used as a sub-model of larger models which allow for a flexible price level. The addition of a supply relation enables the model to be used for both short- and medium-run analysis of the economy, or to use a different terminology: classical and Keynesian analysis.
A main example of this is the Aggregate Demand-Aggregate Supply model – the AD–AS model. In the aggregate demand-aggregate supply model, each point on the aggregate demand curve is an outcome of the IS–LM model for aggregate demand Y based on a particular price level. Starting from one point on the aggregate demand curve, at a particular price level and a quantity of aggregate demand implied by the IS–LM model for that price level, if one considers a higher potential price level, in the IS–LM model the real money supply M/P will be lower and hence the LM curve will be shifted higher, leading to lower aggregate demand as measured by the horizontal location of the IS–LM intersection; hence at the higher price level the level of aggregate demand is lower, so the aggregate demand curve is negatively sloped.: 315–317
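As a worked illustration (using, for concreteness, linear IS and LM relations of the same hypothetical form as in the sketch above, rather than anything specified in the text), write the IS relation as (1 − c1)·Y + b·i = A, with A denoting autonomous spending, and the LM relation as k·Y − h·i = M/P. Eliminating the interest rate i gives

Y = [h·A + b·(M/P)] / [h·(1 − c1) + b·k].

For a fixed nominal money supply M, a higher price level P lowers real balances M/P and therefore lowers the equilibrium Y, which is exactly why the aggregate demand curve traced out from the IS–LM model slopes downward.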
In the 2018 textbook "Macroeconomics" by Daron Acemoglu, David Laibson and John A. List, the corresponding model combining a traditional IS-LM setup with a relation for a changing price level is named an IS-LM-FE model (FE standing for "full equilibrium").
=== AD-AS-like models with inflation instead of price levels ===
In many modern textbooks, the traditional AD–AS diagram is replaced by a variation in which the variables are not output and the price level, but instead output and inflation (i.e., the change in the price level). In this case, the relation corresponding to the AS curve is normally derived from a Phillips curve relationship between inflation and the unemployment gap. As policymakers and economists are generally concerned about inflation levels and not actual price levels, this formulation is considered more appropriate. This variation is often referred to as a dynamic AD–AS model, but may also have other names. Olivier Blanchard in his textbook uses the term IS–LM–PC model (PC standing for Phillips curve). Others, among them Carlin and Soskice, refer to it as the "three-equation New Keynesian model", the three equations being an IS relation, often augmented with a term that allows for expectations influencing demand, a monetary policy (interest) rule and a short-run Phillips curve.
== Variations ==
=== IS-LM-NAC model ===
In 2016, Roger Farmer and Konstantin Platonov presented a so-called IS-LM-NAC model (NAC standing for "no arbitrage condition", in casu between physical capital and financial assets), in which the long-run effect of monetary policy depends on the way in which people form beliefs. The model was an attempt to integrate the phenomenon of secular stagnation in the IS-LM model. Whereas in the IS-LM model, high unemployment would be a temporary phenomenon caused by sticky wages and prices, in the IS-LM-NAC model high unemployment may be a permanent situation caused by pessimistic beliefs - a particular instance of what Keynes called animal spirits. The model was part of a broader research agenda studying how beliefs may independently influence macroeconomic outcomes.
== See also ==
== References ==
== Further reading ==
Barro, Robert J. (1984). "The Keynesian Theory of Business Fluctuations". Macroeconomics. New York: John Wiley. pp. 487–513. ISBN 978-0-471-87407-2.
Blanchard, Olivier (2021). "Goods and Financial Markets: The IS-LM Model". Macroeconomics (Eighth, global ed.). Harlow, England: Pearson. pp. 107–126. ISBN 978-0-134-89789-9.
Hicks, J. R. (1937). "Mr. Keynes and the 'Classics': A Suggested Interpretation". Econometrica. 5 (2): 147–159. doi:10.2307/1907242. JSTOR 1907242.
Krugman, Paul (2011-10-09). "IS-LMentary". The New York Times. Retrieved 2020-10-01.
Leijonhufvud, Axel (1983). "What is Wrong with IS/LM?". In Fitoussi, Jean-Paul (ed.). Modern Macroeconomic Theory. Oxford: Blackwell. pp. 49–90. ISBN 978-0-631-13158-8.
Mankiw, Nicholas Gregory (2022). "Aggregate Demand I+II". Macroeconomics (Eleventh, international ed.). New York, NY: Worth Publishers, Macmillan Learning. pp. 283–334. ISBN 978-1-319-26390-4.
Romer, David (2000). "Keynesian Macroeconomics without the LM Curve". Journal of Economic Perspectives. 14 (2): 149–170. doi:10.1257/jep.14.2.149. ISSN 0895-3309.
Smith, Warren L. (1956). "A Graphical Exposition of the Complete Keynesian System". Southern Economic Journal. 23 (2): 115–125. doi:10.2307/1053551. JSTOR 1053551.
Vroey, Michel de; Hoover, Kevin D., eds. (2004). The IS-LM model: Its Rise, Fall, and Strange Persistence. Durham: Duke University Press. ISBN 978-0-8223-6631-7.
Young, Warren; Zilberfarb, Ben-Zion, eds. (2000). IS-LM and Modern Macroeconomics. Recent Economic Thought. Vol. 73. Springer Science & Business Media. doi:10.1007/978-94-010-0644-6. ISBN 978-0-7923-7966-9.
== External links ==
Krugman, Paul. There's something about macro – An explanation of the model and its role in understanding macroeconomics.
Krugman, Paul. IS-LMentary – A basic explanation of the model and its uses.
Wiens, Elmer G. IS–LM model – An online, interactive IS–LM model of the Canadian economy. | Wikipedia/IS–LM_model |
A macroeconomic model is an analytical tool designed to describe the operation of the economy of a country or a region. These models are usually designed to examine the comparative statics and dynamics of aggregate quantities such as the total amount of goods and services produced, total income earned, the level of employment of productive resources, and the level of prices.
Macroeconomic models may be logical, mathematical, and/or computational; the different types of macroeconomic models serve different purposes and have different advantages and disadvantages. Macroeconomic models may be used to clarify and illustrate basic theoretical principles; they may be used to test, compare, and quantify different macroeconomic theories; they may be used to produce "what if" scenarios (usually to predict the effects of changes in monetary, fiscal, or other macroeconomic policies); and they may be used to generate economic forecasts. Thus, macroeconomic models are widely used in academia in teaching and research, and are also widely used by international organizations, national governments and larger corporations, as well as by economic consultants and think tanks.
== Types ==
=== Simple theoretical models ===
Simple textbook descriptions of the macroeconomy involving a small number of equations or diagrams are often called ‘models’. Examples include the IS-LM model and Mundell–Fleming model of Keynesian macroeconomics, and the Solow model of neoclassical growth theory. These models share several features. They are based on a few equations involving a few variables, which can often be explained with simple diagrams. Many of these models are static, but some are dynamic, describing the economy over many time periods. The variables that appear in these models often represent macroeconomic aggregates (such as GDP or total employment) rather than individual choice variables, and while the equations relating these variables are intended to describe economic decisions, they are not usually derived directly by aggregating models of individual choices. They are simple enough to be used as illustrations of theoretical points in introductory explanations of macroeconomic ideas; but therefore quantitative application to forecasting, testing, or policy evaluation is usually impossible without substantially augmenting the structure of the model.
=== Empirical forecasting models ===
In the 1940s and 1950s, as governments began accumulating national income and product accounting data, economists set out to construct quantitative models to describe the dynamics observed in the data. These models estimated the relations between different macroeconomic variables using (mostly linear) time series analysis. Like the simpler theoretical models, these empirical models described relations between aggregate quantities, but many addressed a much finer level of detail (for example, studying the relations between output, employment, investment, and other variables in many different industries). Thus, these models grew to include hundreds or thousands of equations describing the evolution of hundreds or thousands of prices and quantities over time, making computers essential for their solution. While the choice of which variables to include in each equation was partly guided by economic theory (for example, including past income as a determinant of consumption, as suggested by the theory of adaptive expectations), variable inclusion was mostly determined on purely empirical grounds.
Dutch economist Jan Tinbergen developed the first comprehensive national model, which he built for the Netherlands in 1936. He later applied the same modeling structure to the economies of the United States and the United Kingdom. The first global macroeconomic model, Wharton Econometric Forecasting Associates' LINK project, was initiated by Lawrence Klein. The model was cited in 1980 when Klein, like Tinbergen before him, won the Nobel Prize. Large-scale empirical models of this type, including the Wharton model, are still in use today, especially for forecasting purposes.
==== The Lucas critique of empirical forecasting models ====
Econometric studies in the first part of the 20th century showed a negative correlation between inflation and unemployment called the Phillips curve. Empirical macroeconomic forecasting models, being based on roughly the same data, had similar implications: they suggested that unemployment could be permanently lowered by permanently increasing inflation. However, in 1968, Milton Friedman and Edmund Phelps argued that this apparent tradeoff was illusory. They claimed that the historical relation between inflation and unemployment was due to the fact that past inflationary episodes had been largely unexpected. They argued that if monetary authorities permanently raised the inflation rate, workers and firms would eventually come to understand this, at which point the economy would return to its previous, higher level of unemployment, but now with higher inflation too. The stagflation of the 1970s appeared to bear out their prediction.
In 1976, Robert Lucas Jr., published an influential paper arguing that the failure of the Phillips curve in the 1970s was just one example of a general problem with empirical forecasting models. He pointed out that such models are derived from observed relationships between various macroeconomic quantities over time, and that these relations differ depending on what macroeconomic policy regime is in place. In the context of the Phillips curve, this means that the relation between inflation and unemployment observed in an economy where inflation has usually been low in the past would differ from the relation observed in an economy where inflation has been high. Furthermore, this means one cannot predict the effects of a new policy regime using an empirical forecasting model based on data from previous periods when that policy regime was not in place. Lucas argued that economists would remain unable to predict the effects of new policies unless they built models based on economic fundamentals (like preferences, technology, and budget constraints) that should be unaffected by policy changes.
=== Dynamic stochastic general equilibrium models ===
Partly as a response to the Lucas critique, economists of the 1980s and 1990s began to construct microfounded macroeconomic models based on rational choice, which have come to be called dynamic stochastic general equilibrium (DSGE) models. These models begin by specifying the set of agents active in the economy, such as households, firms, and governments in one or more countries, as well as the preferences, technology, and budget constraint of each one. Each agent is assumed to make an optimal choice, taking into account prices and the strategies of other agents, both in the current period and in the future. Summing up the decisions of the different types of agents, it is possible to find the prices that equate supply with demand in every market. Thus these models embody a type of equilibrium self-consistency: agents choose optimally given the prices, while prices must be consistent with agents’ supplies and demands.
DSGE models often assume that all agents of a given type are identical (i.e. there is a ‘representative household’ and a ‘representative firm’) and can perform perfect calculations that forecast the future correctly on average (which is called rational expectations). However, these are only simplifying assumptions, and are not essential for the DSGE methodology; many DSGE studies aim for greater realism by considering heterogeneous agents or various types of adaptive expectations. Compared with empirical forecasting models, DSGE models typically have fewer variables and equations, mainly because DSGE models are harder to solve, even with the help of computers. Simple theoretical DSGE models, involving only a few variables, have been used to analyze the forces that drive business cycles; this empirical work has given rise to two main competing frameworks called the real business cycle model and the New Keynesian DSGE model. More elaborate DSGE models are used to predict the effects of changes in economic policy and evaluate their impact on social welfare. However, economic forecasting is still largely based on more traditional empirical models, which are still widely believed to achieve greater accuracy in predicting the impact of economic disturbances over time.
==== DSGE versus CGE models ====
A methodology that pre-dates DSGE modeling is computable general equilibrium (CGE) modeling. Like DSGE models, CGE models are often microfounded on assumptions about preferences, technology, and budget constraints. However, CGE models focus mostly on long-run relationships, making them most suited to studying the long-run impact of permanent policies like the tax system or the openness of the economy to international trade. DSGE models instead emphasize the dynamics of the economy over time (often at a quarterly frequency), making them suited for studying business cycles and the cyclical effects of monetary and fiscal policy.
=== Agent-based computational macroeconomic models ===
Another modeling methodology is Agent-based computational economics (ACE), which is a variety of Agent-based modeling. Like the DSGE methodology, ACE seeks to break down aggregate macroeconomic relationships into microeconomic decisions of individual agents. ACE models also begin by defining the set of agents that make up the economy, and specify the types of interactions individual agents can have with each other or with the market as a whole. Instead of defining the preferences of those agents, ACE models often jump directly to specifying their strategies. Or sometimes, preferences are specified, together with an initial strategy and a learning rule whereby the strategy is adjusted according to its past success. Given these strategies, the interaction of large numbers of individual agents (who may be very heterogeneous) can be simulated on a computer, and then the aggregate, macroeconomic relationships that arise from those individual actions can be studied.
==== Strengths and weaknesses of DSGE and ACE models ====
DSGE and ACE models have different advantages and disadvantages due to their different underlying structures. DSGE models may exaggerate individual rationality and foresight, and understate the importance of heterogeneity, since the rational expectations, representative agent case remains the simplest and thus the most common type of DSGE model to solve. Also, unlike ACE models, it may be difficult to study local interactions between individual agents in DSGE models, which instead focus mostly on the way agents interact through aggregate prices. On the other hand, ACE models may exaggerate errors in individual decision-making, since the strategies assumed in ACE models may be very far from optimal choices unless the modeler is very careful. A related issue is that ACE models which start from strategies instead of preferences may remain vulnerable to the Lucas critique: a changed policy regime should generally give rise to changed strategies.
== See also ==
== References ==
== External links ==
Macroeconomic Modeling: The Cowles Commission Approach by Ray Fair
FAIRMODEL - US models to download
JAMEL - An on-line, interactive agent-based macroeconomic model | Wikipedia/Macroeconomic_models |
In economics and econometrics, the Cobb–Douglas production function is a particular functional form of the production function, widely used to represent the technological relationship between the amounts of two or more inputs (particularly physical capital and labor) and the amount of output that can be produced by those inputs. The Cobb–Douglas form was developed and tested against statistical evidence by Charles Cobb and Paul Douglas between 1927 and 1947; according to Douglas, the functional form itself was developed earlier by Philip Wicksteed.
== Formulation ==
In its most standard form for production of a single good with two factors, the function is given by:
Y(L, K) = A L^β K^α
where:
Y = total production (the real value of all goods produced in a year)
L = labour input (person-hours worked in a year)
K = capital input (a measure of all machinery, equipment, and buildings; the value of capital input divided by the price of capital)
A = total factor productivity
α and β (with 0 < α < 1 and 0 < β < 1) are the output elasticities of capital and labor, respectively. These values are constants determined by available technology.
Capital and labour are the two "factors of production" of the Cobb–Douglas production function.
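The sketch below (an added illustration, not part of the article) evaluates the function for hypothetical parameter values and checks the output-elasticity interpretation of α numerically; every number in it is an assumption chosen only for illustration.

# Hypothetical Cobb-Douglas parameters (illustrative only)
A, alpha, beta = 1.2, 0.3, 0.7
L, K = 100.0, 50.0

def output(L, K):
    # Y = A * L^beta * K^alpha
    return A * (L ** beta) * (K ** alpha)

Y = output(L, K)
Y_more_capital = output(L, 1.01 * K)        # raise the capital input by 1%
print(Y)
print((Y_more_capital / Y - 1.0) * 100.0)   # roughly 0.3%, i.e. approximately alpha percent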
== History ==
Paul Douglas explained that his first formulation of the Cobb–Douglas production function was developed in 1927; when seeking a functional form to relate estimates he had calculated for workers and capital, he spoke with mathematician and colleague Charles Cobb, who suggested a function of the form Y = AL^β K^(1−β), previously used by Knut Wicksell, Philip Wicksteed, and Léon Walras, although Douglas only acknowledges Wicksteed and Walras for their contributions. Not long after Knut Wicksell's death in 1926, Paul Douglas and Charles Cobb implemented the Cobb–Douglas function in their work covering the subject matter of producer theory for the first time. Estimating this using least squares, he obtained a result for the exponent of labour of 0.75—which was subsequently confirmed by the National Bureau of Economic Research to be 0.741. Later work in the 1940s prompted them to allow for the exponents on K and L to vary, resulting in estimates that subsequently proved to be very close to improved measures of productivity developed at that time.
A major criticism at the time was that estimates of the production function, although seemingly accurate, were based on such sparse data that it was hard to give them much credibility. Douglas remarked "I must admit I was discouraged by this criticism and thought of giving up the effort, but there was something which told me I should hold on." The breakthrough came in using US census data, which was cross-sectional and provided a large number of observations. Douglas presented the results of these findings, along with those for other countries, at his 1947 address as president of the American Economic Association. Shortly afterwards, Douglas went into politics and was stricken by ill health—resulting in little further development on his side. However, two decades later, his production function was widely used, being adopted by economists such as Paul Samuelson and Robert Solow. The Cobb–Douglas production function is especially notable for being the first time an aggregate or economy-wide production function had been developed, estimated, and then presented to the profession for analysis; it marked a landmark change in how economists approached macroeconomics from a microeconomics perspective.
== Positivity of marginal products ==
The marginal product of a factor of production is the change in output when that factor of production changes, holding constant all the other factors of production as well as the total factor productivity.
The marginal product of capital, MPK, corresponds to the first derivative of the production function with respect to capital:
MPK = ∂Y/∂K = α A L^β K^(α−1) = α (A L^β K^α)/K = α Y/K
Because α > 0 (and Y > 0, K > 0 as well), we find that the marginal product of capital is always positive; that is, increasing capital leads to an increase in output.
We also find that increasing the total factor productivity A increases the marginal product of capital.
An analogous reasoning holds for labor.
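A quick numerical check (an illustration added here, with arbitrary assumed parameter values) compares a finite-difference estimate of the marginal product of capital with the closed form αY/K derived above.

# Hypothetical parameters (illustrative only)
A, alpha, beta = 1.0, 0.3, 0.7
L, K = 80.0, 40.0

def output(L, K):
    return A * (L ** beta) * (K ** alpha)

Y = output(L, K)
h = 1e-6                                    # small step for a finite difference
mpk_numeric = (output(L, K + h) - Y) / h    # approximate dY/dK
mpk_formula = alpha * Y / K                 # closed-form MPK = alpha * Y / K
print(mpk_numeric, mpk_formula)             # the two values agree closely and are positive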
== Law of diminishing returns ==
Taking the derivative of the marginal product of capital with respect to capital (i.e., taking the second derivative of the production function with respect to capital), we have:
∂MPK/∂K = ∂²Y/∂K² = ∂/∂K (A L^β α K^(α−1)) = A L^β α (α−1) K^(α−2) = α(α−1) A L^β K^α / K² = α(α−1) Y/K²
Because α < 1, then α − 1 < 0 and so ∂MPK/∂K < 0.
Thus, this function satisfies the law of "diminishing returns"; that is, the marginal product of capital, while always positive, is declining. As capital increases (holding labor and total factor productivity constant), the output increases but at a diminishing rate.
A similar reasoning holds for labor.
== Cross derivatives ==
We can study what happens to the marginal product of capital when labor increases by taking the partial derivative of the marginal product of capital with respect to labor, that is, the cross-derivative of output with respect to capital and labor:
∂MPK/∂L = ∂²Y/(∂K ∂L) = ∂/∂L (A L^β α K^(α−1)) = A β L^(β−1) α K^(α−1) = A α β L^β K^α / (L K) = α β Y/(L K)
Since ∂MPK/∂L > 0, an increase in labor raises the marginal product of capital.
== Returns to scale ==
Output elasticity measures the responsiveness of output to a change in levels of either labor or capital used in production, ceteris paribus. For example, if α = 0.45, a 1% increase in capital usage would lead to approximately a 0.45% increase in output.
Sometimes the term has a more restricted meaning, requiring that the function display constant returns to scale, meaning that increasing capital K and labor L by a factor k also increases output Y by the same factor, that is,
Y(kL, kK) = k·Y(L, K). This holds if α + β = 1.
If α + β < 1, then returns to scale are decreasing, meaning that an increase of capital K and labor L by a factor k will produce an increase in output Y smaller than a factor k, that is, Y(kL, kK) < k·Y(L, K).
If α + β > 1, then returns to scale are increasing, meaning that an increase in capital K and labor L by a factor k produces an increase in output Y greater than a factor k, that is, Y(kL, kK) > k·Y(L, K).
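The following sketch (an added illustration with assumed exponent values) scales both inputs by a common factor k = 2 and compares the resulting output ratio with k in each of the three cases.

# Compare Y(kL, kK) / Y(L, K) with k for decreasing, constant, and increasing returns.
A, L, K, k = 1.0, 100.0, 50.0, 2.0         # hypothetical inputs and scale factor

for alpha, beta in [(0.3, 0.6), (0.3, 0.7), (0.4, 0.9)]:
    Y = A * (L ** beta) * (K ** alpha)
    Y_scaled = A * ((k * L) ** beta) * ((k * K) ** alpha)
    ratio = Y_scaled / Y                   # equals k ** (alpha + beta)
    print(alpha + beta, ratio)             # ratio is < 2, = 2, > 2 as alpha + beta is < 1, = 1, > 1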
== Remuneration under perfect competition ==
Under perfect competition, each factor of production is remunerated at the value of its marginal product.
Suppose that Y = F(L, K) = L^α K^(1−α), where 0 < α < 1. In this case
MP_L = α L^(α−1) K^(1−α) = α (K/L)^(1−α)
and
MP_K = (1−α) L^α K^(−α) = (1−α) (K/L)^(−α).
Therefore,
Y = L·MP_L + K·MP_K = α L^α K^(1−α) + (1−α) L^α K^(1−α).
Dividing both sides by Y = L^α K^(1−α), we obtain that the remuneration of labor is α of the production and the remuneration of capital is (1−α) of the production.
Let us normalize the price of Y to 1. In a competitive equilibrium the value of the marginal product of a production factor equals its price, so P_Y·MP_L = MP_L = w and similarly MP_K = r, where w is the wage rate and r is the price of capital, the real interest rate (assuming that capital fully depreciates after one period; otherwise, the price of capital is r + δ, where δ is the depreciation rate of capital).
The total production can be written as follows:
Y = L·w + K·r.
That is, the value of production is divided between remuneration for labor and remuneration for capital.
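As an added numerical illustration (with an assumed value of α and arbitrary input levels), the sketch below checks that paying each factor its marginal product exactly exhausts output and gives labor the share α.

# Constant-returns Cobb-Douglas: Y = L^alpha * K^(1 - alpha)  (hypothetical alpha)
alpha = 0.6
L, K = 120.0, 80.0

Y = (L ** alpha) * (K ** (1.0 - alpha))
w = alpha * (K / L) ** (1.0 - alpha)        # MP_L, the wage under perfect competition
r = (1.0 - alpha) * (K / L) ** (-alpha)     # MP_K, the rental rate of capital

print(L * w + K * r, Y)     # factor payments add up to total output
print(L * w / Y)            # labor's share of output equals alpha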
== Generalized form ==
In its generalized form, the Cobb–Douglas function models more than two goods. The Cobb–Douglas function may be written as
f(x) = A ∏_{i=1}^{n} x_i^{λ_i},   x = (x_1, …, x_n).
where
A is an efficiency parameter
n is the total number of input variables (goods)
x1, ..., xn are the (non-negative) quantities of good consumed, produced, etc.
λ_i is an elasticity parameter for good i
== Criticisms ==
The function has been criticised for its lack of foundation. Cobb and Douglas were influenced by statistical evidence that appeared to show that labor and capital shares of total output were constant over time in developed countries; they explained this by fitting a least-squares regression to their production function. It is now widely accepted that labor share is declining in industrialized economies. The production function contains a principal assumption that may not always provide the most accurate representation of a country's productive capabilities and supply-side efficiencies. This assumption is a "constant share of labor in output," which may not be effective when applied to cases of countries whose labor markets are growing at significant rates. Another issue within the fundamental composition of the Cobb–Douglas production function is the presence of simultaneous equation bias. When competition is presumed, the simultaneous equation bias has an impact on all function types involving firm decisions – including the Cobb–Douglas function. In some cases this simultaneous equation bias doesn't appear. However, it is apparent when least squares asymptotic approximations are used.
The Cobb–Douglas production function was not developed on the basis of any knowledge of engineering, technology, or management of the production process. This rationale may be true given the definition of the Capital term. Labor hours and Capital need a better definition. If capital is defined as a building, labor is already included in the development of that building. A building is composed of commodities, labor and risks and general conditions. It was instead developed because it had attractive mathematical characteristics, such as diminishing marginal returns to either factor of production and the property that the optimal expenditure shares on any given input of a firm operating a Cobb–Douglas technology are constant. Initially, there were no utility foundations for it. In the modern era, some economists try to build models up from individual agents acting, rather than imposing a functional form on an entire economy. The Cobb–Douglas production function, if properly defined, can be applied at a microeconomic level, up to a macroeconomic level.
However, many modern authors have developed models which give microeconomically based Cobb–Douglas production functions, including many New Keynesian models. It is nevertheless a mathematical mistake to assume that just because the Cobb–Douglas function applies at the microeconomic level, it also always applies at the macroeconomic level. Similarly, it is not necessarily the case that a macro Cobb–Douglas applies at the disaggregated level. An early microfoundation of the aggregate Cobb–Douglas technology based on linear activities is derived in Houthakker (1955). The Cobb–Douglas production function is inconsistent with modern empirical estimates of the elasticity of substitution between capital and labor, which suggest that capital and labor are gross complements. A 2021 meta-analysis of 3186 estimates concludes that "the weight of evidence accumulated in the empirical literature emphatically rejects the Cobb–Douglas specification."
== Cobb–Douglas utilities ==
The Cobb–Douglas function is often used as a utility function. Utility
ũ is a function of the quantities x_i of the n goods consumed:
ũ(x) = ∏_{i=1}^{n} x_i^{λ_i}
Utility functions represent ordinal preferences and do not have natural units, unlike production functions. As a result, a monotonic transformation of a utility function represents the same preferences. Unlike with a Cobb–Douglas production function, where the sum of the exponents determines the degree of economies of scale, the sum can be normalized to one for a utility function because normalization is a monotonic transformation of the original utility function. Thus, let us define
λ = ∑_{i=1}^{n} λ_i and α_i = λ_i / λ, so that ∑_{i=1}^{n} α_i = 1, and write the utility function as:
u(x) = ∏_{i=1}^{n} x_i^{α_i}
The consumer maximizes utility subject to the budget constraint that the cost of the goods is less than her wealth
w. Letting p_i denote the goods' prices, she solves:
max_{x_i} ∏_{i=1}^{n} x_i^{α_i}   subject to the constraint   ∑_{i=1}^{n} p_i x_i = w
The marginal rate of substitution between any two goods i and j at the optimum equals their price ratio: (α_i x_j)/(α_j x_i) = p_i/p_j, which implies p_j x_j = (α_j/α_i) p_i x_i. Substituting this into the budget constraint, we obtain
p_i x_i + ∑_{j≠i} (α_j/α_i) p_i x_i = w
⇒ p_i x_i (1 + ∑_{j≠i} α_j/α_i) = w  ⇒  p_i x_i (α_i + ∑_{j≠i} α_j)/α_i = w  ⇒  p_i x_i (1/α_i) = w
⇒ x_i* = α_i w / p_i  for all i.
Note that p_i x_i* = α_i w; that is, the consumer spends the fraction α_i of her wealth on good i.
Also note that each good is affected solely by its own price. That is, no two goods are substitutes or complements; their cross-price elasticity equals zero, and the cross demand function of any good is described by a vertical line.
Finally, note that when income increases by some percentage, the demand for the good increases by the same percentage. That is, the elasticity of demand with respect to income equals 1 and therefore the Engel curve is a straight line starting from the origin.
Note that this is the solution for either u(x) or ũ(x), since the same preferences generate the same demand.
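The sketch below (added here as an illustration, with made-up preference weights, prices, and wealth) computes the Marshallian demands x_i* = α_i w / p_i and verifies the constant-expenditure-share property.

# Hypothetical preference weights (summing to 1), prices, and wealth
alphas = [0.2, 0.3, 0.5]
prices = [1.0, 2.0, 4.0]
w = 100.0

demands = [a * w / p for a, p in zip(alphas, prices)]   # x_i* = alpha_i * w / p_i
spending = [p * x for p, x in zip(prices, demands)]

print(demands)                      # [20.0, 15.0, 12.5]
print(sum(spending))                # the whole budget w is spent
print([s / w for s in spending])    # expenditure shares equal the alphas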
The indirect utility function can be calculated by substituting the demands
x_i into the utility function. Define the constant K = ∏_{i=1}^{n} α_i^{α_i} and we get:
v(p, w) = ∏_{i=1}^{n} (w α_i / p_i)^{α_i} = (∏_{i=1}^{n} w^{α_i} · ∏_{i=1}^{n} α_i^{α_i}) / ∏_{i=1}^{n} p_i^{α_i} = K (w / ∏_{i=1}^{n} p_i^{α_i})
which is a special case of the Gorman polar form. The expenditure function is the inverse of the indirect utility function:: 112
e(p, u) = (1/K) (∏_{i=1}^{n} p_i^{α_i}) u
The demands derived above are the Marshallian demand functions associated with the Cobb–Douglas utility function.
== Various representations of the production function ==
The Cobb–Douglas function form can be estimated as a linear relationship using the following expression:
ln(Y) = a_0 + ∑_i a_i ln(I_i)
where
Y = output
I_i = inputs
a_i = model coefficients
The model can also be written as
Y = e^(a_0) (I_1)^(a_1) · (I_2)^(a_2) ⋯
As noted, the common Cobb–Douglas function used in macroeconomic modeling is
Y = K^α L^β
where K is capital and L is labor. When the model exponents sum to one, the production function is first-order homogeneous, which implies constant returns to scale—that is, if all inputs are scaled by a common factor greater than zero, output will be scaled by the same factor.
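To illustrate the log-linear estimation described above, the sketch below (an added example; the data, the "true" coefficient values, and the noise level are all invented) simulates observations from a Cobb–Douglas technology and recovers the coefficients by ordinary least squares.

import numpy as np

rng = np.random.default_rng(0)
n = 500
K = rng.uniform(10.0, 100.0, n)              # simulated capital inputs
L = rng.uniform(10.0, 100.0, n)              # simulated labor inputs
a0, aK, aL = 0.5, 0.3, 0.6                   # assumed "true" coefficients
lnY = a0 + aK * np.log(K) + aL * np.log(L) + rng.normal(0.0, 0.05, n)   # log output with noise

# Regress ln(Y) on a constant, ln(K), and ln(L)
X = np.column_stack([np.ones(n), np.log(K), np.log(L)])
coef, *_ = np.linalg.lstsq(X, lnY, rcond=None)
print(coef)                                   # approximately [0.5, 0.3, 0.6]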
== Relationship to the CES production function ==
The constant elasticity of substitution (CES) production function (in the two-factor case) is
Y = A (α K^γ + (1 − α) L^γ)^(1/γ),
in which the limiting case γ = 0 corresponds to a Cobb–Douglas function,
Y = A K^α L^(1−α),
with constant returns to scale.
To see this, the log of the CES function:
ln(Y) = ln(A) + (1/γ) ln(α K^γ + (1 − α) L^γ)
can be taken to the limit by applying L'Hôpital's rule:
lim_{γ→0} ln(Y) = ln(A) + α ln(K) + (1 − α) ln(L).
Therefore, Y = A K^α L^(1−α).
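A small numerical check (added here, with arbitrary assumed values for A, α, K, and L) shows the CES output converging to the Cobb–Douglas value as γ approaches zero.

# Hypothetical parameter values (illustrative only)
A, alpha = 1.0, 0.4
K, L = 8.0, 27.0

cobb_douglas = A * (K ** alpha) * (L ** (1.0 - alpha))
print(cobb_douglas)

for gamma in [0.5, 0.1, 0.01, 0.001]:
    ces = A * (alpha * K ** gamma + (1.0 - alpha) * L ** gamma) ** (1.0 / gamma)
    print(gamma, ces)        # approaches the Cobb-Douglas value as gamma -> 0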
=== Translog production function ===
The translog production function is an approximation of the CES function by a second-order Taylor polynomial in the variable γ about γ = 0, i.e. the Cobb–Douglas case. The name translog stands for "transcendental logarithmic." It is often used in econometrics because it is linear in the parameters, which means ordinary least squares could be used if inputs could be assumed exogenous.
In the two-factor case above the translog production function is
ln(Y) = ln(A) + α ln(K) + (1 − α) ln(L) + (1/2) γ α (1 − α) [ln(K) − ln(L)]²
      = ln(A) + a_K ln(K) + a_L ln(L) + b_KK ln²(K) + b_LL ln²(L) + b_KL ln(K) ln(L)
where a_K, a_L, b_KK, b_LL, and b_KL are defined appropriately. In the three-factor case, the translog production function is:
ln(Y) = ln(A) + a_L ln(L) + a_K ln(K) + a_M ln(M) + b_LL ln²(L) + b_KK ln²(K) + b_MM ln²(M)
        + b_LK ln(L) ln(K) + b_LM ln(L) ln(M) + b_KM ln(K) ln(M)
      = f(L, K, M).
where
A = total factor productivity,
L = labor,
K = capital,
M = materials and supplies, and
Y = output.
== See also ==
Leontief production function
Production–possibility frontier
Production theory
== References ==
== Further reading ==
Renshaw, Geoff (2005). Maths for Economics. New York: Oxford University Press. pp. 516–526. ISBN 0-19-926746-4.
== External links ==
Anatomy of Cobb–Douglas Type Production Functions in 3D
Analysis of the Cobb–Douglas as a utility function Archived 2014-10-03 at the Wayback Machine
Closed Form Solution for a firm with an N-input production function | Wikipedia/Cobb–Douglas_production_function |
The Lucas islands model is an economic model of the link between money supply and price and output changes in a simplified economy using rational expectations. It delivered a new classical explanation of the Phillips curve relationship between unemployment and inflation. The model was formulated by Robert Lucas, Jr. in a series of papers in the 1970s.
== Description ==
The model contains a group of N islands, with one individual on each. Each individual produces some quantity Y, which can be bought for some amount of money M. Individuals use money a given number of times to buy a certain quantity of goods which cost a certain price. In the quantity theory of money, this is expressed as MV = PY, where money supply times velocity equals price times output.
Lucas then introduced variation in the price level. This can occur through changes in the local price level of individual islands due to increased or decreased demand (i.e. asymmetric preferences, z) or through stochastic processes (randomness) that cannot be predicted (e). However, the island dweller only observes the nominal price change, not the component price changes. Essentially, all prices can be rising, in which case the islander wants to produce the same, as his real income is the same, which is shown by (e). Or the price of his product is rising and others are not, which is z, in which case he wants to increase supply due to a higher price. The islander wishes to respond to z but not to e, but since he can only see the total price change p (p = z + e), he makes errors. Due to this, if the money supply is expanded, causing general inflation, he will increase production even though he is not receiving as high a price as he thinks (he mistakes part of the general price increase for an increase in z). This exhibits a Phillips curve relationship, as inflation is positively related to output (i.e. inflation is negatively related to unemployment). However, and this is the point, the existence of a short-run Phillips curve does not make the central bank capable of exploiting this relationship in a systematic way. Although economic agents are expected to respond to changes in the price level, the central bank is not able to control the real economy. Since erratic changes may occur in the macroeconomic environment (interpreted as white noise) and agents are assumed to be fully rational, controlling the real economy (unemployment and production) is possible only through surprises (or, in other words, unexpected monetary policy actions) which, however, cannot be systematic.
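The islander's inference problem can be illustrated with a short sketch (added here; the normality assumptions, the linear supply rule, and all numbers are illustrative assumptions, not taken from Lucas's papers). Observing only p = z + e, the islander's best guess of the relative-demand component is θ·p, where θ = σ_z² / (σ_z² + σ_e²), so output responds to a purely monetary surprise, and responds less the noisier monetary policy is.

# Signal extraction: the islander sees only p = z + e and attributes the
# fraction theta = var(z) / (var(z) + var(e)) of any price change to z.
def output_response(sigma_z, sigma_e, surprise, b=1.0):
    theta = sigma_z ** 2 / (sigma_z ** 2 + sigma_e ** 2)
    return b * theta * surprise        # supply rule: respond only to the estimated z

print(output_response(sigma_z=1.0, sigma_e=0.5, surprise=1.0))  # stable money: large output effect
print(output_response(sigma_z=1.0, sigma_e=2.0, surprise=1.0))  # volatile money: small output effect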
The twist is that due to the rational expectations included in the model, the islander isn't tricked by long-run inflation, as he incorporates this into his predictions and correctly identifies this as pi (long-run trend inflation) and not z. This is essentially the policy ineffectiveness proposition. This means in the long-run, inflation cannot induce increases in output, which means the Phillips curve is vertical.
An important consequence of the Lucas islands model is that it requires that we distinguish between anticipated and unanticipated changes in monetary policy. If changes in monetary policy and the resulting changes in inflation are anticipated, then the islanders are not misled by any price changes that they observe. Consequently, they will not adjust production and the neutrality of money occurs even in the short-run. With unanticipated changes in inflation, the islanders face the imperfect information problem and will adjust production. Therefore, monetary policy can influence output only as long as it surprises individuals and firms in an economy.
== See also ==
Phillips curve
New classical macroeconomics
Neutrality of money
== References ==
== Further reading ==
Blanchard, Olivier Jean; Fischer, Stanley (1989). "The Lucas Model". Lectures on Macroeconomics. Cambridge: MIT Press. pp. 356–360. ISBN 978-0-262-02283-5.
== External links ==
Ellison, Martin. "University of Warwick: Lecture notes in Monetary Economics, Chapter 3" (PDF). | Wikipedia/Lucas_islands_model |
Most economic models rest on a number of assumptions that are not entirely realistic. For example, agents are often assumed to have perfect information, and markets are often assumed to clear without friction. Or, the model may omit issues that are important to the question being considered, such as externalities. Any analysis of the results of an economic model must therefore consider the extent to which these results may be compromised by inaccuracies in these assumptions, and there is a growing literature debunking economics and economic models.
== Restrictive, unrealistic assumptions ==
Probably unrealistic assumptions are pervasive in neoclassical economic theory (also called the "standard theory" or "neoclassical paradigm"), and those assumptions are inherited by simplified models for that theory. (Any model based on a flawed theory cannot transcend the limitations of that theory.) Joseph Stiglitz' 2001 Nobel Prize lecture reviews his work on information asymmetries, which contrasts with the assumption, in standard models, of "perfect information". Stiglitz surveys many aspects of these faulty standard models, and the faulty policy implications and recommendations that arise from their unrealistic assumptions.
Economic models can be such powerful tools in understanding some economic relationships that it is easy to ignore their limitations. One tangible example where the limits of economic models allegedly collided with reality, but were nevertheless accepted as "evidence" in public policy debates, involved models to simulate the effects of NAFTA, the North American Free Trade Agreement. James Stanford published his examination of 10 of these models.
The fundamental issue is circular reasoning: embedding one's assumptions as foundational "input" axioms in a model, then proceeding to "prove" that, indeed, the model's "output" supports the validity of those assumptions. Such a model is consistent with similar models that have adopted those same assumptions. But is it consistent with reality? As with any scientific theory, empirical validation is needed, if we are to have any confidence in its predictive ability.
If those assumptions are, in fact, fundamental aspects of empirical reality, then the model's output will correctly describe reality (if it is properly "tuned", and if it is not missing any crucial assumptions). But if those assumptions are not valid for the particular aspect of reality one attempts to simulate, then it becomes a case of "GIGO" – "Garbage In, Garbage Out".
James Stanford outlines this issue for the specific Computable General Equilibrium ("CGE") models that were introduced as evidence into the public policy debate, by advocates for NAFTA.
Despite the prominence of Stiglitz' 2001 Nobel prize lecture, the use of arguably misleading neoclassical models persisted in 2007, according to these authors:
The working paper, "Debunking the Myths of Computable General Equilibrium Models", provides both a history and a readable theoretical analysis of what CGE models are, and are not. In particular, despite their name, CGE models use neither the Walrasian general equilibrium nor the Arrow–Debreu general equilibrium frameworks.
Thus, CGE models are highly distorted simplifications of theoretical frameworks—collectively called "the neoclassical economic paradigm"—which—themselves—were largely discredited by Joseph Stiglitz.
In the "Concluding Remarks" (p. 524) of his 2001 Nobel Prize lecture, Stiglitz examined why the neoclassical paradigm—and models based on it—persists, despite his publication, over a decade earlier, of some of his seminal results showing that Information Asymmetries invalidated core Assumptions of that paradigm
and its models:
In the aftermath of the 2007–2009 global economic meltdown, the profession's alleged attachment to unrealistic models is increasingly being questioned and criticized. After a weeklong workshop, one group of economists released a paper highly critical of their own profession's allegedly unethical use of unrealistic models. Their Abstract offers an indictment of fundamental practices.
== Omitted details ==
A great danger inherent in the simplification required to fit the entire economy into a model is omitting critical elements. Some economists believe that making the model as simple as possible is an art form, but the details left out are often contentious. For instance:
Market models often exclude externalities such as pollution. Such models are the basis for many environmentalist attacks on mainstream economists. It is said that if the social costs of externalities were included in the models their conclusions would be very different, and models are often accused of leaving out these terms because of economists' pro-free-market bias.
In turn, environmental economics has been accused of omitting key financial considerations from its models. For example, the returns to solar power investments are sometimes modelled without a discount factor, so that the present utility of solar energy delivered in a century's time is precisely equal to gas-power station energy today.
Financial models can be oversimplified by relying on historically unprecedented arbitrage-free markets, probably underestimating the chance of crises, and under-pricing or under-planning for risk.
It is possible that any missing variable as well as errors in values of included variables can lead to erroneous results.
Model risk: There is a significant amount of model risk inherent in the current mathematical modeling approaches to economics that one must take into account when using them. A good economic theory should be built on sound economic principles tested on many free markets, and proven to be valid. However, empirical facts have been alleged to indicate that the principles of economics hold only under very limited conditions that are rarely met in real life, and there is no scientific testing methodology available to validate hypotheses. Decisions based on economic theories that cannot be scientifically tested can give people a false sense of precision, and that could be misleading, leading to a build-up of logical errors.
Natural economics: Economics is concerned with both 'normal' and 'abnormal' economic conditions. In an objective scientific study one is not restricted by the normality assumption in describing actual economies, as much empirical evidence shows that some "anomalous" behavior can persist for a long time in real markets e.g., in market "bubbles" and market "herding".
== References == | Wikipedia/Problems_with_economic_models |
The Heckscher–Ohlin model (/hɛkʃr ʊˈliːn/, H–O model) is a general equilibrium mathematical model of international trade, developed by Eli Heckscher and Bertil Ohlin at the Stockholm School of Economics. It builds on David Ricardo's theory of comparative advantage by predicting patterns of commerce and production based on the resources of a trading region. The model essentially says that countries export the products which use their relatively abundant and cheap factors of production, and import the products which use the countries' relatively scarce factors.
== Features of the model ==
Relative endowments of the factors of production (land, labor, and capital) determine a country's comparative advantage. Countries have comparative advantages in those goods for which the required factors of production are relatively abundant locally. This is because the profitability of goods is determined by input costs. Goods that require locally abundant inputs are cheaper to produce than those goods that require locally scarce inputs.
For example, a country where capital and land are abundant but labor is scarce has a comparative advantage in goods that require lots of capital and land, but little labor — such as grains. If capital and land are abundant, their prices are low. As they are the main factors in the production of grain, the price of grain is also low—and thus attractive for both local consumption and export. Labor-intensive goods, on the other hand, are very expensive to produce since labor is scarce and its price is high. Therefore, the country is better off importing those goods.
The comparative advantage is due to the fact that nations have various factors of production; a country's factor endowment is the amount of resources such as land, labor, and capital that it has. Countries are endowed with different amounts of each factor, which explains differences in factor costs: a factor is cheaper where it is relatively more abundant. The theory predicts that nations will export the goods that make intensive use of the factors that are abundant in their territory and will import those that are made with scarce factors. Thus, this theory aims to explain the pattern of international trade that we observe in the world economy. Ohlin and Heckscher's theory advocates that the pattern of international trade is determined by differences in factor endowments rather than by differences in productivity. The endowments are relative and not absolute. One nation may have more land and workers than another but be relatively abundant in only one of the two factors.
For example, the United States is a leading exporter of agricultural products, reflecting its great abundance of arable land, while China excels in the export of goods made with cheap labor, such as textiles or shoes. This also explains why the United States has been a large importer of these Chinese products, since it does not abound in cheap labor.
== Theoretical development ==
While still building on traditional models such as the Ricardian framework, international trade theory saw innovation in the mid-1900s with the introduction of the Heckscher-Ohlin (H-O) model, developed by Swedish economists Eli Heckscher and Bertil Ohlin from the Stockholm School of Economics. The H-O model advances international trade theory by introducing the concept of factor endowments within a country as well as the underlying causes for differences in comparative costs between countries, while assuming countries will have identical production technologies. The H-O framework finds that countries have differing comparative costs even though they have the same production technologies due to differences in factors of production, such as the geographical abundance of natural resources or population size. Furthermore, what the H-O model concludes is that traded commodities are essentially bundles of factors (land, labor, and capital) and therefore the international trade of commodities is indirect factor arbitrage (Leamer 1995). The H-O model more accurately describes international trade patterns in modern times (post WWII) due to the increased ability of transferring knowledge and production technologies between countries, mainly focusing on factorial differences such as labor force and resource allocation as to why countries trade with each other. The Ricardian model of comparative advantage has trade ultimately motivated by differences in labour productivity using different "technologies". Heckscher and Ohlin did not require production technology to vary between countries, so (in the interests of simplicity) the "H–O model has identical production technology everywhere". Ricardo considered a single factor of production (labour) and would not have been able to produce comparative advantage without technological differences between countries (all nations would become autarkic at various stages of growth, with no reason to trade with each other). The H–O model removed technology variations but introduced variable capital endowments, recreating endogenously the inter-country variation of labour productivity that Ricardo had imposed exogenously. With international variations in the capital endowment like infrastructure and goods requiring different factor "proportions", Ricardo's comparative advantage emerges as a profit-maximizing solution of capitalists' choices from within the model's equations. The decision that capital owners are faced with is between investments in differing production technologies; the H–O model assumes capital is privately held.
=== Original publication ===
Bertil Ohlin first explained the theory in a book published in 1933. Ohlin wrote the book alone, but he credited Heckscher as co-developer of the model because of his earlier work on the problem, and because many of the ideas in the final model came from Ohlin's doctoral thesis, supervised by Heckscher.
The book itself, Interregional and International Trade, was verbose rather than pared down to mathematics, and it appealed because of its new insights.
=== 2×2×2 model ===
The original H–O model assumed that the only difference between countries was the relative abundances of labour and capital. The original Heckscher–Ohlin model contained two countries, and had two commodities that could be produced. Since there are two (homogeneous) factors of production this model is sometimes called the "2×2×2 model".
The model has "variable factor proportions" between countries—highly developed countries have a comparatively high capital-to-labor ratio compared to developing countries. This makes the developed country capital-abundant relative to the developing country, and the developing nation labor-abundant in relation to the developed country.
With this single difference, Ohlin was able to discuss the new mechanism of comparative advantage, using just two goods and two technologies to produce them. One technology would be a capital-intensive industry, the other a labor-intensive business—see "assumptions" below.
=== Extensions ===
The model has been extended since the 1930s by many economists. These developments did not change the fundamental role of variable factor proportions in driving international trade, but added to the model various real-world considerations (such as tariffs) in the hopes of increasing the model's predictive power, or as a mathematical way of discussing macroeconomic policy options.
Notable contributions came from Paul Samuelson, Ronald Jones, and Jaroslav Vanek, so that variations of the model are sometimes called the Heckscher–Ohlin–Samuelson model (HOS) or the Heckscher–Ohlin–Vanek model in neoclassical economics.
== Theoretical assumptions ==
The original 2×2×2 model was derived with restrictive assumptions, partly for the sake of mathematical simplicity. Some of these have been relaxed in later developments of the theory. These assumptions and developments are listed here.
=== Both countries have identical production technology ===
This assumption means that producing the same output of either commodity could be done with the same amounts of capital and labour in either country. In practice, it would be inefficient to use the same input mix in both countries (because of the relative availability of each input factor), but in principle it would be possible. Another way of saying this is that per-worker productivity is the same in both countries when the same technology is used with identical amounts of capital.
Countries have natural advantages in the production of various commodities in relation to one another, so this is an "unrealistic" simplification designed to highlight the effect of variable factors. This meant that the original H–O model produced an alternative explanation for free trade to Ricardo's, rather than a complementary one; in reality, both effects may occur due to differences in technology and factor abundances.
In addition to natural advantages in the production of one sort of output over another (wine vs. rice, say) the infrastructure, education, culture, and "know-how" of countries differ so dramatically that the idea of identical technologies is a theoretical notion. Ohlin said that the H–O model was a long-run model, and that the conditions of industrial production are "everywhere the same" in the long run.
=== Production output is assumed to exhibit constant returns to scale ===
In the simple model, both countries produce two commodities. Each commodity is made using two factors of production—capital (K) and labor (L). The technology for each commodity is assumed to exhibit constant returns to scale (CRS): when the inputs of both capital and labor are multiplied by a factor k, output is also multiplied by k. For example, if both capital and labor inputs are doubled, the output of each commodity is doubled. In other terms, the production function of each commodity is "homogeneous of degree 1".
The assumption of constant returns to scale is useful because it implies diminishing returns to each individual factor. Under constant returns to scale, doubling both capital and labor doubles output; since output is increasing in both factors, doubling capital alone while holding labor constant yields less than a doubling of output. Diminishing returns to capital and diminishing returns to labor are crucial to the Stolper–Samuelson theorem.
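As a worked check (using a generic Cobb–Douglas form chosen here purely for illustration, not taken from the text), scaling both inputs by a factor k scales output by exactly k, while the marginal product of capital alone declines as capital grows:
{\displaystyle f(K,L)=K^{a}L^{1-a}\;\Rightarrow \;f(kK,kL)=k^{a}K^{a}\,k^{1-a}L^{1-a}=k\,f(K,L),\qquad {\frac {\partial f}{\partial K}}=aK^{a-1}L^{1-a}{\text{ is decreasing in }}K{\text{ for }}0<a<1.}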
=== The technologies used to produce the two commodities differ ===
The CRS production functions must differ to make trade worthwhile in this model. For instance if the functions are Cobb–Douglas technologies the parameters applied to the inputs must vary. An example would be:
Arable industry: {\displaystyle A={{K}^{1/3}}{{L}^{2/3}}}
Fishing industry: {\displaystyle F={{K}^{1/2}}{{L}^{1/2}}}
where A is the output in arable production, F is the output in fish production, and K, L are capital and labor in both cases.
In this example, the marginal return to an extra unit of capital is higher in the fishing industry, assuming units of fish (F) and arable output (A) have equal value. The more capital-abundant country may gain by developing its fishing fleet at the expense of its arable farms. Conversely, the workers available in the relatively labor-abundant country can be employed relatively more efficiently in arable farming.
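A minimal numerical sketch of this comparison (in Python; the input levels are illustrative assumptions, not values from the text) computes the marginal product of capital in each industry at equal inputs:

```python
# Sketch: marginal product of capital under the two Cobb-Douglas technologies
#   A = K^(1/3) * L^(2/3)  (arable)      F = K^(1/2) * L^(1/2)  (fishing)
# The input levels K and L below are illustrative assumptions.

def arable(K, L):
    return K ** (1 / 3) * L ** (2 / 3)

def fishing(K, L):
    return K ** (1 / 2) * L ** (1 / 2)

def marginal_product_K(f, K, L, h=1e-6):
    """Numerical derivative of output with respect to capital."""
    return (f(K + h, L) - f(K, L)) / h

K, L = 1.0, 1.0  # equal, illustrative input levels
print(marginal_product_K(arable, K, L))   # ~1/3
print(marginal_product_K(fishing, K, L))  # ~1/2 -> an extra unit of capital is worth more in fishing
```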
=== Factor mobility within countries ===
Within countries, capital and labor can be reinvested and reemployed to produce different outputs. Similar to Ricardo's comparative advantage argument, this is assumed to happen without cost. If the two production technologies are the arable industry and the fishing industry it is assumed that farmers can shift to work as fishermen with no cost and vice versa.
It is further assumed that capital can shift easily into either technology, so that the industrial mix can change without adjustment costs between the two types of production. For instance, if the two industries are farming and fishing it is assumed that farms can be sold to pay for the construction of fishing boats with no transaction costs.
Avsar has offered much criticism of this assumption.
=== Factor immobility between countries ===
The basic Heckscher–Ohlin model depends upon the relative availability of capital and labor differing internationally, but if capital can be freely invested anywhere, competition (for investment) makes relative abundances identical throughout the world. Essentially, free trade in capital provides a single worldwide investment pool.
Differences in labour abundance would not produce a difference in relative factor abundance (in relation to mobile capital) because the labour/capital ratio would be identical everywhere. (A large country would receive twice as much investment as a small one, for instance, maximizing capitalist's return on investment).
As capital controls are reduced, the modern world has begun to look a lot less like the world modelled by Heckscher and Ohlin. It has been argued that capital mobility undermines the case for free trade itself; see the discussion of capital mobility and comparative advantage in critiques of free trade.
Capital is mobile when:
There are limited exchange controls
Foreign direct investment (FDI) is permitted between countries, or foreigners are permitted to invest in the commercial operations of a country through a stock or corporate bond market
Like capital, labor movements are not permitted in the Heckscher–Ohlin world, since this would drive an equalization of relative abundances of the two production factors, just as in the case of capital immobility. This condition is more defensible as a description of the modern world than the assumption that capital is confined to a single country.
=== Commodity prices are the same everywhere ===
The 2x2x2 model originally placed no barriers to trade, had no tariffs, and no exchange controls (capital was immobile, but repatriation of foreign sales was costless). It was also free of transportation costs between the countries, or any other savings that would favor procuring a local supply.
If the two countries have separate currencies, this does not affect the model in any way—purchasing power parity applies. Since there are no transaction costs or currency issues the law of one price applies to both commodities, and consumers in either country pay exactly the same price for either good.
In Ohlin's day this assumption was a fairly neutral simplification, but economic changes and econometric research since the 1950s have shown that the local prices of goods tend to correlate with incomes when both are converted at money prices (though this is less true with traded commodities). See: Penn effect.
=== Perfect internal competition ===
Neither labor nor capital has the power to affect prices or factor rates by constraining supply; a state of perfect competition exists.
== Conclusions ==
The result of this work has been the formulation of certain named conclusions arising from the assumptions inherent in the model.
=== Heckscher–Ohlin theorem ===
Exports of a capital-abundant country come from capital-intensive industries, and labour-abundant countries import such goods, exporting labour-intensive goods in return. Competitive pressures within the H–O model produce this prediction fairly straightforwardly. Conveniently, this is an easily testable hypothesis.
=== Rybczynski theorem ===
When the endowment of one factor of production increases, the output of the good that uses that factor intensively increases more than proportionally to the increase in the factor, because the H–O model assumes perfect competition in which price equals the cost of the factors of production. The theorem is useful in explaining the effects of immigration, emigration, and foreign capital investment. However, Rybczynski assumed fixed quantities of the two factors of production; the result can be extended to allow factor substitution, in which case the increase in production remains more than proportional.
=== Stolper–Samuelson theorem ===
Relative changes in output goods prices drive the relative prices of the factors used to produce them. If the world price of capital-intensive goods increases, it increases the relative rental rate and decreases the relative wage rate (the return on capital as against the return to labor). Also, if the price of labor-intensive goods increases, it increases the relative wage rate and decreases the relative rental rate.
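A hedged numerical sketch of this effect (Python; the unit input requirements and prices below are illustrative assumptions, not figures from the text): with zero-profit conditions p = a_K·r + a_L·w in each industry, raising the price of the capital-intensive good raises the rental rate more than proportionally and lowers the wage.

```python
import numpy as np

# Zero-profit conditions with fixed unit input requirements (illustrative values):
#   capital-intensive good:  2r + 1w = p1
#   labor-intensive good:    1r + 2w = p2
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

r0, w0 = np.linalg.solve(A, [3.0, 3.0])   # both prices equal 3  -> r = w = 1
r1, w1 = np.linalg.solve(A, [3.3, 3.0])   # capital-intensive price raised by 10%

print(r0, w0)  # 1.0 1.0
print(r1, w1)  # 1.2 0.9  -> rental +20% (a magnified rise), wage -10%
```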
=== Factor–price equalization theorem ===
Free and competitive trade makes factor prices converge along with traded goods prices. The FPE theorem is the most significant conclusion of the H–O model, but also has found the least agreement with the economic evidence. Neither the rental return to capital, nor the wage rates seem to consistently converge between trading partners at different levels of development.
=== Implications of factor-proportion changes ===
The Stolper–Samuelson theorem concerns nominal rents and wages. The magnification effect on prices considers the effect of output-goods price changes on the real return to capital and labor. This is done by dividing the nominal factor returns by a price index; the analysis took thirty years to develop completely because of the theoretical complexity involved.
The Magnification effect shows that trade liberalization actually makes the locally-scarce factor of production worse off (because increased trade makes the price index fall by less than the drop in returns to the scarce-factor induced by the Stolper–Samuelson theorem).
The magnification effect on production quantity-shifts induced by endowment changes (via the Rybczynski theorem) predicts a larger proportionate shift in output quantity than in the endowment shift that induced it. This has implications for both labor and capital (a numerical sketch follows the list below):
Assuming fixed capital, population growth dilutes the scarcity of labor in relation to capital. If the population growth outpaces the growth in capital by 10% this may translate into a 20% shift in the balance of employment to the labor-intensive industries.
In the modern world, money is much more mobile than labor, so import of capital to a country almost certainly shifts the relative factor-abundances in favor of capital. The magnification effect says that a 10% increase in national capital may lead to a redistribution of labor amounting to a fifth of the entire economy (towards capital-intensive, high-tech production). Notably, employment patterns in very poor countries can be dramatically affected by a small amount of FDI, in this model. (See also: Dutch disease.)
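A hedged numerical sketch of the Rybczynski magnification (Python; the same illustrative unit input requirements as in the Stolper–Samuelson sketch above, and endowments invented for illustration): a 10% increase in the capital endowment produces a more-than-10% expansion of the capital-intensive industry and an absolute contraction of the labor-intensive one.

```python
import numpy as np

# Full-employment conditions with fixed unit input requirements (illustrative):
#   capital:  2*x1 + 1*x2 = K
#   labor:    1*x1 + 2*x2 = L
# x1 = output of the capital-intensive good, x2 = output of the labor-intensive good.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

x_before = np.linalg.solve(A, [100.0, 100.0])  # K = L = 100
x_after  = np.linalg.solve(A, [110.0, 100.0])  # capital endowment raised by 10%

print(x_before)                # [33.33 33.33]
print(x_after)                 # [40.   30.  ]
print(x_after / x_before - 1)  # [ 0.2 -0.1]  -> +20% and -10%: magnified output shifts
```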
== Econometric testing of H–O model theorems ==
Heckscher and Ohlin considered the Factor-Price Equalization theorem an econometric success because the large volume of international trade in the late 19th and early 20th centuries coincided with the convergence of commodity and factor prices worldwide.
Modern econometric estimates have shown the model to perform poorly, however, and adjustments have been suggested, most importantly the assumption that technology is not the same everywhere. This change would mean abandoning the pure H–O model.
=== Leontief paradox ===
In 1953 an econometric test by Wassily W. Leontief of the H–O model found that the United States, despite having a relative abundance of capital, tended to export labor-intensive goods and import capital-intensive goods. This problem became known as the Leontief paradox, and alternative trade models and various explanations for it have since emerged. One such trade model, the Linder hypothesis, suggests that goods are traded based on similar demand rather than on differences in supply-side factors (i.e., H–O's factor endowments).
=== The Vanek formula ===
Various attempts were made in the 1960s and 1970s to "solve" the Leontief paradox and save the Heckscher–Ohlin model from failing. From the 1980s, a new series of statistical tests was tried. The new tests depended on Vanek's formula, which takes the simple form
{\displaystyle \mathbf {F_{C}} =\mathbf {V_{C}} -s_{C}\mathbf {V} }
where {\displaystyle \mathbf {F_{C}} } is the net trade vector of factor services for country {\displaystyle c}, {\displaystyle \mathbf {V_{C}} } is the factor endowment vector for country {\displaystyle c}, {\displaystyle s_{C}} is country {\displaystyle c}'s share of world consumption, and {\displaystyle \mathbf {V} } is the world total endowment vector of factors. For many countries and many factors, it is possible to estimate the left-hand and right-hand sides independently; the left-hand side gives the direction of factor-service trade, so one can ask how well this system of equations holds. The results obtained by Bowen, Leamer and Sveikauskas (1987) were disastrous. They examined the case of 12 factors and 27 countries for the year 1967 and found that the two sides of the equations had the same sign in only 61% of the 324 cases. For the year 1983, the result was worse: both sides had the same sign in only 148 of 297 cases (a correct-prediction rate of 49.8%). The results of Bowen, Leamer, and Sveikauskas (1987) mean that the Heckscher–Ohlin–Vanek (HOV) theory has no predictive power concerning the direction of trade.
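A small sketch of the sign test behind these figures (Python; the endowment and trade numbers below are invented for illustration, not the Bowen–Leamer–Sveikauskas data):

```python
import numpy as np

# Vanek prediction per factor:  F_c = V_c - s_c * V_world
# Sign test: does the measured factor content of trade have the same sign
# as the prediction?  All numbers here are illustrative assumptions.

V_world = np.array([1000.0, 800.0])   # world endowments of two factors
V_c     = np.array([300.0, 150.0])    # country c's endowments
s_c     = 0.25                        # country c's share of world consumption

F_predicted = V_c - s_c * V_world     # [ 50., -50.]
F_measured  = np.array([30.0, 10.0])  # hypothetical measured factor content of trade

same_sign = np.sign(F_predicted) == np.sign(F_measured)
print(F_predicted)      # [ 50. -50.]
print(same_sign)        # [ True False]
print(same_sign.mean()) # 0.5 -> fraction of correct sign predictions
```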
== General Price-Trade Equilibrium ==
Paul Samuelson (1949) initiated the study of trade equilibrium in the Heckscher-Ohlin model. He verbally stated the idea that, if the borders between the two countries were redrawn and production and factors thereby "redistributed" across the borders of the two economies, the main variables would remain unchanged. Avinash Dixit and Victor D. Norman (1980, ch. 4) proposed the integrated world equilibrium (IWE) diagram to illustrate the equilibrium with mobile factors: world prices remain unchanged when factor endowments are redistributed within the factor price equalization (FPE) set. Much of the subsequent literature confirmed this result. Elhanan Helpman and Paul Krugman (1985, pp. 23–24) further used equal-trade-volume lines to illustrate trade equilibrium and the basics of trade in the IWE diagram. Guo (2023) introduced a Dixit-Norman constant and used the equal-trade-volume line to obtain the general trade equilibrium given by
{\displaystyle {\frac {wage(w^{*})}{rental(r^{*})}}={\frac {World\ Capital(K^{W})}{World\ Labor(L^{W})}}.}
The factor-price equalization theorem itself says nothing about the relationship between factor prices and factor supplies. This equilibrium is therefore an important supplement, showing a supply-demand relationship between factor prices and factor supplies, and it links the Heckscher-Ohlin theorem with the factor price equalization theorem.
== Criticism ==
The critical assumption of the Heckscher–Ohlin model is that the two countries are identical, except for the difference in resource endowments. This also implies that the aggregate preferences are the same. The relative abundance in capital leads the capital-abundant country to produce the capital-intensive good cheaper than the labor-abundant country, and vice versa.
Initially, when the countries are not trading, the price of the capital-intensive good in the capital-abundant country will be bid down relative to its price in the other country, and the price of the labor-intensive good in the labor-abundant country will be bid down relative to its price in the other country. Once trade is allowed, profit-seeking firms move their products to the markets that have (temporarily) higher prices.
As a result, the capital-abundant country will export the capital-intensive good, and the labor-abundant country will export the labor-intensive good.
=== Predictive power ===
The original Heckscher–Ohlin model and extended models such as the Vanek model perform poorly, as shown in the section "Econometric testing of H–O model theorems". Daniel Trefler and Susan Chun Zhu summarize their paper by stating that "It is hard to believe that factor endowments theory [editor's note: in other words, the Heckscher–Ohlin–Vanek model] could offer an adequate explanation of international trade patterns".
A common understanding exists that at the national level the HOV model fits well. In fact, Davis and others found that the HOV model fitted extremely well to regional data from Japan. Even when the HOV formula fits well, however, it does not mean that Heckscher–Ohlin theory is valid. Heckscher–Ohlin theory claims that the state of factor endowments of each country (or each region) determines the production of that country (or region), but Bernstein and Weinstein found that factor endowments have little predictive power. The factor-endowments-driven model (FED model) has errors much greater than the HOV model.
Unemployment is the vital question in any trade conflict. Heckscher–Ohlin theory excludes unemployment by the very formulation of the model, in which all factors (including labour) are employed in the production.
=== Leontief paradox ===
The Leontief paradox, presented by Wassily Leontief in 1953, found that the U.S. (the most capital-abundant country in the world by any criterion) exported labor-intensive commodities and imported capital-intensive commodities, contrary to the Heckscher–Ohlin theory.
However, if labor is separated into two distinct factors, skilled labor and unskilled labor, the Heckscher–Ohlin theorem is more accurate. The U.S. tends to export skilled-labor-intensive goods, and tends to import unskilled-labor-intensive goods.
=== Factor equalization theorem ===
The factor equalization theorem (FET) applies only to the most advanced countries. The average wage in Japan was once as much as 70 times the wage in Vietnam. Such wage discrepancies are not normally within the scope of H–O model analysis.
Heckscher–Ohlin theory is badly adapted to the analysis of South–North trade problems, and its assumptions are unrealistic with respect to North–South trade. Income differences between North and South are the issue about which the third world cares most, yet the factor price equalization theorem has shown no sign of being realized, even after a time lag of half a century.
=== Identical production function ===
The standard Heckscher–Ohlin model assumes that the production functions are identical for all countries concerned. This means that all countries are at the same level of production and have the same technology, which is highly unrealistic. The technological gap between developed and developing countries is the main concern for the development of poor countries, and the standard Heckscher–Ohlin model ignores these vital factors when one wants to consider the development of less developed countries in the international context. Even among developed countries, technology differs from industry to industry and from firm to firm; indeed, this is the very basis of competition between firms, within a country and across countries. See the New Trade Theory section below.
=== Capital as endowment ===
In the modern production system, machines and apparatus play an important role. What is referred to as capital is nothing other than these machines and apparatus, together with the materials and intermediate products consumed in the production process. Capital is among the most important factors, arguably as important as labor. With the help of machines and apparatus, human beings have gained tremendous productive capability. These machines, apparatus and tools are classified as capital, or more precisely as durable capital, since they are used for many years and their quantity cannot be changed at once. But capital is not an endowment given by nature: it is composed of goods manufactured in production, often imported from foreign countries. In this sense, capital is internationally mobile and is the result of past economic activity. The concept of capital as a natural endowment distorts its real role: capital is productive power accumulated through past investment.
=== Homogeneous capital ===
Capital goods take different forms. They may take the form of a machine tool such as a lathe, or of a conveyor belt. Capital goods can be highly specialised and have no use beyond the precise operation for which they are intended. Despite this, capital in the Heckscher–Ohlin model is assumed to be homogeneous and transferable into any form if necessary. This assumption not only conflicts with the observable diversity and specificity of the capital stock, but also contains a further flaw, namely in how the amount of capital is measured. Usually this would be done through the price system, which depends on the profit rate. In the Heckscher–Ohlin model, however, the rate of profit is determined according to how abundant capital is: if capital is scarce, it has a high rate of profit; if it is abundant, the profit rate is low. Thus the amount of capital cannot be measured before the profit rate is determined, yet the amount of capital must be known in order to determine the rate of profit. This circularity was the subject of the so-called Cambridge capital controversies, which ultimately concluded that the concept of homogeneous capital was untenable. This is a serious blow to Heckscher-Ohlin theory, which has not been able to refute this theoretical flaw at the heart of the model.
=== No room for firms ===
Standard Heckscher–Ohlin theory assumes the same production function for all countries. This implies that all firms are identical. The theoretical consequence is that there is no room for firms in the H–O model. By contrast, the New Trade Theory emphasizes that firms are heterogeneous.
=== Political background ===
From the middle of the 19th century to the 1930s, giant flows of migration took place from Europe to North America; it is estimated that more than 60 million people crossed the Atlantic Ocean. Some politicians worried about negative consequences of immigration, such as cultural conflicts. For those politicians, the Heckscher-Ohlin theory of trade provided a good reason "in support of both restrictions on labor migration and free trade in goods".
== Alternative theories of trade ==
=== New Trade Theory ===
New Trade Theory analyses individual enterprises and plants in an internationally competitive situation. Classical trade theory—i.e., the Heckscher–Ohlin model—has no enterprises in mind. New Trade Theory treats enterprises in an industry as identical entities, while "New" New Trade Theory (NNTT) focuses on the diversity of enterprises. It is a fact that some enterprises engage in export and some do not; some enterprises invest directly in a foreign country in order to produce and sell there, while others engage only in export. Why do these differences occur? New Trade Theory tries to find the reasons for these well-observed facts.
New Trade theorists challenge the assumption of diminishing returns to scale, and some argue that using protectionist measures to build up a huge industrial base in certain industries would then allow those sectors to dominate the world market via a network effect.
See also Intra-industry trade.
=== Gravity model of trade ===
The gravity model of international trade predicts bilateral trade flows based on the economic sizes of two nations, and the distance between them.
=== Ricardo–Sraffa trade theory ===
Ricardian theory has now been extended in a general form to include not only labor but also inputs of materials and intermediate goods. In this sense, it is much more general and plausible than the Heckscher–Ohlin model and escapes logical problems such as treating capital as an endowment when it is, in reality, a produced good.
As the theory permits different production processes to coexist in an industry within a country, the Ricardo–Sraffa theory can provide a theoretical basis for the New Trade Theory.
== See also ==
== References ==
== Further reading ==
Feenstra, Robert C. (2004). "The Heckscher–Ohlin Model". Advanced International Trade: Theory and Evidence. Princeton: Princeton University Press. pp. 31–63. ISBN 978-0-691-11410-1.
Leamer, Edward E. (1995). The Heckscher–Ohlin Model in Theory and Practice. Princeton Studies in International Finance. Vol. 77. Princeton, NJ: Princeton University Press. ISBN 978-0-88165-249-9.
Ohlin, Bertil (1967). Interregional and International Trade. Harvard Economic Studies. Vol. 39. Cambridge, MA: Harvard University Press.
== External links ==
A precisely defined, two-goods H–O model
The Heckscher–Ohlin Model Between 1400 and 2000 An econometric analysis of factor prices, commodity prices, and endowments in intercontinental trade by NBER in 1999. It finds that 19th century trade patterns and economies can be successfully modeled within an H-O framework. | Wikipedia/Heckscher–Ohlin_model |
Model (Korean: 모델; RR: Mo-del) is a 1997 South Korean television series starring Kim Nam-joo, Han Jae-suk, Jang Dong-gun, Yum Jung-ah and So Ji-sub. It aired on SBS from April 9 to August 7, 1997 on Wednesdays and Thursdays at 21:45 for 36 episodes.
Jang Hyuk made his television drama debut in Model.
== Plot ==
Song Kyung-rin is a fashion designer, but after top male model Jo Won-joon refused to work for her, she gets a taste of modeling and decides to become a model herself. Lee Jung just arrived in Korea from America and he, too, has an interest in modeling. Through fate, Jung helps out Kyung-rin and they fall in love. But behind Jung's return to Korea there is a deep, dark secret and a plan for revenge.
== Cast ==
Kim Nam-joo as Song Kyung-rin
Han Jae-suk as Jo Won-joon
Jang Dong-gun as Lee Jung
Yum Jung-ah as Park Soo-ah
Lee Young-beom as Soo-ah's husband
Song Seon-mi as Kim Yi-joo
Lee Sun-jin as Na Pil-soon
Jun Kwang-ryul as Yoo Jang-hyuk
So Ji-sub as Song Kyung-chul
Jang Hyuk as Joon-ho
Yoo Hye-ri as Yu-ri
Jung Dong-hwan
Chun Ho-jin as Jo Tae-sik
Ha Yoo-mi
Yoon Young-joon
Kim Yong-sun
Seo Bum-shik
== References ==
== External links ==
Model at HanCinema
Model at IMDb | Wikipedia/Model_(TV_series) |
The Model Automobile Company was a brass era American automobile manufacturer located in Peru, Indiana from 1902 to 1909.
== History ==
Edward A. Myers established the Model Gas and Gas Engine Company in 1901 in Auburn, Indiana. Manufacturing gasoline engines, in 1902 Model added a 2-cylinder 12-hp automobile to its products. This was joined by a 16-hp version in 1904, but a financial crisis in Auburn forced Myers to reorganize his company.
=== Model ===
By 1906 Myers had sold over 300 automobiles and reorganized into two companies, one dedicated to automobile production. The Model Gas Engine Works and the Model Automobile Company relocated to Peru, Indiana in 1906. For 1905 the Model 2-cylinder engine had been increased to 20 hp. Model automobiles had convertible coachwork which allowed the body to be tilted upward from the rear for access to the engine and permitted the rear seats to be removed as a unit. In 1906 Model prices ranged from $900 for a runabout to $1,250 (equivalent to $43,745 in 2024) for a touring car.
In 1907 Model introduced a 4-cylinder automobile. This was a 45-hp touring car, with the engine moved under the hood and priced at $2,000, equivalent to $67,493 in 2024.
=== Star ===
Model's main business was selling engines and transmissions to other automobile, truck and tractor manufacturers, so the company decided it would seem less of a conflict to sell completed automobiles under a different marque. For 1908 and 1909, Model Automobile Company products were known as Star. The Star continued as 2-cylinder and 4-cylinder cars, with the 4-cylinder models increased to 50 hp.
In 1909 E. A. Myers decided to spin off the automobile manufacturing completely and organized the Great Western Automobile Company and all cars from 1910 to 1916 would be Great Westerns.
In 1911 Myers worked with Dr. H. H. Bissell of Watseka, Illinois, who commissioned a car Bissell called the Izzer. The first example pleased Myers so much that two more examples were produced. Myers retained one of the cars and gave the second to the Model office manager, James Littlejohn. Bissell's Izzer is still extant.
Model Gas Engine Works was sold to Pittsburgh investors in 1912, and the factory and E. A. Myers moved to Pittsburgh. In 1914 he returned to Peru to run Great Western. In 1916 the Standard Steel Car Company purchased the Model Gas Engine Company factory and assets to expand Standard automobile production.
== Gallery ==
== External links ==
1911 Izzer Roadster at ConceptCarz
== References == | Wikipedia/Model_Automobile_Company |
Malte Fynboe Manniche Ebert (born 26 June 1994), better known by his former nickname Gulddreng, is a Danish musician. His identity was unknown until the father of his host family from an exchange trip announced his name on Facebook. His name translates to "gold boy" or "golden boy" in English. As Gulddreng, Ebert was known for always wearing sunglasses, which helped to obscure his identity. His first seven singles, "Model", "Se mig nu", "Hva' så", "Drikker for lidt", "Nemt", "Guld jul" and "Ked af det", all peaked at number 1 in Denmark, with "Se mig nu", "Drikker for lidt", "Guld jul", and "Ked af det" debuting at the top spot.
On 7 September 2016 Ebert as Gulddreng also released his own official app on the App Store. The app was developed by Thorwest Development.
In September 2017, Ebert stopped using the nickname Gulddreng and began using his real name, Malte Ebert. He said he created the nickname as a "reaction to bad pop music". In June 2018, he released "Rather Be", his first single as Malte Ebert. In May 2019, he played his first concert at Vega. In 2019, he also went to Los Angeles to make songs with a more international sound.
== Discography ==
=== Album ===
=== Singles ===
== References == | Wikipedia/Model_(Gulddreng_song) |
The Movement for Democracy in Liberia (MODEL) was a rebel group in Liberia that became active in March 2003, launching attacks from Ivory Coast. MODEL was based on the Force Spéciale pour la Libération du Monde Africain (LIMA) militia formed in September 2002 to defend Laurent Gbagbo's government against insurgents backed by Liberia's president Charles Taylor. After fighting off the imminent threat, parts of LIMA crossed the border to Liberia to continue the war there. With Taylor's forces already pressed against the larger Liberians United for Reconciliation and Democracy (LURD), MODEL quickly gained ground. The initial leadership of MODEL came from LURD, while the majority of MODEL fighters were mobilized from Ivorian and Ghanaian refugee camps, to which many Liberians from the country's Southeast had fled.
The relationship between the two rebel groups was also strained, with politicians from both movements uncooperative. MODEL was backed by the Ivorian government as a way of staking a claim in Liberian politics during the turmoil of that country's civil war, or as retaliation for the Liberian government's alleged support for rebels in Ivory Coast. Its political leader, Thomas Nimely, was named as Liberia's foreign minister in the transitional government that was appointed on October 14, 2003, following the resignation and exile of Taylor. The group may have exported timber from regions of southern Liberia under its control, which would have been a violation of United Nations sanctions. By 2004 MODEL had in effect ceased to exist.
== References ==
== External links ==
Voice of America, Nimely denies becoming involved in Ivory Coast's 2010 election crisis | Wikipedia/MODEL |
The methods of neuro-linguistic programming are the specific techniques used to perform and teach neuro-linguistic programming, which teaches that people are only able to directly perceive a small part of the world using their conscious awareness, and that this view of the world is filtered by experience, beliefs, values, assumptions, and biological sensory systems. NLP argues that people act and feel based on their perception of the world and how they feel about that world they subjectively experience.
NLP claims that language and behaviors (whether functional or dysfunctional) are highly structured, and that this structure can be 'modeled' or copied into a reproducible form. Using NLP a person can 'model' the more successful parts of their own behavior in order to reproduce it in areas where they are less successful, or 'model' another person to effect belief and behavior changes to improve functioning. If someone excels in some activity, it can be learned how specifically they do it by observing certain important details of their behavior. NLP embodies several techniques, including hypnotic techniques, which proponents claim can effect changes in the way people think, learn and communicate.
== Internal 'maps' of the world ==
NLP claims that our mind-body (neuro) and what we say (language) all interact together to form our perceptions of the world, or maps (programming) and that said map of the world determines feelings and behavior.
As an approach to personal development or therapy it claims that people create their own internal 'map' or world, recognizing unhelpful or destructive patterns of thinking based on impoverished maps of the world, then modifying or replacing these patterns with more useful or helpful ones. There is also an emphasis on ways to change internal representations or maps of the world with the intent of increasing behavioral flexibility.
== Modeling ==
"Modeling" in NLP is the process of adopting the behaviors, language, strategies and beliefs of another person or exemplar in order to 'build a model of what they do.
The original models were: Milton Erickson (hypnotherapy), Virginia Satir (family therapy), and Fritz Perls (gestalt therapy). NLP modeling methods are designed to unconsciously assimilate the tacit knowledge to learn what the master is doing of which the master is not aware. As an approach to learning it can involve modeling exceptional people. As Bandler and Grinder state "the function of NLP modeling is to arrive at descriptions which are useful." Einspruch & Forman 1985 state that "when modeling another person the modeler suspends his or her own beliefs and adopts the structure of the physiology, language, strategies, and beliefs of the person being modeled. After the modeler is capable of behaviorally reproducing the patterns (of behavior, communication, and behavioral outcomes) of the one being modeled, a process occurs in which the modeler modifies and readopts his or her own belief system while also integrating the beliefs of the one who was modeled." Modeling is not confined to therapy, but can be, and is, applied to a broad range of human learning. Another aspect of modeling is understanding the patterns of one's own behaviors in order to 'model' the more successful parts of oneself.
== Representational systems ==
The notion that experience is processed by the sensory systems or representational systems, was incorporated into NLP from psychology and gestalt therapy shortly after its creation. This teaches that people perceive the world through the senses and store the information from the senses in the mind. Memories are closely linked to sensory experience. When people are processing information they see images and hear sounds and voices and process this with internally created feelings. Some representations are within conscious awareness but information is largely processed at the unconscious level. When involved in any task, such as making conversation, describing a problem in therapy, reading a book, kicking a ball or riding a horse, their representational systems, consisting of images, sounds, feelings (and possibly smell and taste) are being activated at the same time. Moreover, the way representational systems are organised and the links between them impact on behavioral performance. Many NLP techniques rely on interrupting maladaptive patterns and replacing them with more positive and creative thought patterns which will in turn impact on behavior.
Preferred representational systems
Originally, NLP taught that most people had an internal preferred representational system (PRS) and preferred to process information primarily in one sensory modality. The practitioner could ascertain this from external cues such as the direction of eye movements, posture, breathing, voice tone and the use of sensory-based predicates. If a person repeatedly used predicates such as "I can see a bright future for myself", the words "see" and "bright" would be considered visual predicates. In contrast "I can feel that we will be comfortable" would be considered primarily kinesthetic because of the predicates "feel" and "comfortable". These verbal cues could also be coupled with posture changes, skin color or breathing shifts. The theory was that the practitioner by matching and working within the preferred representational system could achieve better communication with the client and hence swifter and more effective results. Many trainings and standard works still teach PRS
Although there is some research that supports the notion that eye movements can indicate visual and auditory (but not kinesthetic) components of thought in that moment, the existence of a preferred representational system ascertainable from external cues (an important part of original NLP theory) was discounted by research in the 1980s.
Submodalities
Submodalities are the fine details of representational systems. For example, the submodalities of sight include light/dark, colour/monochrome, sharp/blurred. Submodalities involve the relative size, location, brightness of internal images, the volume and direction of internal voices and sounds, and the location, texture, and movement of internally created sensations. A typical change process may involve manipulating the submodalities of internal representations. For example, someone may see their future as 'dark and cloudy' with associated emotions, but would seek through NLP to perceive, and feel it, as 'light and clear'. Other training exercises develop a person's ability to move around internal images, change the quality of sounds and find out how these affect the intensity of internal feelings or other submodalities. Although NLP did not discover submodalities, it appears that the proponents of NLP may have been the first to systematically use manipulation of submodalities for therapeutic or personal development purposes, particularly phobias, compulsions and addictions.
== Meta-programs ==
Neuro-linguistic programming (NLP) uses the term 'meta-programs' specifically to indicate general, pervasive and usually habitual patterns used by an individual across a wide range of situations. Examples of NLP meta-programs include the preference for overview or detail, the preference for where to place one's attention during conversation, habitual linguistic patterns and body language, and so on.
Related concepts in other disciplines are known as cognitive styles or thinking styles.
In NLP, the term programs is used as a synonym for strategies, which are specific sequences of mental steps, mostly indicated by their representational activity (using VAKOG), leading to a behavioral outcome. In the entry for the term strategy in their encyclopedia, Robert Dilts and Judith Delozier explicitly refer to the mind-as-computer metaphor:
A strategy is like a program in a computer. It tells you what to do with the information you are getting, and like a computer program, you can use the same strategy to process a lot of different kinds of information.
In their encyclopedia, Dilts and Delozier then define metaprograms as
[programs] which guide and direct other thought processes. Specifically they define common or typical patterns in the strategies or thinking styles of a particular individual, group or culture.
== Techniques ==
=== Anchoring ===
NLP teaches that we constantly make "anchors" (classical conditioning) between what we see, hear and feel; and our emotional states. While in an emotional state if a person is exposed to a unique stimulus (sight, sound or touch), then a connection is made between the emotion and the unique stimulus. If the unique stimulus occurs again, the emotional state will then be triggered. NLP teaches that anchors (such as a particular touch associated with a memory or state) can be deliberately created and triggered to help people access 'resourceful' or other target states.
=== Future pacing ===
A technique of asking a person to imagine doing something in the future and monitoring their reactions. It is typically used to check that a change process has been successful, by observing body language when the person imagines being in a difficult situation before and after an intervention. If the body language is the same, then the intervention has not been successful.
=== Swish ===
The swish pattern is a process that is designed to disrupt a pattern of thought from one that used to lead to an unwanted behavior to one that leads to a desired behavior. This involves visualizing a 'cue' which leads into the unwanted behavior, such as a smoker's hand moving towards the face with a cigarette in it, and reprogramming the mind to 'switch' to a visualization of the desired outcome, such as a healthy-looking person, energetic and fit. In addition to visualization, auditory sound effects are often imagined to enhance the experience.
=== Reframing ===
Another technique, "reframing", functions through "changing the way you perceive an event and so changing the meaning. When the meaning changes, responses and behaviors will also change. Reframing with language allows you to see the world in a different way and this changes the meaning. Reframing is the basis of jokes, myths, legends, fairy tales and most creative ways of thinking." There are examples in children's literature; for example, the fictional Pollyanna would play The Glad Game whenever she felt down about life, to remind herself of the things that she could do, and not worry about the things she couldn't. Alice Mills also says that this occurs in Hans Christian Andersen's story where, to the surprise of the ugly duckling, the beautiful creatures welcome and accept him; gazing at his reflection, he sees that he too is a swan. Reframing is common to a number of therapies and is not original to NLP.
=== Well-formed outcome ===
In NLP this is one of a number of 'frames' wherein the desired state is considered as to its achievability and effect if achieved. A positive outcome must be defined by the client for their own use, be within the clients power to achieve, retain the positive products of the unwanted behaviors and produce an outcome that is appropriate for all circumstances.
=== VK/D ===
VK/D stands for 'Visual/Kinesthetic Dissociation'. This is a technique designed to eliminate bad feelings associated with past events by re-running (like a film, sometimes in reverse) an associated memory in a dissociated state. It combines elements of Ericksonian techniques, spatial sorting processes from Fritz Perls, reframing and 'changing history' techniques.
Metaphor
Largely derived from the ideas of Bateson and the techniques of Erickson, 'metaphor' in NLP ranges from simple figures of speech to allegories and stories. It tends to be used in conjunction with the skills of the Milton model to create a story which operates on many levels with the intention of communicating with the unconscious and to find and challenge basic assumptions.
State management
Sometimes called state control, state management is a neuro-linguistic programming (NLP) technique that involves actively trying to control the emotional and mental state of an individual. One method of actively achieving state management is anchoring, in which an individual associates a particular physical stimulus with an emotional state.
=== Covert hypnosis ===
Covert hypnosis is purportedly a method of using language patterns to hypnotise or persuade other people. It is referred to as "sleight of mouth" by Robert Dilts, building off the phrase "sleight of hand", which refers to a magician's skill in making things happen which appear impossible.
== References == | Wikipedia/Modeling_(NLP) |
Models (also sometimes known as The Models) are an Australian rock band formed in Melbourne, Victoria in August 1978. They went into hiatus in 1988, but re-formed in 2000, 2006 and 2008 to perform reunion concerts. The band began regularly performing again from 2010 onwards. "Out of Mind, Out of Sight", their only No. 1 hit, appeared on the Australian singles charts in July 1985. The related album, Out of Mind, Out of Sight, peaked at No. 3 on the Australian albums charts after its release in August. Out of Mind, Out of Sight appeared on the Billboard 200 albums chart, with the single, "Out of Mind, Out of Sight", peaking at No. 37 on the Billboard Hot 100 singles chart. An earlier song from the same album, "Barbados", had peaked at No. 2 on the Australian singles chart.
Models' early line-up included Andrew Duffield on keyboards, Mark Ferrie on bass guitar, Janis Freidenfelds (a.k.a. Johnny Crash) on drums and percussion, and Sean Kelly on vocals and lead guitar. A later line-up was mainstay Kelly on guitar, James Freud on vocals and bass, Roger Mason on keyboards, Barton Price on drums, and James Valentine on saxophone. Backing singers in the group included Zan Abeyratne and Kate Ceberano (both from I'm Talking) and Canadian-born Wendy Matthews. In early 1989, Duffield, Kelly, Matthews and Valentine were members of Absent Friends. On 27 October 2010, Models were inducted into the ARIA Hall of Fame by Matthews.
== History ==
=== 1977–1979: Early years ===
In 1977 Melbourne school friends Sean Kelly and James Freud formed their first band, Spread, which was soon renamed Teenage Radio Stars. They recorded two tracks for Suicide records' Lethal Weapons compilation album (1978).
Singer and guitarist Sean Kelly left in 1978 to form Models with bass guitarist Peter Sutcliffe (a.k.a. Pierre Voltaire and Pierre Sutcliffe, who won $503,000 in May 2014 on the Australian TV quiz show Million Dollar Minute), Ash Wednesday (formerly of JAB) on keyboards, and Janis Friedenfelds (a.k.a. Johnny Crash) on drums and percussion. Models were more pop-influenced than the earlier punk bands and had a wider appeal. The initial version of the group did not stay together for long: after six months, Sutcliffe was replaced on bass by Mark Ferrie (ex-Myriad), and in August 1979 Wednesday was replaced on keyboards by Andrew Duffield from Whirlywirld. Their first release, in October 1979, was a give-away shared single, "Early Morning Brain (It's Not Quite the Same as Sobriety)", backed with The Boys Next Door's "Scatterbrain". Friction within the band led to their decision to break up in November 1979; however, they rapidly reformed at the end of December when ex-Easybeats members Harry Vanda and George Young, by then record producers and songwriters, offered to cut some demos for them. Their second single, "Owe You Nothing", appeared in August 1980. Both singles were released on independent labels and did not chart on the Top 40 Australian singles chart according to the Kent Music Report.
=== 1980–1982: Alphabravocharliedeltaechofoxtrotgolf to Local and/or General ===
Models performed extensively both locally and interstate, supporting the Ramones, The B-52's, XTC, The Vapors and Midnight Oil on national tours. Rather than signing immediately, the group financed the recording of their first album to guarantee creative control. In November 1980, the Duffield, Ferrie, Friedenfelds and Kelly line-up released their first album, Alphabravocharliedeltaechofoxtrotgolf. They then, under manager Adrian Barker, signed to Mushroom Records and, as a sign of its respect for the band, the label agreed not to release any singles from the album, which peaked at No. 43 on the Australian Kent Music Report Albums Chart. It was well received by audiences on the live pub circuit. The group intended to record completely new material for their studio albums. Much of their earlier work was unreleased until 2002, when Models Melbourne, a compilation album of live material, was released.
Models' early style was a spiky, distinctive blend of new wave, glam rock, dub and pop: which included Kelly's strangled singing voice, Duffield's virtuoso synthesiser performances (he used the EMS Synthi AKS), and the band's cryptic, slightly gruesome, lyrics (e.g., "Hans Stand: A War Record" from Alphabravocharliedeltaechofoxtrotgolf), which were mostly written or co-written by Kelly.
Early in 1981, following a support slot for The Police, the group signed an international deal with A&M Records. Friedenfelds was replaced on drums by Mark Hough (a.k.a. Buster Stiggs) from New Zealand band The Swingers before recording commenced on their international label release. Friedenfelds went on to play with Sacred Cowboys, Beasts of Bourbon, The Slaughterman and Tombstone Hands. The band went to England to record at Farmyard Studios with Stephen Tayler producing; these tracks became the album Local and/or General.
In June, demo sessions recorded earlier in Australia so impressed the band that they were released as a 10" mini album, Cut Lunch (July 1981), which was produced by Tony Cohen and Models, except for one track produced by Split Enz keyboard player Eddie Rayner. Cut Lunch peaked at No. 37 on the albums chart and at No. 38 on the singles chart. It included the whimsical pop tune "Two Cabs to the Toucan".
In October, their second full-length album Local &/or General, was released. Local and/or General peaked at No. 30 and provided the single, "Local &/or General" in November, which did not chart.
Both albums helped widen their audience nationally, thanks to regular radio exposure on Triple J in Sydney and on community stations in other cities, as well as national TV exposure through their innovative music videos on programs such as the Australian Broadcasting Corporation (ABC-TV) pop music show Countdown.
During 1982, further line-up changes occurred with Ferrie and Hough leaving early in the year. Ferrie went on to form Sacred Cowboys with Garry Gray and Terry Doolan. He later (as of November 2010) became bass player in the RocKwiz house band on SBS TV. Hough became a graphic artist, art director and designer. James Freud (ex-Teenage Radio Stars, James Freud & Berlin) joined the band on bass and vocals, with John Rowell on guitar and Graham Scott on drums (both ex-Curse). Kelly and Freud had been in high school bands which developed into Teenage Radio Stars. Freud had a solo hit single, "Modern Girl", which peaked at No. 12 in 1980. Rowell and Scott left Models in May 1982, with Duffield following. New Zealand drummer, Barton Price (ex-Crocodiles, Sardine v) joined. They recorded a single, "On", produced by veteran rocker, Lobby Loyde, and released in August. It had no mainstream chart success, but peaked at No. 1 on the independent charts. Gus Till (ex-Beargarden) briefly joined on keyboards until Duffield rejoined in December. In 1982 they made a film, Pop Movie, which featured animation and live footage of the band, it was screened on TV rock show, Nightmoves, as well as at a few cinemas.
=== 1983–1985: The Pleasure of Your Company to Out of Mind, Out of Sight ===
Models' line-up of Duffield, Freud, Kelly and Price issued the highly regarded The Pleasure of Your Company in October 1983, produced by Nick Launay. Its big drum sound and dance-ability, reflected Launay's influence, and Freud's more radio-friendly voice made the album more accessible. The album was critically acclaimed and peaked at No. 12, with the single "I Hear Motion" becoming a No. 16 hit. Duffield later explained that the song's distinctive keyboard part had been inspired by a riff from Stevie Wonder's hit "Superstition". "I Hear Motion" was used on the soundtrack for the Yahoo Serious film Young Einstein (1988). The band released two other singles, "God Bless America" and "No Shoulders, No Head", but neither charted into the Top 50. The band supported David Bowie for the Australian leg of his Serious Moonlight Tour in November. Kelly and Duffield were invited to sing backing vocals on the INXS album, The Swing. The video for "God Bless America", from March 1984, featured backing singers Zan Abeyratne and Kate Ceberano (both members of I'm Talking). Kelly appeared ready to disband Models and was even rehearsing with a new band. Mushroom Records convinced him to continue with Models and their next single, "Big on Love" produced by Reggie Lucas, was released in November 1984 and peaked at No. 24.
Fellow Australian band INXS were fans of Models; their manager, Chris Murphy signed them to his MMA management company. The group created a hybrid of their alternative roots with a more commercial sound and, under the influence of Murphy, they reassessed their direction and moved towards a more radio-friendly format. By late 1984, Models relocated to Sydney and Duffield – with his crucial influence on the band's sound – was forced out by Murphy under acrimonious circumstances to be replaced by Roger Mason (ex-James Freud's Berlin) on keyboards and James Valentine on saxophone. Duffield released a solo album, Ten Happy Fingers in 1986 on his own Retrograde Records label. For touring during 1983 to 1985, the group was regularly augmented by backing singers Abeyratne and Ceberano; and in 1985, Canadian-born singer Wendy Matthews joined. Matthews and Kelly became a couple, remaining together for 11 years, and later founded the band Absent Friends.
In early 1985, Models started recording material for their next album, Out of Mind, Out of Sight, produced by Launay, Lucas and Mark Opitz. A single from the album, "Barbados", was released in March, which peaked at No. 2. It was a reggae influenced song co-written by Freud and Duffield (prior to his departure). The song related a tale of alcoholism and suicide, it later provided Freud with the titles of his two autobiographies, I Am the Voice Left from Drinking (2002) and I Am the Voice Left from Rehab (2007). The video clip was influenced by the film, The Deer Hunter, it included a cameo by Garry Gary Beers of INXS and was directed by Richard Lowenstein.
On 13 July, Models performed four songs for the Oz for Africa concert (part of the global Live Aid program) – "Big on Love", "I Hear Motion", "Stormy Tonight" and "Out of Mind, Out of Sight". It was broadcast in Australia (on both Seven Network and Nine Network) and on MTV in the United States. Models went on a national tour with I'm Talking in July. In November, the band appeared on The Royal Variety Performance for Prince Charles and Princess Diana – Rocking the Royals at the Victoria State Arts Centre. The band released their most commercially successful work with the No. 1 hit single "Out of Mind, Out of Sight" in June and the No. 3 album Out of Mind, Out of Sight in August. "Out of Mind, Out of Sight" was the only No. 1 single on the Australian singles chart for 1985 by an Australian artist. (Midnight Oil's Species Deceases, which peaked at No. 1 on the singles chart in December 1985, was an EP.) For the album, Models were Freud, Kelly, Mason, Matthews, Price and Valentine with Zan Abeyratne, and her twin sister, Sherine Abeyratne (Big Pig) on backing vocals.
"Cold Fever" released in October was their next single, which peaked into the Top 40. It was followed by a Christmas single, "King of Kings", which contains portions of a speech by Martin Luther King Jr., issued in December with all proceeds donated to the Salvation Army, but it did not chart into the Top 50. In 1986, Geffen Records released Out of Mind, Out of Sight in the US and it appeared on the Billboard 200 albums chart, with the single, "Out of Sight, Out of Mind", peaking at No. 37 on the Billboard Hot 100 singles chart. The band toured the US in November supporting Orchestral Manoeuvres in the Dark.
=== 1986–1988: Models' Media to dissolution ===
In 1986, Models went to the UK to record their next album, Models' Media, with Julian Mendelsohn and Opitz, at Trevor Horn's state-of-the-art SARM West Studios in London. Two singles reached the Top 30: "Evolution" in September, and "Let's Kiss" in November. Models' Media was released in December and peaked at No. 30 but was less successful than Out of Mind, Out of Sight. Models also featured on the Australian Made Tour of late 1986 to January 1987 with INXS, Mental as Anything, The Triffids, I'm Talking, The Saints, Divinyls and Jimmy Barnes on the ticket. "Hold On" was released in March 1987 and peaked in the Top 30; their final single was a cover of The Beatles' "Oh! Darling" in September, which peaked in the Top 50.
During 1987, Ceberano and Matthews sang together on the soundtrack for ABC-TV series, Stringer; the resultant album, You've Always Got the Blues, was released in 1988 and peaked at No. 4 on the albums chart. Models members, including Mason as lead singer and Kelly on bass guitar, formed a side-project, The Clampetts, which recorded covers of nine country music tracks, released in 1987 as The Last Hoedown. Valentine left Models to pursue a radio and television journalism career.
In 1988, the Thank You Goodnight Tour was conducted, but the pressures of ten years of touring, as well as financial troubles, hastened the break-up of Models, which was announced in June 1988. However, in 2008, Kelly disputed the break-up: "I remember in the late '80s I noticed James' [Freud] Record Company put out a press release that we'd split up, which was completely inaccurate. Because we had so many individuals in the group, we've always been able to sustain it in one form or another - and fortunately for me they've always let me be involved. As long as I'm there, we get to claim that continuity."
=== 1988–current: post-dissolution and reunions ===
Models' extended live exposure ensured that they stayed in the public eye when other contemporaries had been forgotten: the band's later work remained popular on radio throughout the 1990s; this, coupled with critical acclaim and cult appeal of earlier work, re-stimulated interest in their work in the latter half of that decade. The band reformed for a few gigs in 2000; in 2001 their rarities album Melbourne was released. Freud has written two memoirs, I Am The Voice Left From Drinking (2002) and I Am The Voice Left From Rehab (2007); the titles are both taken from "Barbados" and allude to his addiction to drugs and alcohol, and his subsequent recovery attempts.
Kelly and Matthews formed Absent Friends in early 1989, which included ex-Models members Duffield, Mason and Valentine. With Matthews on lead vocals, their 1990 hit single, "I Don't Want to Be with Nobody but You", peaked at No. 4 on the ARIA Charts. The associated album, Here's Looking Up Your Address, peaked at No. 7. Absent Friends disbanded in 1991 and Kelly fronted The Dukes from 1991 to 1994. Matthews scored a No. 11 hit with her first solo album, Émigré, late in 1990. She followed with Lily, which peaked at No. 2 in 1992 and provided her best-performing single, "The Day You Went Away", which also peaked at No. 2. Former drummer Scott was a member of Satellite (1993–1997). Matthews and Kelly separated as a couple in the mid-1990s.
Duffield wrote music (including the theme) for the Australian children's TV series, Round the Twist; and in 2007 composed all music and sound effects for the TV comedy, Kick. Duffield teamed up with Phil Kennihan to found a successful advertising music partnership.
Mason has composed soundtracks for many feature films and television series both locally and internationally. Valentine later worked in children's TV, is a popular radio host on 702 ABC Sydney and published a successful series of children's books. Price returned to New Zealand after stints with various Australian bands and work on the world's first drum sample CD. Wednesday formed Crashland and plays with German avant-garde band Einstürzende Neubauten.
Various versions of Models have reformed on several occasions for short tours, including in 2006 and in September 2008. The 2008 version was: Kelly, Freud, his son Jackson Freud (from Attack of the Mannequins) on guitar, Tim Rosewarne (ex-Big Pig, Chocolate Starfish) on keyboards and Cameron Goold (Propaganda Klann, Christine Anu backing band) on drums. In August 2010, Duffield, Ferrie, Kelly and Price reformed for two concerts in Sydney and Melbourne. On 27 October, Models were inducted into the ARIA Hall of Fame by Matthews. The line-up of Duffield, Ferrie, Kelly, Mason, Price and Valentine performed "I Hear Motion" and "Evolution". Matthews recalled meeting the group for the first time at a recording session – she was due to provide backing vocals but they were busy playing indoor cricket in the studio. During the ceremony, Kelly explained Freud's absence by saying he had "another bicycle accident". A week later, Freud was found dead at his Hawthorn home on 4 November in a suspected suicide.
In 2013, Models (consisting of Duffield, Ferrie, Kelly, and Price) issued a self-released four-song EP titled GTK. A follow-up EP was issued in 2015: titled Memo, it also consisted of four songs.
== Members ==
Current members
Sean Kelly – lead vocals, guitar, clarinet (1978–1988, 2000–2001, 2006, 2008, 2010–present)
Mark Ferrie – bass guitar (1979–1982, 2001, 2010–present), backing vocals and occasional lead vocals (1979–1982, 2010–present)
Andrew Duffield – keyboards, backing vocals (1979–1982, 1982–1984, 2010–present), occasional lead vocals (2010–present)
Ash Davies – drums (2015–present)
Former members
Janis Freidenfelds a.k.a. Johnny Crash – drums, percussion (1978–1981; died 2014)
Peter Sutcliffe a.k.a. Pierre Voltaire – bass guitar (1978–1979)
Ash Wednesday – keyboards (1978–1979, 2001)
Mark Hough a.k.a. Buster Stiggs – drums (1981–1982; died 2018)
James Freud – bass guitar, lead and backing vocals (1982–1988, 2000–2001, 2006, 2008; died 2010)
John Rowell – guitar (1982)
Graham Scott – drums (1982)
Barton Price – drums (1982–1988, 2000, 2010–2015)
Gus Till – keyboards (1982)
James Valentine – saxophone (1984–1987)
Roger Mason – keyboards, backing vocals (1984–1988, 2000)
Kate Ceberano – backing vocals (1983–1985)
Zan Abeyratne – backing vocals (1983–1985)
Sherine Abeyratne – backing vocals (1983–1985)
Wendy Matthews – backing vocals (1985–1988)
Jackson Freud – guitar (2008)
Tim Rosewarne – keyboards (2008)
Cameron Goold – drums (2008)
=== Timeline ===
== Discography ==
=== Studio albums ===
=== Compilation albums ===
=== Live albums ===
=== Extended plays ===
=== Singles ===
== Notes ==
A.^ "Early Morning Brain (It's Not Quite the Same as Sobriety)" was originally released by Models as a shared single with The Boys Next Door's "Scatterbrain" on the flip-side.
B.^ Cut Lunch (EP) charted on the Kent Music Report Albums Chart, with "Cut Lunch" and "Two Cabs to the Toucan" as the most played radio tunes. "Cut Lunch" also charted on the related Singles Chart.
== Awards and nominations ==
=== ARIA Music Awards ===
The ARIA Music Awards is an annual awards ceremony that recognises excellence, innovation, and achievement across all genres of Australian music. They commenced in 1987. Models were inducted into the Hall of Fame in 2010.
=== Countdown Music Awards ===
Countdown was an Australian pop music TV series on national broadcaster ABC-TV from 1974 to 1987; it presented music awards from 1979 to 1987, initially in conjunction with magazine TV Week. The TV Week / Countdown Awards were a combination of popular-voted and peer-voted awards.
== References ==
== External links ==
Models at IMDb
Models discography at Billboard
Models discography at MusicBrainz
Models discography at Discogs | Wikipedia/Models_(band) |
The Model is a 2016 Danish psychological thriller film directed by Mads Matthiesen and written by Matthiesen, Martin Zandvliet and Anders August. The film stars Maria Palm and Ed Skrein.
== Plot ==
A young, mentally ill Danish model named Emma is fighting for a breakthrough in the Parisian fashion world. Her journey to Paris, the center of the fashion world, and her glamorous life as a top model turn into drama when she meets the attractive and somewhat older fashion photographer Shane White. Emma begins to love her lifestyle, and with Shane by her side, the fashion industry's doors begin to open. But Emma soon finds that love also has gloomy facets, and her dreams are challenged by both Shane and an unexpected, dark side of herself.
== Cast ==
Maria Palm as Emma
Ed Skrein as Shane White
Yvonnick Muller as André
Dominic Allburn as Sebastian
Virgile Bramly as Marcel
Thierry Hancisse as Bernard
Marco Ilsø as Frederik
== Reception ==
The Model received mixed reviews from critics. On review aggregator Rotten Tomatoes, the film has a rating of 71%, based on seven reviews, with an average rating of 5.87/10. Metacritic gives the film a score of 58 out of 100, based on six critics, indicating "mixed or average reviews".
The main criticisms of the film were its narrative, particularly plot development, and lack of character development. Variety stated, "The screenplay by Matthiessen and co-writers Martin Pieter Zandvliet and Anders Frithiof August is compelling up until the melodramatic, credulity-straining final act, although the characters, apart from Emma, feel underdeveloped". Neil Genzlinger of The New York Times wrote, "The bodies are thin in the Danish film and so is the plot, though the real-life model who plays the lead role acquits herself well enough".
== References ==
== External links ==
Official website
The Model at IMDb | Wikipedia/The_Model_(film) |
The fluid mosaic model explains various characteristics regarding the structure of functional cell membranes. According to this biological model, there is a lipid bilayer (a layer two molecules thick, consisting primarily of amphipathic phospholipids) in which protein molecules are embedded. The phospholipid bilayer gives fluidity and elasticity to the membrane. Small amounts of carbohydrates are also found in the cell membrane. The biological model, which was devised by Seymour Jonathan Singer and Garth L. Nicolson in 1972, describes the cell membrane as a two-dimensional liquid where embedded proteins are generally randomly distributed. For example, it is stated that "A prediction of the fluid mosaic model is that the two-dimensional long-range distribution of any integral protein in the plane of the membrane is essentially random."
== Chemical makeup ==
== Experimental evidence ==
The fluid property of functional biological membranes had been determined through labeling experiments, x-ray diffraction, and calorimetry. These studies showed that integral membrane proteins diffuse at rates affected by the viscosity of the lipid bilayer in which they were embedded, and demonstrated that the molecules within the cell membrane are dynamic rather than static.
Previous models of biological membranes included the Robertson Unit Membrane Model and the Davson-Danielli Tri-Layer model. These models had proteins present as sheets neighboring a lipid layer, rather than incorporated into the phospholipid bilayer. Other models described repeating, regular units of protein and lipid. These models were not well supported by microscopy and thermodynamic data, and did not accommodate evidence for dynamic membrane properties.
An important experiment that provided evidence supporting fluid and dynamic biological membranes was performed by Frye and Edidin. They used Sendai virus to force human and mouse cells to fuse and form a heterokaryon. Using antibody staining, they were able to show that the mouse and human proteins remained segregated to separate halves of the heterokaryon a short time after cell fusion. However, the proteins eventually diffused and over time the border between the two halves was lost. Lowering the temperature slowed the rate of this diffusion by causing the membrane phospholipids to transition from a fluid to a gel phase. Singer and Nicolson rationalized the results of these experiments using their fluid mosaic model.
The fluid mosaic model explains changes in structure and behavior of cell membranes under different temperatures, as well as the association of membrane proteins with the membranes. While Singer and Nicolson had substantial evidence drawn from multiple subfields to support their model, recent advances in fluorescence microscopy and structural biology have validated the fluid mosaic nature of cell membranes.
== Subsequent developments ==
=== Membrane asymmetry ===
Additionally, the two leaflets of biological membranes are asymmetric and divided into subdomains composed of specific proteins or lipids, allowing spatial segregation of biological processes associated with membranes. Cholesterol and cholesterol-interacting proteins can concentrate into lipid rafts and constrain cell signaling processes to only these rafts. Another form of asymmetry was shown by the work of Mouritsen and Bloom in 1984, where they proposed a Mattress Model of lipid-protein interactions to address the biophysical evidence that the membrane can range in thickness and hydrophobicity of proteins.
=== Non-bilayer membranes ===
The existence of non-bilayer lipid formations with important biological functions was confirmed subsequent to publication of the fluid mosaic model. These membrane structures may be useful when the cell needs to propagate a non-bilayer form, which occurs during cell division and the formation of a gap junction.
=== Membrane curvature ===
The membrane bilayer is not always flat. Local curvature of the membrane can be caused by the asymmetry and non-bilayer organization of lipids as discussed above. More dramatic and functional curvature is achieved through BAR domains, which bind to phosphatidylinositol on the membrane surface, assisting in vesicle formation, organelle formation and cell division. Curvature development is in constant flux and contributes to the dynamic nature of biological membranes.
=== Lipid movement within the membrane ===
During the 1970s, it was acknowledged that individual lipid molecules undergo free lateral diffusion within each of the layers of the lipid membrane. Diffusion occurs at a high speed, with an average lipid molecule diffusing ~2μm, approximately the length of a large bacterial cell, in about 1 second. It has also been observed that individual lipid molecules rotate rapidly around their own axis. Moreover, phospholipid molecules can, although they seldom do, migrate from one side of the lipid bilayer to the other (a process known as flip-flop). However, flip-flop movement is enhanced by flippase enzymes. The processes described above influence the disordered nature of lipid molecules and interacting proteins in the lipid membranes, with consequences to membrane fluidity, signaling, trafficking and function.
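The quoted displacement of roughly 2 μm in about one second can be recovered from the standard relation for two-dimensional Brownian motion; as a rough illustrative estimate, assuming a typical lateral diffusion coefficient of about D ≈ 1 μm²/s for a membrane lipid (a value not stated above):

\[
\sqrt{\langle r^{2} \rangle} = \sqrt{4 D t} \approx \sqrt{4 \times 1\,\mu\mathrm{m}^{2}/\mathrm{s} \times 1\,\mathrm{s}} = 2\,\mu\mathrm{m}.
\]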
== Restrictions to lateral diffusion ==
There are restrictions to the lateral mobility of the lipid and protein components in the fluid membrane imposed by zonation. Early attempts to explain the assembly of membrane zones include the formation of lipid rafts and “cytoskeletal fences”, corrals wherein lipid and membrane proteins can diffuse freely but which they can seldom leave. These ideas remain controversial, and alternative explanations are available, such as the proteolipid code.
=== Lipid rafts ===
Lipid rafts are membrane nanometric platforms with a particular lipid and protein composition that laterally diffuse, navigating on the liquid bilipid layer. Sphingolipids and cholesterol are important building blocks of the lipid rafts.
=== Protein complexes ===
Cell membrane proteins and glycoproteins do not exist as single elements of the lipid membrane, as first proposed by Singer and Nicolson in 1972. Rather, they occur as diffusing complexes within the membrane. The assembly of single molecules into these macromolecular complexes has important functional consequences for the cell; such as ion and metabolite transport, signaling, cell adhesion, and migration.
=== Cytoskeletal fences (corrals) and binding to the extracellular matrix ===
Some proteins embedded in the bilipid layer interact with the extracellular matrix outside the cell, cytoskeleton filaments inside the cell, and septin ring-like structures. These interactions have a strong influence on shape and structure, as well as on compartmentalization. Moreover, they impose physical constraints that restrict the free lateral diffusion of proteins and at least some lipids within the bilipid layer.
When integral proteins of the lipid bilayer are tethered to the extracellular matrix, they are unable to diffuse freely. Proteins with a long intracellular domain may collide with a fence formed by cytoskeleton filaments. Both processes restrict the diffusion of proteins and lipids directly involved, as well as of other interacting components of the cell membranes.
Septins are a family of GTP-binding proteins highly conserved among eukaryotes. Prokaryotes have similar proteins called paraseptins. They form compartmentalizing ring-like structures strongly associated with the cell membranes. Septins are involved in the formation of structures such as cilia and flagella, dendritic spines, and yeast buds.
== Historical timeline ==
1895 – Ernest Overton hypothesized that cell membranes are made out of lipids.
1925 – Evert Gorter and François Grendel found that red blood cell membranes are formed by a fatty layer two molecules thick, i.e. they described the bilipid nature of the cell membrane.
1935 – Hugh Davson and James Danielli proposed that lipid membranes are layers composed by proteins and lipids with pore-like structures that allow specific permeability for certain molecules. Then, they suggested a model for the cell membrane, consisting of a lipid layer surrounded by protein layers at both sides of it.
1957 – J. David Robertson, based on electron microscopy studies, established the "Unit Membrane Hypothesis". This states that all membranes in the cell, i.e. plasma and organelle membranes, have the same structure: a bilayer of phospholipids with monolayers of proteins at both sides of it.
1972 – SJ Singer and GL Nicolson proposed the fluid mosaic model as an explanation for the data and latest evidence regarding the structure and thermodynamics of cell membranes.
1997 – K Simons and E Ikonen proposed the lipid raft theory as an initial explanation of membrane zonation.
2024 – TA Kervin and M Overduin proposed the proteolipid code to fully explain membrane zonation as the lipid raft theory became increasingly controversial.
== Notes and references == | Wikipedia/Fluid_mosaic_model |
A model city is a city built to a high standard and intended as a model for others to imitate. The term was first used in 1854.
Model city may specifically refer to:
Model City, Florida, also known as Liberty City, a neighborhood of Miami
Model City, New York, a hamlet in Lewiston
== See also ==
Model (disambiguation)
Model Town (disambiguation)
Model village, a type of community
Model Colony, Karachi, a neighborhood in Pakistan
Model Housing Estate, a residential area in Hong Kong
Miniature park, a scale model of a settlement
Model Cities Program, an element of US President Lyndon Johnson's Great Society and War on Poverty
List of planned cities | Wikipedia/Model_City_(disambiguation) |
Model is the third studio album by American indie rock band Wallows. It was released on May 24, 2024, through Atlantic Records. It follows their 2022 studio album, Tell Me That It's Over. It was supported by five singles: "Your Apartment", "Calling After Me", "Bad Dream", "A Warning", and "You (Show Me Where My Days Went)".
== Release and promotion ==
Wallows released the first single from the album, "Your Apartment", on February 16, 2024. On March 5, they announced the album's title and release date, as well as the second single, "Calling After Me", which was released on March 21. On March 20, the album's tracklist was revealed. On April 26, the band released the album's third single, "Bad Dream". On May 10, the fourth single, "A Warning", was released. On May 21, the fifth and final single, "You (Show Me Where My Days Went)", was released accompanied by a music video which premiered on their YouTube channel three days later.
On March 11, the band released a promotional short film shot in the Kia Forum announcing the Model World Tour, on which they will be supported by Benee for their North American leg.
== Critical reception ==
Model received generally positive reviews from critics. While many critics praised the album's sound and production, some criticized its repetition and lack of originality. Writing for NME, Anagricel Duran found that the album did not take enough creative risks in comparison to the band's previous releases and felt "bogged down by a few too many wet ballads".
== Track listing ==
== Personnel ==
Wallows
Braeden Lemasters – vocals (tracks 1–7, 9, 11), electric guitar (1–9, 11, 12), acoustic guitar (5, 11), synthesizer (11)
Dylan Minnette – vocals (all tracks), bass guitar (tracks 1–5, 7–12), bass synthesizer (7, 9), synthesizer (8)
Cole Preston – drums, percussion (all tracks); synthesizer (tracks 1, 3–12), bass synthesizer (1, 3, 4, 6, 8–12), piano (2, 4, 5, 8, 10–12), vocals (2, 6), electric guitar (3, 4, 6, 7, 10)
Additional contributors
John Congleton – production, engineering
Blake Slatkin – production (track 4)
Randy Merrill – mastering
Mark Stent – mixing
Clint Weleander – additional engineering
Sean Cook – additional engineering
Kieron Beardmore – mixing assistance
Matt Wolach – mixing assistance
Brian Walsh – saxophone (track 10)
Nate Mercereau – electric guitar (track 11)
== Charts ==
== References == | Wikipedia/Model_(album) |
Modell is the German word for "model" and also a surname. It may refer to:
== People ==
Arnold Modell (1924–2022), American professor of social psychiatry
Art Modell (1925–2012), American business executive and sports team owner
Bernadette Modell (born 1935), British geneticist
David Modell (1961–2017), American business executive and sports team owner
Frank Modell (1917–2016), American cartoonist
Merriam Modell (1908–1994), American author of pulp fiction
Pat Modell (1931–2011), American TV actress
Rod Modell, given name for Deepchord, electronic music producer from Detroit, Michigan
William Modell (1921–2008), American businessman and chairman of Modell's Sporting Goods
== Companies ==
Modell's Sporting Goods, a sporting goods retailer based in New York City
Schabak Modell, a die-cast toy producer in Germany
Schuco Modell, a die-cast toy producer in Germany
== Media and entertainment ==
"Das Modell", a song recorded by the electro-pop group Kraftwerk
Modell Bianka, a 1951 East German film
== Other uses ==
Berliner Modell, a learning theory
Modell M and Modell S, types of Mauser bolt-action rifles
V-Modell, a software development model
== See also ==
Model (disambiguation)
Modella, Victoria, a rural locality in Australia
Micky Modelle, a music DJ and producer
Modello, the Italian word for "model" or preparatory study for a work of art | Wikipedia/Modell_(disambiguation) |
Models, also known as The Three Models and Les Poseuses, is a work by Georges Seurat, painted between 1886 and 1888 and held by the Barnes Foundation in Philadelphia. Models was exhibited at the fourth Salon des Indépendants in spring of 1888.
The piece, the third of Seurat's six major works, is a response to critics who deemed Seurat's technique inferior for being cold and unable to represent life. As a response, the artist offered a nude depiction of the same model in three different poses. In the left background is part of Seurat's 1884-1886 painting A Sunday Afternoon on the Island of La Grande Jatte.
Models is considered distinctive because of its pointillist technique and the political implications of its depiction of the nude female body.
== Seurat's life ==
Georges-Pierre Seurat was the third child of Ernestine Faivre and Antoine-Chrysostome Seurat. He was born in Paris on 2 December 1859 into a bourgeois family. He entered the École des Beaux-Arts in 1878. He then studied under Henri Lehmann. He, along with artists such as Paul Signac, Albert Dubois-Pillet, and Odilon Redon, was responsible for the Salon des Indépendants, which they established as an alternative to the state-sponsored Salon exhibitions.
Seurat is best known for A Sunday Afternoon on the Island of La Grande Jatte, 1884, which was displayed in 1886 at the final Impressionist exhibition and subsequently exhibited at the Salon des Indépendants. The painting is known to be the start of the Neo-Impressionist movement. Seurat is also praised for his technique of pointillism, which in an almost scientific manner breaks the paint surface into dots of color that blend together when seen from afar.
== Pointillism and color theory ==
Models is a notable example of Pointillism, which refers to painting through a series of colored dots that together make up an image.
In an article in the Art Bulletin, Norma Broude compares pointillism to photo printing in 1880s France. Though not the same, there are large similarities in the results, given the preoccupation with color theory and the meticulous planning of paint application in pointillism. In his works, Seurat adopted this approach to replicate the luminosity and tones found in nature. Seurat's faith in color science, use of bright colors, and mechanical brush strokes are characteristic of Neo-Impressionism.
== Les Poseuses ==
Seurat painted two versions of Les Poseuses. The smaller of the two is more in accord with the divisionist technique that Seurat had invented, and is favoured by Seurat specialists. This version is on the cover of the catalogue for the 1991 Seurat exhibition at the Metropolitan Museum of Art. Though the painting once belonged to the merchant Heinz Berggruen, it went on to become part of the Paul Allen collection and then his estate. In 1947, at the sale of the collection of Félix Fénéon, an early advocate and promoter of Seurat, France acquired studies for the painting that now reside in the Musée d'Orsay. In November 2022, Christie's auction house sold the painting as part of the Paul G. Allen collection auction for $149.2m (£131m), including fees.
Painted between 1886 and 1888, Les Poseuses was Seurat’s response to criticism of his painting A Sunday Afternoon on the Island of La Grande Jatte. Critics at the time had claimed that the painting did not depict figures with sufficient realism. Les Poseuses has sometimes been interpreted as a response to this criticism, and the inclusion of A Sunday Afternoon on the Island of La Grande Jatte in the composition serves to connect the works. The incorporation of the earlier canvas within the picture also serves to make clear that the models are seen in the setting of the studio.
Les Poseuses roughly translates as "the posers," and the typical English translation of the title as "Models" obscures some of its original meaning. The title establishes a contrast with the subject of the painting, in which the models appear to be off duty, not in the process of posing. Seurat painted the figures without idealizing them. By showing the banal realities of their work as models, he heightens the sense of their realness. They are not models in the sense of muses, but they are women who are earning money. Scholars have suggested that this approach complicates the traditional way that women have been objectified in painting.
The large size of the painting also challenged long-standing art historical traditions. In academic painting, larger canvases were typically reserved for history paintings, which aimed to depict mythological, religious, or historical scenes and events. Genre paintings, which tended to represent scenes of daily life, were usually smaller in scale. Seurat enlarged a banal and casual scene to the dimensions of a history painting, thereby subverting the traditional hierarchy.
The women's poses may also allude to earlier and widely-recognized paintings, such as Édouard Manet's 1863 Luncheon on the Grass or Jean-Auguste-Dominique Ingres's 1808 The Valpinçon Bather.
Furthermore, the English art critic Waldemar Januszczak believes this painting breaks the fourth wall, offering a glimpse into the poser who is the original source of the women depicted in A Sunday Afternoon on the Island of La Grande Jatte. He also points out that the model used for Les Poseuses may also be the same one used for the largest figure in La Grande Jatte; and that the hat and clothing worn by the rightmost, seated woman in La Grande Jatte may also appear in Les Poseuses.
== Gallery ==
== See also ==
List of paintings by Georges Seurat
== References ==
== Bibliography ==
Aichele, K. Porter (1989). "Seurat's 'Les Poseuses' in the Context of French Realist Literature". Nineteenth-Century French Studies. 17 (3/4): 385–396. ISSN 0146-7891.
Broude, Norma (1974). "New Light on Seurat's "Dot": Its Relation to Photo-Mechanical Color Printing in France in the 1880's". The Art Bulletin. 56 (4): 581–589. ISSN 0004-3079.
Distel, Anne (1992) [1991]. Seurat (in French). Paris: Ed. du Chêne. ISBN 9782851087119. OCLC 463717128, 935582389.
Dorra, Henri; Rewald, John (1959). Seurat : L'oeuvre peint, biographie et catalogue critique (in French). Les Beaux-Arts. OCLC 873288696.
Hauke, César M. de; Seurat, Georges (1962). Seurat et son oeuvre; [catalogue] (in French). Gründ. OCLC 1266200.
Herbert, Robert L.; Cachin, Françoise (1991). Georges Seurat, 1859-1891. New York, NY: Metropolitan Museum of Art : Distributed by Abrams. ISBN 9780870996184. OCLC 23870062.
Herbert, Robert (2001). Seurat : drawings and paintings. New Haven, CT: Yale University Press. ISBN 9780300071313. OCLC 45002103.
Ireson, Nancy (2010). "The pointillist and the past: three English views of Seurat". The Burlington Magazine. 152 (1293): 799–803. ISSN 0007-6287.
Kostka, Alexandre (2000). "Two ladies vanishing : die "Poseuses" von Georges Seurat in der Sammlung Harry Graf Kessler; Kunsttransfer als Teilrezeption". In Fleckner, Uwe; Schieder, Martin; Zimmermann, Michael F (eds.). Jenseits der Grenzen. französische und deutsche Kunst vom Ancien Régime bis zur Gegenwart : Thomas W. Gaehtgens zum 60. Geburtstag 1 1 (in German). Köln: DuMont. ISBN 9783770153411. OCLC 886754027.
Madeleine-Perdrillat, Alain (1990). Seurat. Rizzoli. ISBN 9780847812868. OCLC 886215932.
Nochlin, Linda (March 1994). "Body politics: Seurat's 'Poseuses'" (PDF). Art in America (3). Art Media Holdings: 71–79. ISSN 0004-3214. OCLC 959051968.
Rewald, John (1943). "Georges Seurat". Translated by Abel, Lionel. New York: Wittenborn and Co. OCLC 561921719. Retrieved 2018-10-23.
Rich, Daniel Catton (1958), Seurat - paintings and drawings, Art Institute of Chicago, OCLC 313002001, Art Institute of Chicago, January 16 - March 7, 1958; the Museum of Modern Art, New York, March 24 - May 11, 1958
Tate. "Neo-impressionism – Art Term". Tate. Retrieved 2020-11-24.
== External links ==
"Georges Seurat: Models (Poseuses)". Barnes Collection Online — Georges Seurat: Models (Poseuses). Retrieved 2018-11-01.
Butcher, David (2019-11-06). "The Art Mysteries with Waldemar Januszczak: Seurat's Les Poseuses". Radio Times. Retrieved 2020-03-24. | Wikipedia/Models_(painting) |
Marco Francesco Andrea Pirroni (born 27 April 1959) frequently credited simply as Marco, is a British guitarist, songwriter and record producer. He has worked with Adam Ant, Sinéad O'Connor, Siouxsie and the Banshees and many others from the late 1970s to the present day.
== Early years ==
Born to Italian parents in Camden Town, London, Pirroni made his first appearance on stage with Siouxsie and the Banshees in their début gig, at the 100 Club Punk Festival in 1976. Sid Vicious, future bassist for the Sex Pistols, was on drums. Pirroni also formed a short-lived punk rock band called the Models with future fellow Ants member Terry Lee Miall and future Wolfgang Press member Mick Allen, plus singer Cliff Fox. They recorded a Peel Session and released the single (the first of Marco's career) "Freeze"/"Man of the Year" on the Step Forward label in 1977. After his departure from The Models, Pirroni started a new project, Rema-Rema, a short-lived London punk rock group consisting of Gary Asquith (guitar/vocals), Marco Pirroni, Michael Allen (bass/vocals), Mark Cox (keyboards) and Dorothy Prior (drums, generally known only as "Max"). The band released a four-track EP, Wheel in the Roses (released 1980 on 4AD), which featured one side of studio recordings and another of live material. However, the group dissolved when Marco Pirroni joined Adam and the Ants.
== Adam and the Ants/Adam Ant ==
Pirroni was lead guitarist and co-songwriter in the second incarnation of Adam and the Ants, co-penning two UK number one singles and a further four Top Ten hits with Ant. The two albums he co-wrote for Adam and the Ants, Kings of the Wild Frontier and Prince Charming, both made the Top 10 in the UK Albums Chart ("Kings" number 1; "Prince Charming" number 2).
When Adam and the Ants disbanded in 1982, Pirroni was retained as Adam Ant's co-writer and studio guitarist; they produced another number-one single ("Goody Two Shoes") and an album (Friend or Foe), followed by four more Top 20 hits. Ant and Pirroni won two shared Ivor Novello Awards for "Stand and Deliver".
== The Wolfmen ==
Pirroni was a member of the Wolfmen with Chris Constantinou. They released one EP, several singles, wrote music for television advertisements and released a début album, entitled Modernity Killed Every Night. The Wolfmen released their second album, Married to the Eiffel Tower, in 2011.
== Personal life ==
After living in London's Marylebone for several years, Pirroni relocated to north Derbyshire in 2013.
== Discography ==
With Rema-Rema
Wheel in the Roses (1980)
With Adam and the Ants
Kings of the Wild Frontier (1980)
Prince Charming (1981)
With Cowboys International
Today Today (1980)
With Adam Ant
Friend or Foe (1982)
Strip (1983)
Vive Le Rock (1985)
Manners & Physique (1990)
Persuasion (unreleased)
Wonderful (1995)
Adam Ant Is the Blueblack Hussar in Marrying the Gunner's Daughter (2013) - four tracks
With Sinéad O'Connor
The Lion and the Cobra (1987)
I Do Not Want What I Haven't Got (1990)
Universal Mother (1994)
How About I Be Me (And You Be You)? (2012)
With Spear of Destiny
Outland (1987)
With The Slits
Revenge of the Killer Slits (2006)
With The Wolfmen
Modernity Killed Every Night (2008)
Married to the Eiffel Tower (2011)
With Department S
"Wonderful Day"
"Is Vic There (Slight return)"
== References ==
== External links ==
Official MySpace for The Wolfmen | Wikipedia/The_Models |
Modelo, a Spanish word for model, may refer to:
== Places ==
Modelo, Santa Catarina, a city in Brazil
Modelo Formation, a geologic formation in southern California, U.S.
Modelo Group, a geologic group in Mexico
== Companies ==
El Modelo, a restaurant in Albuquerque, New Mexico, U.S.
Grupo Modelo, a brewery in Mexico
Modelo Brewery, a brewery in Cuba
Modelo Continente, a Portuguese hypermarket chain owned by Continente
== Music ==
"La Modelo" (Ozuna song), 2017
"La Modelo", a song by José Capmany
== Structures ==
Mercado Modelo (disambiguation)
Modelo Market, a handicraft market in Salvador, Bahia, Brazil
Modelo Museum of Science and Industry, Toluca, Mexico
=== Prisons ===
La Modelo, Bogota, Colombia
Cárcel Modelo, Madrid, Spain
Presidio Modelo, Nueva Gerona, Cuba
== See also ==
Las modelos (disambiguation)
Model (disambiguation) | Wikipedia/Modelo_(disambiguation) |
"Das Model" ("The Model" in English) is a song recorded by the German group Kraftwerk in 1978, written by musicians Ralf Hütter and Karl Bartos, with artist Emil Schult collaborating on the lyrics. It is featured on the album, Die Mensch-Maschine (known in international versions as The Man-Machine).
In 1981, the song was re-released to coincide with the release of the studio album Computerwelt (Computer World in English). It reached no. 1 in the UK singles chart. Both the German and English versions of the song have been covered by other artists, including Snakefinger, Hikashu, Big Black and Robert.
== Background ==
The lyrics were written by Emil Schult, who was in love with a model when he wrote the song. He also composed music for the song, though it was too guitar-heavy for the musical concept of Kraftwerk and it was rewritten by Bartos and Hütter to fit the sound of the band.
As with all of the songs on The Man-Machine, "The Model" was released in both German- and English-language versions. The lyrics are very close between the two versions, with the exception of a guttural-sounding "Korrekt!" added after the line "Sie trinkt in Nachtclubs immer Sekt" in the German version. (The English lyric is "She's going out to nightclubs, drinking just champagne.") This was an in-joke by the band. In his autobiography, I Was A Robot, former Kraftwerk member Wolfgang Flür explains:
Our favourite discothèque, the Mora, lay in the Schneider-Wibbel Gasse in the middle of Düsseldorf's old town, and there was a waiter who worked there who always greeted new guests with the words "Hallöchen! Sekt? Korrrrrrrekt!" You didn't have the chance to contradict him, because he always answered himself. He loved selling champagne to the guests, largely because it was the drink on which he earned the highest commission, and he forced it on everyone. We'd heard him so often, and he was such a fine example of Düsseldorf chic, that we invited him into our studio when we were recording "The Model" so that he could speak his smug slogan directly into the microphone. That's why his pithy "Sekt? Korrrrrrrekt!" appears in our most famous song.
== Charts ==
== Certifications and sales ==
== Rammstein cover ==
German rock band Rammstein covered the German version of "Das Model" in 1997 as "Das Modell". It was released as a non-album single. "Das Modell" is introduced by a French phrase spoken by film editor Mathilde Bonnefoy. The single contains three non-album tracks taken from the Sehnsucht recording sessions. In the special version of "Alter Mann", Bobo (Christiane Hebold) sings alongside Till Lindemann in the chorus.
=== Track listings ===
Promo CD
Enhanced CD
=== Charts ===
==== Weekly charts ====
==== Year-end charts ====
== See also ==
List of UK singles chart number ones of the 1980s
== References ==
== External links ==
"Das Model" at Discogs (list of releases)
"Das Modell" covers (in Russian) | Wikipedia/Das_Model |
Models is the second studio album by British electronic musician and producer Lee Gamble. It was released on 20 October 2023 by Hyperdub Records.
== Background ==
On 25 July 2023, Lee Gamble announced the release of his studio album, along with the first single "She's Not".
== Critical reception ==
Models was met with "generally favorable" reviews from critics. At Metacritic, which assigns a weighted average rating out of 100 to reviews from mainstream publications, this release received an average score of 75, based on 4 reviews.
Writing for Pitchfork, Daniel Bromfield said "Models is a cold, sad, wispy album whose songs are like ghosts trying to communicate their unfinished business, unable to puncture the barrier between their plane of existence and ours. The seven tracks on the UK producer's new album don't just deconstruct pop music; they obliterate it".
== Track listing ==
== References ==
== External links ==
Models at Discogs (list of releases)
Models at MusicBrainz (list of releases) | Wikipedia/Models_(album) |
Richard Vigneault (born March 18, 1956) is a Canadian retired professional wrestler, trainer, and television presenter, better known by his ring name, "The Model" Rick Martel. He is best known for his appearances with the American Wrestling Association, the World Wrestling Federation and World Championship Wrestling. Championships held by Martel over the course of his career include the AWA World Heavyweight Championship, WCW World Television Championship, and WWF World Tag Team Championship.
== Professional wrestling career ==
=== Early career (1973–1980) ===
Martel is from a family of wrestlers, and made his professional debut at age seventeen when his brother Michel, a wrestler, asked him to replace an injured wrestler. Martel already was a skilled amateur wrestler, and quickly adapted to professional wrestling.
Martel wrestled throughout the world, winning titles in Canada (in Stu Hart's Stampede Wrestling and Vancouver-based NWA All Star Wrestling), New Zealand, Japan, Hawaii, and Puerto Rico–based World Wrestling Council (WWC). His first real success in the United States came in the National Wrestling Alliance (NWA)'s Portland affiliate, Pacific Northwest Wrestling in 1979, where he became a top talent, holding the Canadian and PNW tag team titles simultaneously. He left PNW on August 16, 1980, when he lost a "loser leaves town" match to Buddy Rose. Martel also served a stint as a booker for a wrestling territory in Hawaii, where he would help the promotion set up matches and construct the storylines that would play out inside and outside of the ring.
=== World Wrestling Federation (1980–1982) ===
Martel debuted in the World Wrestling Federation (WWF) in July 1980. That fall, he formed a tag team with Tony Garea. On November 8, they defeated The Wild Samoans to capture the WWF Tag Team Championship. They successfully defended the title until dropping the belts to The Moondogs on March 17, 1981. They regained the title from The Moondogs on July 21. Their second reign came to an end on October 13, when they lost to Mr. Fuji and Mr. Saito. Though they would challenge the champions numerous times, Martel and Garea were unable to recapture the belts, and Martel left the WWF in April 1982.
=== American Wrestling Association (1982–1986) ===
Martel signed with the AWA in 1982 and quickly ascended through the ranks, defeating Jumbo Tsuruta to win the AWA World Heavyweight Championship on May 13, 1984. His reign as champion lasted nearly nineteen months (the third-longest title reign and the longest title reign of the 1980s), during which time he wrestled several matches with NWA World Champion Ric Flair, as well as with Jimmy Garvin, Nick Bockwinkel and King Tonga. His finishing move alternated between the slingshot splash and the combination atomic drop/back suplex. On December 29, 1985, Martel lost the title to Stan Hansen, who forced him to submit to the "Brazos Valley Backbreaker" (Hansen's version of the Boston crab).
=== World Wrestling Federation (1986–1995) ===
==== Can-Am Connection (1986–1987) ====
In 1986, Martel returned to the WWF, with his tag team partner Tom Zenk. They were billed as The Can-Am Connection. The Can-Am Connection had been formed by Martel in the Montreal-based Lutte Internationale in 1986. Zenk was the boyfriend of Martel's sister-in-law, and had been introduced to Martel in the AWA by Curt Hennig. The Can-Am Connection, with their youthful looks and high energy in-ring performances, quickly garnered the affection of fans, and they looked likely to win the WWF Tag Team Title in the near future. At WrestleMania III, in front of 93,173 fans at the Pontiac Silverdome, The Can-Am Connection defeated Ace Cowboy Bob Orton and The Magnificent Muraco in the opening match, when Martel pinned Muraco with a flying cross-body helped by what commentator Gorilla Monsoon called "a schoolboy trip from behind" by Zenk. They split shortly afterward; Zenk claimed Martel had secretly negotiated an individual contract worth three times more than his partner's contract (traditionally, tag teams are paid roughly equal salaries).
Martel strongly disagreed. In Mad Dogs, Midgets and Screw Jobs, he said: "Ever since I had been fired by Jim Barnett, I decided not to discuss money matters with other wrestlers ... I did the same thing with Tom, and he put it in his head, or some other people put it in his head, that I made more than him. But as far as Vince was concerned, if you were in a tag team, you earned the same amount of money." He also claimed Zenk "...was overwhelmed by it all... Wrestling is very hard on your body. Hard on you also mentally. It's hard physically. Tom wasn't mentally or physically hard as I thought he would be."
==== Strike Force (1987–1989) ====
At the time of Zenk's departure, The Can-Am Connection was in a feud with The Islanders (Haku and Tama); Zenk's departure was worked into the feud, with the Islanders claiming that Zenk was a quitter and abandoned Martel because he knew they could never beat them. In July 1987, Martel defeated both Haku and Tama in singles competition. Then on the August 15, 1987, episode of Superstars of Wrestling, after Martel defeated Barry Horowitz, he was jumped by The Islanders. Tito Santana, who was doing commentary in the Spanish broadcast booth, ran to the ring to help Martel fight off his attackers. Martel and Santana then formed a tag team called Strike Force. The team was presented as good-looking pretty boys (a storyline that came directly from the Can-Am Connection), even using the theme "Girls In Cars", which was originally made for the Can-Am Connection. The name Strike Force came from Santana's promise that as a team they would "be striking (The Islanders) with force." Martel immediately came up with the team's name based on this.
After winning their feud with The Islanders, Strike Force immediately challenged The Hart Foundation (Bret "Hitman" Hart and Jim "The Anvil" Neidhart) for the WWF World Tag Team Title. Strike Force won the titles on an episode of Superstars after Martel made Neidhart submit to a Boston crab. Strike Force would hold the titles for five months, defending primarily against the Hart Foundation and the Islanders, before losing to Demolition (Ax and Smash) at WrestleMania IV in Atlantic City. With Martel holding Smash in the Boston crab and the referee distracted by Santana beating up Mr. Fuji on the ring apron, Ax struck Martel on the back of the neck with Mr. Fuji's cane, allowing Smash to pin him.
Shortly afterward, Martel (kayfabe) took time off due to injuries sustained in a title rematch against Demolition at a Prime Time Wrestling taping in Oakland, California, on June 1, 1988 (aired July 11). Smash hit Martel with a steel chair, then Demolition performed their "Demolition Decapitation" finisher on him at ringside, leaving him unconscious on the floor. On the June 18 Superstars, it was announced he suffered back injuries and a concussion. In the storyline, he briefly retired due to these injuries. In reality, he was granted leave from the WWF and took six months off to help care for his severely ill wife.
Before returning to the WWF, Martel returned briefly to the WWC, where he defeated Kamala. Martel returned in January 1989 as a singles wrestler, before reforming Strike Force with Santana at WrestleMania V to face The Brain Busters (Arn Anderson and Tully Blanchard). During the match, Santana accidentally hit Martel with his signature flying forearm smash and knocked him out of the ring. A frustrated Martel refused to tag in and walked away, leaving Santana to be beaten down and pinned. Immediately after the match, in an interview with "Mean" Gene Okerlund, who asked him how he could leave his partner "high and dry" and said that Strike Force was "supposed to be a team, a team", an irate Martel said, "I'm sick and tired. I'm sick and tired of him. You know, I was doing great as a singles wrestler, but no, Mr. Tito wants to ride my coattails some more. You saw his timing was off". Then angrily addressing Santana he said, "You're lucky that being the gentleman that I am that I just walked off. That could have been a lot worse for you Tito Santana."
Following his heel turn, Martel acquired Slick as his manager. He feuded with Santana on and off over the next two years, losing to him in the finals of the 1989 King of the Ring tournament, then defeating him at The Main Event IV taping on October 30, 1990 (aired November 23).
As 1989 came to a close, Martel's association with Slick quietly ended.
==== The Model (1989–1995) ====
In late 1989, Martel adopted a narcissistic gimmick, as the Model. Just before the 1989 Survivor Series (where in a continuation of their feud, Martel pinned Santana in the opening elimination match of the night), he introduced his own (fictional) brand of cologne called Arrogance, which he carried in a large atomizer and sprayed in the eyes of his opponents to blind them. He wore a turquoise sweater tied around his neck to the ring (later replaced by a turquoise sportcoat), with a large lapel pin that read "Yes, I am a model." Martel made his pay-per-view singles match debut at WrestleMania VI at the Skydome in Toronto, where he defeated Koko B. Ware via submission with his signature Boston Crab.
Martel's most high-profile feud during his stint as the Model was with Jake "The Snake" Roberts, sparked when he blinded Roberts with Arrogance on "The Brother Love Show" in October 1990. Martel and Roberts captained opposing teams at the Survivor Series. "The Visionaries" (Martel, The Warlord and Power and Glory) defeated The Vipers (Roberts, Superfly Jimmy Snuka and The Rockers) in a 4-0 clean sweep, the first time this had happened in Survivor Series history. In the 1990 Survivor Series (unlike previous editions), the heel survivors faced off against the babyface survivors in a grand finale "Match Of Survival". There "The Visionaries" teamed with "The Million Dollar Man" Ted Dibiase to face Tito Santana, Hulk Hogan and WWF World Heavyweight Champion The Ultimate Warrior. Martel was eliminated from the match after he got himself counted out by abandoning his team after receiving beatings from both Hogan and The Warrior. Martel continued to have the upper hand over Roberts in the 1991 Royal Rumble match, eliminating Roberts from the match en route to lasting (a then-record) 53 minutes, before being eliminated by the British Bulldog Davey Boy Smith. Roberts would ultimately get his revenge at WrestleMania VII, defeating Martel in a blindfold match. For the rest of 1991, Martel represented the WWF on Japanese tours for Super World Sports. In December 1991, he lost to Naoki Sano in a match to determine the inaugural SWS Junior Heavyweight Champion.
In early 1992, Martel began a feud with Tatanka, leading to WrestleMania VIII, where Tatanka pinned him. He went on to work against Santana on house shows that spring.
During that time he unsuccessfully challenged Bret Hart for the WWF Intercontinental Heavyweight Championship at UK Rampage (1992). Also that summer, Martel had a brief feud with Shawn Michaels, as both men sought the affections of Sensational Sherri. The feud ended with a chain of events that resulted in a double countout at SummerSlam 1992 held at the Wembley Stadium in London, England in front of what remains the SummerSlam record attendance of 80,355. The match carried a "no punching in the face" stipulation, mutually agreed upon and eventually disregarded by the two narcissistic heels.
Martel then resumed his rivalry with Tatanka by stealing his sacred eagle feathers, to add to his wardrobe. The feud was resolved at the 1992 Survivor Series, where Tatanka again defeated Martel and reclaimed the feathers.
In 1993, Martel mainly appeared on the lower undercard, and rarely on television, mostly on programs such as All-American Wrestling and Wrestling Challenge. However, at the September 27 Monday Night Raw taping, he was the co-winner (with Razor Ramon) of a battle royal (aired October 4) to decide the competitors in a match for the vacant Intercontinental Championship. He lost that match (aired the next week) to Ramon. After this, Martel began moving slightly up the card once again. Martel also briefly feuded with Bastion Booger, losing one of their matches when he got fed up with how Booger smelled and started spraying him with his Arrogance cologne. Martel also appeared at Survivor Series 1993 in a 4-4 elimination match, being eliminated by The 1-2-3 Kid, and in the 1994 Royal Rumble as the 26th entrant before getting eliminated by his old rival Tatanka. Martel was set to appear in a 10-man tag team match at WrestleMania X, but the match was cancelled during the show due to time constraints. The match was instead held two weeks and a day later on Monday Night Raw, with Martel's team victorious. This turned out to be his final WWF in-ring match.
In August 1994, Martel dropped out of the WWF picture and wasn't seen again until participating in the 1995 Royal Rumble (he was a substitute for Jim Neidhart, who was fired from WWF due to no-showing events). Martel's final appearance came the following month at a house show in Montreal, as his wrestling career wound down and he pursued a career in real estate.
In a shoot interview with RF video, Martel claimed that he and Don Callis were set to return to the WWF as 'The Supermodels' in 1997, before Callis turned on Martel, turning him face for the first time since 1989. However, after a pay dispute with WWF owner Vince McMahon, Martel signed with World Championship Wrestling (WCW). Callis confirmed that he and Martel were set to debut as a team during an interview with WWE.com in 2015.
=== Other promotions (1994–1997) ===
In 1994, Martel worked a few appearances for International World Class Championship Wrestling (IWCCW), where, in one of the matches, he defeated his former partner Tito Santana on September 9.
After leaving the WWF in 1995, Martel wrestled on the independent circuit in the United States and Canada. He had a feud with Don Callis, aka The Natural, in Manitoba. Later that year he went to Germany to work for the Catch Wrestling Association. He lost to Santana in a Texas Death match by count out for NWA New Jersey on October 14.
In 1996 he wrestled in Malaysia. In 1997, Martel returned to Canada to team with Don Callis as the Supermodels, feuding with a young Edge and Christian, then known as Adam Impact and Christian Cage.
=== World Championship Wrestling (1998) ===
Martel debuted for WCW on the January 5, 1998, episode of Nitro, defeating Brad Armstrong. Martel feuded with Booker T for the World Television Championship, failing to win it at Souled Out before winning the championship on the February 16 episode of Nitro. Martel's comeback was cut short during his rematch with Booker T at SuperBrawl VIII on February 22, when he landed badly on a throw, hitting his leg on one of the ring ropes. He tore an inside ligament of his right knee, fractured his leg and suffered cartilage damage, effectively ending his in-ring career. He was originally booked to retain the Television Title in the match, intended to be a gauntlet match, by beating Booker and then Perry Saturn. Martel and Booker worked out a finish in the ring, and then Booker and Saturn worked the second half of the match entirely on the fly. Martel was out of action for several months.
During his recovery, he worked briefly as a French language announcer alongside Marc Blondin and Michel Letourneur for the French-language WCW programming that was airing in Europe.
After suffering another injury in his first match back on the July 13 episode of Nitro, against Booker T's Harlem Heat tag team partner (and real life older brother), Stevie Ray, Martel retired from the ring.
=== Hawaiian Islands Wrestling Federation (1999) ===
After WCW, Martel wrestled his last match in Kailua, Hawaii, for Hawaiian Islands Wrestling Federation defeating The Metal Maniac on March 23, 1999.
=== Retirement (1999–2007) ===
After retiring from the ring, Martel worked for WCW as a trainer, and as host of the French versions of WCW programming. Martel also manages commercial properties that he invested in with his earnings from wrestling.
After the main event of a house show in Quebec City on May 3, 2003, then WWE Champion Brock Lesnar introduced Martel to the ring as a surprise, and shook his hand. Martel, who received a standing ovation from his home fans, said he was honoured to be associated with WWE and thanked the fans.
At WWE's Vengeance: Night of Champions pay-per-view in 2007, Martel, along with his former teammate Tony Garea, saved Jimmy Snuka and Sgt. Slaughter from a post-match attack at the hands of Deuce 'n Domino.
Martel is a playable character in WWE 2K18 and WWE 2K19, his first video game appearances since Showdown: Legends of Wrestling.
== Championships and accomplishments ==
50th State Big Time Wrestling
NWA North American Heavyweight Championship (Hawaii version) (1 time)
All Japan Pro Wrestling
World's Strongest Tag Determination League Fighting Spirit Award (1986) – with Tom Zenk
American Wrestling Association
AWA World Heavyweight Championship (1 time)
Cauliflower Alley Club
Lou Thesz Award (2011)
Georgia Championship Wrestling
NWA Georgia Tag Team Championship (1 time) – with Tommy Rich
Lutte Internationale
Canadian International Heavyweight Championship (1 time)
NWA All-Star Wrestling
NWA Canadian Tag Team Championship (Vancouver version) (1 time) – with Roddy Piper
NWA New Zealand
NWA British Commonwealth Heavyweight Championship (New Zealand version) (3 times)
New England Pro Wrestling Hall of Fame
Class of 2011
Pacific Northwest Wrestling
NWA Pacific Northwest Heavyweight Championship (1 time)
NWA Pacific Northwest Tag Team Championship (3 times) – with Roddy Piper
Pro Wrestling Illustrated
Ranked No. 48 of the 500 best singles wrestlers during the "PWI Years" in 2003
Ranked No. 70 of the 100 best tag teams during the PWI years with Tito Santana in 2003
Professional Wrestling Hall of Fame
Class of 2015
Stampede Wrestling
Stampede International Tag Team Championship (1 time) – with Lennie Hurst
Universal Superstars of America
USA Heavyweight Championship (1 time)
World Championship Wrestling
WCW World Television Championship (1 time)
World Championship Wrestling
NWA Austra-Asian Tag Team Championship (1 time) – with Larry O'Dea
World Wrestling Council
WWC North American Tag Team Championship (1 time) – with Pierre Martel
World Wrestling Federation
WWF Tag Team Championship (3 times) – with Tony Garea (2), and Tito Santana (1)
== References ==
== External links ==
Richard Vigneault at IMDb
Rick Martel's profile at WWE.com , Cagematch.net , Internet Wrestling Database | Wikipedia/The_Model_(wrestler) |