DesignSpark Mechanical is a 3D computer-aided design (CAD) solid modeling software application. It is licensed as proprietary freeware.
It enables users to create solid models in a 3D environment and to generate files for use with 3D printers. Using the direct modeling approach, it allows for unlimited and frequent design changes with an intuitive set of tools. The software is offered as a free download but requires a one-time registration with DesignSpark.com.
To create engineering drawings in the same framework, a paid subscription to the DesignSpark Creator or Engineer plan is needed.
== Background ==
DesignSpark Mechanical is based on the SpaceClaim Engineer application and is the product of a collaboration between RS Group plc and Ansys, Inc. The goal of offering free 3D CAD software with many features of high-end packages is to engage users, such as engineering students or small businesses, who may not need or cannot afford premium 3D CAD software.
== Rapid prototyping ==
DesignSpark Mechanical supports rapid prototyping through SpaceClaim's 3D direct modeling methodology. Its Pull, Move, Fill, and Combine tools allow users to interact with digital 3D objects much as if modeling with clay, and all four are available in the free version.
== 3D CAD library ==
3D models for more than 75,000 products from the RS catalog are available for download within the software.
== Subscription plans ==
Paid subscription plans add functions to DesignSpark Mechanical, such as the Mirror tool, full support for popular file formats (STEP, STL, IGES, DXF, and DWG), and an associative drawing environment with features such as cosmetic threading, geometric dimensioning and tolerancing, and annotations.
== See also ==
Comparison of 3D computer graphics software
Comparison of computer-aided design editors
DesignSpark PCB
DesignSpark PCB Pro
List of 3D printing software
== References ==
== Further reading ==
"EE Journal article introducing DesignSpark subscription plans". EE Journal
"Diseñar en 3D con DesignSpark Mechanical". Automática e instrumentación. No. 454, 2013. pages 36–37. ISSN 0213-3113 (in Spanish)
"推出3D设计软件 DesignSpark Mechanical". Global Electronics China. No. 10. 2013. ISSN 1006-7604 (in Chinese)
"48-Hour 3D Design Challenge With DesignSpark Mechanical". EE Times. ISSN 0192-1541
"DesignSpark Mechanical: It's Not Your Grandmother's MCAD!". EE Times. ISSN 0192-1541
"DesignSpark Mechanical User Design Challenge". Engineering.com.
"DesignSpark Mechanical Power Hack". Engineering.com.
"Independent software reviews on Capterra". Capterra.com.
== External links ==
Official website
Official Forum
Independent review of software
Communication design is a mixed discipline between design and information-development concerned with how media communicate with people. A communication design approach is concerned with developing the message and aesthetics in media. It also creates new media channels to ensure the message reaches the target audience. Due to overlapping skills, some designers use graphic design and communication design interchangeably.
Communication design can also refer to a systems-based approach, in which the totality of media and messages within a culture or organization are designed as a single integrated process rather than a series of discrete efforts. This is done through communication channels that aim to inform and attract the attention of the target audience. Design skills must be used to create content suitable for different cultures and to maintain a pleasurable visual design. These are crucial pieces of a successful media communications kit.
Within the Communication discipline, the emerging framework for Communication as Design focuses on redesigning interactivity and shaping communication affordances. Software and applications create opportunities for and place constraints on communication. Recently, Guth and Brabham examined the way that ideas compete within a crowdsourcing platform, providing a model for the relationships among design ideas, communication, and platform. The same authors have interviewed technology company founders about the democratic ideals they build into the design of e-government applications and technologies. Interest in the Communication as Design framework continues to grow among researchers.
== Overview ==
Communication design seeks to attract, inspire, and motivate people to respond to messages and to make a favorable impact. This impact is oriented toward the objectives of the commissioning body, which can be either to build a brand or to drive sales. It can also range from changing behaviors, to promoting a message, to disseminating information. The process of communication design involves strategic business thinking, including market research, creativity, problem-solving, and technical skills and knowledge such as colour theory, page layout, typography, and the creation of visual hierarchies. Communication designers translate ideas and information through a variety of media. To establish credibility and influence audiences, communication designers use both traditional tangible skills and the ability to think strategically in design and marketing terms.
The term communication design is often used interchangeably with visual communication, but it maintains a broader meaning that includes auditory, vocal, touch, and olfactory senses. Examples of communication design practices include information architecture, editing, typography, illustration, web design, animation, advertising, ambient media, visual identity design, performing arts, copywriting and professional writing skills applied in the creative industries.
== Education ==
Students of communication design learn how to create visual messages and broadcast them to the world in new and meaningful ways. In the complex digital environment around us, communication design has become a powerful means of reaching out to the target audiences. Therefore, it expands its focus beyond user-experiences to user-networks. Students learn how to combine communication with art and technology. The communication design discipline involves teaching how to design web pages, video games, animation, motion graphics, and more.
Communication design has content as its main purpose. It must achieve a reaction, or get a customer to see a product in a genuine way, to attract sales or effectively communicate a message. Communication design students often become illustrators, graphic designers, web designers, advertising artists, animators, video editors, motion graphic artists, printmakers, and conceptual artists. The term communication design is fairly general, considering that its interdisciplinary practitioners operate within various media to get a message across.
== Subdisciplines ==
Advertising
Art direction
Brand management
Content strategy
Copywriting
Creative direction
Graphic design
Illustration
Industrial design
Information architecture
Information graphics
Instructional design
Marketing communications
Performing arts
Presentation
Technical writing
Visual arts
=== Visual communication design ===
Visual communication design is design work in any medium or support of visual communication. Some consider it a more accurate term to cover all types of design applied in communication. It uses a visual channel for message transmission, reflecting the visual language inherent to some media. Unlike the terms graphic design (graphics) or interface design (electronic media), it is not limited to a particular support or form of content.
=== Print media design ===
Print media design is a graphic design discipline that creates designs for printed media. Print design involves the creation of flyers, brochures, book covers, t-shirt prints, business cards, booklets, bookmarks, envelope designs, signs, letterheads, posters, CD covers, print media design templates, and more. The goal of print design is to use visual graphics to communicate a specific message to viewers.
== See also ==
Design elements
Design principles
Communication studies
Swiss Style (design)
== Footnotes ==
== External links ==
Dossier Communication Design in Germany of the Goethe-Institut
== Color ==
Color is the result of light reflecting back from an object to our eyes. The color that our eyes perceive is determined by the pigment of the object itself. Color theory and the color wheel are often referred to when studying color combinations in visual design. Color is often deemed an important element of design, as it is a universal language that presents countless possibilities for visual communication. Color serves various purposes that contribute to the overall effectiveness of the design. It is used to convey meaning and emotion, create visual hierarchy, enhance brand identity, improve readability and accessibility, create visual interest and appeal, differentiate information and elements, and convey cultural and contextual significance.
Hue, saturation, and brightness are the three characteristics that describe color.
Hue can simply be referred to as "color" as in red, yellow, or green.
=== Color theory in visual design ===
Color theory studies color mixing and color combinations. It is one of the first things that marked a progressive design approach. In visual design, designers refer to color theory as a body of practical guidance to achieving certain visual impacts with specific color combinations. Theoretical color knowledge is implemented in designs in order to achieve a successful color design.
Color harmony
Color harmony, often referred to as a "measure of aesthetics", studies which color combinations are harmonious and pleasing to the eye, and which color combinations are not. Color harmony is a main concern for designers given that colors always exist in the presence of other colors in form or space.
When a designer harmonizes colors, the relationships among a set of colors are enhanced to increase the way they complement one another. Colors are harmonized to achieve a balanced, unified, and aesthetically pleasing effect for the viewer.
Color harmony is achieved in a variety of ways, some of which consist of combining a set of colors that share the same hue, or a set of colors that share the same values for two of the three color characteristics (hue, saturation, brightness). Color harmony can also be achieved by simply combining colors that are considered compatible to one another as represented in the color wheel.
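One of the strategies just described, fixing two of the three color characteristics (here, hue and saturation) and varying the third (brightness), can be sketched in a few lines of Java using the standard `java.awt.Color` class. The class and method names below (`MonochromeScheme`, `scheme`) are invented for illustration and are not part of any real design tool:

```java
import java.awt.Color;

// Minimal sketch of one harmonization strategy: a monochromatic scheme
// in which all colors share the same hue and saturation, and only
// brightness varies.
public class MonochromeScheme {

    /** Returns `steps` colors sharing one hue and saturation, with brightness evenly spaced. */
    static Color[] scheme(float hue, float saturation, int steps) {
        Color[] out = new Color[steps];
        for (int i = 0; i < steps; i++) {
            float brightness = (i + 1) / (float) steps; // evenly spaced, darkest first
            out[i] = Color.getHSBColor(hue, saturation, brightness);
        }
        return out;
    }

    public static void main(String[] args) {
        // Four shades of red (hue 0.0), dark to light, printed as hex codes.
        for (Color c : scheme(0.0f, 0.8f, 4)) {
            System.out.printf("#%06X%n", c.getRGB() & 0xFFFFFF);
        }
    }
}
```

Analogous schemes (varying hue while fixing saturation and brightness, for instance) follow the same pattern; the point is only that harmony rules of this kind are mechanical enough to automate, which is what the color-scheme generators mentioned below do.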
Color contrasts
Color contrasts are studied with a pair of colors, as opposed to color harmony, which studies a set of colors. In color contrasting, two colors with perceivable differences in aspects such as luminance, or saturation, are placed side by side to create contrast.
Johannes Itten presented seven kinds of color contrasts: contrast of light and dark, contrast of hue, contrast of temperature, contrast of saturation, simultaneous contrast, contrast of sizes, and contrast of complementary. These seven kinds of color contrasts have inspired past works involving color schemes in design.
Color schemes
Color schemes are defined as the set of colors chosen for a design. They are often made up of two or more colors that look appealing beside one another, and that create an aesthetic feeling when used together. Color schemes depend on color harmony as they point to which colors look pleasing beside one another.
A satisfactory design product is often accompanied by a successful color scheme. Over time, color design tools with the function of generating color schemes were developed to facilitate color harmonizing for designers.
=== Use of color in visual design ===
Color is used to create harmony, balance, and visual comfort in a design
Color is used to evoke the desired mood and emotion in the viewer
Color is used to create a theme in the design
Color holds meaning and can be symbolic. In certain cultures, different colors can have different meanings.
Color is used to put emphasis on desired elements and create visual hierarchy in a piece of art
Color can create identity for a certain brand or design product
Color allows viewers to have different interpretations of visual designs. The same color can evoke different emotions, or have various meanings to different individuals and cultures
Color strategies are used for organization and consistency in a design product
In the architectural design of a retail environment, colors affect decision-making, which can motivate consumers to buy particular products.
Color strengthens narrative and storytelling in visual design.
Color can represent characters, themes, and symbolism.
Color is a tool that designers use to strategically add layers of meaning and subtext to their designs.
Colors can create recurring visual motifs in a design, strengthening ideas and fostering coherence.
Color is an effective tool for communication because it allows for complex interpretation and expression.
== Line ==
Line is defined as a series of points, the connection between two points, or the path of a moving point. The importance of line comes from its versatility, as its characteristics are significantly expressive. Lines may also appear as linear shapes that take on a line-like quality, or as a suggested line perceived by the eye as it follows a sequence of related shapes. Line may be used in two-dimensional forms, enclosing a space as an outline and creating shape, or in three-dimensional forms. There are also other types of lines besides those previously mentioned; for example, a line may be horizontal and zigzagged, or vertical and zigzagged. Different lines create different moods; it all depends on what mood the line is being used to create and convey.
== Point ==
A point is the beginning of "something" in "nothing". It forces the mind to think about its position and gives something to build upon in both imagination and space. A group of abstract points can provoke the human imagination to link them with familiar shapes or forms.
== Shape ==
Shapes are recognizable objects and forms and are usually composed of other elements of design.
For example, a square that is drawn on a piece of paper is considered a shape. It is created with a series of lines which serve as a boundary that shapes the square and separates it from the space around it that is not part of the square.
=== Types of shapes ===
Organic shapes are irregular shapes that are often complex and resemble shapes that are found in nature. Organic shapes can be drawn by hand, which is why they are sometimes subjective and only exist in the imagination of the artist.
Curvilinear shapes are composed of curved lines and smooth edges. They give off a more natural feeling to the shape. In contrast, rectilinear shapes are composed of sharp edges and right angles, and give off a sense of order in the composition. They look more human-made, structured, and artificial. Artists can choose to create a composition that revolves mainly around one of these styles of shape, or they can choose to combine both.
== Texture ==
Texture refers to the physical and visual qualities of a surface.
=== Definition of texture ===
Texture is the variation of data at a scale smaller than the scale of the main object. Taking a person wearing a Hawaiian shirt as an example: as long as we consider the person as the main object being looked at, the patterns of their shirt are considered texture. However, if we try to identify the pattern of the shirt, each flower or bird in the pattern is a non-textured object, as no smaller detail inside it can be recognized. Texture in our environment helps us to better understand the nature of things, as a smooth paved road signals safe passage and thick fog creates a veil on our view.
=== Texture in design ===
Texture in design includes the literal physical surface employed in a printed piece as well as the optical appearance of the surface. Physical texture affects how the piece feels in hand and also how it conveys the design; a glossy surface, for example, reflects light differently than a soft or pebbly one. Many of the textures manipulated by graphic designers, however, cannot be physically experienced, as they are used in the visual representation aspect of the design. Texture adds detail to an image in a way that conveys the overall quality of a surface. Graphic designers use texture to establish a mood, reinforce a point of view, or convey a sense of physical presence, whether setting type or drawing a tree.
=== Uses of texture in design ===
Texture can also be used to add complex detail into the composition of a design.
In theatrical design, the surface qualities of a costume sculpt the look and feel of a character, which influences the way the audience reacts to the character.
=== Types of texture ===
Tactile texture, also known as "actual texture", refers to the physical, three-dimensional texture of an object. Tactile texture can be perceived by the sense of touch. A person can feel the tactile texture of a sculpture by running their hand over its surface and feeling its ridges and dents.
Texture can be created through collage. This is when artists assemble three dimensional objects and apply them onto a two-dimensional surface, like a piece of paper or canvas, to create one final composition.
Papier collé is another collaging technique in which artists glue paper to a surface to create different textures on its surface.
Assemblage is a technique that consists of assembling various three-dimensional objects into a sculpture, which can also reveal textures to the viewer.
Visual texture, also referred to as "implied texture", is not detectable by our sense of touch, but by our sense of sight. Visual texture is the illusion of a real texture on a two-dimensional surface. Any texture perceived in an image or photograph is a visual texture. A photograph of rough tree bark is considered a visual texture. It creates the impression of a real texture on a two-dimensional surface which would remain smooth to the touch no matter how rough the represented texture is.
In painting, different paints are used to achieve different types of textures. Paints such as oil, acrylic, and encaustic are thicker and more opaque and are used to create three-dimensional impressions on the surface. Other paints, such as watercolor, tend to be used for visual textures, because they are thinner and have transparency, and do not leave much tactile texture on the surface.
=== Pattern ===
Many textures appear to repeat the same motif. When a motif is repeated over and over again in a surface, it results in a pattern. Patterns are frequently used in fashion design or textile design, where motifs are repeated to create decorative patterns on fabric or other textile materials. Patterns are also used in architectural design, where decorative structural elements such as windows, columns, or pediments, are incorporated into building design.
== See also ==
Composition (visual arts)
Interior design
Landscape design
Pattern language
Elements of art
Color theory
== Notes ==
== References ==
Kilmer, R., & Kilmer, W. O. (1992). Designing Interiors. Orland, FL: Holt, Rinehart and Winston, Inc. ISBN 978-0-03-032233-4.
Nielson, K. J., & Taylor, D. A. (2002). Interiors: An Introduction. New York: McGraw-Hill Companies, Inc. ISBN 978-0-07-296520-9
Pile, J.F. (1995; fourth edition, 2007). Interior Design. New York: Harry N. Abrams, Inc. ISBN 978-0-13-232103-7
Sully, Anthony (2012). Interior Design: Theory and Process. London: Bloomsbury. ISBN 978-1-4081-5202-7.
== External links ==
Art, Design, and Visual Thinking. An online, interactive textbook by Charlotte Jirousek at Cornell University.
The 6 Principles of Design
In software engineering, a design marker is a technique of documenting design choices in source code using the Marker Interface pattern. Marker interfaces have traditionally been limited to those interfaces intended for explicit, runtime verification (normally via instanceof). A design marker is a marker interface used to document a design choice. In Java programs the design choice is documented in the marker interface's Javadoc documentation.
Many choices made at software design time cannot be directly expressed in today's implementation languages like C# and Java. These design choices (known by names like Design Pattern, Design Contract, Refactoring, Effective Programming Idioms, Blueprints, etc.) must be implemented via programming and naming conventions, because they go beyond the built-in functionality of production programming languages. The consequences of this limitation conspire over time to erode design investments as well as to promote a false segregation between the designer and implementer mindsets.
Two independent proposals recognize these problems and give the same basic strategies for tackling them. Until now, the budding explicit programming movement has been linked to the use of an experimental Java research tool called ELIDE. The Design Markers technique requires only standard Javadoc-like tools to garner many of the benefits of Explicit Programming.
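To make the technique concrete, a design marker can be sketched in plain Java with no tooling beyond Javadoc. The interface and class names below (`ValueObject`, `Money`) are invented for illustration and come from no real library; the design decision lives entirely in the empty interface and its documentation comment:

```java
/**
 * Design marker: documents the decision that implementing classes are
 * immutable value objects (state set once, equality by value). The
 * interface is deliberately empty; the Javadoc carries the design choice.
 */
interface ValueObject { }

/** A class that records its design by implementing the marker. */
final class Money implements ValueObject {
    private final long cents;
    Money(long cents) { this.cents = cents; }
    long cents() { return cents; }
}

public class DesignMarkerDemo {
    public static void main(String[] args) {
        Money m = new Money(250);
        // A design marker need not be checked at runtime, but it still
        // can be, just like a classic marker interface:
        System.out.println(m instanceof ValueObject);
        System.out.println(m.cents());
    }
}
```

Unlike a traditional marker interface such as `java.io.Serializable`, nothing in the runtime ever inspects `ValueObject`; its sole job is to keep the design choice visible in the source and in generated Javadoc.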
== See also ==
Design Patterns
Marker interface pattern
== External links ==
Design Markers: Explicit Programming for the rest of us
Design Markers home page
Explicit Programming manifesto
3D computer graphics software refers to packages used to create 3D computer-generated imagery.
== General information ==
=== Current software ===
This table compares elements of notable software that is currently available, based on the raw software, with no added plug-ins.
=== Inactive software ===
There are many discontinued software applications.
== Operating system support ==
The table below lists the operating systems on which the editors can run natively (without emulation or compatibility layers), that is, which operating systems have editors specifically coded for them (not, for example, Wings 3D for Windows running on Linux with Wine).
== Features ==
== I/O ==
=== Image, video, and audio files ===
=== General 3D files ===
=== Game and renderer files ===
=== Cache and animation files ===
=== CAD files ===
=== Point clouds and photogrammetry files ===
=== GIS and DEM files ===
== Supported primitives ==
== Modeling ==
== Lookdev, Shader writing ==
== Lighting ==
== Path-tracing rendering ==
== Level of detail (LoD) generation, baking ==
== See also ==
Comparison of raster graphics editors
Comparison of vector graphics editors
Comparison of computer-aided design software
Comparison of CAD, CAM and CAE file viewers
== References ==
Costume design is the process of selecting or creating clothing for performers. A costume may be designed from scratch or assembled from existing garments. "Costume" may also refer to the style of dress particular to a nation, a social class, or a historical period. Costume design is intended to contribute to the fullness of the artistic, visual world that is unique to a particular theatrical or cinematic production. Costumes can denote the status, age, or personality of a character, or provide visual interest. Costumes may be made for theater, cinema, musical performance, cosplay, parties, or other events.
== History ==
In ancient Greek theatre, costumes were simple yet symbolic, aiding in character differentiation. Ancient Greek village festivals and processions in honor of Dionysus (see also: Dionysia) are believed to be the origin of theatre, and therefore of theatre costume. Sculpture and vase paintings provide the clearest evidence of these costumes. Ritualized masks gave each character a specific look, and they varied depending on whether they were used for comedic or dramatic purposes. Some masks were constructed with a cheerful as well as a serious side on the same face, in an attempt to indicate a change in emotion without a change of mask. The same is true of the Romans, who continued the mask tradition; doubling a mask made doubling roles easier.
During the Late Middle Ages in Europe, dramatic enactments of Bible stories were prevalent, therefore actual Christian vestments, stylized from traditional Byzantine court dress, were worn as costumes to keep the performances as realistic as possible. Stereotypical characterization was key when clothing performers for this style of theatre. In most instances actors had to supply their own costumes when playing a character found in daily life.
In Elizabethan theatre of the 16th and 17th centuries in England, costume emerged as the most important visual element; garments were very expensive, as they were made from the finest fabrics. By the 17th and 18th centuries, European theatre saw actors wearing contemporary fashion with added elements, such as crowns, to signify royalty. The majority of characters were clothed in contemporary fashion. The costumes could be divided into five categories:
"Ancient", out-of-style clothing used to represent another period; "antique", older additions to contemporary clothing used to distinguish classical characters; "fanciful", dreamlike garments for supernatural or allegorical characters; "traditional", clothing that represented only a few specific figures, such as Robin Hood; and "national or racial" costumes, intended to set apart a specific group of people but not necessarily historically accurate.
"Ordinarily, fashionable garments were used in both comedy and tragedy until 1727, when Adrienne Lecouvreur adopted the much more elaborate and formal court dress for tragedy. Her practice soon became standard for all tragic heroines." Major actors began to compete with one another over who would have the most lavish stage dress. This practice continued until around the 1750s, when costumes became relevant to the character again. Art began to copy life, and realistic characteristics were favored, especially during the 19th century, which marked a shift toward historical accuracy. For example, Georg II, Duke of Saxe-Meiningen, took a personal interest in the theatre and began managing troupes. He advocated for authenticity and accuracy to the script and time period, and therefore refused to let actors tamper with their own costumes. He also made sure the materials were authentic and specific, using real chain mail, armor, swords, etc. No cheap substitutes were allowed.
In August 1823, a casual conversation led to one of James Planché's more lasting effects on British theatre: his advocacy for historically accurate Shakespearean costumes. He observed to Charles Kemble, the manager of Covent Garden, that "while a thousand pounds were frequently lavished upon a Christmas pantomime or an Easter spectacle, the plays of Shakespeare were put upon the stage with makeshift scenery, and, at the best, a new dress or two for the principal characters." Kemble "saw the possible advantage of correct appliances catching the taste of the town" and agreed to give Planché control of the costuming for the upcoming production of King John, if he would carry out the research, design the costumes, and superintend the production. Planché had little experience in this area and sought the help of antiquaries such as Francis Douce and Sir Samuel Meyrick. The research involved sparked Planché's latent antiquarian interests, which came to occupy an increasing amount of his time later in life.
Despite the actors' reservations, King John was a success and led to a number of similarly costumed Shakespeare productions by Kemble and Planché (Henry IV, Part I, As You Like It, Othello, Cymbeline, Julius Caesar). The designs and renderings of King John, Henry IV, As You Like It, Othello, Hamlet and Merchant of Venice were published, though there is no evidence that Hamlet and Merchant of Venice were ever produced with Planché's historically accurate costume designs. Planché also wrote a number of plays or adaptations which were staged with historically accurate costumes (Cortez, The Woman Never Vext, The Merchant's Wedding, Charles XII, The Partisans, The Brigand Chief, and Hofer). After 1830, although he still used period costume, he no longer claimed historical accuracy for his work in plays. His work in King John had brought about a "revolution in nineteenth-century stage practice" which lasted for almost a century.
In 1923 the first of a series of innovative modern dress productions of Shakespeare plays, Cymbeline, directed by H. K. Ayliff, opened at Barry Jackson's Birmingham Repertory Theatre in England.
Costumes in Chinese theatre are very important, especially in Beijing Opera. They are usually heavily patterned with intense, bright colors. The standard items consist of at least 300 pieces and indicate the actor's character type, age, and social status through ornament, design, color, and accessories. "Color is always used symbolically: red for loyalty and high position, yellow for royalty, and dark crimson for barbarians or military advisors." Symbolic significance is also found in the designs used for emblems. For example, the tiger stands for power and masculine strength. A majority of the clothing, regardless of rank, is made out of rich and luxurious materials. Makeup is also used symbolically and completes the overall look.
In Japanese Noh drama, masks are always used and are the prominent aspect of the costume. They are made of wood and usually last for generations. There are five basic types: male, female, aged, deities, and monsters, all with many variations. The masks are changed often throughout the play.
In Kabuki, another form of Japanese theatre, actors do not wear masks but rely heavily on makeup for characterization. Features are exaggerated or removed, and for some of the athletic roles the musculature is outlined in a specific pattern. Traditional costumes are used for each role, based upon historical garments that are altered for dramatic effect. "Some costumes weigh as much as fifty pounds, and stage attendants assist the actors in keeping them properly arranged while on stage."
In the 21st century, digital technologies have ushered in a new era of costume design. Traditionally, theater costumes were crafted by hand, through sewing and patterns drafted on paper. Now, theater costumes can be designed using 3D printers, modeling software, and other digital tools. Utilizing 3D costume-modeling programs and 3D printers allows designers to find efficient ways to reduce the amount of material used on a project. Designers can optimize material usage with design software and reduce costs through cheaper 3D-printed materials. These technologies also save time, as models can be adjusted in real time in response to feedback through virtual fittings.
== Design process ==
The costume design process involves many steps and though they differ from genre to genre a basic method is commonly used.
Analysis: The first step is an analysis of the script, musical composition, choreography, etc. Costume parameters for the show are established and a rough costume plot is created. A costume plot outlines which character is in which scene, when the actors change, and what costumes are mentioned in the script.
Design collaboration: An important phase in the process is when all of the designers meet with the director. There must be a clear understanding of the overall show concept. The designers must all get on the same page with the director in terms of themes for the show and what messages they want the audience to get from the show.
Costume research: Once the director and designers are on the same page, the next step is for the Costume designer to gather research. Costume designers usually begin with research where they find resources to establish the world where the play takes place. This helps the designers establish the rules of the world and then in turn understand the characters better. The designer will then go into broad research about each character to try to establish their personalities though their costume.
Preliminary sketching and color layout: Once enough information is obtained, Costume designers begin by creating preliminary sketches. Beginning with very quick rough sketches the designer can get a basic idea for how the show will look put together and if the rules of the world are being maintained. The costume designer will then go into more detailed sketches and will figure out the specific costumes and colors for the character. Sketches help see the show as a whole without them having to spend too much time on them.
Final sketches: Once the costume designer and the director agree on the costumes and the ideas are fully flushed out, the designer will create final sketches. These are called renderings and are usually painted with watercolors or acrylic paints. These final sketches show what the designer wants the character to look like and the colors of the costume.
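The costume plot described in the analysis step is essentially a lookup table from characters to scenes and costumes. As a rough sketch (the character, scene, and costume names here are invented for illustration), it might be modeled as:

```python
# Hypothetical costume plot: characters mapped to the scenes they appear in
# and the costume worn in each scene. All names are invented examples.
costume_plot = {
    "Viola": {
        "Scene 1": "shipwreck dress",
        "Scene 2": "Cesario doublet",   # quick change after Scene 1
    },
    "Orsino": {
        "Scene 1": "court robe",
        "Scene 2": "court robe",        # no change between scenes
    },
}

def characters_in_scene(plot, scene):
    """Return the characters appearing in a scene, with their costumes."""
    return {who: scenes[scene] for who, scenes in plot.items() if scene in scenes}

print(characters_in_scene(costume_plot, "Scene 2"))
```

A structure like this makes it easy to answer the plot's core questions: who is on stage in a given scene, and which costume changes happen between scenes.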
== Production process ==
Once the show is designed, it is necessary to plan where the items will be sourced. There are four options. Garments can be:
Pulled, which refers to searching through a costume shop's stock
Rented
Shopped/purchased
Constructed, also known as made to order.
There are two ways a garment can begin to be constructed; either pattern drafted or draped, and many times both methods will be used together.
Pattern drafting begins by using a set of basic pattern blocks developed from the actor's measurements. They are drawn out on paper first, then transferred to fabric, and sewn together to test fit.
Draping involves manipulating a piece of fabric on a dress form or mannequin that has measurements closely related to the actor's. It is a process that takes a flat piece of cloth and shapes it to conform the fabric to a three-dimensional body by cutting and pinning.
Once constructed, however, the costume has not finished "working." A very important aspect of costumes is the ways they affect actors' performances and function within their settings. The very best costume designers build their original ideas after assessing the visual and spatial conditions of the costumes.
== See also ==
Costume designer
== References ==
== External links ==
"Costume Designs and Designers Collections" held in the "Performing Arts Collection" Archived 2012-11-03 at the Wayback Machine, at Arts Centre Melbourne.
"The National Costumers Association" Nationwide Non-Profit organization for costume designers and costumers.
"The Stagecraft Wiki" A Wiki dedicated to technical theater arts. Part of Stagecraft.com
IDD: Costume & Theatrical International Costume & Theatrical Design Directory
University of Washington Libraries Digital Collections - Fashion Plates Costumes
Costumes of All Nations 104 plates of costumes
Williams College Theatre Department a database of costumes with VR movies and the original sketches
"Stagelink" Theatrical costume, makeup and wig resources
Theatre Costume and Set Design Archive at the University of Bristol Theatre Collection, University of Bristol
The 50 films that changed men's style
"British Society of Theatre Designers"
Costume Design training in Auckland NZ at Unitec Performing and Screen Arts
Costume Designs of Early Films on the European Film Gateway
Privacy by design is an approach to systems engineering initially developed by Ann Cavoukian and formalized in a 1995 joint report on privacy-enhancing technologies by the Information and Privacy Commissioner of Ontario (Canada), the Dutch Data Protection Authority, and the Netherlands Organisation for Applied Scientific Research. The privacy by design framework was published in 2009 and adopted by the International Assembly of Privacy Commissioners and Data Protection Authorities in 2010. Privacy by design calls for privacy to be taken into account throughout the whole engineering process. The concept is an example of value sensitive design, i.e., taking human values into account in a well-defined manner throughout the process.
Cavoukian's approach to privacy has been criticized as being vague, challenging to enforce its adoption, difficult to apply to certain disciplines, challenging to scale up to networked infrastructures, as well as prioritizing corporate interests over consumers' interests and placing insufficient emphasis on minimizing data collection. Recent developments in computer science and data engineering, such as support for encoding privacy in data and the availability and quality of privacy-enhancing technologies (PETs), partly offset those critiques and help to make the principles feasible in real-world settings.
The European GDPR regulation incorporates privacy by design.
== History and background ==
The privacy by design framework was developed by Ann Cavoukian, Information and Privacy Commissioner of Ontario, following her joint work with the Dutch Data Protection Authority and the Netherlands Organisation for Applied Scientific Research in 1995.
In 2009, the Information and Privacy Commissioner of Ontario co-hosted an event, Privacy by Design: The Definitive Workshop, with the Israeli Law, Information and Technology Authority at the 31st International Conference of Data Protection and Privacy Commissioner (2009).
In 2010 the framework achieved international acceptance when the International Assembly of Privacy Commissioners and Data Protection Authorities unanimously passed a resolution on privacy by design recognising it as an international standard at their annual conference. Among other commitments, the commissioners resolved to promote privacy by design as widely as possible and foster the incorporation of the principle into policy and legislation.
== Foundational principles ==
Privacy by design is based on seven "foundational principles":
Proactive not reactive; preventive not remedial
Privacy as the default setting
Privacy embedded into design
Full functionality – positive-sum, not zero-sum
End-to-end security – full lifecycle protection
Visibility and transparency – keep it open
Respect for user privacy – keep it user-centric
The principles have been cited in over five hundred articles referring to the Privacy by Design in Law, Policy and Practice white paper by Ann Cavoukian.
=== Principles in detail ===
==== Proactive not reactive; preventive not remedial ====
The privacy by design approach is characterized by proactive rather than reactive measures. It anticipates and prevents privacy invasive events before they happen. Privacy by design does not wait for privacy risks to materialize, nor does it offer remedies for resolving privacy infractions once they have occurred — it aims to prevent them from occurring. In short, privacy by design comes before-the-fact, not after.
==== Privacy as the default (PbD) ====
Privacy by design seeks to deliver the maximum degree of privacy by ensuring that personal data are automatically protected in any given IT system or business practice. If an individual does nothing, their privacy still remains intact. No action is required on the part of the individual to protect their privacy — it is built into the system, by default.
===== PbD practices =====
Purpose Specification - The purpose(s) of collection must be clearly communicated to data subjects at or before the time of any data collection, retention, or usage, and must be limited and relevant to the stated needs.
Collection Limitation - Collection of data must be fair, lawful, and limited to the stated purpose.
Data minimization - Collection of data should be minimized as much as possible, and technologies should default to making users non-identifiable and non-observable, with identifiability minimized where it is absolutely necessary.
Use, Retention, and Disclosure - Use, retention, and disclosure of data must be limited to what has been consented to, with exceptions by law. Information should only be retained for the stated amount of time needed and then securely erased.
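The practices above (purpose specification, minimization, limited retention) translate naturally into code. As a minimal, hypothetical sketch, not any standard's reference implementation, a record type can carry its stated purpose and a retention deadline, so that erasure is the default rather than an afterthought:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

# Hypothetical sketch of "privacy as the default": each record carries only
# the fields consented to for a stated purpose, and expires automatically.
@dataclass
class MinimalRecord:
    purpose: str                     # purpose specification, stated at collection
    data: dict                       # only the fields needed for that purpose
    collected_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
    retention: timedelta = timedelta(days=30)   # limited retention by default

    def expired(self, now=None):
        now = now or datetime.now(timezone.utc)
        return now - self.collected_at > self.retention

def purge_expired(store):
    """Stand-in for secure erasure: drop records past their retention period."""
    return [r for r in store if not r.expired()]
```

The key design choice is that retention is a property of the data itself, so no separate action by the individual (or by an operator) is needed for their data to be protected by default.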
==== Privacy embedded into design ====
Privacy by design is embedded into the design and architecture of IT systems as well as business practices. It is not bolted on as an add-on, after the fact. The result is that privacy becomes an essential component of the core functionality being delivered. Privacy is integral to the system without diminishing functionality.
==== Full functionality – positive-sum, not zero-sum ====
Privacy by design seeks to accommodate all legitimate interests and objectives in a positive-sum “win-win” manner, not through a dated, zero-sum approach, where unnecessary trade-offs are made. Privacy by design avoids the pretense of false dichotomies, such as privacy versus security, demonstrating that it is possible to have both.
==== End-to-end security – full lifecycle protection ====
Privacy by design, having been embedded into the system prior to the first element of information being collected, extends securely throughout the entire lifecycle of the data involved — strong security measures are essential to privacy, from start to finish. This ensures that all data are securely retained, and then securely destroyed at the end of the process, in a timely fashion. Thus, privacy by design ensures cradle-to-grave, secure lifecycle management of information, end-to-end.
==== Visibility and transparency – keep it open ====
Privacy by design seeks to assure all stakeholders that whatever business practice or technology involved is in fact operating according to the stated promises and objectives, subject to independent verification. The component parts and operations remain visible and transparent, to users and providers alike. Remember to trust but verify.
==== Respect for user privacy – keep it user-centric ====
Above all, privacy by design requires architects and operators to keep the interests of the individual uppermost by offering such measures as strong privacy defaults, appropriate notice, and empowering user-friendly options. Keep it user-centric.
== Design and standards ==
The International Organization for Standardization (ISO) approved the Committee on Consumer Policy (COPOLCO) proposal for a new ISO standard: Consumer Protection: Privacy by Design for Consumer Goods and Services (ISO/PC317). The standard will aim to specify the design process to provide consumer goods and services that meet consumers’ domestic processing privacy needs as well as the personal privacy requirements of data protection. The standard has the UK as secretariat with thirteen participating members and twenty observing members.
The Standards Council of Canada (SCC) is one of the participating members and has established a mirror Canadian committee to ISO/PC317.
The OASIS Privacy by Design Documentation for Software Engineers (PbD-SE) Technical Committee provides a specification to operationalize privacy by design in the context of software engineering. Privacy by design, like security by design, is a normal part of the software development process and a risk reduction strategy for software engineers. The PbD-SE specification translates the PbD principles to conformance requirements within software engineering tasks and helps software development teams to produce artifacts as evidence of PbD principle adherence. Following the specification facilitates the documentation of privacy requirements from software conception to retirement, thereby providing a plan around adherence to privacy by design principles, and other guidance to privacy best practices, such as NIST's 800-53 Appendix J (NIST SP 800–53) and the Fair Information Practice Principles (FIPPs) (PMRM-1.0).
== Relationship to privacy-enhancing technologies ==
Privacy by design originated from privacy-enhancing technologies (PETs) in a joint 1995 report by Ann Cavoukian and John Borking. In 2007 the European Commission provided a memo on PETs. In 2008 the British Information Commissioner's Office commissioned a report titled Privacy by Design – An Overview of Privacy Enhancing Technologies.
There are many facets to privacy by design. There is the technical side like software and systems engineering, administrative elements (e.g. legal, policy, procedural), other organizational controls, and operating contexts. Privacy by design evolved from early efforts to express fair information practice principles directly into the design and operation of information and communications technologies. In his publication Privacy by Design: Delivering the Promises Peter Hustinx acknowledges the key role played by Ann Cavoukian and John Borking, then Deputy Privacy Commissioners, in the joint 1995 publication Privacy-Enhancing Technologies: The Path to Anonymity. This 1995 report focussed on exploring technologies that permit transactions to be conducted anonymously.
Privacy-enhancing technologies allow online users to protect the privacy of their Personally Identifiable Information (PII) provided to and handled by services or applications. Privacy by design evolved to consider the broader systems and processes in which PETs were embedded and operated. The U.S. Center for Democracy & Technology (CDT) in The Role of Privacy by Design in Protecting Consumer Privacy distinguishes PET from privacy by design noting that “PETs are most useful for users who already understand online privacy risks. They are essential user empowerment tools, but they form only a single piece of a broader framework that should be considered when discussing how technology can be used in the service of protecting privacy.”
== Global usage ==
Germany released a statute (§ 3 Sec. 4 Teledienstedatenschutzgesetz [Teleservices Data Protection Act]) back in July 1997. The new EU General Data Protection Regulation (GDPR) includes ‘data protection by design’ and ‘data protection by default’, the second foundational principle of privacy by design. Canada's Privacy Commissioner included privacy by design in its report on Privacy, Trust and Innovation – Building Canada’s Digital Advantage. In 2012, U.S. Federal Trade Commission (FTC) recognized privacy by design as one of its three recommended practices for protecting online privacy in its report entitled Protecting Consumer Privacy in an Era of Rapid Change, and the FTC included privacy by design as one of the key pillars in its Final Commissioner Report on Protecting Consumer Privacy. In Australia, the Commissioner for Privacy and Data Protection for the State of Victoria (CPDP) has formally adopted privacy by design as a core policy to underpin information privacy management in the Victorian public sector. The UK Information Commissioner's Office website highlights privacy by design and data protection by design and default. In October 2014, the Mauritius Declaration on the Internet of Things was made at the 36th International Conference of Data Protection and Privacy Commissioners and included privacy by design and default. The Privacy Commissioner for Personal Data, Hong Kong held an educational conference on the importance of privacy by design.
In the private sector, Sidewalk Toronto commits to privacy by design principles; Brendon Lynch, Chief Privacy Officer at Microsoft, wrote an article called Privacy by Design at Microsoft; whilst Deloitte relates certifiably trustworthy to privacy by design.
== Criticism and recommendations ==
The privacy by design framework has attracted academic debate, particularly following the 2010 International Data Commissioners resolution, with legal and engineering experts offering criticism of privacy by design and suggestions for how to apply the framework in various contexts.
Privacy by design has been critiqued as "vague" and leaving "many open questions about their application when engineering systems." Suggestions have been made to instead start with and focus on minimizing data, which can be done through security engineering.
In 2007, researchers at K.U. Leuven published Engineering Privacy by Design noting that “The design and implementation of privacy requirements in systems is a difficult problem and requires translation of complex social, legal and ethical concerns into systems requirements”. The principles of privacy by design "remain vague and leave many open questions about their application when engineering systems". The authors argue that "starting from data minimization is a necessary and foundational first step to engineer systems in line with the principles of privacy by design". The objective of their paper is to provide an "initial inquiry into the practice of privacy by design from an engineering perspective in order to contribute to the closing of the gap between policymakers’ and engineers’ understanding of privacy by design."
However, extended peer consultations performed ten years later in an EU project confirmed persistent difficulties in translating legal principles into engineering requirements. This is partly a structural problem: legal principles are abstract and open-ended, with different possible interpretations and exceptions, whereas engineering practices require unambiguous meanings and formal definitions of design concepts.
In 2011, the Danish National It and Telecom Agency published a discussion paper in which they argued that privacy by design is a key goal for creating digital security models, by extending the concept to "Security by Design". The objective is to balance anonymity and surveillance by eliminating identification as much as possible.
Another criticism is that current definitions of privacy by design do not address the methodological aspect of systems engineering, such as using decent system engineering methods, e.g. those which cover the complete system and data life cycle. This problem is further exacerbated in the move to networked digital infrastructures initiatives such as the smart city or the Internet of Things. Whereas privacy by design has mainly been focused on the responsibilities of singular organisations for a certain technology, these initiatives often require the interoperability of many different technologies operated by different organisations. This requires a shift from organisational to infrastructural design.
The concept of privacy by design also does not focus on the role of the actual data holder but on that of the system designer. This role is not known in privacy law, so the concept of privacy by design is not based on law. This, in turn, undermines the trust by data subjects, data holders and policy-makers. Questions have been raised from science and technology studies of whether privacy by design will change the meaning and practice of rights through implementation in technologies, organizations, standards and infrastructures. From a civil society perspective, some have even raised the possibility that a bad use of these design-based approaches can even lead to the danger of bluewashing. This refers to the minimal instrumental use by organizations of privacy design without adequate checks, in order to portray themselves as more privacy-friendly than is factually justified.
It has also been pointed out that privacy by design is similar to voluntary compliance schemes in industries impacting the environment, and thus lacks the teeth necessary to be effective, and may differ per company. In addition, the evolutionary approach currently taken to the development of the concept will come at the cost of privacy infringements because evolution implies also letting unfit phenotypes (privacy-invading products) live until they are proven unfit. Some critics have pointed out that certain business models are built around customer surveillance and data manipulation and therefore voluntary compliance is unlikely.
In 2013, Rubinstein and Good used Google and Facebook privacy incidents to conduct a counterfactual analysis in order to identify lessons learned of value for regulators when recommending privacy by design. The first was that “more detailed principles and specific examples” would be more helpful to companies. The second is that “usability is just as important as engineering principles and practices”. The third is that there needs to be more work on “refining and elaborating on design principles–both in privacy engineering and usability design”. including efforts to define international privacy standards. The final lesson learned is that “regulators must do more than merely recommend the adoption and implementation of privacy by design.”
The advent of the GDPR, with its maximum fine of 4% of global turnover, now provides a balance between business benefit and turnover, and addresses the voluntary-compliance criticism and Rubinstein and Good's requirement that "regulators must do more than merely recommend the adoption and implementation of privacy by design". Rubinstein and Good also highlighted that privacy by design could result in applications that exemplified its principles, and their work was well received.
The May 2018 European Data Protection Supervisor Giovanni Buttarelli's paper Preliminary Opinion on Privacy by Design states, "While privacy by design has made significant progress in legal, technological and conceptual development, it is still far from unfolding its full potential for the protection of the fundamental rights of individuals. The following sections of this opinion provide an overview of relevant developments and recommend further efforts".
The executive summary makes the following recommendations to EU institutions:
To ensure strong privacy protection, including privacy by design, in the ePrivacy Regulation,
To support privacy in all legal frameworks which influence the design of technology, increasing incentives and substantiating obligations, including appropriate liability rules,
To foster the roll-out and adoption of privacy by design approaches and PETs in the EU and at the member states’ level through appropriate implementing measures and policy initiatives,
To ensure competence and resources for research and analysis on privacy engineering and privacy-enhancing technologies at EU level, by ENISA or other entities,
To support the development of new practices and business models through the research and technology development instruments of the EU,
To support EU and national public administrations to integrate appropriate privacy by design requirements in public procurement,
To support an inventory and observatory of the “state of the art” of privacy engineering and PETs and their advancement.
The EDPS will:
Continue to promote privacy by design, where appropriate in cooperation with other data protection authorities in the European Data Protection Board (EDPB),
Support coordinated and effective enforcement of Article 25 of the GDPR and related provisions,
Provide guidance to controllers on the appropriate implementation of the principle laid down in the legal base, and
Together with data protection authorities of Austria, Ireland and Schleswig-Holstein, award privacy friendly apps in the mobile health domain.
== Implementing privacy by design ==
The European Data Protection Supervisor Giovanni Buttarelli set out the requirement to implement privacy by design in his article. The European Union Agency for Network and Information Security (ENISA) provided a detailed report, Privacy and Data Protection by Design – From Policy to Engineering, on implementation. The Summer School on real-world crypto and privacy provided a tutorial on "Engineering Privacy by Design". The OWASP Top 10 Privacy Risks Project for web applications gives hints on how to implement privacy by design in practice. The OASIS Privacy by Design Documentation for Software Engineers (PbD-SE) offers a privacy extension/complement to OMG's Unified Modeling Language (UML) and serves as a complement to OASIS' eXtensible Access Control Mark-up Language (XACML) and Privacy Management Reference Model (PMRM). Privacy by design guidelines have been developed to operationalise some of the high-level privacy-preserving ideas into more granular, actionable advice, such as recommendations on how to implement privacy by design in existing (data) systems. However, applying privacy by design guidelines remains a challenge for software developers.
== See also ==
Computer security
Consumer privacy
General Data Protection Regulation
FTC fair information practice
Internet privacy
Mesh networking
Dark web
End-to-end encryption
Personal data service
Privacy engineering
Privacy-enhancing technologies
Surveillance capitalism
User interface design
== References == | Wikipedia/Privacy_by_design |
A design specification (or product design specification) is a document which details exactly what criteria a product or a process should comply with. If the product or its design are being created on behalf of a customer, the specification should reflect the requirements of the customer or client. A design specification could, for example, include required dimensions, environmental factors, ergonomic factors, aesthetic factors, maintenance requirement, etc. It may also give specific examples of how the design should be executed, helping others work properly (a guideline for what the person should do).
== Example of a design specification ==
An example design specification may describe a physical product, software, the construction of a building, or another type of output. Columns and information may be adjusted based on the output format.
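As an illustrative sketch (all criteria and values here are invented, not drawn from any real specification), a specification's criteria can also be expressed as data and checked programmatically against a candidate design:

```python
# Illustrative only: a minimal design specification as data, covering
# required dimensions, an environmental factor, and a maintenance requirement.
spec = {
    "max_length_mm": 120,
    "max_mass_g": 250,
    "operating_temp_c": (-10, 40),   # environmental factor (min, max)
    "service_interval_months": 12,   # maintenance requirement
}

def meets_spec(product):
    """Check a candidate product's measurements against the specification."""
    lo, hi = spec["operating_temp_c"]
    return (product["length_mm"] <= spec["max_length_mm"]
            and product["mass_g"] <= spec["max_mass_g"]
            and lo <= product["rated_temp_c"] <= hi)

print(meets_spec({"length_mm": 100, "mass_g": 200, "rated_temp_c": 20}))  # True
```

Encoding criteria this way makes the "exactly what criteria the product should comply with" idea testable: each requirement is a named value rather than free text.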
== Special requirements ==
Construction design specifications are referenced in US government procurement rules, where there is a requirement that an architect-engineer should specify using "the maximum practicable amount of recovered materials consistent with the performance requirements, availability, price reasonableness, and cost-effectiveness" in a construction design specification.
== See also ==
Data sheet (Spec sheet)
Design by contract
Software requirements specification
Specification
== References ==
== Other sources ==
Mohan, S., Dr. "Design Specifications", Dr. S. Mohan. N.p., n.d. Web. 27 Dec. 2015.
"What Are Specifications?" Specificationsdenver. N.p., n.d. Web. 27 Dec. 2015. | Wikipedia/Design_specification |
3D computer graphics, sometimes called CGI, 3D-CGI or three-dimensional computer graphics, are graphics that use a three-dimensional representation of geometric data (often Cartesian) that is stored in the computer for the purposes of performing calculations and rendering digital images, usually 2D images but sometimes 3D images. The resulting images may be stored for viewing later (possibly as an animation) or displayed in real time.
3D computer graphics, contrary to what the name suggests, are most often displayed on two-dimensional displays. Unlike 3D film and similar techniques, the result is two-dimensional, without visual depth. Increasingly, however, 3D graphics are displayed on true 3D displays, as in virtual reality systems.
3D graphics stand in contrast to 2D computer graphics which typically use completely different methods and formats for creation and rendering.
3D computer graphics rely on many of the same algorithms as 2D computer vector graphics in the wire-frame model and 2D computer raster graphics in the final rendered display. In computer graphics software, 2D applications may use 3D techniques to achieve effects such as lighting, and similarly, 3D may use some 2D rendering techniques.
The objects in 3D computer graphics are often referred to as 3D models. Unlike the rendered image, a model's data is contained within a graphical data file. A 3D model is a mathematical representation of any three-dimensional object; a model is not technically a graphic until it is displayed. A model can be displayed visually as a two-dimensional image through a process called 3D rendering, or it can be used in non-graphical computer simulations and calculations. With 3D printing, models are rendered into an actual 3D physical representation of themselves, with some limitations as to how accurately the physical model can match the virtual model.
== History ==
William Fetter was credited with coining the term computer graphics in 1961 to describe his work at Boeing. An early example of interactive 3-D computer graphics was explored in 1963 by the Sketchpad program at Massachusetts Institute of Technology's Lincoln Laboratory. One of the first displays of computer animation was Futureworld (1976), which included an animation of a human face and a hand that had originally appeared in the 1971 experimental short A Computer Animated Hand, created by University of Utah students Edwin Catmull and Fred Parke.
3-D computer graphics software began appearing for home computers in the late 1970s. The earliest known example is 3D Art Graphics, a set of 3-D computer graphics effects, written by Kazumasa Mitazawa and released in June 1978 for the Apple II.
Virtual reality 3D is a version of 3D computer graphics. Although the first headsets appeared in the late 1950s, VR's popularity did not take off until the 2000s. The Oculus headset was released in 2012, and the 3D VR headset market has expanded since then.
== Overview ==
3D computer graphics production workflow falls into three basic phases:
3D modeling – the process of forming a computer model of an object's shape
Layout and CGI animation – the placement and movement of objects (models, lights etc.) within a scene
3D rendering – the computer calculations that, based on light placement, surface types, and other qualities, generate (rasterize the scene into) an image
=== Modeling ===
The modeling describes the process of forming the shape of an object. The two most common sources of 3D models are those that an artist or engineer originates on the computer with some kind of 3D modeling tool, and models scanned into a computer from real-world objects (Polygonal Modeling, Patch Modeling and NURBS Modeling are some popular tools used in 3D modeling). Models can also be produced procedurally or via physical simulation.
A 3D model is formed from points called vertices, which define its shape and form polygons. A polygon is an area formed from at least three vertices (a triangle). A polygon of n points is an n-gon. The overall integrity of the model and its suitability for use in animation depend on the structure of the polygons.
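As a minimal sketch of this vertex-and-polygon structure (a unit square split into two triangles, coordinates chosen arbitrarily), a mesh is just a list of shared vertices plus faces that index into it:

```python
# Minimal polygon mesh: shared vertex positions plus faces as index tuples.
# Here a unit square in the z=0 plane is split into two triangles.
vertices = [
    (0.0, 0.0, 0.0),
    (1.0, 0.0, 0.0),
    (1.0, 1.0, 0.0),
    (0.0, 1.0, 0.0),
]
faces = [
    (0, 1, 2),   # triangle 1
    (0, 2, 3),   # triangle 2
]

def face_edges(face):
    """Yield the edges of an n-gon as (vertex index, vertex index) pairs."""
    n = len(face)
    return [(face[i], face[(i + 1) % n]) for i in range(n)]

print(face_edges((0, 1, 2)))  # [(0, 1), (1, 2), (2, 0)]
```

Sharing vertices between faces, rather than duplicating coordinates per triangle, is what keeps the mesh watertight and suitable for animation: moving one vertex moves every face that references it.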
=== Layout and animation ===
Before rendering into an image, objects must be laid out in a 3D scene. This defines spatial relationships between objects, including location and size. Animation refers to the temporal description of an object, i.e., how it moves and deforms over time; popular methods include keyframing, inverse kinematics, and motion capture. These techniques are often used in combination. As with animation, physical simulation also specifies motion.
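Keyframing, the most common of these methods, can be sketched in a few lines: the animator specifies values at a few key times, and the system interpolates in between (linear interpolation here for simplicity; production tools typically use spline curves):

```python
# Sketch of keyframing: positions are given at key times, and intermediate
# frames are produced by linear interpolation between the surrounding keys.
def lerp(a, b, t):
    return tuple(x + (y - x) * t for x, y in zip(a, b))

keyframes = {0.0: (0.0, 0.0, 0.0), 2.0: (4.0, 0.0, 2.0)}

def sample(keys, time):
    """Evaluate the animated position at `time`, clamping outside the range."""
    times = sorted(keys)
    if time <= times[0]:
        return keys[times[0]]
    if time >= times[-1]:
        return keys[times[-1]]
    for t0, t1 in zip(times, times[1:]):
        if t0 <= time <= t1:
            return lerp(keys[t0], keys[t1], (time - t0) / (t1 - t0))

print(sample(keyframes, 1.0))  # (2.0, 0.0, 1.0) — halfway between the keys
```

Inverse kinematics and motion capture produce the same kind of time-varying data; they differ only in how the key values are obtained.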
Stop motion has multiple subcategories, such as claymation, cutout, silhouette, Lego, puppet, and pixelation animation.
Claymation animates models made of clay; examples include Clay Fighter and Clay Jam.
Lego animation is one of the more common types of stop motion, in which the Lego figures themselves are moved around frame by frame; examples include Lego Island and Lego Harry Potter.
=== Materials and textures ===
Materials and textures are properties that the render engine uses to render the model. One can give the model materials that tell the render engine how to treat light when it hits the surface. Textures give the material color using a color or albedo map, or give the surface features using a bump map or normal map. Textures can also be used to deform the model itself using a displacement map.
=== Rendering ===
Rendering converts a model into an image either by simulating light transport to get photo-realistic images, or by applying an art style as in non-photorealistic rendering. The two basic operations in realistic rendering are transport (how much light gets from one place to another) and scattering (how surfaces interact with light). This step is usually performed using 3-D computer graphics software or a 3-D graphics API.
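The scattering half of this split can be illustrated with the simplest common model, Lambertian (diffuse) reflection, where reflected light is proportional to the cosine of the angle between the surface normal and the light direction. This is a toy sketch, not how production renderers are structured:

```python
import math

def normalize(v):
    """Scale a vector to unit length."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def dot(a, b):
    """Dot product of two vectors."""
    return sum(x * y for x, y in zip(a, b))

def lambert(normal, to_light, light_intensity=1.0):
    """Diffuse scattering: reflected light is proportional to cos(theta),
    clamped at zero for surfaces facing away from the light."""
    return light_intensity * max(0.0, dot(normalize(normal), normalize(to_light)))

# A surface facing straight up, lit from directly above: full brightness.
print(lambert((0, 0, 1), (0, 0, 1)))  # 1.0
# The same surface lit from 60 degrees off-normal: cos(60 deg) = 0.5.
print(lambert((0, 0, 1), (0, math.sqrt(3), 1)))
```

The transport question (how much light reaches the surface at all) is the harder part, and is what techniques such as ray tracing and path tracing address.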
Altering the scene into a suitable form for rendering also involves 3D projection, which displays a three-dimensional image in two dimensions. Although 3-D modeling and CAD software may perform 3-D rendering as well (e.g., Autodesk 3ds Max or Blender), dedicated 3-D rendering software also exists (e.g., OTOY's OctaneRender, Maxon's Redshift).
== Software ==
3-D computer graphics software produces computer-generated imagery (CGI) through 3D modeling and 3D rendering or produces 3-D models for analytical, scientific and industrial purposes.
=== File formats ===
There are many varieties of files supporting 3-D graphics, for example, Wavefront .obj files, Autodesk .fbx files, and DirectX .x files. Each file type generally has its own unique data structure.
Each file format is generally accessed through its associated application. Alternatively, files can be opened with third-party standalone programs, or via manual decompilation.
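As an example of such a data structure, the Wavefront .obj format is plain text: lines beginning with `v` list vertex coordinates and lines beginning with `f` list faces by 1-based vertex index. A minimal reader handling only these two record types might look like this:

```python
def parse_obj(text):
    """Parse 'v' (vertex) and 'f' (face) records from Wavefront .obj text.
    Face indices in .obj are 1-based and may carry /texture/normal suffixes."""
    vertices, faces = [], []
    for line in text.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "v":
            vertices.append(tuple(float(x) for x in parts[1:4]))
        elif parts[0] == "f":
            faces.append(tuple(int(p.split("/")[0]) - 1 for p in parts[1:]))
    return vertices, faces

obj = """
v 0 0 0
v 1 0 0
v 0 1 0
f 1 2 3
"""
vertices, faces = parse_obj(obj)
print(len(vertices), faces)  # 3 [(0, 1, 2)]
```

A full .obj reader would also handle texture coordinates (`vt`), normals (`vn`), and material references, which this sketch ignores.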
=== Modeling ===
3-D modeling software is a class of 3-D computer graphics software used to produce 3-D models. Individual programs of this class are called modeling applications or modelers.
3-D modeling builds on three basic display primitives: points, lines, and triangles (or other polygonal patches).
3-D modelers allow users to create and alter models via their 3-D mesh. Users can add, subtract, stretch and otherwise change the mesh to their desire. Models can be viewed from a variety of angles, usually simultaneously. Models can be rotated and the view can be zoomed in and out.
3-D modelers can export their models to files, which can then be imported into other applications as long as the metadata are compatible. Many modelers allow importers and exporters to be plugged-in, so they can read and write data in the native formats of other applications.
Most 3-D modelers contain a number of related features, such as ray tracers and other rendering alternatives and texture mapping facilities. Some also contain features that support or allow animation of models. Some may be able to generate full-motion video of a series of rendered scenes (i.e. animation).
=== Computer-aided design (CAD) ===
Computer-aided design software may employ the same fundamental 3-D modeling techniques that 3-D modeling software uses, but its goals differ. It is used in computer-aided engineering, computer-aided manufacturing, finite element analysis, product lifecycle management, 3D printing, and computer-aided architectural design.
=== Complementary tools ===
After producing a video, studios then edit or composite the video using programs such as Adobe Premiere Pro or Final Cut Pro at the mid-level, or Autodesk Combustion, Digital Fusion, Shake at the high-end. Match moving software is commonly used to match live video with computer-generated video, keeping the two in sync as the camera moves.
Use of real-time computer graphics engines to create a cinematic production is called machinima.
== Other types of 3D appearance ==
=== Photorealistic 2D graphics ===
Not all computer graphics that appear 3D are based on a wireframe model. 2D computer graphics with 3D photorealistic effects are often achieved without wire-frame modeling and are sometimes indistinguishable in the final form. Some graphic art software includes filters that can be applied to 2D vector graphics or 2D raster graphics on transparent layers. Visual artists may also copy or visualize 3D effects and manually render photo-realistic effects without the use of filters.
=== 2.5D ===
Some video games use 2.5D graphics, involving restricted projections of three-dimensional environments, such as isometric graphics or virtual cameras with fixed angles, either as a way to improve performance of the game engine or for stylistic and gameplay concerns. By contrast, games using 3D computer graphics without such restrictions are said to use true 3D.
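A fixed-angle 2.5D view of the kind described above reduces to a simple linear map from world coordinates to screen pixels. The common 2:1 isometric projection used by many games can be sketched as follows (tile sizes are illustrative):

```python
def iso_project(x, y, z, tile_w=64, tile_h=32):
    """Map a world-space tile coordinate to 2:1 isometric screen pixels.
    z shifts the tile upward to fake height; there is no true perspective."""
    screen_x = (x - y) * tile_w // 2
    screen_y = (x + y) * tile_h // 2 - z
    return screen_x, screen_y

print(iso_project(0, 0, 0))  # (0, 0)
print(iso_project(1, 0, 0))  # (32, 16)
print(iso_project(0, 1, 0))  # (-32, 16)
```

Because the mapping is fixed, the engine never needs a full 3D pipeline, which is precisely the performance advantage 2.5D games exploit.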
=== Other forms of animation ===
Cutout animation uses flat materials such as paper: the environment, characters, and even some props are all cut out of paper. An example of this is Paper Mario. Silhouette animation is similar to cutout except that the shapes are a single solid color, black; Limbo is an example. Puppet animation uses dolls and other puppets, as in Yoshi's Woolly World. Pixelation makes the entire game appear pixelated, including the characters and the environment around them; one example is seen in Shovel Knight.
== See also ==
Graphics processing unit (GPU)
List of 3D computer graphics software
3D data acquisition and object reconstruction
3D projection on 2D planes
Geometry processing
Isometric graphics in video games and pixel art
List of stereoscopic video games
Medical animation
Render farm
== References ==
== External links ==
A Critical History of Computer Graphics and Animation (Wayback Machine copy)
How Stuff Works - 3D Graphics
History of Computer Graphics series of articles (Wayback Machine copy)
How 3D Works - Explains 3D modeling for an illuminated manuscript
In the field of patents, the phrase "to design around" means to design or invent an alternative to a patented invention that does not infringe the patent's claims. The phrase can also refer to the alternative itself.
Design-arounds are considered to be one of the benefits of patent law. By providing monopoly rights to inventors in exchange for disclosing how to make and use their inventions, others are given both the information and incentive to invent competitive alternatives that design around the original patent. In the field of vaccines, for example, design-arounds are considered fairly easy. It is often possible to use the original patent as a guide for developing an alternative that does not infringe the original patent.
Design-arounds can be a defense against patent trolls. The amount of license fee that a patent troll can demand is limited by the alternative of the cost of designing around the troll's patent(s).
In order to defend against design-arounds, inventors often develop a large portfolio of interlocking patents, sometimes called a patent thicket. Thus a competitor will have to avoid many patents when designing.
== See also ==
Essential patent
Evergreening
Patent map
Reinventing the wheel
Workaround
== References ==
Automotive suspension design is an aspect of automotive engineering, concerned with designing the suspension for cars and trucks. Suspension design for other vehicles is similar, though the process may not be as well established.
The process entails:
Selecting appropriate vehicle level targets
Selecting a system architecture
Choosing the location of the 'hard points', or theoretical centres of each ball joint or bushing
Selecting the rates of the bushings
Analysing the loads in the suspension
Designing the spring rates
Designing shock absorber characteristics
Designing the structure of each component so that it is strong, stiff, light, and cheap
Analysing the vehicle dynamics of the resulting design
Since the 1990s the use of multibody simulation and finite element software has made this series of tasks more straightforward.
== Vehicle level targets ==
A partial list would include:
Maximum steady state lateral acceleration (in understeer mode)
Roll stiffness (degrees per g of lateral acceleration)
Ride frequencies
Lateral load transfer percentage distribution front to rear
Roll moment distribution front to rear
Ride heights at various states of load
Understeer gradient
Turning circle
Ackermann
Jounce travel
Rebound travel
Once the overall vehicle targets have been identified they can be used to set targets for the two suspensions. For instance, the overall understeer target can be broken down into contributions from each end using a Bundorf analysis.
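Targets such as ride frequency follow directly from the wheel rate and the sprung mass at a corner, via the undamped natural frequency f = (1/2π)·sqrt(K/m). A quick sanity check of a ride-frequency target can be sketched in a few lines (the numbers are illustrative, not from any particular vehicle):

```python
import math

def ride_frequency_hz(wheel_rate_n_per_m, sprung_mass_kg):
    """Undamped ride frequency of one corner: f = (1/2*pi) * sqrt(K/m)."""
    return math.sqrt(wheel_rate_n_per_m / sprung_mass_kg) / (2 * math.pi)

# Typical passenger-car corner: ~20 N/mm wheel rate, ~400 kg sprung mass.
f = ride_frequency_hz(20_000, 400)
print(round(f, 2))  # roughly 1.1 Hz, within the usual passenger-car ride range
```

Target-setting works in the other direction too: choose a ride frequency, then solve for the wheel rate the springs must deliver.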
== System architecture ==
Typically a vehicle designer is operating within a set of constraints. The suspension architecture selected for each end of the vehicle will have to obey those constraints. For both ends of the car this would include the type of spring, location of the spring, and location of the shock absorbers.
For the front suspension the following need to be considered
The type of suspension (MacPherson strut or double wishbone suspension)
Type of steering actuator (rack and pinion or recirculating ball)
Location of the steering actuator in front of, or behind, the wheel centre
For the rear suspension there are many more possible suspension types, in practice.
== Hardpoints ==
The hardpoints control the static settings and the kinematics of the suspension.
The static settings are
Toe
Camber
Caster
Roll center height at design load
Mechanical (or caster) trail
Anti-dive and anti-squat
Kingpin Inclination
Scrub radius
Spring and shock absorber motion ratios
The kinematics describe how important characteristics change as the suspension moves, typically in roll or steer. They include
Bump Steer
Roll Steer
Tractive Force Steer
Brake Force Steer
Camber gain in roll
Caster gain in roll
Roll centre height gain
Ackermann change with steering angle
Track gain in roll
The analysis for these parameters can be done graphically, or by CAD, or by the use of kinematics software.
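The spring and shock absorber motion ratios listed above tie component rates to the rates seen at the wheel: the wheel rate is approximately the spring rate times the motion ratio squared, because the ratio scales both force and displacement. A minimal sketch (values are illustrative):

```python
def wheel_rate(spring_rate, motion_ratio):
    """Wheel rate from spring rate and motion ratio (spring travel / wheel travel).
    The ratio enters squared: once for force, once for displacement."""
    return spring_rate * motion_ratio ** 2

# A 30 N/mm spring on an arm with a 0.7 motion ratio:
print(round(wheel_rate(30.0, 0.7), 2))  # 14.7 N/mm seen at the wheel
```

This is why a seemingly stiff spring can produce a soft suspension when the hardpoints place it inboard on the control arm.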
== Compliance analysis ==
The compliance of the bushings, the body, and other parts modifies the behaviour of the suspension. In general it is difficult to improve the kinematics of a suspension using the bushings, but one example where it does work is the toe-control bush used in twist-beam rear suspensions. More generally, modern car suspensions include a noise, vibration, and harshness (NVH) bush. This is designed as the main path for the vibrations and forces that cause road noise and impact noise, and is supposed to be tunable without affecting the kinematics too much.
In racing cars, bushings tend to be made of harder materials, such as brass or Delrin, for more precise handling.
In passenger cars, bushings tend to be made of softer materials for added comfort.
In general physical terms, the mass and mechanical hysteresis (damping effect) of solid parts should be accounted for in a dynamic analysis, as well as their elasticity.
== Loads ==
Once the basic geometry is established the loads in each suspension part can be estimated. This can be as simple as deciding what a likely maximum load case is at the contact patch, and then drawing a Free body diagram of each part to work out the forces, or as complex as simulating the behaviour of the suspension over a rough road, and calculating the loads caused. Often loads that have been measured on a similar suspension are used instead - this is the most reliable method.
== Detailed design of arms ==
The loads and geometry are then used to design the arms and spindle. Inevitably some problems will be found in the course of this that force compromises to be made with the basic geometry of the suspension.
== References ==
=== Notes ===
=== Sources ===
The Automotive Chassis Engineering Principles - J. Reimpell H. Stoll J. W. Betzler. - ISBN 978-0-7680-0657-5
Race Car Vehicle Dynamics - William F. Milliken and Douglas L. Milliken.
Fundamentals of Vehicle Dynamics - Thomas Gillespie.
Chassis Design - Principles and Analysis - William F. Milliken and Douglas L. Milliken.
Simulation and direct equations:
Abramov, S., Mannan, S., & Durieux, O. (2009)'Semi-Active Suspension System Simulation Using SIMULINK'. International Journal
of Engineering Systems Modelling and Simulation, 1(2/3), 101-114. http://collections.crest.ac.uk/232/1/fulltext.pdf
Motion graphic design, also known as motion design, is a subset of graphic design which combines design with motion graphics and video production. Examples include kinetic typography and graphics used in film and television opening sequences, and station identification logos of some television channels.
Both design principles and animation principles are important for good motion design.
Some motion designers start out as traditional graphic designers and later incorporate motion into their skillsets, while others have come from filmmaking, editing, or animation backgrounds, as these fields share a number of overlapping skills.
== Technology ==
Technological advancements during the 20th and 21st centuries have greatly impacted the field; chief among these are improvements in modern computing technology, as computer programs for the film and video industries became more powerful and more widely available during this period. Modern motion graphic design typically involves any of several computerized tools and processes.
Adobe After Effects is one of the leading computer programs used by modern motion graphic designers. It allows users to create and modify graphics over time.
3D software such as Cinema 4D and Blender are part of many modern motion designers' toolkits.
Adobe Animate, formerly known as Flash, is a tool for 2D motion graphic design. Prior to the rise of HTML5, it was the primary tool for web animation. It has also been used for creating video animations, such as the web series Homestar Runner. It is still used by some motion designers, particularly for frame-by-frame, or "cel" animation.
Adobe Premiere Pro is often used with After Effects when combining video footage with motion graphics.
Prior to animation, motion designers use design tools such as Adobe Photoshop for rasterized graphics, and Adobe Illustrator for vector art. Photoshop can also be used for cel animation.
Motion by Apple Inc., now a part of Final Cut Studio, is another tool for motion graphics.
== Types of motion graphics ==
Motion graphic design is often used in the film industry. Openings to movies, television shows, and news programs often use photography, typography and motion graphics to create visually appealing imagery. Motion graphic design has also achieved widespread use in content marketing and advertising.
In 2018, Cisco projected that 82% of all web traffic would be video by 2022. Marketers and advertisers have focused much of their efforts on the production of high-quality branded video and motion graphic content.
In addition to its myriad of uses in advertising, marketing, and branding, motion graphics are used in software, UI design, video game development, and other fields. Although motion design and animation share many commonalities, the difference between them lies in the fact that animation as a specific art form focuses more on cinematic effects and storytelling techniques to craft a narrative, whereas motion design is typically associated with setting abstract objects, text and other graphic design elements in motion. Bringing a graph, infographic or web design to life using movement is broadly speaking "animation", but more specifically, it's a type of animation that's called motion graphics.
Motion graphics take a variety of forms. While some are entirely animated, others incorporate live-action video and/or photography. The latter may include animation overlay, such as data visualizations, icons, illustrations, and explanatory text used to complement and enhance audiences' understanding of the content.
In content marketing contexts, there are three primary types of motion graphics which marketers choose to use depending on the goals they wish to achieve with the motion graphic. Explainer motion graphics seek to elucidate a product, process, or concept. Emotive motion graphics, meanwhile, aim to inspire a particular emotional response in audiences. And finally, promotional motion graphics are used to raise awareness about a service, product, or initiative. Because so many motion graphics are designed with particular goals in mind, it is often essential to partner with a designer or organization specializing in visual communication design to achieve a final product that conveys information in both an accurate and compelling way.
== UX and motion design ==
UX, also known as user experience, works hand in hand with motion design. For example, when designing a phone app, motion design is used to improve user experience.
Motion design can markedly improve the user experience by adding animation to a screen. It is not limited to phone apps; it is also used on computers, tablets, smartphones, televisions, and many other devices. UX designers use motion design when building prototypes, and test with it to determine whether a design is easy for an average person to use or needs enhancement.
== Jobs and salaries ==
There are a variety of career opportunities for motion designers, including animation, art direction, design, concept art, compositing, creative direction, editing, illustration, producing, and storyboarding. Some motion designers take on a range of these responsibilities, while others prefer to specialize.
Motion designers can work on a range of projects, including advertisements, branding/identity, video games, UX / UI, AR / VR and film.
In the United States, the average motion designer income was $87,900 (USD) per year in 2019.
== Motion design skills ==
Skills in typography are critical to motion designers, as videos, cartoons and advertisements often include text. A good motion designer knows how to use type styles, sizes, and timing to use text to attract audiences.
Knowledge of color theory is also very important for motion designers. They must have a good understanding of the color circle, complementary colors, and color saturations. The use of color is extremely helpful in communicating moods, effects and emotions.
Motion designers must also have software experience. Some of the software includes Adobe Photoshop, Adobe Illustrator, Adobe After Effects, Adobe Premiere Pro and Adobe Substance.
Other important motion-design skills are attention to detail; and good timing sense, for things such as matching video to audio.
== History ==
Motion design began as early as the 1800s, when early animation devices such as flip-books were invented.
There were no official founders of this art form; however, Saul Bass, Pablo Ferro, and John Whitney are among the earliest well-known motion designers.
John Whitney was one of the pioneers of computer-generated motion design. In 1960, he coined the term "motion graphics" with the foundation of his company, Motion Graphics Incorporated. He invented his own mechanical analog computer to design motion graphics for television commercials and movie title sequences.
Whitney collaborated with Saul Bass to animate one of his most famous pieces, the title sequence for Alfred Hitchcock's 1958 film Vertigo, which featured swirling graphics increasing in size.
== Professional education ==
A degree in motion design can help an aspiring designer build a foundation for a career, by developing their skills in design, animation and conceptual thinking. In the United States, college-level bachelor's degree programs can cost around $200,000.
Since the 2010s, online learning options for motion design have become more prevalent, with resources like School of Motion, Video Copilot, Greyscalegorilla, and an abundance of YouTube tutorials from channels like Ben Marriott, EC Abrams, Eyedesyn and Mt. Mograph.
Online communities like Creative COW allow motion designers to get advice and technical assistance from more experienced designers.
== See also ==
Animation
Film title design
Motion graphics
Web design
Web television
User experience
Graphic design
Video editing
Adobe software
== References ==
Willenskomer, Issara (20 June 2019). "Motion design in digital products: a white paper". UX in Motion, Medium.
Williams, Richard (7 January 2002). The Animator's Survival Kit: A Manual of Methods, Principles and Formulas for Classical, Computer, Games, Stop Motion and Internet Animators. ISBN 0571202284.
"The History of Motion Graphics". Triplet 3D Blog. 3 July 2015.
Norman, Donald A. (2013). The Design of Everyday Things (PDF). Basic Books.
Blauvelt, Andrew, et al. Graphic Design: Now in Production. 1st ed., Walker Art Center, 2011.
Krasner, Jon S. Motion Graphic Design: Applied History and Aesthetics. 2nd ed., Focal Press, 2008. doi:10.4324/9780080887326.
== External links ==
Krasner, Jon. "Chapter 3". Motion Graphic Design: Applied History and Aesthetics. CRC Press, 2017.
Sonata was a 3D building design software application developed in the early 1980s and now regarded as the forerunner of today's building information modeling applications.
Sonata was commercially released in 1986, having been developed by Jonathan Ingram independently and was sold to T2 Solutions (renamed from GMW Computers in 1987 - which was eventually bought by Alias|Wavefront), and was sold as a successor to GMW's RUCAPS. It ran on workstation computer hardware (by contrast, other 2D computer-aided design (CAD) systems could run on personal computers). The system was not expensive, according to Michael Phiri. Reiach Hall purchased "three Sonata workstations on Silicon Graphics machines, at a total cost of approximately £2000 each" [1990 prices]. Approximately 1,000 seats were sold between 1985 and 1992. However, as a BIM application, in addition to geometric modelling, it could model complete buildings, including complex parametrics, costs and staging of the construction process.
Archicad founder Gábor Bojár has acknowledged that Sonata "was more advanced in 1986 than Archicad at that time", adding that it "surpassed already the matured definition of 'BIM' specified only about one and a half decade later".
Many projects were designed and built using Sonata, including Peddle Thorp Architect's Rod Laver Arena in 1987, and Gatwick Airport North Terminal Domestic Facility by Taylor Woodrow. The US-based architect HKS used the software in 1992 to design a horse racing facility (Lone Star Park in Grand Prairie, Texas) and subsequently purchased the successor product, Reflex.
Target Australia Pty. Ltd., the Australian discount department store retailer, bought two Sonata licences in 1992 to replace two RUCAPS workstations originally from Coles Supermarkets. The software ran on two Silicon Graphics IRIS Indigo workstations, and staff were trained to use it, including its parametric language. The simple but powerful parametrics enabled productivity gains in documenting buildings and fixture layouts, and the object-oriented system suited the standard components installed by the retailer. Combined with multiple project access (MPA) networking on the Unix operating system platform, a key criterion for continuing with the RUCAPS-Sonata architecture, this enabled the retailer's 50-stores-in-5-years program of the late 1990s to be executed by a small team. More workstations were purchased, including Silicon Graphics IRIS Indigo and Personal IRIS machines from the Queensland University of Technology, and Year 2000 funding enabled the purchase of eight Silicon Graphics O2 workstations, bringing the network to 11 workstations. The department continued to follow the development of Reflex and had contact with other users, including Jeff Findlay at Peddle Thorp Architects. PTC's change of business direction away from building toward mechanical engineering systems, combined with Silicon Graphics' move to the Intel x86 architecture, led Target to change to the most similar CAD software, Graphisoft's Archicad.
The Sonata business was founded in 1984 and, by one account it "disappeared in a mysterious, corporate black hole, somewhere in eastern Canada in 1992," after new owner Alias Research discontinued marketing of the product. Ingram then went on to develop Reflex, bought out by Parametric Technology Corporation (PTC) in 1996.
== References ==
Healthy community design is planning and designing communities that make it easier for people to live healthy lives. Healthy community design offers important benefits:
Decreases dependence on the automobile by building homes, businesses, schools, churches and parks closer to each other so that people can more easily walk or bike between them.
Provides opportunities for people to be physically active and socially engaged as part of their daily routine, improving the physical and mental health of its citizens.
Allows persons, if they choose, to age in place and remain all their lives in a community that reflects their changing lifestyles and changing physical capabilities.
== Health benefits ==
Healthy places are those designed and built to improve the quality of life for all people who live, work, learn, and play within their borders, where each person is free to make choices amid a variety of healthy, available, accessible, and affordable options.
Healthy community design can provide many advantages:
Promote physical activity
Promote a diet free of additives, preservatives, and pesticides
Improve air quality
Lower risk of injuries
Increase social connection and sense of community
Reduce contributions to climate change
== Principles ==
Encourage mixed land use and greater land density to shorten distances between homes, workplaces, schools and recreation so people can walk or bike more easily to them.
Provide good mass transit to reduce the dependence upon automobiles. Build good pedestrian and bicycle infrastructure, including sidewalks and bike paths that are safely removed from automobile traffic as well as good right of way laws and clear, easy-to-follow signage.
Ensure affordable housing is available for people of all income levels.
Create community centers where people can gather and mingle as part of their daily activities.
Offer access to green space and parks.
== See also ==
== References ==
== External links ==
Healthy Community Design Initiative (Centers for Disease Control and Prevention)
Active Living by Design
Healthy Communities by Design
Project for Livable Communities
LEED for Neighborhood Development
Congress for New Urbanism
Film title design is a term describing the craft and design of motion picture title sequences. Since the beginning of the film form, titles have been an essential part of any motion picture. Originally a motionless piece of artwork called title art, the title sequence slowly evolved into an art form of its own.
== History ==
In the beginning, main title design consisted of the movie studio's name and/or logo and the presentation of the main characters along with the actors' names, generally using that same artwork presented on title cards. Most independent and major studios had their own title-art logo used as the background for their screen credits, and they used it almost exclusively on every movie they produced.
Then, early in the 1930s, the more progressive motion picture studios started to change their approach in presenting their screen credits. The major studios took on the challenge of improving the way they introduced their movies. They made the decision to present a more complete list of credits to go with a higher quality of artwork to be used in their screen credits.
Animated title design of this kind first appeared in 1955 in Otto Preminger's The Man with the Golden Arm, whose theme was introduced with many moving white lines and a white hand reaching into frame, providing small clues to the story.
The 1960s were when interest in title design really began to grow. Big studios were losing audiences to television and needed ways to bring people back to the theater. With studios ready and willing to invest more money in every part of their films, title design became a major point of interest. Soon a new generation of designers began to catch the attention of directors such as Alfred Hitchcock, Otto Preminger, and Stanley Donen.
In the 1970s, the impact of computer-aided title design began to rise. The application of new technology and software made experimentation easier and faster, further pushing the boundaries of what designers were capable of, including combinations of animation, cinematography, graphics, special effects, and typography.
A main title designer is the designer of the movie title. The manner in which the title of a movie is displayed on screen is widely considered an art form, and has often been classified as motion graphics, title design, title sequences, and animated credits. The title sequence is often presented through animated visuals and kinetic type while the credits are introduced on screen. The Morrison Studio, led by title designers Richard Morrison and Dean Wares, is a leading title sequence company in both film and TV, with examples of title design from films such as Tim Burton's Batman (1989) and Sweeney Todd (2007) through to Creation Stories (2021).
From the mid-1930s through the late-1940s the major film studios led the way in Film Title Art by employing artists like Al Hirschfeld, George Petty, Ted Ireland (Vencentini), William Galraith Crawford, Symeon Shimin, and Jacques Kapralik.
Quality artists met this challenge by designing their artwork to "set a mood" and "capture the audience" before the movie started. An overall 10% jump in box-office receipts was proof that this was a profitable improvement to the introduction of their motion pictures.
Pacific Title & Art Studio was an American company founded in Hollywood in 1919 by Leon Schlesinger. Originally it produced title cards for silent films, but it later moved into film title design. One of its artists, Wayne Fitzgerald, was encouraged by Warren Beatty to design titles on his own. Phill Norman was a contemporary American film title designer of the same era.
One famous example of the form is the work of Saul Bass in the 1950s and 1960s. His modish title sequences for the films of Alfred Hitchcock were key in setting the style and mood of the movie even before the action began, and contributed to Hitchcock's "house style" that was a key element in his approach to marketing. Another well known designer is Maurice Binder, who designed the often erotic titles for most of the James Bond films from the 1960s to the 1980s; Robert Brownjohn designed two of the films. After Binder's death, Daniel Kleinman has done several of the titles.
However, the leader in the industry in the 1990s - 2000 was Cinema Research Corporation, with over 400 movie titles to its credit in that time period alone, and almost 700 titles in total from the 1950s to 2000.
Modern technology has enabled a much more fantastical way of presenting titles, through programs such as Adobe After Effects and Maxon Cinema 4D. Although a form of editing, title design is considered a different role and art form from that of the traditional film editor.
== Further reading ==
Art of the Title
== References ==
== External links ==
The Morrison Studio – Title sequence company, led by Richard Morrison and Dean Wares
Yu, Li (2008). Typography in film title sequence design (MFA thesis). Iowa State University. Retrieved 8 February 2013. | Wikipedia/Film_title_design |
User-centered design (UCD) or user-driven development (UDD) is a framework of processes in which usability goals, user characteristics, environment, tasks and workflow of a product, service or brand are given extensive attention at each stage of the design process. This attention includes testing which is conducted during each stage of design and development from the envisioned requirements, through pre-production models to post production.
Testing is beneficial as it is often difficult for the designers of a product to understand the experiences of first-time users and each user's learning curve. UCD is based on the understanding of a user, their demands, priorities and experiences, and can lead to increased product usefulness and usability. UCD applies cognitive science principles to create intuitive, efficient products by understanding users' mental processes, behaviors, and needs.
UCD differs from other product design philosophies in that it tries to optimize the product around how users engage with the product, in order that users are not forced to change their behavior and expectations to accommodate the product. The users are at the focus, followed by the product's context, objectives and operating environment, and then the granular details of task development, organization, and flow.
== History ==
The term user-centered design (UCD) was coined by Rob Kling in 1977 and later adopted in Donald A. Norman's research laboratory at the University of California, San Diego. The concept became popular as a result of Norman's 1986 book User-Centered System Design: New Perspectives on Human-Computer Interaction, and it gained further attention and acceptance with Norman's 1988 book The Design of Everyday Things, in which Norman describes the psychology behind what he deems 'good' and 'bad' design through examples. He emphasizes the importance of design in our everyday lives and the consequences of errors caused by bad designs.
Norman describes principles for building well-designed products. His recommendations are based on the user's needs, leaving aside what he considers secondary issues like aesthetics. The main highlights of these are:
Simplifying the structure of the tasks such that the possible actions at any moment are intuitive.
Making things visible, including the conceptual model of the system, actions, results of actions and feedback.
Achieving correct mappings between intended results and required actions.
Embracing and exploiting the constraints of systems.
In a later book, Emotional Design, Norman returns to some of his earlier ideas to elaborate on what he had come to find overly reductive.
== Models and approaches ==
The UCD process considers user requirements from the beginning and throughout the product cycle. Requirements are noted and refined through investigative methods including: ethnographic study, contextual inquiry, prototype testing, usability testing and other methods. Generative methods may also be used including: card sorting, affinity diagramming and participatory design sessions. In addition, user requirements can be inferred by careful analysis of usable products similar to the product being designed.
UCD takes inspiration from the following models:
Cooperative design (a.k.a. co-design) which involves designers and users on an equal footing. This is the Scandinavian tradition of design of IT artifacts and it has been evolving since 1970.
Participatory design (PD), a North American model inspired by cooperative design, with a focus on the participation of users. Biennial conferences on the topic have been held since 1990.
Contextual design (CD, a.k.a. customer-centered design) involves gathering data from actual customers in real-world situations and applying findings to the final design.
The following principles help in ensuring a design is user-centered:
Design is based upon an explicit understanding of users, tasks and environments.
Users are involved throughout design and development.
Design is driven and refined by user-centered evaluation.
Process is iterative (see below).
Design addresses the whole user experience.
Design team includes multidisciplinary skills and perspectives.
== User-centered design process ==
The goal of UCD is to make products with a high degree of usability (i.e., convenience of use, manageability, effectiveness, and meeting the user's requirements). The general phases of the UCD process are:
Specify context of use: Identify the primary users of the product and their reasons, requirements and environment for product use.
Specify requirements: Identify the detailed technical requirements of the product. This can aid designers in planning development and setting goals.
Create design solutions and development: Based on product goals and requirements, create an iterative cycle of product testing and refinement.
Evaluate product: Perform usability testing and collect user feedback at every design stage.
The above procedure is repeated to further refine the product. These phases are general approaches and factors such as design goals, team and their timeline, and environment in which the product is developed, determine the appropriate phases for a project and their order. Practical models include the waterfall model, agile model or any other software engineering practice.
== Analysis tools ==
There are a number of tools that are used in the analysis of UCD, mainly: personas, scenarios, and essential use cases.
=== Persona ===
During the UCD process, the design team may create a persona, an archetype representing a product user which helps guide decisions about product features, navigation, interactions, and aesthetics. In most cases, personas are synthesized from a series of ethnographic interviews with real people, then captured in one- or two-page descriptions that include behavior patterns, goals, skills, attitudes, and environment, and possibly fictional personal details to give it more character.
== See also ==
== References ==
== Further reading ==
ISO 13407:1999 Human-centred design processes for interactive systems
ISO 9241-210:2010 Ergonomics of human-system interaction -- Part 210: Human-centred design for interactive systems
Human Centered Design, IDEO’s David Kelley (video)
User Centered Design, Don Norman (video) | Wikipedia/User-centered_design |
Urban design is an approach to the design of buildings and the spaces between them that focuses on specific design processes and outcomes based on geographical location. In addition to designing and shaping the physical features of towns, cities, and regional spaces, urban design considers 'bigger picture' issues of economic, social and environmental value and social design. The scope of a project can range from a local street or public space to an entire city and surrounding areas. Urban designers connect the fields of architecture, landscape architecture and urban planning to better organize local and community environments dependent upon geographical location.
Important focuses of urban design include its historical impact, paradigm shifts, its interdisciplinary nature, and issues related to its practice.
== Theory ==
Urban design deals with the larger scale of groups of buildings, infrastructure, streets, and public spaces, entire neighbourhoods and districts, and entire cities, with the goal of making urban environments that are equitable, beautiful, performative, and sustainable.
Urban design is an interdisciplinary field that utilizes the procedures and elements of architecture and other related professions, including landscape design, urban planning, civil engineering, and municipal engineering, while extending to the spatial sciences. It borrows substantive and procedural knowledge from public administration, sociology, law, urban geography, urban economics and other related disciplines from the social and behavioral sciences, as well as from the natural sciences. In more recent times different sub-fields of urban design have emerged, such as strategic urban design, landscape urbanism, water-sensitive urban design, and sustainable urbanism. Urban design demands an understanding of a wide range of subjects from physical geography to social science, and an appreciation for disciplines such as real estate development, urban economics, political economy, and social theory.
Urban design theory deals primarily with the design and management of public space (i.e. the 'public environment', 'public realm' or 'public domain'), and the way public places are used and experienced. Public space includes the totality of spaces used freely on a day-to-day basis by the general public, such as streets, plazas, parks, and public infrastructure. Some aspects of privately owned spaces, such as building facades or domestic gardens, also contribute to public space and are therefore also considered by urban design theory. Important writers on urban design theory include Christopher Alexander, Peter Calthorpe, Gordon Cullen, Andrés Duany, Jane Jacobs, Jan Gehl, Allan B. Jacobs, Kevin Lynch, Aldo Rossi, Colin Rowe, Robert Venturi, William H. Whyte, Camillo Sitte, Bill Hillier (space syntax), and Elizabeth Plater-Zyberk.
== History ==
Although contemporary professional use of the term 'urban design' dates from the mid-20th century, urban design as such has been practiced throughout history. Ancient examples of carefully planned and designed cities exist in Asia, Africa, Europe, and the Americas, and are particularly well known within Classical Chinese, Roman, and Greek cultures. Specifically, Hippodamus of Miletus was a famous ancient Greek architect, urban planner, and all-around academic who is often considered a "father of European urban planning", and the namesake of the "Hippodamian plan", also known as the grid plan of city layout.
European Medieval cities are often, and often erroneously, regarded as exemplars of undesigned or 'organic' city development. There are many examples of considered urban design in the Middle Ages. In England, many of the towns listed in the 9th-century Burghal Hidage were designed on a grid, examples including Southampton, Wareham, Dorset and Wallingford, Oxfordshire, having been rapidly created to provide a defensive network against Danish invaders. 12th century western Europe brought renewed focus on urbanisation as a means of stimulating economic growth and generating revenue. The burgage system dating from that time and its associated burgage plots brought a form of self-organising design to medieval towns.
Throughout history, the design of streets and deliberate configuration of public spaces with buildings have reflected contemporaneous social norms or philosophical and religious beliefs. Yet the link between designed urban space and the human mind appears to be bidirectional. Indeed, the reverse impact of urban structure upon human behaviour and upon thought is evidenced by both observational study and historical records. There are clear indications of impact through Renaissance urban design on the thought of Johannes Kepler and Galileo Galilei. Already René Descartes in his Discourse on the Method had attested to the impact Renaissance planned new towns had upon his own thought, and much evidence exists that the Renaissance streetscape was also the perceptual stimulus that had led to the development of coordinate geometry.
=== Early modern era ===
The beginnings of modern urban design in Europe are associated with the Renaissance but, especially, with the Age of Enlightenment. Spanish colonial cities were often planned, as were some towns settled by other imperial cultures. These sometimes embodied utopian ambitions as well as aims for functionality and good governance, as with James Oglethorpe's plan for Savannah, Georgia. In the Baroque period the design approaches developed in French formal gardens such as Versailles were extended into urban development and redevelopment. In this period, when modern professional specializations did not exist, urban design was undertaken by people with skills in areas as diverse as sculpture, architecture, garden design, surveying, astronomy, and military engineering. In the 18th and 19th centuries, urban design was perhaps most closely linked with surveyors, engineers, and architects. The increase in urban populations brought with it problems of epidemic disease, the response to which was a focus on public health, the rise in the UK of municipal engineering, and the inclusion in British legislation of provisions such as minimum street widths in relation to the heights of buildings in order to ensure adequate light and ventilation.
Much of Frederick Law Olmsted's work was concerned with urban design, and the newly formed profession of landscape architecture also began to play a significant role in the late 19th century.
=== Modern urban design ===
In the 19th century, cities were industrializing and expanding at a tremendous rate. Private businesses largely dictated the pace and style of this development. The expansion created many hardships for the working poor and concern for public health increased. However, the laissez-faire style of government, in fashion for most of the Victorian era, was starting to give way to a New Liberalism. This gave more power to the public. The public wanted the government to provide citizens, especially factory workers, with healthier environments. Around 1900, modern urban design emerged from developing theories on how to mitigate the consequences of the industrial age.
The first modern urban planning theorist was Sir Ebenezer Howard. His ideas, although utopian, were adopted around the world because they were highly practical. He initiated the garden city movement in 1898.
His garden cities were intended to be planned, self-contained communities surrounded by parks. Howard wanted the cities to be proportional with separate areas of residences, industry, and agriculture. Inspired by the Utopian novel Looking Backward and Henry George's work Progress and Poverty, Howard published his book Garden Cities of To-morrow in 1898. His work is an important reference in the history of urban planning. He envisioned the self-sufficient garden city to house 32,000 people on a site of 6,000 acres (2,428 ha). He planned on a concentric pattern with open spaces, public parks, and six radial boulevards, 120 ft (37 m) wide, extending from the center. When it reached full population, Howard wanted another garden city to be developed nearby. He envisaged a cluster of several garden cities as satellites of a central city of 50,000 people, linked by road and rail. His model for a garden city was first created at Letchworth and Welwyn Garden City in Hertfordshire. Howard's movement was extended by Sir Frederic Osborn to regional planning.
==== 20th century ====
In the early 1900s, urban planning became professionalized. With input from utopian visionaries, civil engineers, and local councilors, new approaches to city design were developed for consideration by decision-makers such as elected officials. In 1899, the Town and Country Planning Association was founded. In 1909, the first academic course on urban planning was offered by the University of Liverpool. Urban planning was first officially embodied in the Housing and Town Planning Act of 1909, which, influenced by Howard's 'garden city', compelled local authorities to introduce a system in which all housing construction conformed to specific building standards. In the United Kingdom following this Act, surveyors, civil engineers, architects, and lawyers began working together within local authorities. In 1910, Thomas Adams became the first Town Planning Inspector at the Local Government Board and began meeting with practitioners. In 1914, The Town Planning Institute was established. The first urban planning course in America was not established until 1924, at Harvard University. Professionals developed schemes for the development of land, transforming town planning into a new area of expertise.
In the 20th century, urban planning was changed by the automobile industry. Car-oriented design impacted the rise of 'urban design'. City layouts now revolved around roadways and traffic patterns.
In June 1928, the International Congresses of Modern Architecture (CIAM) was founded at the Chateau de la Sarraz in Switzerland, by a group of 28 European architects organized by Le Corbusier, Hélène de Mandrot, and Sigfried Giedion. The CIAM was one of many 20th century manifestos meant to advance the cause of "architecture as a social art".
===== Postwar =====
Team X was a group of architects and other invited participants who assembled starting in July 1953 at the 9th Congress of the International Congresses of Modern Architecture (CIAM) and created a schism within CIAM by challenging its doctrinaire approach to urbanism.
In 1956, the term "Urban design" was first used at a series of conferences hosted by Harvard University. The event provided a platform for Harvard's Urban Design program. The program also utilized the writings of famous urban planning thinkers: Gordon Cullen, Jane Jacobs, Kevin Lynch, and Christopher Alexander.
In 1961, Gordon Cullen published The Concise Townscape. He examined the traditional artistic approach to city design of theorists including Camillo Sitte, Barry Parker, and Raymond Unwin. Cullen also created the concept of 'serial vision'. It defined the urban landscape as a series of related spaces.
Also in 1961, Jane Jacobs published The Death and Life of Great American Cities. She critiqued the modernism of CIAM (International Congresses of Modern Architecture). Jacobs also claimed crime rates in publicly owned spaces were rising because of the Modernist approach of 'city in the park'. She argued instead for an 'eyes on the street' approach to town planning through the resurrection of main public space precedents (e.g. streets, squares).
In the same year, Kevin Lynch published The Image of the City. He was seminal to urban design, particularly with regards to the concept of legibility. He reduced urban design theory to five basic elements: paths, districts, edges, nodes, landmarks. He also made the use of mental maps to understand the city popular, rather than the two-dimensional physical master plans of the previous 50 years.
Other notable works:
Architecture of the City by Aldo Rossi (1966)
Learning from Las Vegas by Robert Venturi and Denise Scott Brown (1972)
Collage City by Colin Rowe (1978)
The Next American Metropolis by Peter Calthorpe (1993)
The Social Logic of Space by Bill Hillier and Julienne Hanson (1984)
The popularity of these works resulted in terms that became everyday language in the field of urban planning. Aldo Rossi introduced 'historicism' and 'collective memory' to urban design. Rossi also proposed a 'collage metaphor' to understand the collection of new and old forms within the same urban space. Peter Calthorpe developed a manifesto for sustainable urban living via medium-density living. He also designed a manual for building new settlements in his concept of Transit Oriented Development (TOD). Bill Hillier and Julienne Hanson introduced Space Syntax to predict how movement patterns in cities would contribute to urban vitality, anti-social behaviour, and economic success. 'Sustainability', 'livability', and 'high quality of urban components' also became commonplace in the field.
==== Current trends ====
Today, urban design seeks to create sustainable urban environments with long-lasting structures, buildings, and overall livability. Walkable urbanism is another approach to practice that is defined within the Charter of New Urbanism. It aims to reduce environmental impacts by altering the built environment to create smart cities that support sustainable transport. Compact urban neighborhoods encourage residents to drive less. These neighborhoods have significantly lower environmental impacts when compared to sprawling suburbs. To prevent urban sprawl, Circular flow land use management was introduced in Europe to promote sustainable land use patterns.
As a result of the recent New Classical Architecture movement, sustainable construction aims to develop smart growth, walkability, architectural tradition, and classical design. It contrasts with modernist and globally uniform architecture. In the 1980s, urban design began to oppose the increasing solitary housing estates and suburban sprawl.
As a possible solution to urban sprawl, and with a view to making the urbanising process culturally, economically, and environmentally sustainable, Frank Reale has proposed the concept of Expanding Nodular Development (E.N.D.). It integrates many urban design and ecological principles to design and build smaller rural hubs with high-grade connecting freeways, rather than adding more expensive infrastructure, and the resulting congestion, to existing big cities.
==== Paradigm shifts ====
Throughout the young existence of the Urban Design discipline, many paradigm shifts have occurred that have affected the trajectory of the field regarding theory and practice. These paradigm shifts cover multiple subject areas outside of the traditional design disciplines.
Team 10 - The first major paradigm shift was the formation of Team 10 out of CIAM, the Congrès Internationaux d'Architecture Moderne. They believed that urban design should introduce ideas of 'Human Association', which pivots the design focus from the individual patron to the collective urban population.
The Brundtland Report and Silent Spring - Another paradigm shift was the publication of the Brundtland Report and the book Silent Spring by Rachel Carson. These writings introduced the idea that human settlements could have detrimental impacts on ecological processes, as well as human health, which spurred a new era of environmental awareness in the field.
The Planner's Triangle - The Planner's Triangle, created by Scott Campbell, emphasized three main conflicts in the planning process. This diagram exposed the complex relationships between Economic Development, Environmental Protection, and Equity and Social Justice. For the first time, the concept of Equity and Social Justice was considered as equally important as Economic Development and Environmental Protection within the design process.
Death of Modernism (Demolition of Pruitt Igoe) - Pruitt Igoe was a spatial symbol and representation of Modernist theory regarding social housing. In its failure and demolition, these theories were put into question and many within the design field considered the era of Modernism to be dead.
Neoliberalism & the election of Reagan - The election of President Reagan and the rise of Neoliberalism affected the Urban Design discipline because it shifted the planning process to emphasize capitalistic gains and spatial privatization. Inspired by the trickle-down approach of Reaganomics, it was believed that the benefits of a capitalist emphasis within design would positively impact everyone. Conversely, this led to exclusionary design practices and to what many consider as "the death of public space".
Right to the City - The spatial and political battle over citizens' rights to the city has been an ongoing one. David Harvey, along with Don Mitchell and Edward Soja, discussed rights to the city as a matter of critically rethinking how spatial matters have historically been determined. This change of thinking occurred in three forms: ontologically, sociologically, and through the combination of the two in a socio-spatial dialectic. Together, the aim shifted toward being able to measure what matters in a socio-spatial context.
Black Lives Matter (Ferguson) - The Black Lives Matter movement challenged design thinking because it emphasized the injustices and inequities suffered by people of color in urban space, as well as emphasized their right to public space without discrimination and brutality. It claims that minority groups lack certain spatial privileges and that this deficiency can result in matters of life and death. In order to reach an equitable state of urbanism, there needs to be equal identification of socio-economic lives within our urbanscapes.
== New approaches ==
There have been many different theories and approaches applied to the practice of urban design.
New Urbanism is an approach that began in the 1980s as a place-making initiative to combat suburban sprawl. Its goal is to increase density by creating compact and complete towns and neighborhoods. The 10 principles of new urbanism are walkability, connectivity, mixed-use and diversity, mixed housing, quality architecture and urban design, traditional neighborhood structure, increased density, smart transportation, sustainability, and quality of life. New urbanism and the developments that it has created are sources of debates within the discipline, primarily with the landscape urbanist approach but also due to its reproduction of idyllic architectural tropes that do not respond to the context. Andres Duany, Elizabeth Plater-Zyberk, Peter Calthorpe, and Jeff Speck are all strongly associated with New Urbanism and its evolution over the years.
Landscape Urbanism is a theory that first surfaced in the 1990s, arguing that the city is constructed of interconnected and ecologically rich horizontal field conditions, rather than the arrangement of objects and buildings. Charles Waldheim, Mohsen Mostafavi, James Corner, and Richard Weller are closely associated with this theory. Landscape urbanism theorises sites, territories, ecosystems, networks, and infrastructures through landscape practice according to Corner, while applying a dynamic concept to cities as ecosystems that grow, shrink or change phases of development according to Waldheim.
Everyday Urbanism is a concept introduced by Margaret Crawford and influenced by Henri Lefebvre that describes the everyday lived experience shared by urban residents, including commuting, working, relaxing, moving through city streets and sidewalks, shopping, buying and eating food, and running errands. Everyday urbanism is not concerned with aesthetic value. Instead, it introduces the idea of eliminating the distance between experts and ordinary users and forces designers and planners to contemplate a 'shift of power' and address social life from a direct and ordinary perspective.
Tactical Urbanism (also known as DIY Urbanism, Planning-by-Doing, Urban Acupuncture, or Urban Prototyping) is a city, organizational, or citizen-led approach to neighborhood-building that uses short-term, low-cost, and scalable interventions and policies to catalyze long term change.
Top-up Urbanism is the theory and implementation of two techniques in urban design: top-down and bottom-up. Top-down urbanism is when the design is implemented from the top of the hierarchy - normally the government or planning department. Bottom-up or grassroots urbanism begins with the people or the bottom of the hierarchy. Top-up means that both methods are used together to make a more participatory design, so it is sure to be comprehensive and well regarded in order to be as successful as possible.
Infrastructural Urbanism is the study of how the major investments that go into making infrastructural systems can be leveraged to be more sustainable for communities. Instead of the systems being solely about efficiency in both cost and production, infrastructural urbanism strives to utilize these investments to be more equitable for social and environmental issues as well. Linda Samuels is a designer investigating how to accomplish this change in infrastructure in what she calls "next-generation infrastructure" which is "multifunctional; public; visible; socially productive; locally specific, flexible, and adaptable; sensitive to the eco-economy; composed of design prototypes or demonstration projects; symbiotic; technologically smart; and developed collaboratively across disciplines and agencies".
Sustainable Urbanism is the study from the 1990s of how a community can be beneficial for the ecosystem, the people, and the economy for which it is associated. It is based on Scott Campbell's planner's triangle which tries to find the balance between economy, equity, and the environment. Its main concept is to try and make cities as self-sufficient as possible while not damaging the ecosystem around them, today with an increased focus on climate stability. A key designer working with sustainable urbanism is Douglas Farr.
Feminist Urbanism is the study and critique of how the built environment affects genders differently because of patriarchal social and political structures in society. Typically, the people at the table making design decisions are men, so their conception about public space and the built environment relates to their life perspectives and experiences, which do not reflect the same experiences of women or children. Dolores Hayden is a scholar who has researched this topic from 1980 to the present day. Hayden's writing says, “when women, men, and children of all classes and races can identify the public domain as the place where they feel most comfortable as citizens, Americans will finally have homelike urban space.”
Educational Urbanism is an emerging discipline, at the crossroads of urban planning, educational planning, and pedagogy. An approach that tackles the notion that economic activities, the need for new skills at the workplace, and the spatial configuration of the workplace rely on the spatial reorientation in the design of educational spaces and the urban dimension of educational planning.
Black Urbanism is an approach in which black communities are active creators, innovators, and authors of the process of designing and creating the neighborhoods and spaces of the metropolitan areas they have done so much to help revive over the past half-century. The goal is not to build black cities for black people but to explore and develop the creative energy that exists in so-called black areas: that has the potential to contribute to the sustainable development of the whole city.
=== Debates in urbanism ===
Underlying the practice of urban design are the many theories about how to best design the city. Each theory makes a unique claim about how to effectively design thriving, sustainable urban environments. Debates over the efficacy of these approaches fill the urban design discourse. Landscape Urbanism and New Urbanism are commonly debated as distinct approaches to addressing suburban sprawl. While Landscape Urbanism proposes landscape as the basic building block of the city and embraces horizontality, flexibility, and adaptability, New Urbanism offers the neighborhood as the basic building block of the city and argues for increased density, mixed uses, and walkability. Opponents of Landscape Urbanism point out that most of its projects are urban parks, and as such, its application is limited. Opponents of New Urbanism claim that its preoccupation with traditional neighborhood structures is nostalgic, unimaginative, and culturally problematic. Everyday Urbanism argues for grassroots neighborhood improvements rather than master-planned, top-down interventions. Each theory elevates the roles of certain professions in the urban design process, further fueling the debate. In practice, urban designers often apply principles from many urban design theories. Emerging from the conversation is a universal acknowledgement of the importance of increased interdisciplinary collaboration in designing the modern city.
== Urban design as an integrative profession ==
Urban designers work with architects, landscape architects, transportation engineers, urban planners, and industrial designers to reshape the city. Cooperation with public agencies, authorities and the interests of nearby property owners is necessary to manage public spaces. Users often compete over the spaces and negotiate across a variety of spheres. Input is frequently needed from a wide range of stakeholders. This can lead to different levels of participation as defined in Arnstein's Ladder of Citizen Participation.
While there are some professionals who identify themselves specifically as urban designers, a majority have backgrounds in urban planning, architecture, or landscape architecture. Many collegiate programs incorporate urban design theory and design subjects into their curricula. There is an increasing number of university programs offering degrees in urban design at the post-graduate level.
Urban design considers:
Pedestrian zones
Incorporation of nature within a city
Aesthetics
Urban structure – arrangement and relation of businesses and people
Urban typology, density, and sustainability - spatial types and morphologies related to the intensity of use, consumption of resources, production, and maintenance of viable communities
Accessibility – safe and easy transportation
Legibility and wayfinding – accessible information about travel and destinations
Animation – Designing places to stimulate public activity
Function and fit – places support their varied intended uses
Complementary mixed uses – Locating activities to allow constructive interaction between them
Character and meaning – Recognizing differences between places
Order and incident – Balancing consistency and variety in the urban environment
Continuity and change – Locating people in time and place, respecting heritage and contemporary culture
Civil society – people are free to interact as civic equals, important for building social capital
Participation/engagement – including people in the decision-making process can be done at many different scales.
=== Relationships with other related disciplines ===
Urban design was originally thought of as separate from architecture and urban planning, and it developed in part from the foundations of engineering. In Anglo-Saxon countries, it is often considered a branch of architecture, urban planning, and landscape architecture, limited to the construction of the urban physical environment. However, urban design also integrates social, cultural, economic, and political considerations. It focuses not only on space and groups of buildings but looks at the whole city from a broader, more holistic perspective to shape a better living environment. Compared to architecture, the spatial and temporal scales that urban design addresses are much larger: it deals with neighborhoods, communities, and even the entire city.
== Urban design education ==
The University of Liverpool's Department of Civic Design, founded in 1909, was the first urban design school in the world. Following the 1956 Urban Design conference, Harvard University established the first graduate program with urban design in its title, the Master of Architecture in Urban Design, although as a subject taught in universities its history in Europe is far older. Urban design programs explore the built environment from diverse disciplinary backgrounds and points of view, typically combining interdisciplinary studios, lecture courses, seminars, and independent study. Soon after, in 1961, Washington University in St. Louis founded its Master of Urban Design program. Today, more than twenty urban design programs exist in the United States:
Andrews University, Berrien Springs, MI
Clemson University - Charleston, SC
Columbia Graduate School of Architecture, Planning and Preservation - New York, NY
City College of New York - New York, NY
Estopinal College of Architecture and Planning at Ball State University - Muncie, IN
Georgia Institute of Technology College of Design - Atlanta, GA
Harvard Graduate School of Design - Cambridge, MA
Iowa State University - Ames, IA
New York Institute of Technology - New York, NY
Notre Dame School of Architecture - Notre Dame, IN
Pratt Institute - Brooklyn, NY
Sam Fox School of Design & Visual Arts at Washington University in St. Louis - St. Louis, MO
Savannah College of Art and Design - Savannah, GA
Stuart Weitzman School of Design at University of Pennsylvania - Philadelphia, PA
Taubman College of Architecture and Urban Planning at University of Michigan - Ann Arbor, MI
University of California, Berkeley - Berkeley, CA
University of Colorado Denver - Denver, CO
University of Maryland - College Park, MD
University of Miami - Miami, FL
University of Texas at Austin School of Architecture - Austin, TX
University of North Carolina at Charlotte - Charlotte, NC
In the United Kingdom, Master's programmes in urban design are offered at the University of Manchester, the University of Sheffield, Cardiff University, London South Bank University, and Queen's University Belfast, and in city design at the Royal College of Art.
== Issues ==
The field of urban design holds enormous potential for helping us address today's biggest challenges: an expanding population, mass urbanization, rising inequality, and climate change. In its practice as well as its theories, urban design attempts to tackle these pressing issues. As climate change progresses, urban design can mitigate the results of flooding, temperature changes, and increasingly detrimental storm impacts through a mindset of sustainability and resilience. In doing so, the urban design discipline attempts to create environments that are constructed with longevity in mind, such as zero-carbon cities. Cities today must be designed to minimize resource consumption, waste generation, and pollution while also withstanding the unprecedented impacts of climate change. To be truly resilient, our cities need to be able to not just bounce back from a catastrophic climate event but to bounce forward to an improved state.
Another issue in this field is the common assumption that there are no mothers of planning and urban design. This is not the case: many women have made proactive contributions to the field, including Mary Kingsbury Simkhovitch, Florence Kelley, and Lillian Wald, all of whom were prominent leaders in the City Social movement. The City Social was a movement that emerged between the commonly known City Practical and City Beautiful movements, and it was mainly concerned with economic and social inequalities in urban issues.
Justice is, and will always be, a key issue in urban design. As previously mentioned, past urban strategies have caused injustices within communities that cannot be remedied by simple means. As urban designers tackle the issue of justice, they must look at the injustices of the past and be careful not to overlook the nuances of race, place, and socioeconomic status in their design efforts. This includes ensuring reasonable access to basic services and transportation, and fighting against gentrification and the commodification of space for economic gain. Organizations such as the Divided Cities Initiatives at Washington University in St. Louis and the Just City Lab at Harvard work on promoting justice in urban design.
Until the 1970s, the design of towns and cities took little account of the needs of people with disabilities. At that time, disabled people began to form movements demanding recognition of their potential contribution if social obstacles were removed. Disabled people challenged the 'medical model' of disability, which saw physical and mental problems as an individual 'tragedy' and people with disabilities as 'brave' for enduring them. They proposed instead a 'social model', which said that barriers to disabled people result from the design of the built environment and the attitudes of able-bodied people. 'Access Groups' were established, composed of people with disabilities who audited their local areas, checked planning applications, and made representations for improvements. The new profession of 'access officer' was established around that time to produce guidelines based on the recommendations of access groups and to oversee adaptations to existing buildings as well as to check the accessibility of new proposals. Many local authorities now employ access officers, who are regulated by the Access Association. A new chapter of the Building Regulations (Part M) was introduced in 1992. Although it was beneficial to have legislation on this issue, the requirements were fairly minimal, though they continue to be improved with ongoing amendments. The Disability Discrimination Act 1995 continues to raise awareness and enforce action on disability issues in the urban environment.
The issue of walkability has gained prominence in recent years, driven not only by the aforementioned concerns about climate change but also by the health outcomes of residents. Car-centric urban design has a markedly negative effect on such outcomes. With proximity to internal combustion engines, residents tend to suffer from dangerous levels of air pollution, which leads to cardiovascular complications ranging from the acute (hypertension and alterations in heart rate) to the chronic (the outright development of atherosclerosis). More people die from air pollution each year than from car accidents. This issue has been used to fuel movements for alternative forms of long- to mid-range transportation, such as trains and bicycles, with walking as the primary means of short-range travel. The benefits come from two simultaneous avenues: the physical activity of walking, and reduced exposure to air pollutants (particulate matter and gases such as sulfur dioxide and nitrogen dioxide), both of which have been shown to alleviate and lower the risk of many maladies such as diabetes, hypertension, and cardiovascular disease. Physical activity levels from walking are closely related to the abundance of open public spaces, commercial shops, and greenery, among other amenities. These attributes have also been said to contribute to stronger social and emotional health, as open public spaces facilitate more social interaction within communities. The issue is most prevalent in the United States, where the rise of neoliberalism contributed to the entrenchment of car-centric infrastructure.
== See also ==
== References ==
== Further reading ==
Carmona, Matthew, Public Places Urban Spaces: The Dimensions of Urban Design, Routledge, London and New York, ISBN 9781138067783.
Carmona, Matthew, and Tiesdell, Steve, editors, Urban Design Reader, Architectural Press of Elsevier Press, Amsterdam and Boston, 2007, ISBN 0-7506-6531-9.
Larice, Michael, and MacDonald, Elizabeth, editors, The Urban Design Reader, Routledge, New York and London, 2007, ISBN 0-415-33386-5.
== External links ==
Cities of the Future: overview of important urban design elements
Book design is the graphic art of determining the visual and physical characteristics of a book. The design process begins after an author and editor finalize the manuscript, at which point it is passed to the production stage. During production, graphic artists, art directors, or professionals in similar roles will work with printing press operators to decide on visual elements—including typography, margins, illustrations, and page layout—and physical features, such as trim size, type of paper, kind of printing, binding.
From the late Middle Ages to the 21st century, the basic structure and organization of Western books have remained largely unchanged. Front matter introduces readers to the book, offering practical information like the title, author and publisher details, and an overview of the content. It may also include editorial or authorial notes providing context. This is followed by the main content of the book, often broadly organized into chapters or sections. The book concludes with back matter, which may include bibliographies, appendices, indexes, glossaries, or errata.
Effective book design is a critical part of publishing, helping to communicate an author’s message and satisfy readers and often having great influence on the commercial, scholarly, or artistic value of a work. Designers use established principles and rules developed in the centuries following the advent of printing.
Contemporary artists, designers, researchers, and artisans who have contributed to the many theories of typography and book design include Jan Tschichold, Josef Müller-Brockmann, Paul Rand, Johanna Drucker, Ellen Lupton, William Lidwell, and others.
== Structure ==
=== Front matter ===
Front matter is the initial section of a book, typically containing the fewest pages. Traditionally, front matter pages do not have a folio (the printed page number), unless it is a multi-paged piece of text such as a foreword, introduction, or preface. Front matter pages are numbered using lower-case Roman numerals. If there is no praise page, a book begins numbering with page i. This practice allows for additional content, like dedication pages or acknowledgments, to be inserted without affecting the numbering of the main text. Page numbers are usually omitted on blank and stand-alone display pages such as the half-title, frontispiece, title page, colophon, dedication, and epigraph. Additionally, page numbers may either be omitted or presented as a drop folio on the first page of each new front matter section, such as the table of contents, foreword, or preface. In multi-volume works, the front matter typically appears only in the first volume, although some elements like the table of contents or an index may be repeated in each volume.
=== Text ===
The structure of a work—and especially of its body matter—is often described hierarchically.
Volumes
A set of leaves bound together. Thus each work is either a volume, or is divided into volumes.
Books and parts
Single-volume works account for most of the non-academic consumer market in books. A single volume may embody either a part of a book or the whole of a book; in some works, parts encompass multiple books, while in others, books may consist of multiple parts.
Chapters and sections
A chapter or section may be contained within a part or a book. When both chapters and sections are used in the same work, the sections are more often contained within chapters than the reverse. Chapters and sections may have intertitles, also known as internal titles.
Modules and units
In some books the chapters are grouped into bigger parts, sometimes called modules. The numbering of the chapters can begin again at the start of every module. In educational books, especially, the chapters are often called units.
The first page of the actual text of a book is the opening page, which often incorporates special design features, such as initials. Arabic numbering starts at this first page. If the text is introduced by a second half title or opens with a part title, the half title or part title counts as page one. As in the front matter, page numbers are omitted on blank pages, and are either omitted or a drop folio is used on the opening page of each part and chapter. On pages containing only illustrations or tables, page numbers are usually omitted, except in the case of a long sequence of figures or tables.
The following are two instructive examples:
The Lord of the Rings has three parts (either in one volume each, or in a single volume), with each part containing two books, each containing, in turn, multiple chapters.
The Christian Bible (usually bound as a single volume) comprises two "testaments" (which might more typically be described as "parts", and differ in length by a factor of three or four), each containing dozens of books of varying lengths. In turn, each book (except for the shortest) contains multiple chapters, which are traditionally divided (for purposes of citation) into "verses" each containing roughly one independent clause.
=== Back matter (end matter) ===
The back matter, also known as end matter, if used, normally consists of one or more of the following components:
Arabic numbering continues for the back matter.
== Front cover, spine, and back cover ==
The front cover is the front of the book, and is marked appropriately by text or graphics in order to identify it as such (namely as the very beginning of the book). The front cover usually contains at least the title or author, with possibly an appropriate illustration. When the book has a soft or hard cover with dust jacket, the cover yields all or part of its informational function to the dust jacket.
On the inside of the cover, extending to the facing page, is the front endpaper, sometimes referred to as the FEP. The free half of the endpaper is called a flyleaf. Traditionally, in hand-bound books, the endpaper was just a sheet of blank or ornamented paper physically masking and reinforcing the connection between the cover and the body of the book. In modern publishing it can be either plain, as in many text-oriented books, or variously ornamented and illustrated in books such as picture books, other children's literature, some arts, craft, and hobbyist books, novelty/gift-market and coffee table books, and graphic novels. Elaborate artwork is more expensive than plain paper, but it may be used when expected for the genre, or for an anniversary edition or other special edition of a book in any genre. These books have an audience and traditions of their own, in which graphic design and immediacy are especially important and publishing tradition and formality are less important.
The spine is the vertical edge of a book as it normally stands on a bookshelf. Early books did not have titles on their spines; rather they were shelved flat with their spines inward and titles written with ink along their fore edges. Modern books display their titles on their spines.
In languages with Chinese-influenced writing systems, the title is written top-to-bottom. In languages written from left to right, the spine text can be pillar (one letter per line), transverse (text line perpendicular to the long edge of the spine), or along the spine. Conventions differ about the direction in which the title along the spine is rotated:
Top-to-bottom (descending):
In texts published or printed in the United States, the United Kingdom, the Commonwealth, Scandinavia, and the Netherlands, the spine text, when the book is standing upright, runs from the top to the bottom. This means that when the book is lying flat with the front cover upwards, the title is oriented left-to-right on the spine. This practice is reflected in the industry standards ANSI/NISO Z39.41 and ISO 6357, but "... lack of agreement in the matter persisted among English-speaking countries as late as the middle of the twentieth century, when books bound in Britain still tended to have their titles read up the spine ...".
In many continental European countries where the ascending system was used in the past, such as Italy, Russia, and Poland, the descending system has been adopted in recent decades, probably due to the influence of the English-speaking countries.
Bottom-to-top (ascending):
In many continental European and Latin American countries, the spine text, when the book is standing upright, runs from the bottom up, so the title can be read by tilting the head to the left. This allows the reader to read the spines of books shelved in alphabetical order in the usual way, left-to-right and top-to-bottom.
The spine usually contains all, or some, of four elements (besides decoration, if any), and in the following order: (1) author, editor, or compiler; (2) title; (3) publisher; and (4) publisher logo.
On the inside of the back cover page, extending from the facing page before it, is the endpaper. Its design matches the front endpaper and, in accordance with it, contains either plain paper or pattern, image etc.
The back cover often contains biographical matter about the author or editor, and quotes from other sources praising the book. It may also contain a summary or description of the book.
== Binding ==
Books are classified under two categories according to the physical nature of their binding. The designation hardcover (or hardback) refers to books with stiff covers, as opposed to flexible ones. The binding of a hardcover book usually includes boards (often made of paperboard) covered in cloth, leather, or other materials. Hard cover books are traditionally the most profitable. Expensive options, such as leather covers, are often available for deluxe editions of classic literature. The binding is usually sewn to the pages using string stitching.
A less expensive binding method is that used for paperback books (sometimes called softback or softcover). Most paperbacks are bound with paper or light cardboard, though other materials (such as plastic) are used. The covers are flexible and usually bound to the pages using glue (perfect binding). Some small paperback books are sub-classified as pocketbooks. These paperbacks are smaller than usual—small enough to barely fit into a pocket (especially the back pocket of one's trousers). However, this capacity to fit into a pocket diminishes with increasing number of pages and increasing thickness of the book. Such a book may still be designated as a pocketbook.
== Other features ==
Other design features may be added, especially for deluxe editions. Just as publishers sell hardcover and paperback editions for the same book, deluxe editions may be sold alongside regular editions. The additional features may require extra printing time, sometimes adding a week or two to the production timeline, and they are not necessarily more profitable. However, they can appeal strongly to an existing fanbase, and features of a book design that show well in video can help a book go viral.
Some books such as Bibles or dictionaries may have a thumb index to help find material quickly.
Gold leaf may also be applied to the edges of the pages, so that when closed, the side, top, and bottom of the book have a golden color. On some books, a design may be printed on the edges, or marbling or a simple colour applied. Some artist's books go even further, by using fore-edge painting. Features such as these colored page edges, or others such as using metallic foil elements, reversible dust jackets, or affixing a ribbon for a bookmark, are often seen in special editions or when the publisher wants to signal that the book is a collectible.
Pop-up elements and fold-out pages may be used to add dimensions to the page in different ways.
Children's books commonly incorporate a wide array of design features built into the fabric of the book. Some books for preschoolers include textured fabric, plastic, or other materials. Die-cut techniques in the work of Eric Carle are one example. Clear or reflective surfaces, flaps, textiles, and scratch-and-sniff are other possible features.
== Page spread ==
A basic unit in book design is the page spread. The left page and right page (called verso and recto respectively, in left-to-right language books) are of the same size and aspect ratio, and are centered on the gutter where they are bound together at the spine.
The design of each individual page, on the other hand, is governed by the canons of page construction.
The possible layout of text on a page is determined by the so-called print space, which is itself an element in the design of the book page. There must be sufficient space at the spine of the book if the text is to be visible. The other three margins of the page, which frame the text block, are sized for both practical and aesthetic reasons.
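One classic canon of page construction, the Van de Graaf canon, can be reduced to simple arithmetic: the inner and top margins are one ninth of the page width and height, the outer and bottom margins two ninths, and the resulting text block keeps the page's own proportions. The sketch below illustrates this; the 135 × 210 mm page size is an assumed example, not taken from the text above.

```python
def van_de_graaf(page_width, page_height):
    """Van de Graaf canon: inner/top margins are 1/9 of the page's
    width/height, outer/bottom margins are 2/9, leaving a text block
    of 6/9 x 6/9 -- the same proportions as the page itself."""
    inner, top = page_width / 9, page_height / 9
    outer, bottom = 2 * inner, 2 * top
    text_w = page_width - inner - outer
    text_h = page_height - top - bottom
    return {"inner": inner, "top": top, "outer": outer, "bottom": bottom,
            "text_block": (text_w, text_h)}

# An assumed 135 x 210 mm page (2:3) yields a 90 x 140 mm text block,
# which is again in the ratio 2:3.
margins = van_de_graaf(135, 210)
```

The same ninth-division arithmetic applies to any page proportion, which is why the construction was usable across the varied formats of medieval manuscripts.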
=== Print space ===
The print space or type area determines the effective area on the paper of a book, journal or other press work. The print space is limited by the surrounding borders, or in other words the gutters outside the printed area.
== See also ==
Galley proof
Imprint
Letterpress
Page numbering
Visual design
Recto and verso
Page (paper)
Other types of books
Interactive children's book
Interactive fiction
Pop-up book
== References ==
=== Citations ===
=== Sources ===
=== Further reading ===
== External links ==
Dutch Art Nouveau and Art Deco Book Design (archived 26 May 2013)
Binding design and paper conservation of antique books, albums and documents (archived 3 December 2013)
The Rollo Books by Jacob Abbott: an example of first edition designs
"Signs – Books – Networks" virtual exhibition of the German Museum of Books and Writing, i.a. with a thematic module on book design | Wikipedia/Book_design |
In safe-life design, products are intended to be removed from service at a specific design life.
Safe-life is particularly relevant to simple metal aircraft, where airframe components are subjected to alternating loads over the lifetime of the aircraft which makes them susceptible to metal fatigue. In certain areas such as in wing or tail components, structural failure in flight would be catastrophic.
The safe-life design technique is employed in critical systems which are either very difficult to repair or whose failure may cause severe damage to life and property. These systems are designed to work for years without requiring any repairs.
The disadvantage of the safe-life design philosophy is that serious assumptions must be made regarding the alternating loads imposed on the aircraft, so if those assumptions prove to be inaccurate, cracks may commence prior to the component being removed from service. To counter this disadvantage, alternative design philosophies like fail-safe design and fault-tolerant design were developed.
== The automotive industry ==
In the automotive industry, the safe-life approach is used to plan and predict the durability of mechanical components. The approach was established in the mid-1800s, when repetitive loading on mechanical structures intensified with the advent of the steam engine (Oja 2013). According to Michael Oja, “Engineers and academics began to understand the effect that cyclic stress (or strain) has on the life of a component; a curve was developed relating the magnitude of the cyclic stress (S) to the logarithm of the number of cycles to failure (N)” (Oja 2013). This S-N curve became the fundamental relation in safe-life design. The curve depends on many conditions, including the ratio of maximum load to minimum load (R-ratio), the type of material being inspected, and the frequency at which the cyclic stresses (or strains) are applied. Today, the curve is still derived by experimentally testing laboratory specimens at many constant cyclic load levels and recording the number of cycles to failure (Oja 2013). Michael Oja states that, “Unsurprisingly, as the load decreases, the life of the specimen increases” (Oja 2013). The practical limit on such experiments has been the frequency limits of hydraulic-powered test machines. The load at which this high-cycle life occurs has come to be known as the fatigue strength of the material (Oja 2013).
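The S-N relation described above is often summarized by an empirical power law (Basquin's relation), S = a·N^b with b negative. As an illustration, the sketch below inverts that relation to estimate cycles to failure; the coefficients a and b here are invented for the example, not properties of any real material.

```python
def cycles_to_failure(stress_amplitude, a=900.0, b=-0.1):
    """Invert Basquin's relation S = a * N**b to estimate fatigue life N.

    The coefficients a (stress at N = 1, in MPa) and b (Basquin exponent)
    are illustrative placeholders, not measured material data."""
    return (stress_amplitude / a) ** (1.0 / b)

# Lower stress amplitude -> longer life, matching the downward slope
# of the S-N curve: halving the stress here multiplies life by 2**10.
life_low = cycles_to_failure(300.0)   # ~5.9e4 cycles
life_high = cycles_to_failure(450.0)  # ~1.0e3 cycles
```

A safe-life limit would then be set well below the predicted N, with a scatter factor, since the paragraph above notes how sensitive the curve is to R-ratio, material, and loading frequency.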
== Aerospace ==
=== Aircraft structure ===
There are two generic types of aircraft structure: safe-life and fail-safe. The former has little residual strength if a primary load-bearing member fails, whereas the latter has alternative load paths, so that if a primary load-bearing member cracks, residual strength remains because the loads can be assumed by adjacent members. In modern aircraft, fail-safe structures with up to three alternative load paths are provided, but back in 1947 the main load-bearing structure was safe-life. This did not matter on an interim airframe designed for operations in the calm upper air, but at around 500 ft the loads and stresses were more volatile.
=== Helicopter structure ===
The safe-life design philosophy is applied to all helicopter structures. In the current generation of Army helicopters, such as the UH-60 Black Hawk, composite materials make up as much as 17 percent of the airframe and rotor weight (Reddick). Harold Reddick states that, “With the advent of major helicopter composite structures R&D projects, such as the Advanced Composite Airframe Program (ACAP), and Manufacturing Methods and Technology (MM&T) projects, such as UH-60 Low Cost Composite Blade Program, it is estimated that within a few years composite materials could be applied to as much as 80% of the airframe and rotor weight of a helicopter in a production program” (Reddick). Along with this application, it is essential that sound, definitive design criteria be developed so that composite structures have high fatigue lives, for economy of ownership, and good damage tolerance, for flight safety. Safe-life and damage-tolerant criteria are applied to all helicopter flight-critical components (Reddick).
== See also ==
Fail-safe
Fault-tolerant design
Safety engineering
Damage tolerance
1945 Australian National Airways Stinson crash
== References ==
=== Citations ===
Oja, Michael (2013-03-18). "Structural Design Concepts: Overview of Safe Life and Damage Tolerance". Vextec.com | Reducing Life Cycle Costs From Design To Field Service. Retrieved 2019-06-11.
"Fatigue (material)", Wikipedia, 2019-06-04, retrieved 2019-06-11
Reddick, Harold. "Safe-Life and Damage-Tolerant Design Approaches for Helicopter Structures" (PDF). NASA. Retrieved June 11, 2019.
== External links ==
The theory of constraints (TOC) is a management paradigm that views any manageable system as being limited in achieving more of its goals by a very small number of constraints. There is always at least one constraint, and TOC uses a focusing process to identify the constraint and restructure the rest of the organization around it. TOC adopts the common idiom "a chain is no stronger than its weakest link". That means that organizations and processes are vulnerable because the weakest person or part can always damage or break them, or at least adversely affect the outcome.
== History ==
The theory of constraints is an overall management philosophy, introduced by Eliyahu M. Goldratt in his 1984 book titled The Goal, that is geared to help organizations continually achieve their goals. Goldratt adapted the concept to project management with his book Critical Chain, published in 1997.
An earlier propagator of a similar concept was Wolfgang Mewes in Germany, with publications on power-oriented management theory (Machtorientierte Führungstheorie, 1963) followed by his Energo-Kybernetic System (EKS, 1971), later renamed Engpasskonzentrierte Strategie (bottleneck-focused strategy), a more advanced theory of bottlenecks. The publications of Wolfgang Mewes are marketed through the FAZ Verlag, publishing house of the German newspaper Frankfurter Allgemeine Zeitung. However, the term "theory of constraints" was first used by Goldratt.
=== Key assumption ===
The underlying premise of the theory of constraints is that organizations can be measured and controlled by variations on three measures: throughput, operational expense, and inventory. Inventory is all the money that the system has invested in purchasing things which it intends to sell. Operational expense is all the money the system spends in order to turn inventory into throughput. Throughput is the rate at which the system generates money through sales.
Before the goal itself can be reached, necessary conditions must first be met. These typically include safety, quality, legal obligations, etc. For most businesses, the goal itself is to make profit. However, for many organizations and non-profit businesses, making money is a necessary condition for pursuing the goal. Whether it is the goal or a necessary condition, understanding how to make sound financial decisions based on throughput, inventory, and operating expense is a critical requirement.
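The three measures combine into the standard throughput-accounting relations: net profit is throughput minus operating expense, and return on investment is net profit divided by inventory. A minimal sketch, with figures invented purely for illustration:

```python
def throughput_accounting(throughput, operating_expense, inventory):
    """Standard throughput-accounting relations from TOC.

    throughput        -- rate money is generated through sales
    operating_expense -- money spent turning inventory into throughput
    inventory         -- money invested in things intended for sale
    """
    net_profit = throughput - operating_expense
    roi = net_profit / inventory
    return net_profit, roi

# Illustrative plant: $500k generated through sales, $350k spent to
# generate it, $600k tied up in inventory.
net_profit, roi = throughput_accounting(500_000, 350_000, 600_000)
```

Under these relations, a decision that raises throughput improves both measures at once, which is why TOC treats the constraint on throughput as the leverage point.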
=== The five focusing steps ===
TOC is based on the premise that the rate of goal achievement by a goal-oriented system (i.e., the system's throughput) is limited by at least one constraint.
The argument by reductio ad absurdum is as follows: If there was nothing preventing a system from achieving higher throughput (i.e., more goal units in a unit of time), its throughput would be infinite – which is impossible in a real-life system.
Only by increasing flow through the constraint can overall throughput be increased.
Assuming the goal of a system has been articulated and its measurements defined, the steps are:
Identify the system's constraint(s).
Decide how to exploit the system's constraint(s).
Subordinate everything else to the above decision.
Elevate the system's constraint(s).
Warning! If in the previous steps a constraint has been broken, go back to step 1, but do not allow inertia to become the system's constraint.
The goal of a commercial organization is: "Make more money now and in the future", and its measurements are given by throughput accounting as: throughput, inventory, and operating expenses.
The five focusing steps aim to ensure ongoing improvement efforts are centered on the organization's constraint(s). In the TOC literature, this is referred to as the process of ongoing improvement (POOGI).
These focusing steps are the key steps to developing the specific applications mentioned below.
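The five focusing steps can be sketched as a simple improvement loop. The following Python fragment is illustrative only: the toy production line, its stage names, and all capacity figures are invented, and "exploit" and "elevate" are reduced to capacity changes for brevity.

```python
# Illustrative sketch (not from the TOC literature): the five focusing
# steps applied to a toy serial production line. Stage names and
# capacities (units/hour) are invented.

def find_constraint(capacities):
    """Step 1: identify the constraint, the stage with the least capacity."""
    return min(capacities, key=capacities.get)

def throughput(capacities):
    # A serial system can produce no faster than its constraint allows.
    return min(capacities.values())

line = {"cutting": 120, "welding": 45, "painting": 90, "packing": 150}

bottleneck = find_constraint(line)      # "welding" limits the line to 45/h

# Steps 2-3 (exploit and subordinate): squeeze more from the constraint
# without new investment, e.g. by eliminating idle time at the bottleneck.
line["welding"] *= 1.2                  # 45 -> 54 units/hour

# Step 4 (elevate): invest in additional capacity at the constraint.
line["welding"] = 100

# Step 5: the constraint is broken; return to step 1 rather than letting
# inertia keep the focus on the old constraint.
new_bottleneck = find_constraint(line)  # now "painting" at 90 units/hour
```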
=== Constraints ===
A constraint is anything that prevents the system from achieving its goal. There are many ways that constraints can show up, but a core principle within TOC is that there are not tens or hundreds of constraints. There is at least one, but at most only a few in any given system. Constraints can be internal or external to the system. An internal constraint is in evidence when the market demands more from the system than it can deliver. If this is the case, then the focus of the organization should be on discovering that constraint and following the five focusing steps to open it up (and potentially remove it). An external constraint exists when the system can produce more than the market will bear. If this is the case, then the organization should focus on mechanisms to create more demand for its products or services.
Types of (internal) constraints
Equipment: The way equipment is currently used limits the ability of the system to produce more salable goods/services.
People: Lack of skilled people limits the system. Mental models held by people can cause behaviour that becomes a constraint.
Policy: A written or unwritten policy prevents the system from making more.
The concept of the constraint in Theory of Constraints is analogous to but differs from the constraint that shows up in mathematical optimization. In TOC, the constraint is used as a focusing mechanism for management of the system. In optimization, the constraint is written into the mathematical expressions to limit the scope of the solution (X can be no greater than 5).
Note that organizations have many problems with equipment, people, policies, etc. (a breakdown is just that, a breakdown, and is not a constraint in the true sense of the TOC concept). The constraint is the limiting factor that prevents the organization from getting more throughput (typically, revenue through sales) even when nothing goes wrong.
=== Breaking a constraint ===
If a constraint's throughput capacity is elevated to the point where it is no longer the system's limiting factor, this is said to "break" the constraint. The limiting factor is now some other part of the system, or may be external to the system (an external constraint). This is not to be confused with a breakdown.
=== Buffers ===
Buffers are used throughout the theory of constraints. They often result as part of the exploit and subordinate steps of the five focusing steps. Buffers are placed before the governing constraint, thus ensuring that the constraint is never starved. Buffers are also placed behind the constraint to prevent downstream failure from blocking the constraint's output. Buffers used in this way protect the constraint from variations in the rest of the system and should allow for normal variation of processing time and the occasional upset (Murphy) before and behind the constraint.
Buffers can be a bank of physical objects before a work center, waiting to be processed by that work center. Buffers ultimately buy time, the time before work reaches the constraint, and are therefore often expressed as time buffers. There should always be enough (but not excessive) work in the time queue before the constraint and adequate offloading space behind it.
Buffers are not the small queue of work that sits before every work center in a kanban system, although it is similar if you regard the assembly line as the governing constraint. A prerequisite of the theory is that, with one constraint in the system, all other parts of the system must have sufficient capacity to keep up with the work at the constraint and to catch up if time is lost. In a balanced line, as espoused by kanban, when one work center goes down for a period longer than the buffer allows, the entire system must wait until that work center is restored. In a TOC system, the only situation where work is in danger is if the constraint is unable to process (either due to malfunction, sickness, or a "hole" in the buffer: something going wrong that the time buffer cannot protect against).
Buffer management, therefore, represents a crucial attribute of the theory of constraints. There are many ways to apply buffers, but the most often used is a visual system of designating the buffer in three colors: green (okay), yellow (caution) and red (action required). Creating this kind of visibility enables the system as a whole to align and thus subordinate to the need of the constraint in a holistic manner. This can also be done daily in a central operations room that is accessible to everybody.
=== Plant types ===
There are four primary types of plants in the TOC lexicon. Draw the flow of material from the bottom of a page to the top, and you get the four types. They specify the general flow of materials through a system, and also provide some hints about where to look for typical problems. This type of analysis is known as VATI analysis as it uses the bottom-up shapes of the letters V, A, T, and I to describe the types of plants. The four types can be combined in many ways in larger facilities, e.g. "an A plant feeding a V plant".
V-plant: The general flow of material is one-to-many, such as a plant that takes one raw material and can make many final products. Classic examples are meat rendering plants or a steel manufacturer. The primary problem in V-plants is "robbing," where one operation (A) immediately after a diverging point "steals" materials meant for the other operation (B). Once the material has been processed by A, it cannot come back and be run through B without significant rework.
A-plant: The general flow of material is many-to-one, such as in a plant where many sub-assemblies converge for a final assembly. The primary problem in A-plants is in synchronizing the converging lines so that each supplies the final assembly point at the right time.
T-plant: The general flow is that of an I-plant (or has multiple lines), which then splits into many assemblies (many-to-many). Most manufactured parts are used in multiple assemblies and nearly all assemblies use multiple parts. Customized devices, such as computers, are good examples. T-plants suffer from both synchronization problems of A-plants (parts aren't all available for an assembly) and the robbing problems of V-plants (one assembly steals parts that could have been used in another).
I-plant: Material flows in a sequence, such as in an assembly line. The primary work is done in a straight sequence of events (one-to-one). The constraint is the slowest operation.
From the above list, one can deduce that for non-material systems one could draw the flow of work or the flow of processes, instead of physical flows, and arrive at similar basic V, A, T, or I structures. A project, for example, is an A-shaped sequence of work, culminating in a delivered product (i.e., the intended outcome of the project).
== Applications ==
The focusing steps, the process of ongoing improvement, have been applied to manufacturing, project management, and supply chain/distribution, generating specific solutions. Other tools (mainly the "thinking processes") have also led to TOC applications in the fields of marketing and sales, and finance. The solutions as applied to each of these areas are listed below.
=== Operations ===
Within manufacturing operations and operations management, the solution seeks to pull materials through the system, rather than push them into the system. The primary methodology used is drum-buffer-rope (DBR) and a variation called simplified drum-buffer-rope (S-DBR).
Drum-buffer-rope is a manufacturing execution methodology based on the fact that the output of a system can only be the same as the output at the constraint of the system. Any attempt to produce more than what the constraint can process just leads to excess inventory piling up. The method is named for its three components. The drum is the rate at which the physical constraint of the plant can work: the work center, machine, or operation that limits the ability of the entire system to produce more. The rest of the plant follows the beat of the drum. The schedule at the drum decides what the system should produce, in what sequence, and how much. The rest of the system makes sure the drum always has work and that anything the drum has processed does not get wasted.
The buffer protects the drum, so that it always has work flowing to it. Buffers in DBR provide the additional lead time, beyond the required set-up and process times, for materials in the product flow. Since these buffers have time as their unit of measure, rather than quantity of material, the priority system operates strictly on the basis of the time an order is expected to be at the drum. Each work order has a remaining buffer status that can be calculated. Based on this status, work orders can be color-coded red, yellow, and green. Red orders have the highest priority and must be worked on first, since they have penetrated deepest into their buffers, followed by yellow and green. As time passes, this buffer status might change, and the color assigned to the particular work order changes with it.
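The buffer-status priority rule can be sketched as follows; the one-third thresholds and all order data below are assumptions for illustration, not figures from the TOC literature.

```python
# Hypothetical sketch of DBR buffer-status priorities. Buffer penetration
# (the consumed fraction of an order's time buffer) maps to a colour; the
# one-third thresholds and the order data below are invented.

def buffer_status(elapsed_hours, buffer_hours):
    penetration = elapsed_hours / buffer_hours
    if penetration >= 2 / 3:
        return "red"        # deepest into its buffer: work on first
    if penetration >= 1 / 3:
        return "yellow"
    return "green"

# Each order: (hours of buffer already consumed, total buffer hours).
orders = {"WO-1": (30, 40), "WO-2": (5, 40), "WO-3": (18, 40)}
statuses = {wo: buffer_status(e, b) for wo, (e, b) in orders.items()}

# Red before yellow before green: sort by penetration, deepest first.
priority = sorted(orders, key=lambda wo: orders[wo][0] / orders[wo][1],
                  reverse=True)
```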
Traditional DBR usually calls for buffers at several points in the system: the constraint, synchronization points and at shipping. S-DBR has a buffer at shipping and manages the flow of work across the drum through a load planning mechanism.
The rope is the work release mechanism for the plant. Orders are released to the shop floor one "buffer time" before they are due to be processed by the constraint. In other words, if the buffer is 5 days, the order is released 5 days before it is due at the constraint. Putting work into the system earlier than this buffer time is likely to generate excessive work-in-process and slow down the entire system.
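The rope mechanism amounts to a simple release-date calculation, sketched below using the five-day buffer from the example above:

```python
from datetime import date, timedelta

# The time buffer; five days matches the example in the text.
BUFFER = timedelta(days=5)

def release_date(due_at_constraint):
    # Releasing any earlier than one buffer time before the order is due
    # at the constraint only inflates work-in-process.
    return due_at_constraint - BUFFER

release = release_date(date(2024, 3, 20))   # released on 2024-03-15
```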
=== High-speed automated production lines ===
Automated production lines achieve high throughput rates and output quantities by deploying automation solutions that are highly task-specific. Depending on their design and construction, these machines operate at different speeds and capacities and therefore have varying efficiency levels.
A prominent example is the use of automated production lines in the beverage industry. Filling systems usually have several machines executing parts of the complete bottling process, from filling primary containers to secondary packaging and palletisation.
To be able to maximize the throughput, the production line usually has a designed constraint. This constraint is typically the slowest and often the most expensive machine on the line. The overall throughput of the line is determined by this machine. All other machines can operate faster and are connected by conveyors.
The conveyors usually have the ability to buffer product. In the event of a stoppage at a machine other than the constraint, the conveyor can buffer the product enabling the constraint machine to keep on running.
A typical line setup is such that in normal operation the upstream conveyors from the constraint machine are always run full to prevent starvation at the constraint and the downstream conveyors are run empty to prevent a back up at the constraint. The overall aim is to prevent minor stoppages at the machines from impacting the constraint.
For this reason, as machines get further from the constraint, they are able to run faster than the previous machine, and this creates a V curve.
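The V curve can be illustrated numerically. In this sketch the 10% speed increment per machine position and the line speeds are invented figures:

```python
# Hypothetical illustration of the V curve: rated speeds rise with distance
# from the constraint (here by 10% per position, an invented figure), so
# machines away from the constraint can recover lost time after stoppages.

def v_curve(constraint_speed, upstream, downstream, step=1.10):
    speeds = [round(constraint_speed * step ** i)
              for i in range(upstream, 0, -1)]        # fastest furthest away
    speeds.append(constraint_speed)                   # the constraint itself
    speeds += [round(constraint_speed * step ** i)
               for i in range(1, downstream + 1)]
    return speeds

line_speeds = v_curve(60000, upstream=2, downstream=2)  # bottles/hour
# [72600, 66000, 60000, 66000, 72600]: the slowest machine (e.g. the
# filler) sits at the bottom of the V.
```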
=== Supply chain and logistics ===
In general, the solution for supply chains is to create flow of inventory so as to ensure greater availability and to eliminate surpluses.
The TOC distribution solution is effective when used to address a single link in the supply chain and more so across the entire system, even if that system comprises many different companies. The purpose of the TOC distribution solution is to establish a competitive advantage based on extraordinary availability by reducing the damages caused when the flow of goods is interrupted by shortages and surpluses.
This approach uses several new rules to protect availability with less inventory than is conventionally required.
Inventory is held at an aggregation point(s) as close as possible to the source. This approach ensures smoothed demand at the aggregation point, requiring proportionally less inventory. The distribution centers holding the aggregated stock are able to ship goods downstream to the next link in the supply chain much more quickly than a make-to-order manufacturer can.
Following this rule may result in a make-to-order manufacturer converting to make-to-stock. The inventory added at the aggregation point is significantly less than the inventory reduction downstream.
In all stocking locations, initial inventory buffers are set which effectively create an upper limit on the inventory at that location. The buffer size is equal to the maximum expected consumption within the average replenishment time ("RT"), plus additional stock to protect in case a delivery is late. In other words, there is no advantage in holding more inventory in a location than the amount that might be consumed before more could be ordered and received. Typically, the sum of the on-hand value of such buffers is 25–75% less than currently observed average inventory levels.
Replenishment Time (RT) is the sum of the delay, after the first consumption following a delivery, before an order is placed plus the delay after the order is placed until the ordered goods arrive at the ordering location.
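The buffer-sizing rule above can be sketched as follows; the demand history, six-day replenishment time, and 50% late-delivery allowance are invented for illustration:

```python
# Sketch of initial buffer sizing: maximum expected consumption within the
# average replenishment time, plus protection against late deliveries.
# The demand history, six-day RT, and 50% allowance are invented.

def initial_buffer(daily_demand, replenishment_days, safety_factor=0.5):
    max_consumption = max(daily_demand) * replenishment_days
    return round(max_consumption * (1 + safety_factor))

buffer_size = initial_buffer(daily_demand=[8, 12, 9, 15, 11],
                             replenishment_days=6)
# 15 units/day * 6 days = 90, plus 50% protection -> 135 units
```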
Once buffers have been established, no replenishment orders are placed as long as the quantity inbound (already ordered but not yet received) plus the quantity on hand are equal to or greater than the buffer size. Following this rule causes surplus inventory to be bled off as it is consumed.
When, for any reason, on-hand plus inbound inventory is less than the buffer, orders are placed as soon as practical to increase the inbound inventory so that the relationship On Hand + Inbound = Buffer is maintained.
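The two replenishment rules above reduce to a one-line calculation, sketched here with invented quantities:

```python
# Sketch of the replenishment rules: order only when on-hand plus inbound
# falls below the buffer, and order exactly enough to restore
# On Hand + Inbound = Buffer. Quantities are invented.

def replenishment_order(buffer, on_hand, inbound):
    shortfall = buffer - (on_hand + inbound)
    return max(shortfall, 0)    # never order into surplus

order_a = replenishment_order(buffer=100, on_hand=40, inbound=35)  # 25
order_b = replenishment_order(buffer=100, on_hand=80, inbound=30)  # 0:
# the surplus simply bleeds off as it is consumed.
```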
To ensure buffers remain correctly sized even with changes in the rates of demand and replenishment, a simple recursive algorithm called Buffer Management is used. When the on hand inventory level is in the upper third of the buffer for a full RT, the buffer is reduced by one third (and don't forget rule 3). Alternatively, when the on hand inventory is in the bottom one third of the buffer for too long, the buffer is increased by one third (and don't forget rule 4). The definition of "too long" may be changed depending on required service levels, however, a rule of thumb is 20% of the RT. Moving buffers up more readily than down is supported by the usually greater damage caused by shortages as compared to the damage caused by surpluses.
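A minimal sketch of the Buffer Management algorithm, assuming one on-hand reading per day; the dwell-time checks and the 20%-of-RT rule of thumb follow the description above, while all quantities are invented:

```python
# Minimal sketch of Buffer Management, assuming one on-hand reading per
# day. A full RT in the top third shrinks the buffer by a third; "too
# long" (here 20% of RT) in the bottom third grows it by a third.

def adjust_buffer(buffer, on_hand_history, rt_days):
    top, bottom = 2 * buffer / 3, buffer / 3
    recent = on_hand_history[-rt_days:]
    if len(recent) == rt_days and all(x > top for x in recent):
        return round(buffer * 2 / 3)      # over-protected: shrink
    too_long = max(1, round(0.2 * rt_days))
    tail = on_hand_history[-too_long:]
    if len(tail) == too_long and all(x < bottom for x in tail):
        return round(buffer * 4 / 3)      # shortage risk: grow
    return buffer

shrunk = adjust_buffer(90, [70, 75, 80, 72, 78], rt_days=5)   # 60
grown = adjust_buffer(90, [40, 40, 40, 40, 20], rt_days=5)    # 120
```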
Once inventory is managed as described above, continuous efforts should be undertaken to reduce RT, late deliveries, supplier minimum order quantities (both per SKU and per order) and customer order batching. Any improvements in these areas will automatically improve both availability and inventory turns, thanks to the adaptive nature of Buffer Management.
A stocking location that manages inventory according to the TOC should help a non-TOC customer (downstream link in a supply chain, whether internal or external) manage their inventory according to the TOC process. This type of help can take the form of a vendor managed inventory (VMI). The TOC distribution link simply extends its buffer sizing and management techniques to its customers' inventories. Doing so has the effect of smoothing the demand from the customer and reducing order sizes per SKU. VMI results in better availability and inventory turns for both supplier and customer. The benefits to the non-TOC customers are sufficient to meet the purpose of capitalizing on the competitive edge by giving the customer a reason to be more loyal and give more business to the upstream link. When the end consumers buy more, the whole supply chain sells more.
One caveat should be considered. Initially and only temporarily, the supply chain or a specific link may sell less as the surplus inventory in the system is sold. However, the sales lift due to improved availability is a countervailing factor. The current levels of surpluses and shortages make each case different.
=== Finance and accounting ===
Holistic thinking applied to the finance application has been termed throughput accounting. Throughput accounting suggests that one examine the impact of investments and operational changes in terms of the impact on the throughput of the business. It is an alternative to cost accounting.
The primary measures for a TOC view of finance and accounting are: throughput, operating expense and investment. Throughput is calculated from sales minus "totally variable cost", where totally variable cost is usually calculated as the cost of raw materials that go into creating the item sold.
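The relationship between the three measures can be shown with a small worked example (all figures invented):

```python
# Worked example of the three TOC measures (all figures invented).
sales_price = 120.0           # per unit sold
totally_variable_cost = 45.0  # typically raw materials, per unit
units_sold = 1000
operating_expense = 50_000.0  # money spent turning inventory into throughput

throughput = (sales_price - totally_variable_cost) * units_sold  # 75000.0
net_profit = throughput - operating_expense                      # 25000.0
```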
=== Project management ===
Critical chain project management (CCPM) is utilized in this area. CCPM is based on the idea that all projects look like A-plants: all activities converge to a final deliverable. As such, to protect the project, there must be internal buffers to protect synchronization points and a final project buffer to protect the overall project.
=== Marketing and sales ===
While originally focused on manufacturing and logistics, TOC has expanded into sales management and marketing. Its role is explicitly acknowledged in the field of sales process engineering. For effective sales management, one can apply drum-buffer-rope to the sales process, similar to the way it is applied to operations (see the Reengineering the Sales Process book reference below). This technique is appropriate when the constraint is in the sales process itself, or when one simply wants an effective sales management technique that includes funnel management and conversion rates.
== Thinking processes ==
The thinking processes are a set of tools to help managers walk through the steps of initiating and implementing a project. When used in a logical flow, they help walk through a buy-in process:
Gain agreement on the problem
Gain agreement on the direction for a solution
Gain agreement that the solution solves the problem
Agree to overcome any potential negative ramifications
Agree to overcome any obstacles to implementation
TOC practitioners sometimes refer to these in the negative as working through layers of resistance to a change.
Recently, the current reality tree (CRT) and future reality tree (FRT) have been applied to an argumentative academic paper.
Despite its origins as a manufacturing approach (Goldratt & Cox, The Goal: A process of Ongoing Improvement, 1992), Goldratt's Theory of Constraints (TOC) methodology is now regarded as a systems methodology with strong foundations in the hard sciences (Mabin, 1999). Through its tools for convergent thinking and synthesis, the "Thinking processes", which underpin the entire TOC methodology, help identify and manage constraints and guide continuous improvement and change in organizations (Dettmer H., 1998).
The process of change requires the identification and acceptance of core issues: the goal and the means to the goal. This comprehensive set of logical tools can be used for exploration, solution development, and solution implementation for individuals, groups, or organizations. Each tool has a purpose, and nearly all tools can be used independently (Cox & Spencer, 1998). Since these thinking tools are designed to address successive "layers of resistance" and enable communication, they expedite securing "buy-in" from groups. While the CRT (current reality tree) represents the undesirable effects of the current situation, the FRT (future reality tree) and NBR (negative branch) help people plan and understand the possible results of their actions. The PRT (prerequisite tree) and TRT (transition tree) are designed to build collective buy-in and aid in the implementation phase. The logical constructs of these tools or diagrams are the necessary-condition logic, the sufficient-cause logic, and the strict logic rules used to validate the cause-effect relationships modelled with these tools (Dettmer W., 2006).
A summary of these tools, the questions they help answer and the associated logical constructs used is presented in the table below.
=== TOC thinking process tools ===
The use of these tools is based on the fundamental beliefs of TOC that organizations (a) are inherently simple (interdependencies exist in organizations), (b) desire inherent harmony (win-win solutions are possible), (c) are inherently good (people are good), and (d) have inherent potential (people and organizations have the potential to do better) (Goldratt E., 2009). In the book Through the Clouds to Solutions, Jelena Fedurko (Fedurko, 2013) lists the major areas for application of TP tools as:
To create and enhance thinking and learning skills
To make better decisions
To develop responsibility for one's own actions through understanding their consequences
To handle conflicts with more confidence and win-win outcomes
To correct behavior with undesirable consequences
To assist in evaluating conditions for achieving a desired outcome
To assist in peer mediation
To assist in relationships between subordinates and bosses
== Development and practice ==
TOC was initiated by Goldratt, who until his death was still the main driving force behind the development and practice of TOC. There is a network of individuals and small companies loosely coupled as practitioners around the world. TOC is sometimes referred to as "constraint management". TOC is a large body of knowledge with a strong guiding philosophy of growth.
== Criticism ==
Criticisms that have been leveled against TOC include:
=== Claimed suboptimality of drum-buffer-rope ===
While TOC has been compared favorably to linear programming techniques, D. Trietsch of the University of Auckland argues that the DBR methodology is inferior to competing methodologies. Linhares, from the Getulio Vargas Foundation, has shown that the TOC approach to establishing an optimal product mix is unlikely to yield optimum results, as doing so would imply that P=NP.
=== Unacknowledged debt ===
Duncan (as cited by Steyn) says that TOC borrows heavily from systems dynamics, developed by Forrester in the 1950s, and from statistical process control, which dates back to World War II. Noreen, Smith, and Mackey, in their independent report on TOC, point out that several key concepts in TOC "have been topics in management accounting textbooks for decades." It is also claimed that Goldratt's books fail to acknowledge that TOC borrows from more than 40 years of previous management science research and practice, particularly from the program evaluation and review technique/critical path method (PERT/CPM) and the just-in-time strategy.
A rebuttal to these criticisms is offered in Goldratt's "What is the Theory of Constraints and How Should it be Implemented?", and in his audio program, "Beyond The Goal". In these, Goldratt discusses the history of disciplinary sciences, compares the strengths and weaknesses of the various disciplines, and acknowledges the sources of information and inspiration for the thinking processes and critical chain methodologies. Articles published in the now-defunct Journal of Theory of Constraints referenced foundational materials. Goldratt published an article and gave talks with the title "Standing on the Shoulders of Giants" in which he gives credit for many of the core ideas of Theory of Constraints. Goldratt has sought many times to show the correlation between various improvement methods.
Goldratt has been criticized for a lack of openness in his theories, an example being his not releasing the algorithm he used for the Optimum Performance Training system. Some view him as unscientific, with many of his theories, tools, and techniques not being part of the public domain but rather part of his own framework for profiting from his ideas.
According to Gupta and Snyder (2009), despite being recognized as a genuine management philosophy, TOC has so far failed to demonstrate its effectiveness in the academic literature and, as such, cannot yet be considered a widely recognized theory. TOC needs more case studies that prove a connection between implementation and improved financial performance.
Nave (2002) argues that TOC does not take employees into account and fails to empower them in the production process. He also states that TOC fails to address unsuccessful policies as constraints.
In contrast, Mukherjee and Chatterjee (2007) state that much of the criticism of Goldratt's work has been focused on the lack of rigour in his work, but not of the bottleneck approach, which are two different aspects of the issue.
== Certification and education ==
The Theory of Constraints International Certification Organization (TOCICO) is an independent not-for-profit incorporated society that sets exams to ensure a consistent standard of competence. It is overseen by a board of academic and industrial experts. It also hosts an annual international conference. The work presented at these conferences constitutes a core repository of the current knowledge.
== See also ==
Index of articles related to the theory of constraints
Linear programming
Industrial engineering
Limiting factor
Systems thinking – Critical systems thinking – Joint decision traps
Twelve leverage points by Donella Meadows
Constraint (disambiguation)
Thinklets
Throughput
Rate-determining step
Liebig's law of the minimum
== References ==
== Further reading ==
Cox, Jeff; Goldratt, Eliyahu M. (1986). The goal: a process of ongoing improvement. [Great Barrington, Massachusetts]: North River Press. ISBN 0-88427-061-0.
Dettmer, H. William. (2003). Strategic Navigation: A Systems Approach to Business Strategy. [Milwaukee, Wisconsin]: ASQ Quality Press. p. 302. ISBN 0-87389-603-3.
Dettmer, H. William. (2007). The Logical Thinking Process: A Systems Approach to Complex Problem Solving. [Milwaukee, Wisconsin]: ASQ Quality Press. p. 413. ISBN 978-0-87389-723-5.
Goldratt, Eliyahu M. (1994). It's not luck. [Great Barrington, Massachusetts]: North River Press. ISBN 0-88427-115-3.
Goldratt, Eliyahu M. (1997). Critical chain. [Great Barrington, Massachusetts]: North River Press. ISBN 0-88427-153-6.
Carol A. Ptak; Goldratt, Eliyahu M.; Eli Schragenheim (2000). Necessary But Not Sufficient. [Great Barrington, Massachusetts]: North River Press. ISBN 0-88427-170-6.
Goldratt, Eliyahu M. (1998). Essays on the Theory of Constraints. [Great Barrington, Massachusetts]: North River Press. ISBN 0-88427-159-5.
Goldratt, Eliyahu M. (1990). Theory of Constraints. [Great Barrington, Massachusetts]: North River Press. ISBN 0-88427-166-8.
Goldratt, Eliyahu M. Beyond the Goal : Eliyahu Goldratt Speaks on the Theory of Constraints (Your Coach in a Box). Coach Series. ISBN 1-59659-023-8.
Lisa Lang (January 2006). Achieving a Viable Vision: The Theory of Constraints Strategic Approach to Rapid Sustainable Growth. Throughput Publishing, Inc. ISBN 0-9777604-1-3.
Goldratt, Eliyahu M. (1990). The haystack syndrome: sifting information out of the data ocean. [Great Barrington, Massachusetts]: North River Press. ISBN 0-88427-089-0.
Fox, Robert; Goldratt, Eliyahu M. (1986). The race. [Great Barrington, Massachusetts]: North River Press. ISBN 0-88427-062-9.
Schragenheim, Eli. (1999). Management dilemmas. [Boca Raton, Florida]: St. Lucie Press. p. 209. ISBN 1-57444-222-8.
Schragenheim, Eli & Dettmer, H. William. (2000). Manufacturing at warp speed: optimizing supply chain financial performance. [Boca Raton, Florida]: St. Lucie Press. p. 342. ISBN 1-57444-293-7.
Schragenheim, Eli; Dettmer, H. William; Patterson, J. Wayne (2009). Supply chain management at warp speed: integrating the system from end to end. [Boca Raton, Florida]: CRC Press. p. 220. ISBN 978-1-42007-335-5.
Lepore & Cohen, Domenico & Oded (1999). Deming and Goldratt: The Decalogue. Great Barrington (Massachusetts): North River Press. p. 179. ISBN 0884271633.
Tripp, John. TOC Executive Challenge: A Goal Game. ISBN 0-88427-186-2.
Goldratt, Eliyahu M. (2003). Production the TOC Way with Simulator. North River Press, Great Barrington, Massachusetts. ISBN 0-88427-175-7.
Stein, Robert E. (3 June 2003). Re-Engineering The Manufacturing System. Marcel Dekker. ISBN 0-8247-4265-6.
Stein, Robert E. (14 February 1997). The Theory of Constraints. Marcel Dekker. ISBN 0-8247-0064-3.
Jacob, Dee; Bergland, Suzan; Cox, Jeff (29 December 2009). Velocity: Combining Lean, Six Sigma and the Theory of Constraints to Achieve Breakthrough Performance. Free Press. p. 320. ISBN 978-1439158920.
Dettmer, H (1998). Constraint Theory A Logic-Based Approach to System Improvement (PDF).
Fedurko, J. Through Clouds to Solutions: Working with UDEs and UDE Clouds. Estonia: Ou Vali Press.
== External links ==
A Guide to Implementing the Theory of Constraints
Five focusing Steps
Theory of Constraints Essentials
Theory of Constraints: A Research Database
Flying Logic: An application to build and explore constraint models according to the Theory of Constraints
Generative design is an iterative design process that uses software to generate outputs that fulfill a set of constraints iteratively adjusted by a designer. Whether a human, test program, or artificial intelligence, the designer algorithmically or manually refines the feasible region of the program's inputs and outputs with each iteration to fulfill evolving design requirements. By employing computing power to evaluate more design permutations than a human alone is capable of, the process is capable of producing an optimal design that mimics nature's evolutionary approach to design through genetic variation and selection. The output can be images, sounds, architectural models, animation, and much more. It is, therefore, a fast method of exploring design possibilities that is used in various design fields such as art, architecture, communication design, and product design.
Generative design has become more important, largely due to new programming environments or scripting capabilities that have made it relatively easy, even for designers with little programming experience, to implement their ideas. Additionally, this process can create solutions to substantially complex problems that would otherwise be resource-exhaustive with an alternative approach making it a more attractive option for problems with a large or unknown solution set. It is also facilitated with tools in commercially available CAD packages. Not only are implementation tools more accessible, but also tools leveraging generative design as a foundation.
== Generative design in architecture ==
Generative design in architecture is an iterative design process that enables architects to explore a wider solution space with more possibility and creativity. Architectural design has long been regarded as a wicked problem. Compared with the traditional top-down design approach, generative design can address design problems efficiently by using a bottom-up paradigm in which parametrically defined rules generate complex solutions. The solution itself then evolves toward a good, if not optimal, one. The advantage of generative design as a design tool is that it does not construct fixed geometries, but takes a set of design rules that can generate an infinite set of possible design solutions. The generated solutions can be more sensitive, responsive, and adaptive to the problem.
Generative design involves rule definition and result analysis, which are integrated with the design process. By defining parameters and rules, the generative approach can provide optimized solutions for both structural stability and aesthetics. Possible design algorithms include cellular automata, shape grammars, genetic algorithms, space syntax, and, most recently, artificial neural networks. Because of the high complexity of the generated solutions, rule-based computational tools, such as the finite element method and topology optimisation, are preferable for evaluating and optimising them. The iterative process provided by computer software enables a trial-and-error approach to design, and involves architects intervening in the optimisation process.
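Of the algorithm families listed above, a genetic algorithm is the simplest to illustrate. The sketch below is a generic toy example, assuming a made-up bit-string genome and a made-up fitness function rather than any real architectural encoding:

```python
import random

def fitness(genome):
    """Hypothetical score: count of 1-bits, a stand-in for a real
    structural or aesthetic evaluation of an encoded design."""
    return sum(genome)

def evolve(pop_size=30, genome_len=16, generations=60, p_mut=0.05):
    """Evolve a population of bit-string 'designs' by selection,
    crossover, and mutation, returning the fittest individual."""
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]           # selection: keep the best half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, genome_len)  # single-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < p_mut) for bit in child]  # mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

In an architectural setting the genome would encode parametric rule settings, and the fitness function would call a structural or environmental simulation rather than counting bits.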
Historical precedent work includes Antoni Gaudí's Sagrada Família, which used rule-based geometrical forms for structures, and Buckminster Fuller's Montreal Biosphere, where the rules to generate individual components are designed rather than the final product.
More recent generative design cases include Foster and Partners' Queen Elizabeth II Great Court, where the tessellated glass roof was designed using a geometric schema to define hierarchical relationships, and the generated solution was then optimized based on geometrical and structural requirements.
== Generative design in sustainable design ==
Generative design in sustainable design is an effective approach for addressing energy efficiency and climate change at the early design stage, recognizing that buildings contribute approximately one-third of global greenhouse gas emissions and 30-40% of total energy use. It integrates environmental principles with algorithms, enabling exploration of countless design alternatives to enhance energy performance, reduce carbon footprints, and minimize waste.
A key feature of generative design in sustainable design is its ability to incorporate Building Performance Simulations (BPS) into the design process. Simulation programs like EnergyPlus, Ladybug Tools, and so on, combined with generative algorithms, can optimize design solutions for cost-effective energy use and zero-carbon building designs. For example, the GENE_ARCH system used a Pareto algorithm with DOE2.1E building energy simulation for the whole building design optimization. Generative design has improved sustainable facade design, as illustrated by the algorithm of cellular automata and daylight simulations in adaptive facade design. In addition, genetic algorithms were used with radiation simulations for energy-efficient PV modules on high-rise building facades. Generative design is also applied to life cycle analysis (LCA), as demonstrated by a framework using grid search algorithms to optimize exterior wall design for minimum environmental embodied impact.
Multi-objective optimization embraces multiple diverse sustainability goals, such as interactive kinetic louvers using biomimicry and daylight simulations to enhance daylight, visual comfort and energy efficiency. The study of PV and shading systems can maximize on-site electricity, improve visual quality and daylight performance.
AI and machine learning (ML) further improve computational efficiency in complex climate-responsive sustainable design. One study employed reinforcement learning to identify the relationship between design parameters and energy use for a sustainable campus, while other studies tried hybrid algorithms, such as genetic algorithms combined with GANs, to balance daylight illumination and thermal comfort under different roof conditions. Other popular AI tools have also been integrated, including deep reinforcement learning (DRL) and computer vision (CV), to generate urban blocks according to direct sunlight hours and solar heat gains. These AI-driven generative design methods enable faster simulations and design decision-making, resulting in designs that are environmentally responsible.
== Generative design in additive manufacturing ==
Additive manufacturing (AM) is a process that creates physical models directly from 3D data by joining materials layer by layer. It is used in industries to produce a variety of end-use parts, which are final components designed for direct application in products or systems. AM provides design flexibility and enables material reduction in lightweight applications, such as aerospace, automotive, medical, and portable electronic devices, where minimizing weight is critical for performance. Generative design, one of the four key methods for lightweight design in AM, is commonly applied to optimize structures for specific performance requirements.
Generative design can help create optimized solutions that balance multiple objectives, such as enhancing performance while minimizing cost. In design for additive manufacturing (DfAM), multi-objective topology optimization is used to generate a set of candidate solutions. Designers then assess these options using their expertise and key performance indicators (KPIs) to select the best option for implementation.
However, integrating AM constraints (e.g., build speed, materials, build envelope, and accuracy) into generative design remains challenging, as ensuring all solutions are valid is complex. Balancing multiple design objectives while limiting computational costs adds further challenges for designers. To overcome these difficulties, researchers have proposed a generative design method with manufacturing validation to improve decision-making efficiency. This method starts with a constructive solid geometry (CSG)-based technique to create smooth topology shapes with precise geometric control. A genetic algorithm is then used to optimize these shapes, and the method offers designers a set of top non-dominated solutions on the Pareto front for further evaluation and final decision-making. By combining multiple techniques, this method can generate many high-quality solutions with smooth boundaries at lower computational costs, making it a practical approach for designing lightweight structures in AM.
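The selection step described above, offering designers only the non-dominated solutions on the Pareto front, can be sketched as follows. The candidate numbers are illustrative only, not data from any study; both objectives (say, mass and compliance of a lightweight part) are minimized:

```python
def dominates(a, b):
    """a dominates b if it is no worse on every objective
    (minimizing both) and strictly better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """Return the non-dominated candidates offered to the designer."""
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates if other != c)]

# Each tuple is a hypothetical candidate: (mass, compliance)
designs = [(1.0, 9.0), (2.0, 4.0), (3.0, 2.0), (2.5, 4.5), (4.0, 2.5)]
front = pareto_front(designs)
```

Here (2.5, 4.5) and (4.0, 2.5) are dominated and dropped, while the remaining three trade mass against compliance and are left for the designer to judge against expertise and KPIs.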
Building on topology optimization methods, software providers introduced generative design features in their tools, helping designers set criteria and rank solutions. Industry is driving advancements in generative design for AM, highlighting the need for tools that not only offer a range of solution choices but also streamline workflows for industrial use.
== See also ==
Computer art
Computer-automated design
Feedback
Generative art
Parametric design
Procedural modeling
Random number generation
System dynamics
Topology optimization
== References ==
== Further reading ==
Gary William Flake: The Computational Beauty of Nature: Computer Explorations of Fractals, Chaos, Complex Systems, and Adaptation. MIT Press 1998, ISBN 978-0-262-56127-3
John Maeda: Design by Numbers, MIT Press 2001, ISBN 978-0-262-63244-7
Krish, Sivam (2011). "A practical generative design method". Computer-Aided Design. 43: 88–100. doi:10.1016/j.cad.2010.09.009.
Celestino Soddu: papers on Generative Design (1991–2011) at Generative Art Design Papers. C.Soddu, E.Colabella
Instructional design (ID), also known as instructional systems design and originally known as instructional systems development (ISD), is the practice of systematically designing, developing and delivering instructional materials and experiences, both digital and physical, in a consistent and reliable fashion toward an efficient, effective, appealing, engaging and inspiring acquisition of knowledge. The process consists broadly of determining the state and needs of the learner, defining the end goal of instruction, and creating some "intervention" to assist in the transition. The outcome of this instruction may be directly observable and scientifically measured or completely hidden and assumed. There are many instructional design models, but many are based on the ADDIE model with the five phases: analysis, design, development, implementation, and evaluation.
== History ==
=== Origins ===
As a field, instructional design is historically and traditionally rooted in cognitive and behavioral psychology, though recently constructivism has influenced thinking in the field. This can be attributed to the way it emerged during a period when the behaviorist paradigm was dominating American psychology. There are also those who cite that, aside from behaviorist psychology, the origin of the concept can be traced back to systems engineering. While the impact of each of these fields is difficult to quantify, it is argued that the language and the "look and feel" of the early forms of instructional design and their progeny were derived from this engineering discipline. Specifically, they were linked to the training development model used by the U.S. military, which was based on a systems approach and was explained as "the idea of viewing a problem or situation in its entirety with all its ramifications, with all its interior interactions, with all its exterior connections and with full cognizance of its place in its context."
The role of systems engineering in the early development of instructional design was demonstrated during World War II when a considerable amount of training materials for the military were developed based on the principles of instruction, learning, and human behavior. Tests for assessing a learner's abilities were used to screen candidates for the training programs. After the success of military training, psychologists began to view training as a system and developed various analysis, design, and evaluation procedures. In 1946, Edgar Dale outlined a hierarchy of instructional methods, organized intuitively by their concreteness. The framework first migrated to the industrial sector to train workers before it finally found its way to the education field.
=== 1950s ===
In 1954, B. F. Skinner suggested that effective instructional materials, called programmed instructional materials, should include small steps, frequent questions, and immediate feedback, and should allow self-pacing. Robert F. Mager later popularized the use of learning objectives, describing how to write objectives that specify the desired behavior, the learning condition, and the assessment criteria.
In 1956, a committee led by Benjamin Bloom published an influential taxonomy with three domains of learning: cognitive (what one knows or thinks), psychomotor (what one does, physically) and affective (what one feels, or what attitudes one has). These taxonomies still influence the design of instruction.
=== 1960s ===
Robert Glaser introduced "criterion-referenced measures" in 1962. In contrast to norm-referenced tests in which an individual's performance is compared to group performance, a criterion-referenced test is designed to test an individual's behavior in relation to an objective standard. It can be used to assess the learners' entry level behavior, and to what extent learners have developed mastery through an instructional program.
In 1965, Robert Gagné described three domains of learning outcomes (cognitive, affective, psychomotor), five learning outcomes (verbal information, intellectual skills, cognitive strategies, attitudes, motor skills), and nine events of instruction in The Conditions of Learning, which remain foundations of instructional design practice. Gagné's work on learning hierarchies and hierarchical analysis led to an important notion in instruction: ensuring that learners acquire prerequisite skills before attempting superordinate ones.
In 1967, after analyzing the failure of training material, Michael Scriven suggested the need for formative assessment – e.g., to try out instructional materials with learners (and revise accordingly) before declaring them finalized.
=== 1970s ===
During the 1970s, the number of instructional design models greatly increased and prospered in different sectors in military, academia, and industry. Many instructional design theorists began to adopt an information-processing-based approach to the design of instruction. David Merrill for instance developed Component Display Theory (CDT), which concentrates on the means of presenting instructional materials (presentation techniques).
=== 1980s ===
Although interest in instructional design continued to be strong in business and the military, there was little evolution of ID in schools or higher education.
However, educators and researchers began to consider how the personal computer could be used in a learning environment or a learning space. PLATO is one example of how computers began to be integrated into instruction. Many of the first uses of computers in the classroom were for "drill and skill" exercises. There was a growing interest in how cognitive psychology could be applied to instructional design.
=== 1990s ===
During the 1990s, performance improvement also emerged as a key goal in the design process. The rise of the internet introduced new tools for online learning, which were seen as effective for supporting learning. As both technology and constructivist theory evolved, classroom practices shifted—from basic drill-and-practice methods to more interactive, cognitively demanding activities.
By the late 1990s and early 2000s, the term learning design entered the field of educational technology. It reflected the idea that designers and instructors should choose an appropriate blend of behaviorist and constructivist strategies for their online courses. However, the underlying concept of designing for learning is likely as old as teaching itself. One definition describes learning design as “the description of the teaching-learning process that takes place in a unit of learning (e.g., a course, a lesson, or any other structured learning event).”
=== 2000–2010 ===
In 2008, the Association for Educational Communications and Technology changed the definition of educational technology to "the study and ethical practice of facilitating learning and improving performance by creating, using, and managing appropriate technological processes and resources".
=== 2010–2020 ===
Academic degrees focused on integrating technology, the internet, and human–computer interaction with education gained momentum with the introduction of Learning Design and Technology (LDT) majors. Universities such as Bowling Green State University, Pennsylvania State University, Purdue, San Diego State University, Stanford, Harvard, the University of Georgia, California State University, Fullerton, and Carnegie Mellon University have established undergraduate and graduate degrees in technology-centered methods of designing and delivering education.
Informal learning became an area of growing importance in instructional design, particularly in the workplace. A 2014 study showed that formal training makes up only 4 percent of the 505 hours per year an average employee spends learning. It also found that the learning output of informal learning is equal to that of formal training. As a result of this and other research, more emphasis was placed on creating knowledge bases and other supports for self-directed learning.
=== Timeline ===
== Models ==
=== ADDIE model ===
Perhaps the most common model used for creating instructional materials is the ADDIE Model. This acronym stands for the five phases contained in the model: Analyze, Design, Develop, Implement, and Evaluate.
The ADDIE model was initially developed by Florida State University to explain "the processes involved in the formulation of an instructional systems development (ISD) program for military interservice training that will adequately train individuals to do a particular job, and which can also be applied to any interservice curriculum development activity." The model originally contained several steps under its five original phases (Analyze, Design, Develop, Implement, and [Evaluation and] Control), whose completion was expected before movement to the next phase could occur. Over the years, the steps were revised and eventually the model itself became more dynamic and interactive than its original hierarchical rendition, until its most popular version appeared in the mid-1980s, as it is understood today.
Connecting all phases of the model are external and reciprocal revision opportunities. As in the internal Evaluation phase, revisions should and can be made throughout the entire process.
Most of the current instructional design models are variations of the ADDIE model.
=== Rapid prototyping ===
Proponents suggest that through an iterative process the verification of the design documents saves time and money by catching problems while they are still easy to fix. This approach is not novel to the design of instruction, but appears in many design-related domains including software design, architecture, transportation planning, product development, message design, user experience design, etc. In fact, some proponents of design prototyping assert that a sophisticated understanding of a problem is incomplete without creating and evaluating some type of prototype, regardless of the analysis rigor that may have been applied up front. In other words, up-front analysis is rarely sufficient to allow one to confidently select an instructional model. For this reason many traditional methods of instructional design are beginning to be seen as incomplete, naive, and even counter-productive.
=== Dick and Carey ===
Another well-known instructional design model is the Dick and Carey Systems Approach Model. The model was originally published in 1978 by Walter Dick and Lou Carey in their book entitled The Systematic Design of Instruction.
Dick and Carey made a significant contribution to the instructional design field by championing a systems view of instruction, in contrast to defining instruction as the sum of isolated parts. The model addresses instruction as an entire system, focusing on the interrelationship between context, content, learning and instruction. According to Dick and Carey, "Components such as the instructor, learners, materials, instructional activities, delivery system, and learning and performance environments interact with each other and work together to bring about the desired student learning outcomes". The components of the Systems Approach Model, also known as the Dick and Carey Model, are as follows:
Identify Instructional Goal(s): A goal statement describes a skill, knowledge or attitude (SKA) that a learner will be expected to acquire
Conduct Instructional Analysis: Identify what a learner must recall and what a learner must be able to do to perform a particular task
Analyze Learners and Contexts: Identify general characteristics of the target audience, including prior skills, prior experience, and basic demographics; identify characteristics directly related to the skill to be taught; and perform analysis of the performance and learning settings.
Write Performance Objectives: Objectives consist of a description of the behavior, the condition, and the criteria. The criteria component is used to judge the learner's performance.
Develop Assessment Instruments: Purpose of entry behavior testing, purpose of pretesting, purpose of post-testing, purpose of practice items/practice problems
Develop Instructional Strategy: Pre-instructional activities, content presentation, Learner participation, assessment
Develop and Select Instructional Materials
Design and Conduct Formative Evaluation of Instruction: Designers try to identify areas of the instructional materials that need improvement.
Revise Instruction: To identify poor test items and to identify poor instruction
Design and Conduct Summative Evaluation
With this model, components are executed iteratively and in parallel, rather than linearly.
=== Guaranteed learning ===
The instructional design model, Guaranteed Learning, was formerly known as the Instructional Development Learning System (IDLS). The model was originally published in 1970 by Peter J. Esseff, PhD and Mary Sullivan Esseff, PhD in their book entitled IDLS—Pro Trainer 1: How to Design, Develop, and Validate Instructional Materials.
Peter (1968) and Mary (1972) Esseff both received their doctorates in Educational Technology from the Catholic University of America under the mentorship of Gabriel Ofiesh, a founding father of the military model mentioned above. Esseff and Esseff synthesized existing theories to develop their approach to systematic design, "Guaranteed Learning", also known as the "Instructional Development Learning System" (IDLS). In 2015, the Esseffs created an eLearning version that enables participants to take the Guaranteed Learning course online.
The components of the Guaranteed Learning Model are the following:
Design a task analysis
Develop criterion tests and performance measures
Develop interactive instructional materials
Validate the interactive instructional materials
Create simulations or performance activities (Case Studies, Role Plays, and Demonstrations)
=== Other ===
Other useful instructional design models include the Smith/Ragan Model, the Morrison/Ross/Kemp Model, and the OAR Model of instructional design in higher education, as well as Wiggins' theory of backward design.
Learning theories also play an important role in the design of instructional materials. Theories such as behaviorism, constructivism, social learning, and cognitivism help shape and define the outcome of instructional materials.
==== Motivational design ====
Motivation is defined as an internal drive that activates behavior and gives it direction. The term motivation theory is concerned with the process that describes why and how human behavior is activated and directed.
Motivation concepts include intrinsic motivation and extrinsic motivation.
John M. Keller has devoted his career to researching and understanding motivation in instructional systems. His decades of work constitute a major contribution to the instructional design field: first, in applying motivation theories systematically to design theory, and second, in developing a unique problem-solving process he calls the ARCS model.
Although Keller's ARCS model currently dominates instructional design with respect to learner motivation, in 2006 Hardré and Miller proposed a need for a new design model that includes current research in human motivation, offers a comprehensive treatment of motivation, integrates various fields of psychology, and gives designers the flexibility to apply it to a myriad of situations.
Hardré proposes an alternate model for designers called the Motivating Opportunities Model or MOM. Hardré's model incorporates cognitive, needs, and affective theories as well as social elements of learning to address learner motivation. MOM has seven key components spelling the acronym 'SUCCESS' – Situational, Utilization, Competence, Content, Emotional, Social, and Systemic.
== Influential researchers and theorists ==
Alphabetic by last name
Bloom, Benjamin – Taxonomies of the cognitive, affective, and psychomotor domains – 1950s
Bransford, John D. – How People Learn: Bridging Research and Practice – 1990s
Bruner, Jerome – Constructivism - 1950s-1990s
Gagné, Robert M. – The Conditions of Learning has had a great influence on the discipline.
Gibbons, Andrew S - developed the Theory of Model Centered Instruction; a theory rooted in Cognitive Psychology.
Heinich, Robert – Instructional Media and the new technologies of instruction 3rd ed. – Educational Technology – 1989
Jonassen, David – problem-solving strategies – 1990s
Kemp, Jerold E. – Created a cognitive learning design model - 1980s
Mager, Robert F. – ABCD model for instructional objectives – 1962 - Criterion-Referenced Instruction and Learning Objectives
Marzano, Robert J. - "Dimensions of Learning", Formative Assessment - 2000s
Mayer, Richard E. - Multimedia Learning - 2000s
Merrill, M. David – Component Display Theory / Knowledge Objects / First Principles of Instruction
Osguthorpe, Russell T. – Overview of Instructional Design – The education of the heart: rediscovering the spiritual roots of learning
Papert, Seymour – Constructionism, LOGO – 1970s-1980s
Piaget, Jean – Cognitive development – 1960s
Reigeluth, Charles – Elaboration Theory, "Green Books" I, II, and III – 1990s–2010s
Rita Richey - instructional design theory and research methods
Schank, Roger – Constructivist simulations – 1990s
Simonson, Michael – Instructional Systems and Design via Distance Education – 1980s
Skinner, B.F. – Radical Behaviorism, Programmed Instruction – 1950s–1970s
Vygotsky, Lev – Learning as a social activity – 1930s
Wiley, David A. - influential work on open content, open educational resources, and informal online learning communities
== See also ==
== References ==
== External links ==
Instructional Design – An overview of Instructional Design
ISD Handbook
Edutech wiki: Instructional design model
ATD: What Is Instructional Design?
Process-centered design (PCD) is a design methodology that proposes a business-centric approach to designing user interfaces. Because of the multi-stage business analysis steps involved from the very beginning of the PCD life cycle, it is believed to achieve the highest level of business-IT alignment that is possible through the UI.
== Purpose ==
This method is aimed at enterprise applications where a business process is involved. Unlike content-oriented systems such as websites or portals, enterprise applications are built to enable a company's business processes. Enterprise applications often have a clear business goal and a set of specific objectives, such as improving employee productivity or increasing business performance by a certain percentage.
== Comparison between other popular UI design methods ==
Although there are proven UI design methodologies (such as the most popular, user-centered design, which helps produce highly usable interfaces), PCD differentiates itself by catering specifically to business-process-intensive software, which has not been the focus of other UI design methodologies.
== Process-UI alignment ==
Process-UI alignment is a component of PCD that ensures tight alignment between the business process and the enterprise application being developed; in PCD, UI design activities are driven by the underlying business process.
For example, call center software used by a customer support agent, if designed for high process-UI alignment, can achieve substantial improvements in agent productivity and call center performance; such gains are unlikely if the software were designed only for user satisfaction, ease of use, and similar goals.
== See also ==
Business process
Overall labor effectiveness
User-centered design
Usability
== References ==
== External links ==
Align Journal, "Process-User Interface Alignment: New Value From a New Level of Alignment", October 3, 2007. Retrieved August 1, 2008.
More research exploring the relation between business process and user interfaces: ACM SAC 2008: Sousa, Mendonca, Vanderdonckt
An industrial design right is an intellectual property right that protects the visual design of objects that are not purely utilitarian. An industrial design consists of the creation of a shape, configuration or composition of pattern or color, or a combination of pattern and color in three-dimensional form containing aesthetic value. An industrial design can be a two- or three-dimensional pattern used to produce a product, industrial commodity or handicraft.
Under the Hague Agreement Concerning the International Deposit of Industrial Designs, a WIPO-administered treaty, a procedure for an international registration exists. To qualify for registration, the national laws of most member states of WIPO require the design to be novel. An applicant can file for a single international deposit with WIPO or with the national office in a country party to the treaty. The design will then be protected in as many member countries of the treaty as desired. Design rights started in the United Kingdom in 1787 with the Designing and Printing of Linen Act and have expanded from there.
Registering an industrial design right is related to the granting of a patent.
== Law making ==
=== Kenya ===
According to the Industrial Property Act 2001, an industrial design is defined as "any composition of lines or colours or any three-dimensional form whether or not associated with lines or colours, provided that such composition or form gives a special appearance to a product of industry or handicraft and can serve as pattern for a product of industry or handicraft".
An industrial design is registrable if it is new. An industrial design is deemed to be new if it has not been disclosed to the public, anywhere in the world, by publication in tangible form or, in Kenya, by use or in any other way, prior to the filing date or, where applicable, the priority date of the application for registration. However, a disclosure of the industrial design is not taken into consideration if it occurred no earlier than twelve months before the filing date or, where applicable, the priority date of the application, and if it was by reason or in consequence of acts committed by the applicant or his predecessor in title, or of an evident abuse committed by a third party in relation to the applicant or his predecessor in title.
=== India ===
India's Designs Act, 2000 was enacted to consolidate and amend the law relating to the protection of designs and to comply with Articles 25 and 26 of the Trade-Related Aspects of Intellectual Property Rights (TRIPS) Agreement. The new act (the earlier Patents and Designs Act, 1911 was repealed by it) defines "design" to mean only the features of shape, configuration, pattern, ornament, or composition of lines or colours applied to any article, whether in two- or three-dimensional form, or in both forms, by any industrial process or means, whether manual, mechanical or chemical, separate or combined, which in the finished article appeal to and are judged solely by the eye; but it does not include any mode or principle of construction.
=== Indonesia ===
In Indonesia, protection of the right to an industrial design is granted for ten (10) years commencing from the filing date, with no renewal or annuity after that period.
Industrial Designs that are Granted Protection
1. The Right to Industrial Design shall be granted for an Industrial Design that is novel/new
2. An industrial design shall be deemed new if, on the filing date, it is not the same as any previous disclosure.
3. The previous disclosure referred to in point 2 is one which, before:
a. the filing date, or
b. the priority date, if the application is filed with a priority right,
has been announced or used in Indonesia or outside Indonesia.
An industrial design shall not be deemed to have been announced if, within a period of at most six (6) months before the filing date, it:
a. Has been displayed in a national or international exhibition in Indonesia or overseas that is official or deemed to be official; or,
b. Has been used in Indonesia by the designer in an experiment for the purposes of education, research or development.
=== Canada ===
Canadian law affords ten years of protection to industrial designs that are registered; there is no protection for unregistered designs. The Industrial Design Act defines "design" or "industrial design" to mean "features of shape, configuration, pattern or ornament and any combination of those features that, in a finished article, appeal to and are judged solely by the eye." The design must also be original: in 2012, the Patent Appeal Board rejected a design for a trash can, and gave guidance as to what the Act requires:
The degree of originality required to register an original design is greater than that laid down by Canadian copyright legislation, but less than that required to register a patent.
The articles being compared should not be examined side by side, but separately, so that imperfect recollection comes into play.
One is to look at the design as a whole.
Any change must be substantial. It must not be trivial or infinitesimal.
During the existence of an exclusive right, no person can "make, import for the purpose of trade or business, or sell, rent, or offer or expose for sale or rent, any article in respect of which the design is registered." The rule also applies to kits, and substantial differences are assessed by reference to previously published designs.
Registering an industrial design in Canada may be appropriate for a variety of articles such as consumer products, vehicles, sports equipment, packaging, etc., having an original aesthetic appearance, and may even be used to protect new technologies such as electronic icons. Industrial designs can also serve to complement other forms of intellectual property rights such as patents and trade-marks.
The Canadian courts see infrequent litigation concerning industrial designs — the first case in almost two decades took place in 2012 between Bodum and Trudeau Corporation concerning visual features of double wall drinking glasses.
It is possible for a registered design to also receive protection under Canadian copyright or trademark law:
a "useful article" (i.e., one with a utilitarian function) will receive copyright protection where it is reproduced in a quantity of fifty or fewer, but that limitation does not apply with respect to:
a graphic or photographic representation that is applied to the face of an article
a trade-mark or a representation thereof or a label
material that has a woven or knitted pattern or that is suitable for piece goods or surface coverings or for making wearing apparel
a representation of a real or fictitious being, event or place that is applied to an article as a feature of shape, configuration, pattern or ornament
where a registered design has become publicly identifiable with the product, it may be eligible for registration as a "distinguishing guise" under trademark law, but such registration cannot be used to limit the development of any art or industry
=== European Union ===
Registered and unregistered European Union designs are available which provide a unitary right covering the European Union. Protection for a registered EU design is for up to 25 years, subject to the payment of renewal fees every five years. The unregistered EU design lasts for three years after a design is made available to the public and infringement only occurs if the protected design has been copied.
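The registered EU design term described above runs in five-year blocks up to a 25-year maximum, with a renewal fee due at the end of each block. The sketch below illustrates that schedule; the function names are invented for illustration and do not come from any official API.

```python
def renewal_due_years(max_term: int = 25, block: int = 5) -> list:
    """Years after filing at which a renewal fee falls due
    (the final block simply expires, so no fee is due at year 25)."""
    return list(range(block, max_term, block))

def is_in_force(years_since_filing: float, renewals_paid: int,
                block: int = 5, max_term: int = 25) -> bool:
    """A registration lapses once it outlives the five-year blocks
    paid for, and can never exceed the 25-year maximum."""
    covered = min((renewals_paid + 1) * block, max_term)
    return years_since_filing < covered
```

For example, `renewal_due_years()` yields fees due 5, 10, 15 and 20 years after filing; a design with all four renewals paid remains in force until year 25.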
=== United Kingdom ===
Legislation enacted in Britain between 1787 and 1839 protected designs for textiles. The Copyright of Designs Act 1842 allowed other material designs, such as those for metal and earthenware objects, to be registered, with a diamond mark indicating the date of registration.
In addition to the design protection available under community designs, UK law provides its own national registered design right (Registered Designs Act 1949, later amended by the Copyright, Designs and Patents Act 1988) and an unregistered design right. The unregistered right, which exists automatically if the requirements are met, can last for up to 15 years. The registered design right can last up to 25 years, subject to the payment of maintenance fees. The topography of semiconductor circuits is also covered by integrated circuit layout design protection, a form of protection that lasts 10 years.
=== Japan ===
Article 1 of the Japanese Design Law states: "This law was designed to protect and utilize designs and to encourage creation of designs in order to contribute to industrial development". The protection period in Japan is 20 years from the day of registration.
=== United States ===
U.S. design patents last fifteen years from the date of grant if filed on or after May 13, 2015 (fourteen years if filed before May 13, 2015) and cover the ornamental aspects of utilitarian objects. Objects that lack a use beyond that conferred by their appearance or the information they convey may be covered by copyright—a form of intellectual property of much longer duration that exists as soon as a qualifying work is created. In some circumstances, rights may also be acquired in trade dress, but trade dress protection is akin to trademark rights and requires that the design have source significance or "secondary meaning"; it is useful only to prevent source misrepresentations.
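The term rule above depends on the filing date but runs from the grant date, which is easy to get wrong. A hedged sketch, using only the dates stated in the text (function names are illustrative, not from any official source):

```python
from datetime import date

# Per the text: fifteen years from grant if filed on or after
# May 13, 2015; fourteen years from grant if filed before.
TERM_CHANGE_DATE = date(2015, 5, 13)

def design_patent_term_years(filing_date: date) -> int:
    return 15 if filing_date >= TERM_CHANGE_DATE else 14

def design_patent_expiry(filing_date: date, grant_date: date) -> date:
    """The term runs from the grant date, not the filing date."""
    years = design_patent_term_years(filing_date)
    try:
        return grant_date.replace(year=grant_date.year + years)
    except ValueError:  # grant date of Feb 29 landing in a non-leap year
        return grant_date.replace(year=grant_date.year + years, day=28)
```

So a design patent filed in 2016 and granted in 2017 would, under this sketch, expire fifteen years after its grant date.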
=== Australia ===
In Australia, design registration lasts for 5 years, with an option to extend once for an additional 5 years. Registration requires only a formalities examination. If infringement action is to be taken, the design must first be certified, which involves a substantive examination. This process ensures that the design is new and distinctive and eligible for protection under Australian design law.
== Duration of design rights ==
Depending on the jurisdiction, registered design rights have a duration of between 15 and 50 years. Members of the WIPO Hague system have to publish their maximum term of protection for design rights; these terms are presented in the table below. Some of the jurisdictions below are unions or collaborative offices for design registration, such as the African Intellectual Property Organization, the European Union, and the Benelux.
== Industrial design applications ==
Between 1883 and the early 1950s, the offices of Japan and the United States of America averaged a similar number of industrial design applications, rarely exceeding 10,000. The office of Japan received the highest number of applications per year from the 1950s through to the late 1990s, reaching approximately 50,000 annual filings at its peak. The office of China, which received 640 applications when it first began receiving applications in 1985, has seen an unprecedented rate of growth, peaking at 805,710 applications filed in 2021. The office of the Republic of Korea surpassed the office of Japan in 2004 and has remained in second position ever since. In 2012, the office of the US moved ahead of Japan to become the third largest globally. The EUIPO began receiving applications in 2003 and moved up to fourth position in 2019. Among these top five offices, the EUIPO is the only one to have a multiple design system. Applications filed at the European Union IP Office contained 109,132 designs in 2022.
In 2022, about 1.1 million industrial design applications were filed worldwide. Asia accounted for 70.3% of all designs in applications filed worldwide in 2022. Asia was followed by Europe (22.4%) and North America (4.4%).
== Bibliography ==
Brian W. Gray & Effie Bouzalas, editors, Industrial Design Rights: An International Perspective (Kluwer Law International: The Hague, 2001) ISBN 90-411-9684-6
== See also ==
Design patent (US patent law)
Geschmacksmuster (German design law)
Industrial design rights in the European Union
Open-design movement
Utility model
Design Law Treaty
Hague Agreement Concerning the International Deposit of Industrial Designs
== References ==
== External links ==
Information about industrial design rights on the UK Patent Office web site
International Designs on the WIPO web site
Hague System for the International Registration of Industrial Designs on the WIPO web site | Wikipedia/Industrial_design_right |
A reference design is a technical design of a system that is intended for others to copy. It contains the essential elements of the system; however, third parties may enhance or modify the design as required. When discussing computer designs, the concept is generally known as a reference platform.
The main purpose of a reference design is to support companies in developing next-generation products using the latest technologies. The reference product is a proof of the platform concept and is usually targeted at specific applications. Reference design packages enable a fast track to market, cutting costs and reducing risk in the customer's integration project.
As the predominant customers for reference designs are original equipment manufacturers (OEMs), many reference designs are created by technology component vendors, whether hardware or software, as a means to increase the likelihood that their product will be designed into the OEM's product, giving them a competitive advantage.
== Examples ==
NanoBook, a reference design of a miniature laptop
Open source hardware (also Category:Open source hardware)
RONJA, a free and open telecommunication technology ("free Internet")
VIA OpenBook, a free and open reference design of a laptop
== References == | Wikipedia/Reference_design |
== Balance ==
=== Types of balance in visual design ===
Symmetry
== Hierarchy/Dominance/Emphasis ==
== Scale/proportion ==
== Scale in design ==
Increasing an element's scale in a design piece raises its value in the visual hierarchy and causes it to be seen before other elements, while decreasing an element's scale reduces its prominence.
== See also ==
Composition (visual arts)
Gestalt laws of grouping
Interior design
Landscape design
Pattern language
Elements of art
Principles of art
Color theory
== Notes ==
== References ==
Kilmer, R., & Kilmer, W. O. (1992). Designing Interiors. Orland, FL: Holt, Rinehart and Winston, Inc. ISBN 978-0-03-032233-4.
Nielson, K. J., & Taylor, D. A. (2002). Interiors: An Introduction. New York: McGraw-Hill Companies, Inc. ISBN 978-0-07-296520-9
Pile, J.F. (1995; fourth edition, 2007). Interior Design. New York: Harry N. Abrams, Inc. ISBN 978-0-13-232103-7
Sully, Anthony (2012). Interior Design: Theory and Process. London: Bloomsbury. ISBN 978-1-4081-5202-7.
== External links ==
Art, Design, and Visual Thinking. An online, interactive textbook by Charlotte Jirousek at Cornell University.
The 6 Principles of Design | Wikipedia/Design_principles |
Hotel design involves the planning, drafting, design and development of hotels. The concept of hotel design is rooted in traditions of hospitality to travellers dating back to ancient times, and the development of many diverse types of hotels has occurred in many cultures. For example, the advent of rail travel in the early 1900s led to the planning, design and development of hotels near railroad stations that catered to rail travelers. Hotels around Grand Central Terminal in New York City are an example of this phenomenon. Hotel interior design and styles are very diverse, with numerous variations existent.
== Types of hotels ==
Numerous types of hotel design exist around the world. Examples include guest palaces across Asia, English country inns, hotel-casino resorts, designer and art hotels, hotel-spa resorts, boutique hotels, "no-frills" hotels that offer very basic amenities at budget rates, basic rooming houses, monasteries offering refuge, and spare bedrooms rented out in ordinary homes. Another type is the capsule hotel, offered in Japan as an option for those who need only the basic necessities during their stay. Historically, the development of lodging areas and facilities was sometimes driven by their physical locations, such as at river crossings, at major trading posts, or in locations lending themselves to defense, such as forts or castles. Property location continues to be a key consideration in contemporary hotel design. Many hotels throughout the United States cater to either tourists or residents, with options ranging from renting a room for one night to renting a suite for a month. Though residential hotels are not as popular today as they were in the past, they still provide a significant share of America's homes.
== Professional design ==
Contemporary hotel design can be sophisticated and functional, involving specialist architects and designers, environmental and structural engineers, interior designers, and skilled contractors and suppliers, particularly for large, intricate projects. Hotel design can involve the refurbishment of an existing building already used for lodging, the conversion of a building previously used for another purpose, or the construction of new buildings. Firms specializing in hotel design include the US-based Newport Design Group, ReardonSmith Architects, HOK, Gensler and WATG.
Hotel design involves planning around the estimated client needs for the facility along with the designers' vision. Hotel buildings may serve several functions, including restaurants, outdoor facilities and swimming pools, fitness centers, and spas. Contemporary hotel design involves effectively integrating these aspects of hotel operations within a location to minimize interference with one another. For example, hotel design includes considerations to avoid guests being inundated with excessive noise and the movement of people. Hotels are usually designed from the inside out to ensure the practical functionality and relationship of their parts.
== Cultural influences ==
Hotel designers bring to their work their own cultural mores and need to understand the culture in which the hotel will operate if working outside their native environment. Due to travel becoming international in scope, links with local traditions in many hotel designs have been weakened, and ‘International’ has become a style in its own right. Some hotels base their operations with a theme of vernacular local traditional styles, while others have modernist stylistic designs.
Hotel design ranges from basic variables, such as the appropriate height for bedhead light switches, to the more specialized, such as the right layout for a kitchen or the sightlines from reception areas that enable control and protection of entry to rooms. The pace of change in hotel design has, as in most areas of modern life, increased with the development of innovative technology.
Despite cultural variations, hotels commonly function to provide a welcome environment that supports the comfort of its guests for work, rest and relaxation.
== Maintenance ==
Hotels continually undergo maintenance or renovation work (roofing, furniture, electricity, entertainment options, new ecological standards), so a constant reorganization plan must be prepared.
== See also ==
Hospitality industry
Hotel manager
Motel
== Notes ==
== References ==
Richard H. Penner; Lawrence Adams; Stephani K. A. Robson (2012), Hotel Design, Planning, and Development, W.W. Norton & Company, ISBN 9780393733853
Rutes, Walter A.; Penner, Richard H.; Adams; Lawrence (2001). Hotel Design, Planning, and Development. W.W. Norton & Company. ISBN 9780393730555. Retrieved May 18, 2012. ISBN 0393730557
Asensio, Paco; et al. (2004). Ultimate Hotel Design. teNeues Publishing Company. ISBN 9783823845942. Retrieved May 18, 2012. ISBN 3823845942
Riewold, Otto (2002). New Hotel Design. Laurence King Publishing ltd. ISBN 9781856694797. Retrieved May 18, 2012. ISBN 1856694798 | Wikipedia/Hotel_design |
Prevention through design (PtD), also called safety by design in Europe, is the concept of applying methods to minimize occupational hazards early in the design process, with an emphasis on optimizing employee health and safety throughout the life cycle of materials and processes. It is a concept and movement that encourages construction or product designers to "design out" health and safety risks during design development. The process also encourages the various stakeholders within a construction project to collaborate and share responsibility for workers' safety evenly. The concept supports the view that, along with quality, programme, and cost, safety is determined during the design stage. It increases the cost-effectiveness of enhancements to occupational safety and health.
Compared to traditional forms of hazard control, PtD possesses a proactive nature whereas other safety measures are reactive to incidences that occur within construction projects. This method for reducing workplace safety risks lessens workers' reliance on personal protective equipment, which is the least effective of the hierarchy of hazard control.
In the domain of process safety, safety by design is usually referred to as inherent safety or inherently safer design (ISD).
== Background ==
Each year in the U.S., 55,000 people die from work-related injuries and diseases, 294,000 are made sick, and 3.8 million are injured. The annual direct and indirect costs have been estimated to range from $128 billion to $155 billion. For U.S. industries such as construction, even though construction personnel account for only 5% of the total U.S. workforce, they are responsible for nearly 20% of all workplace fatalities. Recent studies in Australia indicate that design is a significant contributor to 37% of work-related fatalities; therefore, the successful implementation of prevention through design concepts can have substantial impacts on worker health and safety.
A safer workplace can be created by removing hazards and reducing worker risks to an acceptable level "at the source," or as early in the life cycle of products or workplaces as possible. This includes designing, redesigning, and retrofitting new and current work environments, systems, tools, facilities, equipment, machinery, goods, chemicals, work processes, and work organization, as well as improving the working climate by incorporating preventive approaches into all designs that affect employees and others on the premises. The strategic plan lays out the objectives for successfully implementing the PtD Plan for the National Initiative.
The National Institute for Occupational Safety and Health (NIOSH) in the United States is a major contributor and promoter of PtD policy and guidelines. NIOSH considers PtD to be "the most effective and reliable type" of prevention of occupational injuries. A core tenet of PtD philosophy is the concept of addressing workplace hazards using methods at the top of the hierarchy of hazard controls, namely elimination and substitution.
Within Europe, construction designers are legally bound to design out risks during design development to reduce hazards in the construction and end-use phases via the Mobile Worksite Directive (also known as CDM regulations in the UK). The concept supports this legal requirement. Some Notified Bodies provide testing and design verification services to ensure compliance with safety standards defined in regulation codes such as those of the American Society of Mechanical Engineers. Many non-governmental organizations have been established to support this aim, principally in the UK, Australia and the United States.
== History ==
While engineering, as a rule, factors human safety into the design process, a modern appraisal of specific links to design and workers' safety can be seen in efforts beginning in the 1800s. Trends included the widespread implementation of guards for machinery, controls for elevators, and boiler safety practices. This was followed by enhanced design for ventilation, enclosures, system monitors, lockout/tagout controls, and hearing protectors. More recently, there has been the development of chemical process safety, ergonomically engineered tools, chairs, and workstations, lifting devices, retractable needles, latex-free gloves, and a parade of other safety devices and processes.
In 2007, NIOSH began its National Initiative on Prevention through Design with the goal of promoting prevention through design philosophy, practice, and policy.
== Goal ==
The PtD National Initiative's goal is to avoid or mitigate occupational accidents, diseases, deaths, and exposures by incorporating prevention factors into all designs that impact people in the workplace. This is accomplished by eliminating hazards and reducing worker risks to an acceptable level "at the source," or as early in the life cycle of items or workplaces as possible.
This involves designing, redesigning, and retrofitting new and existing work premises, structures, tools, facilities, equipment, machinery, products, substances, work processes, and work organization.
== Integration ==
Prevention through design represents a shift in approach for on-the-job safety. It involves evaluating potential risks associated with processes, structures, equipment, and tools. It takes into consideration the construction, maintenance, decommissioning, and disposal or recycling of waste material.
The idea of redesigning job tasks and work environments has begun to gain momentum in business and government as a cost-effective means to enhance occupational safety and health. Many U.S. companies openly support PtD concepts and have developed management practices to implement them. Other countries are actively promoting PtD concepts as well. The United Kingdom began requiring construction companies, project owners, and architects to address safety and health during the design phase of projects in 1994. Australia developed the Australian National OHS Strategy 2002–2012, which set "eliminating hazards at the design stage" as one of five national priorities. As a result, the Australian Safety and Compensation Council (ASCC) developed the Safe Design National Strategy and Action Plans for Australia encompassing a wide range of design areas.
== By country ==
=== Australia ===
In Australia, the Work Health and Safety Act 2011 laid out the legal responsibilities of employers, designers, and other stakeholders within construction projects to take the necessary steps to ensure that safety is prioritized through all phases of the construction process. In practice, Australian state governments such as Queensland, South Australia, and Western Australia have mandated that design professionals create a strategy for safety considerations throughout the construction process. The plan has to include pre-construction considerations, describe how safety can be evaluated, and provide details of how safety will be controlled once the physical construction process begins. Even before the Work Health and Safety Act 2011, since 1998, any construction project valued over AU$3 million was subject to this requirement.
=== United Kingdom ===
Within the United Kingdom (U.K.), PtD has been legally required for those in the construction industry since March 31, 1995. At the time of implementation, the fatality rate within the U.K. construction industry was 10 fatalities per 100,000 workers; by 2021, it had fallen to 1.62 fatalities per 100,000 workers. Although PtD cannot be established as the sole cause of this reduction, fatalities have dropped substantially since its enactment. The UK government has periodically updated the legislation, with the 2015 version of the Construction (Design and Management) Regulations placing even greater emphasis on the role that principal designers should play in injury and fatality prevention during the design phase of a project.
=== United States ===
==== Government ====
The National Institute for Occupational Safety and Health (NIOSH) is a contributor to prevention through design efforts in the United States. Several NIOSH initiatives and guidelines directly or indirectly advocate for PtD practices. Through NIOSH efforts, the U.S. Green Building Council posted new PtD credits available for Leadership in Energy and Environmental Design (LEED) certification for construction. Additionally, they provide a wide variety of educational and guidance materials on the topic of PtD. The NIOSH "Buy Quiet" initiative uses elements of prevention through design to encourage companies to buy quieter machinery, thereby reducing occupational hearing loss for their workers.
The Prevention through Design (PtD) Initiative of NIOSH collaborates with business, labor, trade unions, professional organizations, and academia. The program focuses on "designing out" workplace hazards and threats in order to avoid sickness, injury, and death. It encourages technical accreditation bodies to include PtD in their evaluations, in order to educate and encourage others to use PtD goals and processes in the collaborative design and renovation of facilities, work processes, equipment, and resources.
Priorities of this initiative include:
making business executives aware of the cost-cutting potential of PtD;
producing succinct, actionable PtD guides and checklists for small companies, their insurers, and the publishers of local government codebooks;
increasing PtD practice by disseminating case studies of real-world PtD solutions and empowering stakeholders to implement and share them; and
encouraging businesses, trade unions, governments, academic institutions, and consensus standards organizations to use PtD in policy revisions.
=== Singapore ===
In Singapore, the government's Workplace Safety and Health Council pioneered a Design for Safety (DfS) mark, which allows the government to recognize construction projects completed with safety in mind. Receiving the DfS mark is analogous to a building receiving LEED certification for featuring aspects of sustainability and carbon-footprint reduction.
== Barriers to PtD implementation ==
=== Education ===
Even though PtD is not a new concept and has been shown to be associated with reductions in injuries and fatalities across construction industries internationally, it is still not a core feature of many engineering and architectural schools' curricula. This can compromise designers' ability to consider safety in real-world applications, since they have had limited education on the concept of safety, let alone PtD.
== See also ==
Inherent safety
Occupational health psychology – Health and Safety psychology
Occupational exposure banding – Process to assign chemicals into categories corresponding to permissible exposure concentrations
== References ==
=== Sources ===
"Prevention through Design Initiative" (PDF). NIOSH. Retrieved 21 April 2020.
"Prevention Through Design: Plan for the National Initiative" (PDF). CDC. Retrieved 21 April 2020.
== External links ==
Prevention through Design
Australian Safety and Compensation Council
Safety and Health Awareness for Preventive Engineering (SHAPE) program
Design for Construction Safety
== Further reading ==
MacCollum, David V. Construction Safety Engineering Principles Designing and Managing Safer Job Sites (1st ed.). McGraw-Hill Professional. ISBN 978-0-07-148244-8.
Brauer, Roger L. Safety and Health for Engineers (2nd ed.). Wiley-Interscience. ISBN 978-0-471-29189-3. | Wikipedia/Prevention_through_design |
Mechanical engineering is the study of physical machines and mechanisms that may involve force and movement. It is an engineering branch that combines engineering physics and mathematics principles with materials science, to design, analyze, manufacture, and maintain mechanical systems. It is one of the oldest and broadest of the engineering branches.
Mechanical engineering requires an understanding of core areas including mechanics, dynamics, thermodynamics, materials science, design, structural analysis, and electricity. In addition to these core principles, mechanical engineers use tools such as computer-aided design (CAD), computer-aided manufacturing (CAM), computer-aided engineering (CAE), and product lifecycle management to design and analyze manufacturing plants, industrial equipment and machinery, heating and cooling systems, transport systems, motor vehicles, aircraft, watercraft, robotics, medical devices, weapons, and others.
Mechanical engineering emerged as a field during the Industrial Revolution in Europe in the 18th century; however, its development can be traced back several thousand years around the world. In the 19th century, developments in physics led to the development of mechanical engineering science. The field has continually evolved to incorporate advancements; today mechanical engineers are pursuing developments in such areas as composites, mechatronics, and nanotechnology. It also overlaps with aerospace engineering, metallurgical engineering, civil engineering, structural engineering, electrical engineering, manufacturing engineering, chemical engineering, industrial engineering, and other engineering disciplines to varying amounts. Mechanical engineers may also work in the field of biomedical engineering, specifically with biomechanics, transport phenomena, biomechatronics, bionanotechnology, and modelling of biological systems.
== History ==
The application of mechanical engineering can be seen in the archives of various ancient and medieval societies. The six classic simple machines were known in the ancient Near East. The wedge and the inclined plane (ramp) have been known since prehistoric times. Mesopotamian civilization is credited with the invention of the wheel by several, mainly older, sources; however, some recent sources either suggest that it was invented independently in both Mesopotamia and Eastern Europe or credit prehistoric Eastern Europeans with the invention. The lever mechanism first appeared around 5,000 years ago in the Near East, where it was used in a simple balance scale and to move large objects in ancient Egyptian technology. The lever was also used in the shadoof water-lifting device, the first crane machine, which appeared in Mesopotamia circa 3000 BC. The earliest evidence of pulleys dates back to Mesopotamia in the early 2nd millennium BC.
The saqiyah was developed in the Kingdom of Kush during the 4th century BC. It relied on animal power, reducing the need for human energy. Reservoirs in the form of hafirs were developed in Kush to store water and boost irrigation. Bloomeries and blast furnaces were developed during the seventh century BC in Meroe. Kushite sundials applied mathematics in the form of advanced trigonometry.
The earliest practical water-powered machines, the water wheel and watermill, first appeared in the Persian Empire, in what are now Iraq and Iran, by the early 4th century BC. In ancient Greece, the works of Archimedes (287–212 BC) influenced mechanics in the Western tradition. The geared Antikythera mechanism was an analog computer invented around the 2nd century BC.
In Roman Egypt, Heron of Alexandria (c. 10–70 AD) created the first steam-powered device (Aeolipile). In China, Zhang Heng (78–139 AD) improved a water clock and invented a seismometer, and Ma Jun (200–265 AD) invented a chariot with differential gears. The medieval Chinese horologist and engineer Su Song (1020–1101 AD) incorporated an escapement mechanism into his astronomical clock tower two centuries before escapement devices were found in medieval European clocks. He also invented the world's first known endless power-transmitting chain drive.
The cotton gin was invented in India by the 6th century AD, and the spinning wheel was invented in the Islamic world by the early 11th century. Dual-roller gins appeared in India and China between the 12th and 14th centuries. The worm gear roller gin appeared in the Indian subcontinent during the early Delhi Sultanate era of the 13th to 14th centuries.
During the Islamic Golden Age (7th to 15th century), Muslim inventors made remarkable contributions in the field of mechanical technology. Al-Jazari, who was one of them, wrote his famous Book of Knowledge of Ingenious Mechanical Devices in 1206 and presented many mechanical designs.
In the 17th century, important breakthroughs in the foundations of mechanical engineering occurred in England and on the Continent. The Dutch mathematician and physicist Christiaan Huygens invented the pendulum clock in 1657, which remained the most reliable timekeeper for almost 300 years, and published a work dedicated to clock designs and the theory behind them. In England, Isaac Newton formulated his laws of motion and developed calculus, which would become the mathematical basis of physics. Newton was reluctant to publish his works for years, but he was finally persuaded to do so by his colleagues, such as Edmond Halley. Gottfried Wilhelm Leibniz, who earlier designed a mechanical calculator, is also credited with developing calculus during the same time period.
During the early 19th-century Industrial Revolution, machine tools were developed in England, Germany, and Scotland, providing the manufacturing machines and the engines to power them. This allowed mechanical engineering to develop as a separate field within engineering. The first British professional society of mechanical engineers, the Institution of Mechanical Engineers, was formed in 1847, thirty years after the civil engineers formed the first such professional society, the Institution of Civil Engineers. On the European continent, Johann von Zimmermann (1820–1901) founded the first factory for grinding machines in Chemnitz, Germany in 1848.
In the United States, the American Society of Mechanical Engineers (ASME) was formed in 1880, becoming the third such professional engineering society, after the American Society of Civil Engineers (1852) and the American Institute of Mining Engineers (1871). The first schools in the United States to offer an engineering education were the United States Military Academy in 1817, an institution now known as Norwich University in 1819, and Rensselaer Polytechnic Institute in 1825. Education in mechanical engineering has historically been based on a strong foundation in mathematics and science.
== Education ==
Degrees in mechanical engineering are offered at various universities worldwide. Mechanical engineering programs typically take four to five years of study, depending on the place and university, and result in a Bachelor of Engineering (B.Eng. or B.E.), Bachelor of Science (B.Sc. or B.S.), Bachelor of Science Engineering (B.Sc.Eng.), Bachelor of Technology (B.Tech.), Bachelor of Mechanical Engineering (B.M.E.), or Bachelor of Applied Science (B.A.Sc.) degree, in or with emphasis in mechanical engineering. In Spain, Portugal and most of South America, where neither B.S. nor B.Tech. programs have been adopted, the formal name for the degree is "Mechanical Engineer", and the course work is based on five or six years of training. In Italy the course work is based on five years of education and training, but in order to qualify as an engineer one has to pass a state exam at the end of the course. In Greece, the coursework is based on a five-year curriculum.
In the United States, most undergraduate mechanical engineering programs are accredited by the Accreditation Board for Engineering and Technology (ABET) to ensure similar course requirements and standards among universities. The ABET web site lists 302 accredited mechanical engineering programs as of 11 March 2014. Mechanical engineering programs in Canada are accredited by the Canadian Engineering Accreditation Board (CEAB), and most other countries offering engineering degrees have similar accreditation societies.
In Australia, mechanical engineering degrees are awarded as Bachelor of Engineering (Mechanical) or similar nomenclature, although there are an increasing number of specialisations. The degree takes four years of full-time study to achieve. To ensure quality in engineering degrees, Engineers Australia accredits engineering degrees awarded by Australian universities in accordance with the global Washington Accord. Before the degree can be awarded, the student must complete at least 3 months of on-the-job work experience in an engineering firm. Similar systems are also present in South Africa and are overseen by the Engineering Council of South Africa (ECSA).
In India, one becomes an engineer by earning an engineering degree such as a B.Tech. or B.E., by obtaining a diploma in engineering, or by completing a course in an engineering trade such as fitter at an Industrial Training Institute (ITI) to receive an "ITI Trade Certificate" and then passing the All India Trade Test (AITT) in an engineering trade, conducted by the National Council of Vocational Training (NCVT), by which one is awarded a "National Trade Certificate". A similar system is used in Nepal.
Some mechanical engineers go on to pursue a postgraduate degree such as a Master of Engineering, Master of Technology, Master of Science, Master of Engineering Management (M.Eng.Mgt. or M.E.M.), a Doctor of Philosophy in engineering (Eng.D. or Ph.D.) or an engineer's degree. The master's and engineer's degrees may or may not include research. The Doctor of Philosophy includes a significant research component and is often viewed as the entry point to academia. The Engineer's degree exists at a few institutions at an intermediate level between the master's degree and the doctorate.
=== Coursework ===
Standards set by each country's accreditation society are intended to provide uniformity in fundamental subject material, promote competence among graduating engineers, and to maintain confidence in the engineering profession as a whole. Engineering programs in the U.S., for example, are required by ABET to show that their students can "work professionally in both thermal and mechanical systems areas." The specific courses required to graduate, however, may differ from program to program. Universities and institutes of technology will often combine multiple subjects into a single class or split a subject into multiple classes, depending on the faculty available and the university's major area(s) of research.
The fundamental subjects required for mechanical engineering usually include:
Mathematics (in particular, calculus, differential equations, and linear algebra)
Basic physical sciences (including physics and chemistry)
Statics and dynamics
Strength of materials and solid mechanics
Materials engineering, composites
Thermodynamics, heat transfer, energy conversion, and HVAC
Fuels, combustion, internal combustion engine
Fluid mechanics (including fluid statics and fluid dynamics)
Mechanism and Machine design (including kinematics and dynamics)
Instrumentation and measurement
Manufacturing engineering, technology, or processes
Vibration, control theory and control engineering
Hydraulics and Pneumatics
Mechatronics and robotics
Engineering design and product design
Drafting, computer-aided design (CAD) and computer-aided manufacturing (CAM)
Mechanical engineers are also expected to understand and be able to apply basic concepts from chemistry, physics, tribology, chemical engineering, civil engineering, and electrical engineering. All mechanical engineering programs include multiple semesters of mathematical classes including calculus, and advanced mathematical concepts including differential equations, partial differential equations, linear algebra, differential geometry, and statistics, among others.
In addition to the core mechanical engineering curriculum, many mechanical engineering programs offer more specialized programs and classes, such as control systems, robotics, transport and logistics, cryogenics, fuel technology, automotive engineering, biomechanics, vibration, optics and others, if a separate department does not exist for these subjects.
Most mechanical engineering programs also require varying amounts of research or community projects to gain practical problem-solving experience. In the United States it is common for mechanical engineering students to complete one or more internships while studying, though this is not typically mandated by the university. Cooperative education is another option. Research on future work skills creates demand for study components that foster students' creativity and innovation.
== Job duties ==
Mechanical engineers research, design, develop, build, and test mechanical and thermal devices, including tools, engines, and machines.
Mechanical engineers typically do the following:
Analyze problems to see how mechanical and thermal devices might help solve the problem.
Design or redesign mechanical and thermal devices using analysis and computer-aided design.
Develop and test prototypes of devices they design.
Analyze the test results and change the design as needed.
Oversee the manufacturing process for the device.
Manage a team of professionals in specialized fields such as mechanical drafting and design, prototyping, 3D printing, and/or CNC machining.
Mechanical engineers design and oversee the manufacturing of many products ranging from medical devices to new batteries. They also design power-producing machines such as electric generators, internal combustion engines, and steam and gas turbines as well as power-using machines, such as refrigeration and air-conditioning systems.
Like other engineers, mechanical engineers use computers to help create and analyze designs, run simulations and test how a machine is likely to work.
=== License and regulation ===
Engineers may seek licensure by a state, provincial, or national government. The purpose of this process is to ensure that engineers possess the necessary technical knowledge, real-world experience, and knowledge of the local legal system to practice engineering at a professional level. Once certified, the engineer is given the title of Professional Engineer (in the United States, Canada, Japan, South Korea, Bangladesh and South Africa), Chartered Engineer (in the United Kingdom, Ireland, India and Zimbabwe), Chartered Professional Engineer (in Australia and New Zealand) or European Engineer (in much of the European Union).
In the U.S., to become a licensed Professional Engineer (PE), an engineer must pass the comprehensive FE (Fundamentals of Engineering) exam, work a minimum of 4 years as an Engineering Intern (EI) or Engineer-in-Training (EIT), and pass the "Principles and Practice" or PE (Practicing Engineer or Professional Engineer) exams. The requirements and steps of this process are set forth by the National Council of Examiners for Engineering and Surveying (NCEES), composed of engineering and land surveying licensing boards representing all U.S. states and territories.
In Australia (Queensland and Victoria) an engineer must be registered as a Professional Engineer within the state in which they practice, for example Registered Professional Engineer of Queensland or Victoria, RPEQ or RPEV respectively.
In the UK, current graduates require a BEng plus an appropriate master's degree or an integrated MEng degree, a minimum of four years of post-graduate on-the-job competency development, and a peer-reviewed project report to become a Chartered Mechanical Engineer (CEng, MIMechE) through the Institution of Mechanical Engineers. CEng MIMechE can also be obtained via an examination route administered by the City and Guilds of London Institute.
In most developed countries, certain engineering tasks, such as the design of bridges, electric power plants, and chemical plants, must be approved by a professional engineer or a chartered engineer. "Only a licensed engineer, for instance, may prepare, sign, seal and submit engineering plans and drawings to a public authority for approval, or to seal engineering work for public and private clients." This requirement can be written into state and provincial legislation, such as in the Canadian provinces, for example the Ontario or Quebec's Engineer Act.
In other countries, such as the UK, no such legislation exists; however, practically all certifying bodies maintain a code of ethics independent of legislation, that they expect all members to abide by or risk expulsion.
=== Salaries and workforce statistics ===
The total number of engineers employed in the U.S. in 2015 was roughly 1.6 million. Of these, 278,340 were mechanical engineers (17.28%), the largest discipline by size. In 2012, the median annual income of mechanical engineers in the U.S. workforce was $80,580. The median income was highest when working for the government ($92,030), and lowest in education ($57,090). In 2014, the total number of mechanical engineering jobs was projected to grow 5% over the next decade. As of 2009, the average starting salary was $58,800 with a bachelor's degree.
== Subdisciplines ==
The field of mechanical engineering can be thought of as a collection of many mechanical engineering science disciplines. Several of these subdisciplines which are typically taught at the undergraduate level are listed below, with a brief explanation and the most common application of each. Some of these subdisciplines are unique to mechanical engineering, while others are a combination of mechanical engineering and one or more other disciplines. Most work that a mechanical engineer does uses skills and techniques from several of these subdisciplines, as well as specialized subdisciplines. Specialized subdisciplines, as used in this article, are more likely to be the subject of graduate studies or on-the-job training than undergraduate research. Several specialized subdisciplines are discussed in this section.
=== Mechanics ===
Mechanics is, in the most general sense, the study of forces and their effect upon matter. Typically, engineering mechanics is used to analyze and predict the acceleration and deformation (both elastic and plastic) of objects under known forces (also called loads) or stresses. Subdisciplines of mechanics include:
Statics, the study of how forces affect non-moving bodies under known loads
Dynamics, the study of how forces affect moving bodies. Dynamics includes kinematics (about movement, velocity, and acceleration) and kinetics (about forces and resulting accelerations).
Mechanics of materials, the study of how different materials deform under various types of stress
Fluid mechanics, the study of how fluids react to forces
Kinematics, the study of the motion of bodies (objects) and systems (groups of objects), while ignoring the forces that cause the motion. Kinematics is often used in the design and analysis of mechanisms.
Continuum mechanics, a method of applying mechanics that assumes that objects are continuous (rather than discrete)
Mechanical engineers typically use mechanics in the design or analysis phases of engineering. If the engineering project were the design of a vehicle, statics might be employed to design the frame of the vehicle, in order to evaluate where the stresses will be most intense. Dynamics might be used when designing the car's engine, to evaluate the forces in the pistons and cams as the engine cycles. Mechanics of materials might be used to choose appropriate materials for the frame and engine. Fluid mechanics might be used to design a ventilation system for the vehicle (see HVAC), or to design the intake system for the engine.
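At its core, the statics analysis described above reduces to enforcing equilibrium: forces and moments must sum to zero. A minimal illustrative sketch in Python for a simply supported beam (the span, load, and load position are hypothetical values, not drawn from any real design):

```python
# Static equilibrium of a simply supported beam with a single point load.
# The two unknown support reactions R_a and R_b follow from:
#   sum of vertical forces = 0, and sum of moments about the left support = 0.

def beam_reactions(span, load, load_pos):
    """Return (R_a, R_b) for a point load `load` applied `load_pos`
    from the left support of a beam of length `span`."""
    r_b = load * load_pos / span   # moment balance about the left support
    r_a = load - r_b               # vertical force balance
    return r_a, r_b

# A 4 m beam carrying 1000 N applied 1 m from the left support:
ra, rb = beam_reactions(4.0, 1000.0, 1.0)
print(ra, rb)  # 750.0 250.0 -- the nearer support carries more of the load
```

The same two equilibrium equations, scaled up to hundreds of members, underlie frame and truss analysis.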
=== Mechatronics and robotics ===
Mechatronics is a combination of mechanics and electronics. It is an interdisciplinary branch of mechanical engineering, electrical engineering and software engineering that is concerned with integrating electrical and mechanical engineering to create hybrid automation systems. In this way, machines can be automated through the use of electric motors, servo-mechanisms, and other electrical systems in conjunction with special software. A common example of a mechatronics system is a CD-ROM drive. Mechanical systems open and close the drive, spin the CD and move the laser, while an optical system reads the data on the CD and converts it to bits. Integrated software controls the process and communicates the contents of the CD to the computer.
Robotics is the application of mechatronics to create robots, which are often used in industry to perform tasks that are dangerous, unpleasant, or repetitive. These robots may be of any shape and size, but all are preprogrammed and interact physically with the world. To create a robot, an engineer typically employs kinematics (to determine the robot's range of motion) and mechanics (to determine the stresses within the robot).
Robots are used extensively in industrial automation engineering. They allow businesses to save money on labor, perform tasks that are either too dangerous or too precise for humans to perform economically, and ensure better quality. Many companies employ assembly lines of robots, especially in the automotive industry, and some factories are so robotized that they can run by themselves. Outside the factory, robots have been employed in bomb disposal, space exploration, and many other fields. Robots are also sold for various residential applications, from recreation to domestic applications.
=== Structural analysis ===
Structural analysis is the branch of mechanical engineering (and also civil engineering) devoted to examining why and how objects fail, and to fixing the objects and improving their performance. Structural failures occur in two general modes: static failure and fatigue failure. Static structural failure occurs when, upon being loaded (having a force applied), the object being analyzed either breaks or is deformed plastically, depending on the criterion for failure. Fatigue failure occurs when an object fails after a number of repeated loading and unloading cycles. Fatigue failure occurs because of imperfections in the object: a microscopic crack on the surface of the object, for instance, will grow slightly with each cycle (propagation) until the crack is large enough to cause ultimate failure.
Failure is not simply defined as when a part breaks, however; it is defined as when a part does not operate as intended. Some systems, such as the perforated top sections of some plastic bags, are designed to break. If these systems do not break, failure analysis might be employed to determine the cause.
Structural analysis is often used by mechanical engineers after a failure has occurred, or when designing to prevent failure. Engineers often use online documents and books such as those published by ASM to aid them in determining the type of failure and possible causes.
Once theory is applied to a mechanical design, physical testing is often performed to verify calculated results. Structural analysis may be used in an office when designing parts, in the field to analyze failed parts, or in laboratories where parts might undergo controlled failure tests.
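The cycle-by-cycle crack propagation described above is commonly modeled with the Paris law, da/dN = C(ΔK)^m, which relates crack growth per cycle to the stress intensity factor range. A minimal numerical sketch (the constants C and m, geometry factor Y, and crack lengths below are illustrative placeholders, not data for any real material):

```python
import math

def cycles_to_failure(a0, a_crit, delta_sigma, C, m, Y=1.0, steps=10000):
    """Numerically integrate the Paris law da/dN = C * (dK)^m from an
    initial crack length a0 (m) to a critical length a_crit (m).
    delta_sigma is the cyclic stress range in MPa; dK is then in MPa*sqrt(m),
    and C must be calibrated for those units. Returns the cycle count."""
    a, n = a0, 0.0
    da = (a_crit - a0) / steps
    while a < a_crit:
        dK = Y * delta_sigma * math.sqrt(math.pi * a)  # stress intensity range
        n += da / (C * dK ** m)                        # cycles spent growing by da
        a += da
    return n
```

Because dK scales linearly with the stress range, doubling delta_sigma with m = 3 cuts the predicted life by a factor of eight, which is why fatigue design is so sensitive to load amplitude.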
=== Thermodynamics and thermo-science ===
Thermodynamics is an applied science used in several branches of engineering, including mechanical and chemical engineering. At its simplest, thermodynamics is the study of energy, its use and transformation through a system. Typically, engineering thermodynamics is concerned with changing energy from one form to another. As an example, automotive engines convert chemical energy (enthalpy) from the fuel into heat, and then into mechanical work that eventually turns the wheels.
Thermodynamics principles are used by mechanical engineers in the fields of heat transfer, thermofluids, and energy conversion. Mechanical engineers use thermo-science to design engines and power plants, heating, ventilation, and air-conditioning (HVAC) systems, heat exchangers, heat sinks, radiators, refrigeration, insulation, and others.
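The energy-conversion idea can be made concrete with the Carnot limit, the maximum fraction of input heat that any engine operating between two reservoir temperatures can convert to work (the temperatures below are illustrative):

```python
def carnot_efficiency(t_hot, t_cold):
    """Maximum thermal efficiency of a heat engine operating between a hot
    and a cold reservoir (temperatures in kelvin): eta = 1 - T_cold / T_hot."""
    return 1.0 - t_cold / t_hot

# An engine drawing heat at 900 K and rejecting it at 300 K can convert
# at most two-thirds of the input heat into work, regardless of design:
print(carnot_efficiency(900.0, 300.0))  # 0.666...
```

Real engines fall well short of this bound; the gap between actual and Carnot efficiency is one driver of heat-exchanger and combustion research.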
=== Design and drafting ===
Drafting or technical drawing is the means by which mechanical engineers design products and create instructions for manufacturing parts. A technical drawing can be a computer model or hand-drawn schematic showing all the dimensions necessary to manufacture a part, as well as assembly notes, a list of required materials, and other pertinent information. A U.S. mechanical engineer or skilled worker who creates technical drawings may be referred to as a drafter or draftsman. Drafting has historically been a two-dimensional process, but computer-aided design (CAD) programs now allow the designer to create in three dimensions.
Instructions for manufacturing a part must be fed to the necessary machinery, either manually, through programmed instructions, or through the use of a computer-aided manufacturing (CAM) or combined CAD/CAM program. Optionally, an engineer may also manually manufacture a part using the technical drawings. However, with the advent of computer numerically controlled (CNC) manufacturing, parts can now be fabricated without the need for constant technician input. Manually manufactured parts generally consist of spray coatings, surface finishes, and other processes that cannot economically or practically be done by a machine.
Drafting is used in nearly every subdiscipline of mechanical engineering, and by many other branches of engineering and architecture. Three-dimensional models created using CAD software are also commonly used in finite element analysis (FEA) and computational fluid dynamics (CFD).
== Modern tools ==
Many mechanical engineering companies, especially those in industrialized nations, have incorporated computer-aided engineering (CAE) programs into their existing design and analysis processes, including 2D and 3D solid modeling computer-aided design (CAD). This method has many benefits, including easier and more exhaustive visualization of products, the ability to create virtual assemblies of parts, and the ease of use in designing mating interfaces and tolerances.
Other CAE programs commonly used by mechanical engineers include product lifecycle management (PLM) tools and analysis tools used to perform complex simulations. Analysis tools may be used to predict product response to expected loads, including fatigue life and manufacturability. These tools include finite element analysis (FEA), computational fluid dynamics (CFD), and computer-aided manufacturing (CAM).
Using CAE programs, a mechanical design team can quickly and cheaply iterate the design process to develop a product that better meets cost, performance, and other constraints. No physical prototype need be created until the design nears completion, allowing hundreds or thousands of designs to be evaluated, instead of a relative few. In addition, CAE analysis programs can model complicated physical phenomena which cannot be solved by hand, such as viscoelasticity, complex contact between mating parts, or non-Newtonian flows.
As mechanical engineering begins to merge with other disciplines, as seen in mechatronics, multidisciplinary design optimization (MDO) is being used with other CAE programs to automate and improve the iterative design process. MDO tools wrap around existing CAE processes, allowing product evaluation to continue even after the analyst goes home for the day. They also use sophisticated optimization algorithms to more intelligently explore possible designs, often finding better, innovative solutions to difficult multidisciplinary design problems.
== Areas of research ==
Mechanical engineers are constantly pushing the boundaries of what is physically possible in order to produce safer, cheaper, and more efficient machines and mechanical systems. Some technologies at the cutting edge of mechanical engineering are listed below (see also exploratory engineering).
=== Micro electro-mechanical systems (MEMS) ===
Micron-scale mechanical components such as springs, gears, and fluidic and heat transfer devices are fabricated from a variety of substrate materials such as silicon, glass and polymers like SU8. Examples of MEMS components are the accelerometers used as car airbag sensors and in modern cell phones, gyroscopes for precise positioning, and microfluidic devices used in biomedical applications.
=== Friction stir welding (FSW) ===
Friction stir welding was developed in 1991 by The Welding Institute (TWI). This innovative steady-state (non-fusion) welding technique joins materials previously un-weldable, including several aluminum alloys. It plays an important role in the future construction of airplanes, potentially replacing rivets. Current uses of this technology include welding the seams of the aluminum main Space Shuttle external tank, the Orion crew vehicle, the Boeing Delta II and Delta IV expendable launch vehicles and the SpaceX Falcon 1 rocket, armor plating for amphibious assault ships, and welding the wings and fuselage panels of the Eclipse 500 aircraft from Eclipse Aviation, among a growing range of uses.
=== Composites ===
Composites or composite materials are a combination of materials which provide different physical characteristics than either material separately. Composite material research within mechanical engineering typically focuses on designing (and, subsequently, finding applications for) stronger or more rigid materials while attempting to reduce weight, susceptibility to corrosion, and other undesirable factors. Carbon fiber reinforced composites, for instance, have been used in such diverse applications as spacecraft and fishing rods.
=== Mechatronics ===
Mechatronics is the synergistic combination of mechanical engineering, electronic engineering, and software engineering. The discipline of mechatronics began as a way to combine mechanical principles with electrical engineering. Mechatronic concepts are used in the majority of electro-mechanical systems. Typical electro-mechanical sensors used in mechatronics are strain gauges, thermocouples, and pressure transducers.
=== Nanotechnology ===
At the smallest scales, mechanical engineering becomes nanotechnology—one speculative goal of which is to create a molecular assembler to build molecules and materials via mechanosynthesis. For now that goal remains within exploratory engineering. Areas of current mechanical engineering research in nanotechnology include nanofilters, nanofilms, and nanostructures, among others.
=== Finite element analysis ===
Finite element analysis (FEA) is a computational tool used to estimate stress, strain, and deflection of solid bodies. It uses a mesh with user-defined element sizes to compute physical quantities at each node; the more nodes there are, the higher the precision. The field is not new: the basis of FEA, or the finite element method (FEM), dates back to 1941, but the evolution of computers has made FEA/FEM a viable option for the analysis of structural problems. Many commercial software applications such as NASTRAN, ANSYS, and ABAQUS are widely used in industry for research and the design of components, and some 3D modeling and CAD software packages have added FEA modules. More recently, cloud simulation platforms such as SimScale have become more common.
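The node-and-mesh idea can be illustrated with the simplest possible finite element model: a one-dimensional bar fixed at one end, discretized into identical spring elements. The sketch below assembles the global stiffness matrix and solves K u = f with naive Gaussian elimination (the element count, stiffness, and load are arbitrary illustrative values):

```python
def bar_fem(n_elems, k, force):
    """1D FEM for a bar fixed at the left end, modeled as n_elems spring
    elements of stiffness k (N/m), with an axial force (N) at the free end.
    Returns the displacements of the free nodes (node 0 is fixed)."""
    n = n_elems  # free degrees of freedom after applying the fixed boundary
    K = [[0.0] * n for _ in range(n)]
    for e in range(n_elems):          # element e connects nodes e and e+1
        i, j = e - 1, e               # global free-DOF indices (node m -> m-1)
        if i >= 0:
            K[i][i] += k; K[i][j] -= k; K[j][i] -= k
        K[j][j] += k
    f = [0.0] * n
    f[-1] = force                     # load applied at the free tip
    # naive Gaussian elimination (fine for a tiny demonstration system)
    for col in range(n):
        for row in range(col + 1, n):
            factor = K[row][col] / K[col][col]
            for c in range(col, n):
                K[row][c] -= factor * K[col][c]
            f[row] -= factor * f[col]
    u = [0.0] * n
    for row in range(n - 1, -1, -1):
        s = f[row] - sum(K[row][c] * u[c] for c in range(row + 1, n))
        u[row] = s / K[row][row]
    return u

# 4 elements, k = 1000 N/m, 100 N tip load: the tip displacement is
# n*F/k = 0.4 m, and displacement grows linearly along the bar.
print(bar_fem(4, 1000.0, 100.0))
```

Production FEA codes do the same assembly-and-solve in 2D/3D with sparse solvers and far richer element formulations, but the structure of the computation is unchanged.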
Other techniques such as the finite difference method (FDM) and the finite volume method (FVM) are employed to solve problems relating to heat and mass transfer, fluid flows, fluid–surface interaction, and more.
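As a small illustration of the finite difference method, the one-dimensional heat equation u_t = α·u_xx can be stepped forward in time with an explicit scheme (the grid and step sizes below are illustrative; the scheme is only stable when α·Δt/Δx² ≤ 1/2):

```python
def heat_1d(u, alpha, dx, dt, steps):
    """Explicit finite-difference solution of the 1D heat equation
    u_t = alpha * u_xx with fixed (Dirichlet) end temperatures.
    Stable only when r = alpha*dt/dx**2 <= 0.5."""
    r = alpha * dt / dx ** 2
    assert r <= 0.5, "explicit scheme unstable for this step size"
    u = list(u)
    for _ in range(steps):
        # second difference of neighbors approximates u_xx at each interior node
        u = [u[0]] + [u[i] + r * (u[i + 1] - 2 * u[i] + u[i - 1])
                      for i in range(1, len(u) - 1)] + [u[-1]]
    return u

# A rod hot in the middle with cold ends relaxes toward a uniform profile:
profile = heat_1d([0, 0, 100, 0, 0], alpha=1.0, dx=1.0, dt=0.25, steps=200)
```

The finite volume method differs mainly in that it balances fluxes across cell faces rather than differencing point values, which conserves the transported quantity exactly.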
=== Biomechanics ===
Biomechanics is the application of mechanical principles to biological systems, such as humans, animals, plants, organs, and cells. Biomechanics also aids in creating prosthetic limbs and artificial organs for humans. Biomechanics is closely related to engineering, because it often uses traditional engineering sciences to analyze biological systems. Some simple applications of Newtonian mechanics and/or materials sciences can supply correct approximations to the mechanics of many biological systems.
In the past decade, reverse engineering of materials found in nature such as bone matter has gained funding in academia. The structure of bone matter is optimized for its purpose of bearing a large amount of compressive stress per unit weight. The goal is to replace crude steel with bio-material for structural design.
Over the past decade the finite element method (FEM) has also entered the biomedical sector, highlighting further engineering aspects of biomechanics. FEM has since established itself as an alternative to in vivo surgical assessment and gained wide acceptance in academia. The main advantage of computational biomechanics lies in its ability to determine the endo-anatomical response of an anatomy without being subject to ethical restrictions. This has led FE modelling to the point of becoming ubiquitous in several fields of biomechanics, while several projects have even adopted an open-source philosophy (e.g. BioSpine).
=== Computational fluid dynamics ===
Computational fluid dynamics, usually abbreviated as CFD, is a branch of fluid mechanics that uses numerical methods and algorithms to solve and analyze problems that involve fluid flows. Computers are used to perform the calculations required to simulate the interaction of liquids and gases with surfaces defined by boundary conditions. With high-speed supercomputers, better solutions can be achieved. Ongoing research yields software that improves the accuracy and speed of complex simulation scenarios such as turbulent flows. Initial validation of such software is performed using a wind tunnel with the final validation coming in full-scale testing, e.g. flight tests.
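A toy example of the numerical methods involved is the first-order upwind scheme for the linear advection equation u_t + c·u_x = 0, one of the simplest model problems in CFD (the domain and values below are illustrative):

```python
def advect_upwind(u, c, dx, dt, steps):
    """First-order upwind scheme for u_t + c*u_x = 0 with c > 0 on a
    periodic domain. Stable when the CFL number c*dt/dx is at most 1."""
    cfl = c * dt / dx
    assert 0 < cfl <= 1, "CFL condition violated"
    u = list(u)
    n = len(u)
    for _ in range(steps):
        # information travels left-to-right, so difference against the
        # upwind (left) neighbor; periodic wrap via the modulo index
        u = [u[i] - cfl * (u[i] - u[(i - 1) % n]) for i in range(n)]
    return u

# With CFL exactly 1 the scheme is exact: a pulse shifts one cell per step.
pulse = advect_upwind([0, 1, 0, 0], c=1.0, dx=1.0, dt=1.0, steps=1)
print(pulse)  # [0.0, 0.0, 1.0, 0.0]
```

Real CFD solvers couple many such discretized transport equations (mass, momentum, energy) with turbulence models, which is why validation against wind-tunnel and flight data remains essential.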
=== Acoustical engineering ===
Acoustical engineering is one of many sub-disciplines of mechanical engineering and is the application of acoustics, the study of sound and vibration. These engineers work to reduce noise pollution in mechanical devices and in buildings by soundproofing or removing sources of unwanted noise. The study of acoustics can range from designing a more efficient hearing aid, microphone, headphone, or recording studio to enhancing the sound quality of an orchestra hall. Acoustical engineering also deals with the vibration of different mechanical systems.
== Related fields ==
Manufacturing engineering, aerospace engineering, automotive engineering and marine engineering are sometimes grouped with mechanical engineering. A bachelor's degree in these areas typically differs by only a few specialized classes.
== See also ==
Automobile engineering
Index of mechanical engineering articles
Lists
Associations
Wikibooks
== References ==
== Further reading ==
Burstall, Aubrey F. (1965). A History of Mechanical Engineering. The MIT Press. ISBN 978-0-262-52001-0.
Marks' Standard Handbook for Mechanical Engineers (11 ed.). McGraw-Hill. 2007. ISBN 978-0-07-142867-5.
Oberg, Erik; Franklin D. Jones; Holbrook L. Horton; Henry H. Ryffel; Christopher McCauley (2016). Machinery's Handbook (30th ed.). New York: Industrial Press Inc. ISBN 978-0-8311-3091-6.
== External links ==
Mechanical engineering at MTU.edu
In broad terms, transformation design is a human-centered, interdisciplinary process that seeks to create desirable and sustainable changes in behavior and form – of individuals, systems and organizations. It is a multi-stage, iterative process of applying design principles to large and complex systems.
Its practitioners examine problems holistically rather than reductively to understand relationships as well as components to better frame the challenge. They then prototype small-scale systems – composed of objects, services, interactions and experiences – that support people and organizations in achievement of a desired change. Successful prototypes are then scaled.
Because transformation design is about applying design skills in non-traditional territories, it often results in non-traditional design outputs. Projects have resulted in the creation of new roles, new organizations, new systems and new policies. These designers are just as likely to shape a job description as they are a new product.
This emerging field draws from a variety of design disciplines - service design, user-centered design, participatory design, concept design, information design, industrial design, graphic design, systems design, interactive design, experience design - as well as non-design disciplines including cognitive psychology and perceptual psychology, linguistics, cognitive science, architecture, haptics, information architecture, ethnography, storytelling and heuristics.
== History ==
Though academics have written about the economic value of and need for transformations over the years,[7][8] its practice first emerged in 2004 when the Design Council, the UK's national strategic body for design, formed RED: a self-proclaimed "do-tank" challenged to bring design thinking to the transformation of public services.[1]
This move was in response to Prime Minister Tony Blair's desire to have public services "redesigned around the needs of the user, the patients, the passenger, the victim of crime".[3]
The RED team, led by Hilary Cottam, studied these big, complex problems to determine how design thinking and design techniques could help government rethink the systems and structures within public services and possibly redesign them from beginning to end.[3]
Between 2004 and 2006, the RED team, in collaboration with many other people and groups, developed techniques, processes and outputs that were able to "transform" social issues such as illness prevention, the management of chronic illness, senior citizen care, rural transportation, energy conservation, prisoner re-offending and public education.
In 2015, the Braunschweig University of Art in Germany launched a new MA in Transformation Design. In 2016, the Glasgow School of Art launched another master's program, "M.Des in Design Innovation and Transformation Design". In 2019, the University of Applied Sciences Augsburg in Germany launched a master's program in Transformation Design.
== Process ==
Transformation design, like user-centered design, starts from the perspective of the end user. Designers spend a great deal of time not only learning how users currently experience the system and how they want to experience the system, but also co-creating with them the designed solutions.
Because transformation design tackles complex issues involving many stakeholders and components, expertise beyond that of the user and the designer is always required. People such as, but not limited to, policy makers, sector analysts, psychologists, economists, private businesses, government departments and agencies, front-line workers and academics are invited to participate in the entire design process - from problem definition to solution development.[6]
With so many points of view brought into the process, transformation designers are not always 'designers.' Instead, they often play the role of moderator. Through varying methods of participation and co-creation, these moderating designers create hands-on, collaborative workshops (a.k.a. charrettes) that make the design process accessible to non-designers.
Ideas from workshops are rapidly prototyped and beta-tested in the real world with a group of real end users. Their experience with and opinions of the prototypes are recorded and fed back into the workshops and development of the next prototype.
== See also ==
Human-centered design
== Sources ==
[1] RED's homepage
https://www.designcouncil.org.uk/ Design Council's homepage
[2] White Paper published by RED which discusses transformation design
[3] RED's website page which talks about transformation design
http://www.torinoworlddesigncapital.it/portale/en/content.php?sezioneID=10 Interview with Hilary Cottam at World Design Capital
https://web.archive.org/web/20070818190054/http://www.hilarycottam.com/html/RED_Paper%2001%20Health_Co-creating_services.pdf Whitepaper on co-creation
The Experience Economy, B.J. Pine and J. Gilmore, Harvard Business School Press 1999. Book discussing the economic value and importance of companies offering transformations
The Support Economy, S. Zuboff and J. Maxmin, Viking Press 2002. Book discussing the need for companies and governments to realign themselves with how people live
Transformationsdesign - Wege in eine zukunftsfähige Moderne, H. Welzer and B. Sommer, oekom 2014 [4]
Transformation Design - Perspectives on a new Design Attitude, W. Jonas, S. Zerwas and K. von Anshelm, Birkhäuser 2015 [5]
A design museum is a museum with a focus on product, industrial, graphic, fashion and architectural design.
Many design museums were founded as museums for applied arts or decorative arts and only began to collect design in the late 20th century.
The first museum of this kind was the Victoria and Albert Museum in London. In Germany the first museum of decorative arts was the Deutsches-Gewerbe-Museum zu Berlin (now Kunstgewerbemuseum), founded in 1868 in Berlin.
Some museums of contemporary or modern art also have important design collections, such as the MoMA in New York and the Centre Pompidou in Paris. A special concept has been realised in the Pinakothek der Moderne in Munich, in which four independent museums cooperate, one of them being Die Neue Sammlung – the largest design museum in the world.
Today, corporate museums such as the Vitra Design Museum, Museo Alessi and Museo Kartell also play an important role.
== List of design museums ==
21 21 Design Sight, Tokyo, Japan
ADI Design Museum, Milan, Italy
Archivo Diseño y Arquitectura, Mexico City
Art, Design & Architecture Museum (AD&A), University of California, Santa Barbara, Goleta, California
Bauhaus Archive, Berlin, Germany
Bröhan Museum, Berlin, Germany
Chicago Athenaeum, Galena, Illinois, USA
Cooper Hewitt, Smithsonian Design Museum, New York, USA
Design Exchange, Toronto, Canada
Design Museum of Barcelona, Spain
Design Museum Brussels (former Art & Design Atomium Museum), Belgium
Design Museum of Chicago, Chicago, USA
Museum dan Rumah Desain Runa, Bali, Indonesia
Design Museum Dedel, Den Haag, Netherlands
Design Museum Den Bosch, Netherlands
Design Museum Dharavi, India
Design Museum Gent, Belgium
Design Museum, Helsinki
Design Museum Holon, Tel Aviv, Israel
Design Museum, London, UK
Design Museum of Thessaloniki, Greece
Danish Museum of Art & Design, Copenhagen, Denmark
Die Neue Sammlung, Munich, Germany
HKDI Gallery (Hong Kong Design Institute), Hong Kong
Icelandic Museum of Design and Applied Art, Garðabær, Iceland
International Design Centre, Nagoya, Japan
Kunstgewerbemuseum Berlin, Germany
Leipzig Museum of Applied Arts, Germany
Ljubljana Museum of Architecture and Design, Slovenia
M+ Museum, Hong Kong
Museo del Objeto del Objeto, Mexico City
Musée des Arts Décoratifs, Paris, France
Musée des Arts Décoratifs et du Design, Bordeaux, France
Musée des Arts et Métiers, Paris, France
Museo Nacional de Artes Decorativas, Madrid, Spain
Museum of Applied Arts (Belgrade), Serbia
Museum of Applied Arts (Budapest), Hungary
Museum of Arts and Design, New York, USA
Museum of Craft and Design, San Francisco, USA
Museum für angewandte Kunst Frankfurt, Germany
Museum für angewandte Kunst Cologne, Germany
Museum für angewandte Kunst Wien, Vienna, Austria
Museum of Contemporary Design and Applied Arts (MUDAC), Lausanne, Switzerland
Museum für Gestaltung Zürich, Switzerland
Museum für Kunst und Gewerbe Hamburg, Germany
Museum of Decorative Arts in Prague, Czech Republic
Museum of Design Atlanta, Atlanta, Georgia, USA
Museum of Domestic Design and Architecture, London, UK
National Museum of Art, Architecture and Design, Oslo, Norway
Powerhouse Museum, Sydney, Australia
Röhsska Museum, Gothenburg, Sweden
Swedish Centre for Architecture and Design, Stockholm, Sweden
Swedish Design Museum (virtual), Sweden
SONS Museum, a museum dedicated to shoe design, Kruishoutem, Belgium
Singapore City Gallery, Singapore
Red Dot Design Museum, Essen, Germany
Red Dot Design Museum (Singapore)
Stedelijk Museum, Amsterdam, Netherlands
Stedelijk Museum, Breda, Netherlands
Stieglitz Museum of Applied Arts, Saint Petersburg, Russia
Taiwan Design Museum, Taipei, Taiwan
Triennale di Milano, Milan, Italy
Victoria and Albert Museum (V&A), London, UK
V&A Dundee, Dundee, Scotland, UK
Vitra Design Museum, Weil am Rhein, Germany
Wolfsonian-FIU, Miami Beach, Florida, USA
Z33, Hasselt, Belgium
== References ==
== External links ==
"design museums blog" with information on design museums
Map of design museums around the world
Bentley Systems, Incorporated is an American software development company that develops, manufactures, licenses, sells and supports computer software and services for the design, construction, and operation of infrastructure. The company's software serves the building, plant, civil, and geospatial markets in the areas of architecture, engineering, construction (AEC) and operations. Their software products are used to design, engineer, build, and operate large constructed assets such as roadways, railways, bridges, buildings, industrial plants, power plants, and utility networks. The company re-invests 20% of its revenues in research and development.
Bentley Systems is headquartered in Exton, Pennsylvania, United States, but has development, sales and other departments in over 50 countries. In 2021, the company generated revenue of $1 billion in 186 countries.
== Software ==
Bentley has three principal software product lines: MicroStation, ProjectWise, and AssetWise. Since 2014, some products have been based on the Microsoft Azure cloud computing platform. In 2024, it continues to sell software lines such as MicroStation and ProjectWise, as well as several dozen others such as SYNCHRO and OpenRoads Designer.
== History ==
Keith A. Bentley and Barry J. Bentley founded Bentley Systems in 1984. They introduced the commercial version of PseudoStation in 1985, which allowed users of Intergraph's VAX systems to use low-cost graphics terminals to view and modify the designs on their Intergraph IGDS (Interactive Graphics Design System) installations. Their first product was shown to potential users who were polled as to what they would be willing to pay for it. They averaged the answers, arriving at a price of $7,943. A DOS-based version of MicroStation was introduced in 1986.
In April 2002, Bentley filed for an initial public offering, but withdrew it before it took effect. In November 2016, the German company Siemens announced it would pay about $76 million for a minority stake in Bentley and invest in developing joint software with it. In September 2020, Bentley Systems set terms for its IPO, valuing the company at about $4.96 billion; the company would offer 10.75 million shares priced between $17 and $19 per share. In October 2024, Bentley Systems began using Google 2D and 3D geospatial content in some of its software. In 2024, Nicholas Cumins became the company's chief executive officer.
=== Acquisitions ===
On June 18, 1997, Bentley acquired IdeaGraphix, a developer of MicroStation-based application software for architecture, engineering, and facilities management. On January 15, 1998, Bentley acquired Jacobus. On January 2, 2001, Bentley acquired Intergraph's civil engineering, plot-services and raster conversion software businesses. On October 17, 2001, Bentley Systems bought Geopak design software for road and rail infrastructure.
Bentley Systems acquired Rebis in 2003, Infrasoft Corporation in 2003, Haestad Methods, Inc. in 2004, and then agreed to acquire netGuru's Research Engineers International (REI) business which included its STAAD structural analysis and design product line on August 31, 2005. Bentley acquired GEF-RIS AG in 2006, KIWI Software in 2007, C.W. Beilfuss and Associates in 2007, and TDV GmbH, an analysis and design software provider for bridge engineering, in May 2007.
In early 2008, Bentley acquired Hevacomp, Ltd., LEAP Software, Inc., the promis•e product line from ECT International, and Common Point for mainstream construction simulation.
On October 13, 2009, Bentley added geotechnical and geoenvironmental capabilities with the acquisition of gINT Software. On February 9, 2010, Bentley Systems announced two acquisitions: Exor Corporation and Enterprise Informatics. On March 2, 2011, Bentley Systems acquired SACS software for offshore structural analysis from Engineering Dynamics, Inc. Also in 2011, Bentley acquired FormSys and Pointools Ltd., a British developer of point-cloud software technology.
In 2012, Bentley acquired the elcoSystem software business of Hannappel Software, as well as InspectTech Systems, USA, a provider of field inspection applications and asset management services for bridges and other transportation assets. Also that year it acquired Canadian-based Ivara, the Microprotol pressure vessel design and analysis software from EuResearch, and SpecWave. In 2013, Bentley acquired topoGRAPH, a provider of surveying software, as well as the MOSES software business from Ultramarine. On February 25, 2014, Bentley acquired the eB services business of DocQnet Systems. Later that year, it acquired SITEOPS, optimization software for enhanced land development site design, from Blueridge Analytics. In 2015, Bentley acquired C3global, a provider of predictive modeling software, as well as Acute3D and the reality-modeling company e-on.
In 2016, Bentley acquired the progressive assurance platform ComplyPro from UK-based ComplyServe.
On January 23, 2018, Bentley acquired S-Cube Futuretech Pvt Ltd. to expand its offerings specific to the concrete engineering design and documentation software users in India, Southeast Asia, and the Middle East. On April 26, 2018, Bentley acquired Dutch geotechnical modelling company Plaxis B.V. On July 15, 2018, Bentley acquired Canadian geotechnical modeling company SOILVISION Systems Ltd. in order to enhance its 3D geotechnical offerings. Also in 2018, the company acquired Synchro, Agency9, LEGION, ACE enterprise Slovakia, and Alworx.
In 2019, Bentley acquired SignCAD Systems, Keynetix, and Citilabs, Inc. & Orbit GeoSpatial Technologies. In 2020, Bentley acquired UK based consultancy Professional Construction Strategies Group (PCSG), and SRO Solutions. In 2021, Bentley acquired Ontracks Consulting, INRO Software, SPIDA Software, and Seequent Holdings Limited, and in 2022 acquisitions included Power Line Systems, ADINA R&D, Inc., and Eagle.io. In 2023, Bentley acquired Salt Lake City, Utah-based Blyncsy, a provider of artificial intelligence services for departments of transportation to support operations and maintenance activities. In September 2024, Bentley announced the acquisition of Cesium GS, Inc., a provider of 3D geospatial software applications and platforms.
== Bentley Institute Press ==
Bentley Systems also is a publisher of textbooks and professional references for the architectural, engineering, and construction (AEC), operations, geospatial, and educational communities, under the name Bentley Institute Press.
== Bentley Infrastructure 500 ==
Since 2010, Bentley has annually published a ranking of the top owners of infrastructure from both the public and private sectors.
== See also ==
Comparison of computer-aided design software
Comparison of 3D computer graphics software
List of CAx companies
List of companies based in the Philadelphia area
List of collaborative software
List of 3D computer graphics software
== References ==
== External links ==
Official website
Affective design describes the design of products, services, and user interfaces that aim to evoke intended emotional responses from consumers, ultimately improving customer satisfaction. It is often regarded within the domain of technology interaction and computing, in which emotional information is communicated to the computer from the user in a natural and comfortable way. The computer processes the emotional information and adapts or responds to try to improve the interaction in some way. The notion of affective design emerged from the field of human–computer interaction (HCI), specifically from the developing area of affective computing. Affective design serves an important role in user experience (UX) as it contributes to the improvement of the user's personal condition in relation to the computing system. Decision-making, brand loyalty, and consumer connections have all been associated with the integration of affective design. The goals of affective design focus on providing users with an optimal, proactive experience. Amongst overlap with several fields, applications of affective design include ambient intelligence, human–robot interaction, and video games.
== Background ==
Emotions are an integral part of the human experience, and thus, play a role in how users and consumers interact with interfaces and products. Donald Norman, an academic in the field of human-centered design, explored the importance of emotion in design, coining the concept of user-centered design in the 1980s. He discussed design heuristics and advocated for providing users with a pleasurable experience through the application of emotional design. According to Norman, there are three levels of emotional processing that influence the user’s affective experience: visceral design, behavioural design, and reflective design.
Visceral design relates to the immediate, subconscious responses to a product. It is triggered by an object’s perceptual properties and sensory experiences, such as the use of specific shapes or colours. Visceral level responses are rooted in biological and evolutionary processes that facilitate rapid assessment of encountered objects, including evaluations of their safety and the scope for further exploration. Product designers utilise visceral level responses to create positive user experiences by incorporating elements such as specific imagery, colour, typography, or branding to convey the desired emotional state or association to the user.
Behavioural design is related to the joy and effectiveness of use of a given product, particularly in terms of functionality and understandability. It is also affected by the physical feel of an object, such as its weight or texture. Effective behavioural design is intuitive and meets the user’s expectations and goals, as well as instils a sense of control over the product in the consumer. Behavioural design, while subconscious, is closely related to the users’ past experiences, where the expectations for a given product originate.
Reflective design is considered the highest level of emotional design, where the affective response stems from conscious mental processing. At this level, users reflect upon their experience with the product and how it affects their self-image. Reflective design is largely embedded within a social and cultural context, as consumers assess the social role or status communicated by using the product, particularly in light of cultural norms and preferences. Reflective design is also linked to customer retention, marking the stage where the decision to reuse the product in the future is made.
To cater to all three levels of emotional responses, designers should consider both a product’s appearance and its usability.
Bødker, Christensen, and Jørgensen presented a definition of affective design that emphasizes the importance of considering current social and cultural influences when relating to human emotions.
Along with the growth of human-computer interaction, the past few decades have seen an increase in the discussion of emotions in relation to design. Research in recent years has looked at what affects our emotions as well as how emotions affect our mental and physical states. Additionally, designers and researchers have explored how to elicit and map people’s emotions, ranging from positive to negative. Affective design encompasses more than the functionality of a product as it emphasizes user experience and is concerned with the dynamics of how humans interact with the world.
Affective design includes utilizing users’ emotions as data to guide technologies’ responses in addition to designing with predetermined elements intended to influence users’ emotions. The growth in the number and diversity of users carries with it the challenge to tailor interfaces and products to each individual. Affective design offers the potential to provide a unique, adaptive response to each user’s emotion. It has emerged as an intersection of functionality and pleasure, illustrating the significant influence of emotional components in technology and user experiences.
== Aims ==
Affective computing aims to construct affective interfaces which are capable of providing certain emotional experiences for users. Affective design attempts to understand the emotional relationships between users and products as well as how products communicate affectively through their physical features. It aims to create artefacts capable of eliciting the most pleasurable experience possible for users, across all of their senses. Affective design works to create the optimal user experience by tailoring human-interactions to individual users in response to their emotional input. It promotes affective interaction through communication, positioning itself as a mediator between human input and the computer's output. The effectiveness of affective design is measured with reference to feeling discrepancy, which defines the disparity between the target customer's emotional response and the actual emotions experienced by the user. Design that generates low feeling discrepancy is regarded as impactful affective design.
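As an illustration of the feeling-discrepancy measure described above, the intended and observed emotional responses can be compared within a dimensional model of affect such as the valence-arousal plane. Everything in the sketch below (the function name, the two-dimensional representation, the example ratings) is a hypothetical illustration, not a standard formula from the affective-design literature:

```python
import math

def feeling_discrepancy(target, observed):
    """Euclidean distance between the designer's target emotion and the
    user's reported emotion, each expressed as (valence, arousal) in [-1, 1].
    Lower values would indicate more impactful affective design."""
    return math.dist(target, observed)

# A product intended to feel calm and pleasant (high valence, low arousal)
# versus a user who actually found it mildly stressful:
target = (0.8, -0.4)
observed = (0.2, 0.4)
print(round(feeling_discrepancy(target, observed), 2))  # 1.0
```

In practice the observed coordinates would come from self-report instruments or physiological proxies, both of which carry the measurement caveats discussed later in this article.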
Another aim of affective design is increasing customer retention by creating memorable user experiences and ensuring brand loyalty. The integration of affective design and the subsequent emotional response elicited in customers has been shown to positively impact attachment, loyalty, and long-term commitment to the brand. This aim of affective design is grounded in the experience economy theory, suggesting that consumer engagement should occur at the emotional level. By creating positive affective responses, brands generate memorable experiences for product users, improving commercial success. This leads to positive sale-driving behaviours in consumers, such as spreading positive word-of-mouth, price insensitivity, and repurchasing.
== Challenges ==
The key challenge for affective design involves accurately identifying the user's affective needs, and, subsequently, the design of products that would address those needs. Current research focuses on the measurement and analysis of human interactions towards affective design and the assessment of the corresponding affective design features.
Another challenge for affective design is balancing the emotional and utilitarian aspects of product design. Prioritising emotional value over usability can affect users’ satisfaction with a given product if it fails to meet their functional expectations. Conversely, the overemphasis on product functionality can detract from an emotionally positive experience, leading to decreased memorability of use and brand loyalty. Therefore, effective design should encompass both product functionality and generate positive affective responses to create an optimal user experience.
Notably, while striking a balance between usability and affective design is important, generating strong emotional responses has been found to mitigate some negative experiences stemming from a lack of functionality. According to Norman, customer satisfaction at the emotional level often transcends functional inconveniences, and positive reflective memories can mitigate the negative effect of the initial experience.
Emotion-centered design has also been found to have a more significant impact on a product’s success than its functionality. One example is the introduction of the colourful casing to Apple’s iMac, which, by appealing to the visceral level of emotional processing, improved the product’s sales despite the hardware components remaining mostly unchanged.
Direct measures of users’ emotional states present another challenge for affective design. Products and interfaces that incorporate affective computing into their design, specifically to create user experiences that adapt to the emotional state of the user, often rely on indirect measures, such as physiological arousal. However, the use of biological markers, such as heart rate, blood pressure, or respiratory rate, only provides an indirect measure of affective states, which can be influenced by various external factors.
== Applications ==
Ambient intelligence (AmI) involves a variety of processes, including aspects of affective design, to construct systems that proactively interact with the user. It incorporates areas from computer science and engineering, including sensors, human-computer interfaces, and artificial intelligence, to create an adaptive, intelligent user environment. Collecting information from the environment and calculating the user’s anticipated needs, AmI lies at the intersection of the Internet of things and artificial intelligence. Applying affective design, AmI considers human desires and emotional responses. One way AmI processes human emotions is through facial expressions, which allows the technology to recognize user emotions and respond accordingly. These electronic environments provide the users with an aesthetic and pleasurable experience by enhancing human-product interactions.
Human–robot interaction is another area in which affective design is applied, specifically with emotional robots. Recognizing human emotions, emotional robots are aware of the user’s emotions and engage in an emotional interaction with the user. Emotional robots are designed to mimic human emotions and cognition. They analyze the user’s emotions by gathering data through various methods, including facial recognition, body language, and physiological signals, and then they exhibit a behavioral response. One example of an emotional robot is Erica, developed by Hiroshi Ishiguro and his team at Osaka University. Erica is an intelligent robot capable of carrying out a conversation with people and expressing emotions.
Video games serve as an immersive form of entertainment that can apply affective design in their development. Emotions impact the user’s engagement and relationship with the video game, prompting designers to consider affective design in their creation of video games. Affective gaming, for example, explores how video games can analyze the player's emotions and change game features accordingly. This has the potential to increase the personalization and adaptability of the games with the intention to increase user interest and commitment. It has been recognised as a potential solution to the issue of games providing an unbalanced player experience, often oscillating between excessively difficult and overly simplistic gameplay. Researchers suggest that game adaptability can also play a crucial role in facilitating a state of flow in players, which has been considered an integral part of enjoyable gaming experiences. Biofeedback and physiological arousal measures have been suggested as tools for games to adapt the gameplay, thus increasing player satisfaction by minimising frustration and maintaining an optimal level of challenge.
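The biofeedback loop suggested above can be sketched as a simple controller that nudges difficulty toward a target arousal band. All names and thresholds below are illustrative assumptions rather than an API from any actual game engine:

```python
def adapt_difficulty(difficulty, heart_rate, resting_hr,
                     low=1.15, high=1.45, step=0.1):
    """Nudge difficulty so the player's arousal (heart rate relative to
    resting) stays inside a target band: ease off when the player is
    over-aroused (frustration), ramp up when under-aroused (boredom).
    The band limits and step size are invented for illustration."""
    arousal = heart_rate / resting_hr
    if arousal > high:      # over-challenged: reduce difficulty
        difficulty -= step
    elif arousal < low:     # under-challenged: increase difficulty
        difficulty += step
    return max(0.0, min(1.0, difficulty))  # clamp to [0, 1]

# A bored player (near-resting heart rate) gets a harder game:
print(round(adapt_difficulty(0.5, heart_rate=70, resting_hr=65), 2))   # 0.6
# A frustrated player (elevated heart rate) gets an easier one:
print(round(adapt_difficulty(0.5, heart_rate=100, resting_hr=65), 2))  # 0.4
```

Keeping the player inside such a band is one crude stand-in for the flow state described above; real systems would smooth the physiological signal over time rather than react to single readings.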
== See also ==
Affective computing
Human–computer interaction
User-centered design
User experience design
== References ==
Work design (also referred to as job design or task design) is an area of research and practice within industrial and organizational psychology, and is concerned with the "content and organization of one's work tasks, activities, relationships, and responsibilities" (p. 662). Research has demonstrated that work design has important implications for individual employees (e.g., employee engagement, job strain, risk of occupational injury), teams (e.g., how effectively groups co-ordinate their activities), organisations (e.g., productivity, occupational safety and health targets), and society (e.g., utilizing the skills of a population or promoting effective aging).
The terms job design and work design are often used interchangeably in psychology and human resource management literature, and the distinction is not always well-defined. A job is typically defined as an aggregation of tasks assigned to an individual. However, in addition to executing assigned technical tasks, people at work often engage in a variety of emergent, social, and self-initiated activities. Some researchers have argued that the term job design therefore excludes processes that are initiated by incumbents (e.g., proactivity, job crafting) as well as those that occur at the level of teams (e.g., autonomous work groups). The term work design has been increasingly used to capture this broader perspective. Additionally, deliberate interventions aimed at altering work design are sometimes referred to as work redesign. Such interventions can be initiated by the management of an organization (e.g., job rotation, job enlargement, job enrichment) or by individual workers (e.g., job crafting, role innovation, idiosyncratic deals).
== History ==
Interest in the question of what makes good work was largely initiated during the industrial revolution, when machine-operated work in large factories replaced smaller, craft-based industries. In 1776, Adam Smith popularized the concept of division of labor in his book The Wealth of Nations, which states that dividing production processes into different stages would enable workers to focus on specific tasks, increasing overall productivity. This idea was further developed by Frederick Winslow Taylor in the late 19th century with his highly influential theory of scientific management (sometimes referred to as Taylorism). Taylor argued that jobs should be broken down into the smallest possible parts and managers should specify the one best way that these tasks should be carried out. Additionally, Taylor believed that maximum efficiency could only be achieved when managers were responsible for planning work while workers were responsible for performing tasks.
Scientific management became highly influential during the early 20th century, as the narrow tasks reduced training times and allowed less skilled and therefore cheaper labor to be employed. In 1910, Henry Ford took the ideas of scientific management further, introducing the idea of the automotive assembly line. In Ford's assembly lines, each worker was assigned a specific set of tasks, standing stationary while a mechanical conveyor belt brought the assemblies to the worker. While the assembly line made it possible to manufacture complex products at a fast rate, the jobs were extremely repetitive and workers were almost tied to the line.
Researchers began to observe that simplified jobs were negatively affecting employees' mental and physical health, while other negative consequences for organizations such as turnover, strikes, and absenteeism began to be documented. Over time, a field of research within industrial and organizational psychology known as job design, and more recently work design, emerged. Empirical work in the field flourished from the 1960s, and has become ever more relevant with modern technological developments that have changed the fundamental nature of work, such as automation, artificial intelligence, and remote work.
== Theoretical perspectives ==
=== Job characteristics model ===
Hackman & Oldham's (1976) job characteristics model is generally considered to be the dominant motivational theory of work design. The model identifies five core job characteristics that affect five work-related outcomes (i.e. motivation, satisfaction, performance, and absenteeism and turnover) through three psychological states (i.e. experienced meaningfulness, experienced responsibility, and knowledge of results):
Skill variety – The degree to which a job involves a variety of activities, requiring the worker to develop a variety of skills and talents. Workers are more likely to have a more positive experience in jobs that require several different skills and abilities than when the jobs are elementary and routine.
Task identity – The degree to which the job requires completion of a whole and identifiable piece of work with a clear outcome. Workers are more likely to have a more positive experience in a job when they are involved in the entire process rather than just being responsible for a part of the work.
Task significance – The degree to which a job has a substantial impact on the lives or work of others. Workers are more likely to have a more positive experience in a job that substantially improves either the psychological or physical well-being of others than in a job that has limited effect on anyone else.
Autonomy – The degree to which the job provides the employee with significant freedom, independence, and discretion to plan out the work and determine the procedures in the job. For jobs with a high level of autonomy, the outcomes of the work depend on the workers' own efforts, initiatives, and decisions, rather than on the instructions from a manager or a manual of job procedures. In such cases, the jobholders experience greater personal responsibility for their own successes and failures at work.
Feedback – The degree to which a job incumbent has knowledge of results. When workers receive clear, actionable information about their work performance, they have better overall knowledge of the effect of their work activities, and what specific actions they need to take (if any) to improve their productivity.
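In the job characteristics model, the five characteristics are combined into a single Motivating Potential Score (MPS): the three "meaningfulness" characteristics are averaged, while autonomy and feedback enter multiplicatively. A minimal sketch in Python (the rating values below are hypothetical, not taken from any study):

```python
def motivating_potential_score(skill_variety, task_identity, task_significance,
                               autonomy, feedback):
    """Hackman & Oldham's Motivating Potential Score (MPS).

    The three 'meaningfulness' characteristics are averaged, so a deficit in one
    can be offset by the others; autonomy and feedback are multiplied in, so a
    near-zero score on either collapses the MPS regardless of the other ratings.
    """
    meaningfulness = (skill_variety + task_identity + task_significance) / 3
    return meaningfulness * autonomy * feedback

# Illustrative ratings on a 1-7 scale (hypothetical jobs):
print(motivating_potential_score(6, 5, 7, 6, 5))  # an enriched job
print(motivating_potential_score(2, 2, 2, 1, 2))  # a simplified, low-autonomy job
```

The multiplicative structure captures the model's claim that autonomy and feedback are non-substitutable: enriching skill variety alone cannot compensate for a job with no autonomy.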
The central proposition of job characteristics theory, that work characteristics affect attitudinal outcomes, is well established by meta-analysis. However, some have criticized the use of job incumbents' perceptions to assess job characteristics, arguing that individuals' perceptions are constructions arising from social influences, such as the attitudes of their peers.
Job characteristics theory has been described as the logical conclusion of efforts to understand how work can satisfy basic human needs. The development of the job characteristics model was largely stimulated by Frederick Herzberg's two factor theory (also known as motivator-hygiene theory). Although Herzberg's theory was largely discredited, the idea that intrinsic job factors impact motivation sparked an interest in the ways in which jobs could be enriched which culminated in the job characteristics model.
=== Sociotechnical systems ===
Sociotechnical systems is an organizational development approach which proposes that the technical and social aspects of work should be jointly optimized when designing work. This contrasts with traditional methods that prioritize the technical component and then 'fit' people into it, often resulting in mediocre performance at a high social cost. Application of sociotechnical theory has typically focused on group rather than individual work design, and is responsible for the rise of autonomous work groups, which are still popular today.
One of the key principles of sociotechnical system design is that overall productivity is directly related to the system's accurate analysis of the social and technical needs. Accurate analysis of these needs typically results in the following work characteristics:
Minimal critical specification of rules – Work design should be precise about what has to be done, but not how to do it. The use of rules, policies and procedures should be kept to a minimum.
Variance control – Deviations from the ideal process should be controlled at the point where they originate.
Multiskills – A work system will be more flexible and adaptive if each member of the system is skilled in more than one function.
Boundary location – Interdependent roles should fall within the same departmental boundaries, usually drawn on the basis of technology, territory, and/or time.
Information flow – Information systems should provide information at the point of problem solving rather than being based on hierarchical channels.
Support congruence – The social system should reinforce behaviours which are intended by the work group structure.
Design and human values – The design should achieve superior results by providing a high quality of work life for individuals.
=== Job demands-control model ===
Karasek's (1979) job demands-control model is the earliest and most cited model relating work design to occupational stress. The key assumption of this model is that low levels of work-related decision latitude (i.e. job control) combined with high workloads (i.e. job demands) can lead to poorer physical and mental health. For example, high pressure and demands at work may lead to a range of negative outcomes such as psychological stress, burnout, and compromised physical health. Additionally, the model suggests that high levels of job control can buffer or reduce the adverse health effects of high job demands. Instead, this high decision latitude can lead to feelings of mastery and confidence, which in turn aid the individual in coping with further job demands.
The job demands-control model is widely regarded as a classic work design theory, spurring large amounts of research. However, the model has been criticized for its focus on a narrow set of work characteristics. Additionally, while strong support has been found for the negative effects of high job demands, some researchers have argued that the buffering effect of high job control on the negative effects of demand is less convincing.
==== Job demands-resources model ====
The job demands-resources model was introduced as a theoretical extension to the job demands-control model, and recognizes that other features of work in addition to control and support might serve as resources to counter job demands. The authors of the job demands-resources model argued that previous models of employee well-being "have been restricted to a given and limited set of predictor variables that may not be relevant for all job positions" (p. 309). Examples of the resources identified in this model include career opportunities, participation in decision making, and social support.
=== Relational job design theory ===
Relational job design theory is a popular contemporary approach to work design developed by American organizational psychologist Adam Grant, which builds on the foundations laid by Hackman & Oldham's (1976) job characteristics model. The core thesis of relational work design is that the work context shapes workers' motivations to care about making a prosocial difference (i.e. the desire to help or benefit others). Rather than focusing on the characteristics of tasks which make up jobs, relational work design is concerned with the 'relational architecture' of the workplace that influences workers' interpersonal relationships and connections with beneficiaries of the work. In this context, beneficiaries refer to the people whom the worker believes are affected by his or her work. An employer can design the relational architecture of the workplace as a means of motivating workers to care about making a prosocial difference.
Grant's theory makes a distinction between two key components of relational architecture:
Impact on beneficiaries – This refers to the perception that one's work has a positive impact on the lives and well-being of others. A visible, positive impact of the job provides employees with a feeling that their tasks matter, which in turn results in higher prosocial motivation.
Contact with beneficiaries – This refers to opportunities for employees to communicate and interact with the people who benefit from their work. Increased interaction with clients leads employees to become more emotionally engaged "as a result of first-hand exposure to their actions affecting a living, breathing human being" (p. 307). Thus, increasing job contact results in higher prosocial motivation.
=== Learning and development approach ===
The learning and development approach to work design, advanced by Australian organizational behavior professor Sharon K. Parker, draws on the findings of a diverse body of research which shows that certain job characteristics (e.g. high demands and control, autonomy, complex work with low supervision) can promote learning and development in workers. Parker argues that work design can not only shape cognitive, identity, and moral processes, but also speed up an individual's learning and development.
=== Economic theory ===
In economics, job design has been studied in the field of contract theory. In particular, Holmström and Milgrom (1991) developed the multi-task moral hazard model. Since some tasks are easier to measure than others, the model can be used to study which tasks should be bundled together. While the original model focused on the incentives-versus-insurance trade-off when agents are risk-averse, subsequent work has also studied the case of risk-neutral agents who are protected by limited liability. In this framework, researchers have studied whether tasks that are in direct conflict with each other (for instance, selling products that are imperfect substitutes) should be delegated to the same agent or to different agents. The optimal task assignment depends on whether the tasks are to be performed simultaneously or sequentially.
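The measurement trade-off can be made concrete with a standard textbook rendering of the linear-contract version of the multitask model; the notation below is an illustrative assumption, not the paper's exact statement:

```latex
% Linear-contract multitask sketch (after Holmstrom & Milgrom 1991).
% Signals, noise, and wage:
\[
  x_i = t_i + \varepsilon_i, \qquad
  \varepsilon \sim \mathcal{N}(0, \Sigma), \qquad
  w = \alpha + \beta^{\top} x ,
\]
% where t_i is effort on task i. Under CARA risk aversion r and effort cost
% C(t), the agent's certainty equivalent and first-order condition are
\[
  \mathrm{CE} = \alpha + \beta^{\top} t - C(t)
              - \tfrac{r}{2}\, \beta^{\top} \Sigma \beta,
  \qquad
  \beta_i = \frac{\partial C}{\partial t_i}.
\]
% With separable costs c_i(t_i) and independent noise, the optimal piece
% rate on task i is
\[
  \beta_i = \frac{B_i'(t_i)}{1 + r\, \sigma_i^{2}\, c_i''(t_i)},
\]
% where B is the principal's gross benefit.
```

A task that is hard to measure (large $\sigma_i^2$) thus receives a weak incentive, and because efforts compete for the agent's attention, it drags down the incentives that can usefully be placed on any task bundled with it in the same job.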
== Measurement and diagnostics ==
=== Job Diagnostic Survey (JDS) ===
The Job Diagnostic Survey (JDS) was developed by Hackman and Oldham in 1975 to assess perceptions of the core job characteristics outlined in job characteristics theory. The JDS consists of seven scales measuring variety, autonomy, task identity, significance, job feedback, feedback from others, and dealing with others. Prior to the development of viable alternatives, the JDS was the most commonly used job design measure. However, some authors have criticised its focus on a narrow set of motivational characteristics and neglect of other important work characteristics. Additionally, the psychometric properties of the JDS have been brought into question, including a low internal consistency and problems with the factor structure.
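The internal-consistency criticism refers to statistics such as Cronbach's alpha, which compares the variance of individual scale items to the variance of their sum. A minimal sketch of the computation (the item scores below are hypothetical, not JDS data):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a scale, given one list of respondent scores per item.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores),
    where k is the number of items. Values below roughly 0.7 are conventionally
    read as low internal consistency.
    """
    k = len(items)
    n = len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(col) for col in items) / var(totals))

# Three hypothetical items rated by five respondents on a 1-5 scale:
items = [[4, 5, 3, 5, 4], [4, 4, 3, 5, 4], [5, 5, 2, 5, 3]]
print(round(cronbach_alpha(items), 2))
```

When items move together across respondents, the total-score variance dominates the item variances and alpha approaches 1; weakly related items push it toward 0, which is the pattern critics reported for some JDS scales.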
=== Multimethod Job Design Questionnaire (MJDQ) ===
The Multimethod Job Design Questionnaire (MJDQ) was developed by Michael Campion in 1988 to assess what were, at the time, the main interdisciplinary approaches to work design (i.e. motivational, mechanistic, biological, perceptual motor). Intended to address the weaknesses of the JDS, the MJDQ suffered from both measurement problems and gaps in construct measurement.
=== Work Design Questionnaire (WDQ) ===
The Work Design Questionnaire (WDQ) was developed by Morgeson and Humphrey in 2006 as a comprehensive and integrative work design measure which addresses the inadequacies of its predecessors. The WDQ focuses not only on the tasks that make up a person's job, but also the relations between workers and the broader environment. The WDQ has since been translated into several languages other than English, including German, Italian, and Spanish.
== Antecedents of work design behaviours ==
Decisions about the organization of work are typically made by those in positions of formal authority, such as executives, managers, and team leaders. These decisions, which usually regard the division of labor and the integration of effort, create work designs in which employees have assigned tasks and responsibilities. In addition to work design arising from formal decision-making, work design can also be created through emergent, informal, and social processes (e.g. role expectations from peers). Usually, these types of processes arise from the actions and decisions of employees, meaning employees have a certain degree of agency in shaping their own work designs.
=== Motivation, knowledge, skills, and abilities (KSAs) ===
In accordance with the ability-motivation-opportunity model of behaviour, the work design-related decisions of individuals are shaped by their motivation and knowledge, skills, and abilities. These proximal processes apply to decision making both in people in formal positions of authority (i.e. managers) and in individual employees. With respect to motivation, managers' decisions could be shaped by autonomous motivation (e.g. the desire to retain employees) or controlled motivation (e.g. reducing staffing costs). In terms of KSAs, managers' knowledge about work design options and their skills to engage employees in the decision making process may shape their decisions. It is believed that these same processes apply to employees' work design-related actions and decisions.
=== Opportunity ===
Opportunity, in this context, refers to the uncontrollable forces surrounding an individual that enable or constrain the individual's task performance. Regardless of an individual's motivation or KSAs regarding a particular work design-related decision, that individual can only implement change if they have the opportunity to do so. For example, if a manager lacks the power to mobilise necessary resources, perhaps due to a rigid organizational hierarchy, their work design-related actions would be constrained.
=== Individual influences ===
Demographics – Characteristics such as age, gender, and ethnicity can shape work design decisions. The more these attributes signal assumptions that the employee is competent and trustworthy, the more managers will be motivated to make role adjustments to improve work design. Additionally, there is evidence that demographic characteristics can affect the work design decisions of employees. For example, older workers may be discouraged from renegotiating their work designs due to discriminatory attitudes in the workplace. Gender and ethnicity can make some workers more vulnerable to low-quality work designs, with data showing that female workers have less autonomy, fewer development opportunities, and reduced career possibilities. Evidence also suggests that migrant workers often have less enriched work designs compared to non-migrant workers.
Competence and learning – Karasek and Theorell propose that enriched work designs create a self-perpetuating spiral by which the promotion of learning builds employees' mastery and competence, which in turn enables employees to take on more challenging tasks and responsibilities, generating further learning.
Other individual differences – Personality traits and stable individual differences such as motivation and initiative can affect both managerial and individual work design-related decision making. For example, personality traits may affect who managers select for particular jobs as well as an employee's choice of occupation.
=== Contextual influences ===
International – Organizations operate today under the influence of globalization and market liberalization. While there is little empirical work on the direct effects of these factors, some have argued that globalization has increased the perceived threat of competition and job insecurity, leading to increased expectations about working harder. Additionally, increased access to new suppliers in other countries, especially developing countries, has increased the potential for organizations to influence work design in these countries. Evidence has shown that cost pressures on suppliers are linked to poor work designs, such as high workloads and physical demands.
National – Organizations are subject to the economic, cultural, and institutional context of the country they operate in. Work designs in economies with a relatively high GDP and low unemployment typically have lower workloads and higher job resources (e.g. autonomy, skill variety, challenge) due to higher investment in practices aimed at attracting and retaining employees. Additionally, some have argued that national culture shapes individual preferences for particular working conditions. For example, managers and employees from cultures with a preference for structure and formal rules might prefer work designs which are clearly defined. Finally, national institutions such as trade unions, national employment policies, and training systems policies may have direct or indirect effects on work design.
Occupational – Occupations shape the distribution of tasks as well as the influence of skills used in completing those tasks, both of which are key to work design. Additionally, occupations tend to encourage and reinforce particular values, which may or may not be congruent with the values of individual workers. For example, occupations which value independence (e.g. police detectives) are likely to reward actions which demonstrate initiative and creativity, giving rise to job characteristics such as autonomy and variety.
Organizational – According to strategic human resource management theory (SHRM), a key task for managers is to adopt HR practices which are internally consistent with the strategic objectives of the organization. For example, if an organization's strategy is to gain competitive advantage by minimizing costs, managers may be motivated to adopt work designs based on scientific management (i.e. low training and induction costs to allow low-skill and low-paid workers to be employed). In contrast, managers working for an organization that aims to gain competitive advantage through quality and innovation may be motivated to provide employees with opportunities to use specialist knowledge and skills, resulting in enriched work designs.
Work groups – Drawing on sociotechnical theory and the team effectiveness literature, some authors argue that key characteristics of work groups (i.e. composition, interdependence, autonomy, and leadership) can influence the work design of individual team members, although it is acknowledged that evidence on this particular topic is limited.
== Strategies for work (re)design ==
=== Managerial strategies ===
==== Job rotation ====
Job rotation is a job design process by which employee roles are rotated in order to promote flexibility and tenure in the working environment. Through job rotation, employees move laterally and perform tasks at different organizational levels; experiencing different posts and responsibilities increases an employee's ability to evaluate his or her capabilities in the organization. By design, job rotation is intended to enhance motivation, develop workers' outlook, increase productivity, improve the organization's performance on various levels through its multi-skilled workers, and provide new opportunities to improve the attitude, thought, capabilities and skills of workers.
==== Job enlargement ====
Hulin and Blood (1968) define job enlargement as the process of allowing individual workers to determine their own pace (within limits), to serve as their own inspectors by giving them responsibility for quality control, to repair their own mistakes, to be responsible for their own machine set-up and repair, and to attain choice of method. By working in a larger scope, as Hulin and Blood state, workers are pushed to adopt new tactics, techniques, and methodologies on their own. Frederick Herzberg referred to the addition of interrelated tasks as 'horizontal job loading,' or, in other words, widening the breadth of an employee's responsibilities.
==== Job enrichment ====
Job enrichment increases the employees' autonomy over the planning and execution of their own work, leading to self-assigned responsibility. Because of this, job enrichment has the same motivational advantages as job enlargement, with the added benefit of granting workers autonomy. Frederick Herzberg viewed job enrichment as 'vertical job loading' because it also includes tasks formerly performed by someone at a higher level where planning and control are involved.
=== Individual strategies ===
==== Job crafting ====
Job crafting can be defined as proactively changing the boundaries and conditions of the tasks, relationships, and meaning of a job. These changes are not negotiated with the employer and may not even be noticed by the manager. Job crafting behaviours have been found to lead to a variety of positive work outcomes, including work engagement, job satisfaction, resilience, and thriving.
==== Role innovation ====
Role innovation occurs when an employee proactively redefines a work role by changing the mission or practice of the role. When work roles are defined by organizations they do not always adequately address the problems faced by the profession. When employees notice this, they can attempt to redefine the role through innovation, improving the resilience of the profession in handling future situations.
==== Task revision ====
Task revision is seen as a form of counter-role behavior in that it is about resistance to defective work procedures, such as inaccurate job descriptions and dysfunctional expectations. This may involve acting against the norms of the organization with the end goal of making corrections to procedures. It has been noted that task revision rarely occurs in work settings as this type of resistance is often seen as inappropriate by managers and employees alike. However, a work environment which is supportive of deviation from social norms could facilitate task revision.
==== Voice ====
In the context of job redesign, voice refers to behaviours which emphasize challenging the status quo with the intention of improving the situation rather than merely criticizing. This can be as simple as suggesting more effective ways of doing things within the organization. When individuals stand up and express innovative ideas, the organization may benefit from these fresh perspectives. Voice may be particularly important in organizations where change and innovation is necessary for survival. While the individual employee does not immediately benefit from this expression, successful innovations may lead to improved performance appraisals.
==== Idiosyncratic deals ====
Idiosyncratic deals, also known as i-deals, are individualized work arrangements negotiated proactively by an employee with their employer; the concept was developed by American organizational psychologist Denise Rousseau. The most common forms of i-deals are flexible working hours and opportunities for personal development. Other forms identified in previous research include task and work responsibilities, workload reduction, location flexibility, and financial incentives. These arrangements may be put in place because an employer values the negotiating employee, and granting the i-deal increases the likelihood of retaining the employee. This can be seen as a win-win scenario for both parties.
==== Personal initiative ====
Personal initiative refers to self-starting behaviours by an employee that are consistent with the mission of the organization, have a long-term focus, are goal-directed and action-oriented, and are persistent in the face of difficulty. Additionally, these behaviours typically go beyond what is required of the employee in their work role.
== See also ==
Industrial and organizational psychology
Applied psychology
Occupational health psychology
Management
Organizational behaviour
Work motivation
== References ==
Cradle-to-cradle design (also referred to as 2CC2, C2C, cradle 2 cradle, or regenerative design) is a biomimetic approach to the design of products and systems that models human industry on nature's processes, where materials are viewed as nutrients circulating in healthy, safe metabolisms. The term itself is a play on the popular corporate phrase "cradle to grave", implying that the C2C model is sustainable and considerate of life and future generations—from the birth, or "cradle", of one generation to the next generation, versus from birth to death, or "grave", within the same generation.
C2C suggests that industry must protect and enrich ecosystems and nature's biological metabolism while also maintaining a safe, productive technical metabolism for the high-quality use and circulation of organic and technical nutrients. It is a holistic, economic, industrial and social framework that seeks to create systems that are not only efficient but also essentially waste free. Building off the whole systems approach of John T. Lyle's regenerative design, the model in its broadest sense is not limited to industrial design and manufacturing; it can be applied to many aspects of human civilization such as urban environments, buildings, economics and social systems.
The term "Cradle to Cradle" is a registered trademark of McDonough Braungart Design Chemistry (MBDC) consultants. The Cradle to Cradle Certified Products Program began as a proprietary system; however, in 2012 MBDC turned the certification over to an independent non-profit called the Cradle to Cradle Products Innovation Institute. Independence, openness, and transparency are the Institute's first objectives for the certification protocols. The phrase "cradle to cradle" itself was coined by Walter R. Stahel in the 1970s. The current model is based on a system of "lifecycle development" initiated by Michael Braungart and colleagues at the Environmental Protection Encouragement Agency (EPEA) in the 1990s and explored through the publication A Technical Framework for Life-Cycle Assessment.
In 2002, Braungart and William McDonough published a book called Cradle to Cradle: Remaking the Way We Make Things, a manifesto for cradle-to-cradle design that gives specific details of how to achieve the model. The model has been implemented by many companies, organizations and governments around the world. Cradle-to-cradle design has also been the subject of many documentary films such as Waste = Food.
== Introduction ==
In the cradle-to-cradle model, all materials used in industrial or commercial processes—such as metals, fibers, dyes—fall into one of two categories: "technical" or "biological" nutrients.
Technical nutrients are strictly limited to non-toxic, non-harmful synthetic materials that have no negative effects on the natural environment; they can be used in continuous cycles as the same product without losing their integrity or quality. In this manner these materials can be used over and over again instead of being "downcycled" into lesser products, ultimately becoming waste.
Biological nutrients are organic materials that, once used, can be disposed of in any natural environment and decompose into the soil, providing food for small life forms without affecting the natural environment. This is dependent on the ecology of the region; for example, organic material from one country or landmass may be harmful to the ecology of another country or landmass.
The two types of materials each follow their own cycle in the regenerative economy envisioned by Keunen and Huizing.
=== Structure ===
Initially defined by McDonough and Braungart, the Cradle to Cradle Products Innovation Institute's five certification criteria are:
Material health, which involves identifying the chemical composition of the materials that make up the product. Particularly hazardous materials (e.g. heavy metals, pigments, halogen compounds etc.) have to be reported whatever the concentration, and other materials reported where they exceed 100 ppm. For wood, the forest source is required. The risk for each material is assessed against criteria and eventually ranked on a scale with green being materials of low risk, yellow being those with moderate risk but are acceptable to continue to use, red for materials that have high risk and need to be phased out, and grey for materials with incomplete data. The method uses the term 'risk' in the sense of hazard (as opposed to consequence and likelihood).
Material reutilization, which is about recovery and recycling at the end of product life.
Assessment of energy required for production, which for the highest level of certification needs to be based on at least 40% renewable energy for all parts and subassemblies.
Water, particularly usage and discharge quality.
Social responsibility, which assesses fair labor practices.
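The material-health colour ranking described above can be sketched as a simple classification. The numeric thresholds and the scalar hazard score below are hypothetical illustrations; the real protocol assesses each material against detailed toxicological criteria rather than a single number:

```python
def rank_material(hazard_score=None, data_complete=True):
    """Map a material-health assessment onto the C2C colour scale.

    grey   - incomplete data
    green  - low risk
    yellow - moderate but acceptable risk
    red    - high risk, to be phased out

    The 0.2 / 0.6 cut-offs are invented for illustration only.
    """
    if not data_complete or hazard_score is None:
        return "grey"
    if hazard_score < 0.2:
        return "green"
    if hazard_score < 0.6:
        return "yellow"
    return "red"

print(rank_material(0.1))                  # low-risk material
print(rank_material(0.8))                  # high-risk material, phase out
print(rank_material(data_complete=False))  # cannot be assessed yet
```

Separately from this ranking, the scheme requires particularly hazardous materials (heavy metals, pigments, halogen compounds) to be reported at any concentration, and other materials once they exceed 100 ppm.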
=== Health ===
Currently, many human beings come into contact with or consume, directly or indirectly, many harmful materials and chemicals daily. In addition, countless other forms of plant and animal life are also exposed. C2C seeks to remove dangerous technical nutrients (synthetic materials such as mutagenic materials, heavy metals and other dangerous chemicals) from current life cycles. If the materials we come into contact with and are exposed to on a daily basis are not toxic and do not have long-term health effects, then the health of the overall system can be better maintained. For example, a fabric factory can eliminate all harmful technical nutrients by carefully reconsidering what chemicals they use in their dyes to achieve the colours they need and attempt to do so with fewer base chemicals.
=== Economics ===
The C2C model shows high potential for reducing the financial cost of industrial systems. For example, in the redesign of the Ford River Rouge Complex, the planting of Sedum (stonecrop) vegetation on assembly plant roofs retains and cleanses rain water. It also moderates the internal temperature of the building in order to save energy. The roof is part of an $18 million rainwater treatment system designed to clean 20 billion US gallons (76,000,000 m3) of rainwater annually. This saved Ford $30 million that would otherwise have been spent on mechanical treatment facilities. Following C2C design principles, product manufacture can be designed to cost less for the producer and consumer. Theoretically, they can eliminate the need for waste disposal such as landfills.
=== Definitions ===
Cradle to cradle is a play on the phrase "cradle to grave", implying that the C2C model is sustainable and considerate of life and future generations.
Technical nutrients are basically inorganic or synthetic materials manufactured by humans—such as plastics and metals—that can be used many times over without any loss in quality, staying in a continuous cycle.
Biological nutrients and materials are organic materials that can decompose into the natural environment, soil, water, etc. without affecting it in a negative way, providing food for bacteria and microbiological life.
Materials are usually referred to as the building blocks of other materials, such as the dyes used in colouring fibers or rubbers used in the sole of a shoe.
Downcycling is the reuse of materials into lesser products. For example, a plastic computer case could be downcycled into a plastic cup, which then becomes a park bench, etc.; this eventually leads to plastic waste. In conventional understanding, this is no different from recycling that produces a supply of the same product or material.
Waste = Food is a basic concept of organic waste materials becoming food for bugs, insects and other small forms of life who can feed on it, decompose it and return it to the natural environment which we then indirectly use for food ourselves.
=== Existing synthetic materials ===
The question of how to deal with the countless existing technical nutrients (synthetic materials) that cannot be recycled or reintroduced to the natural environment is dealt with in C2C design. The materials that can be reused and retain their quality can be used within the technical nutrient cycles while other materials are far more difficult to deal with, such as plastics in the Pacific Ocean.
== Hypothetical examples ==
One potential example is a shoe that is designed and mass-produced using the C2C model. The sole might be made of "biological nutrients" while the upper parts might be made of "technical nutrients". The shoe is mass-produced at a manufacturing plant that utilizes its waste material by putting it back into the cycle, potentially by using off-cuts from the rubber soles to make more soles instead of merely disposing of them; this is dependent on the technical materials not losing their quality as they are reused. Once the shoes have been manufactured, they are distributed to retail outlets where the customer buys the shoe at a reduced price because the customer is only paying for the use of the materials in the shoe for the period of time that they will be wearing them. When they outgrow the shoe or it is damaged, they return it to the manufacturer. When the manufacturer separates the sole from the upper parts (separating the technical and biological nutrients), the biological nutrients are returned to the natural environment while the technical nutrients can be used to create the sole of another shoe.
Another example of C2C design is a disposable cup, bottle, or wrapper made entirely out of biological materials. When the user is finished with the item, it can be disposed of and returned to the natural environment; the cost of disposal of waste such as landfill and recycling is greatly reduced. The user could also potentially return the item for a refund so it can be used again.
== Finished products ==
Rohner Textile AG Climatex-textile
Biofoam, a cradle-to-cradle alternative to expanded polystyrene
Sewage sludge treatment plants, facilities that may create fertiliser from sewage sludge. This approach is a green retrofit for the current (inefficient) system of organic waste disposal, though composting toilets are a better approach in the long run.
Aquion Energy large scale batteries
Ecovative Design packaging and insulation made from waste by binding it together with mycelium
== Implementation ==
The C2C model can be applied to almost any system in modern society: urban environments, buildings, manufacturing, social systems, etc. Five steps are outlined in Cradle to Cradle: Remaking the Way We Make Things:
Get "free of" known culprits
Follow informed personal preferences
Create "passive positive" lists—lists of materials used categorised according to their safety level
The X list—substances that must be phased out, such as teratogenic, mutagenic, carcinogenic
The gray list—problematic substances that are not so urgently in need of phasing out
The P list—the "positive" list, substances actively defined as safe for use
Activate the positive list
Reinvent—the redesign of the former system
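The "passive positive" lists in the steps above can be sketched as a simple classification scheme. The substance names and category assignments below are purely illustrative assumptions, not drawn from any real C2C assessment:

```python
# Sketch of the X / gray / P material lists described above.
# All substance names and their assignments are hypothetical examples.

X_LIST = {"lead", "cadmium"}          # must be phased out
GRAY_LIST = {"pvc"}                   # problematic, but less urgent
P_LIST = {"wool", "untreated wood"}   # actively defined as safe for use

def categorise(substance):
    """Return the C2C-style list a substance falls into, or 'unassessed'."""
    s = substance.lower()
    if s in X_LIST:
        return "X"
    if s in GRAY_LIST:
        return "gray"
    if s in P_LIST:
        return "P"
    return "unassessed"
```

In this sketch, any substance not yet on a list is flagged as unassessed, reflecting that the lists are built up progressively as materials are evaluated.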
Products that adhere to all steps may be eligible to receive C2C certification. Credits earned under other certifications, such as Leadership in Energy and Environmental Design (LEED) and Building Research Establishment Environmental Assessment Method (BREEAM), can be used to qualify for C2C certification, and vice versa in the case of BREEAM.
C2C principles were first applied to systems in the early 1990s by Braungart's Hamburger Umweltinstitut (HUI) and The Environmental Institute in Brazil for biomass nutrient recycling of effluent to produce agricultural products and clean water as a byproduct.
In 2007, MBDC and the EPEA formed a strategic partnership with global materials consultancy Material ConneXion to help promote and disseminate C2C design principles by providing greater global access to C2C material information, certification and product development.
As of January 2008, Material ConneXion's Materials Libraries in New York, Milan, Cologne, Bangkok and Daegu, Korea, started to feature C2C assessed and certified materials and, in collaboration with MBDC and EPEA, the company now offers C2C Certification, and C2C product development.
While the C2C model has influenced the construction or redevelopment of smaller sites, several large organizations and governments have also implemented the C2C model and its ideas and concepts:
=== Major implementations ===
The Lyle Center for Regenerative Studies incorporates holistic and cyclic systems throughout the center. Regenerative design is arguably the foundation for the trademarked C2C.
The Government of China contributed to the construction of the city of Huangbaiyu based on C2C principles, utilising the rooftops for agriculture. This project is largely criticized as a failure to meet the desires and constraints of the local people.
The Ford River Rouge Complex redevelopment, cleaning 20 billion US gallons (76,000,000 m3) of rainwater annually.
The Netherlands Institute of Ecology (NIOO-KNAW) planned to make its laboratory and office complex completely cradle-to-cradle compliant.
Several private houses and communal buildings in the Netherlands.
Fashion Positive, an initiative to assist the fashion world in implementing the cradle-to-cradle model in five areas: material health, material reuse, renewable energy, water stewardship and social fairness.
== Coordination with other models ==
The cradle-to-cradle model can be viewed as a framework that considers systems as a whole or holistically. It can be applied to many aspects of human society, and is related to life-cycle assessment. See for instance the LCA-based model of the eco-costs, which has been designed to cope with analyses of recycle systems. The cradle-to-cradle model in some implementations is closely linked with the car-free movement, such as in the case of large-scale building projects or the construction or redevelopment of urban environments. It is closely linked with passive solar design in the building industry and with permaculture in agriculture within or near urban environments. An earthship is a perfect example where different re-use models are used, including cradle-to-cradle design and permaculture.
== Constraints ==
A major constraint in the optimal recycling of materials is that at civic amenity sites, products are not disassembled by hand and have each individual part sorted into a bin, but instead have the entire product sorted into a certain bin.
This makes the extraction of rare-earth elements and other materials uneconomical (at recycling sites, products are typically crushed, after which the materials are extracted by means of magnets, chemicals and special sorting methods), so optimal recycling of, for example, metals is impossible: an optimal recycling method for metals would require sorting all similar alloys together rather than mixing plain iron with alloys.
Disassembling products is not feasible at civic amenity sites as currently designed; a better method would be to send broken products back to the manufacturer so that the manufacturer can disassemble them. The disassembled parts can then be used to make new products, or at least be sent separately to recycling sites for proper recycling by the exact type of material. At present, though, few countries have laws obliging manufacturers to take back their products for disassembly, and no such obligations exist even for manufacturers of cradle-to-cradle products. One process where this is happening is in the EU with the Waste Electrical and Electronic Equipment Directive. The European Training Network for the Design and Recycling of Rare-Earth Permanent Magnet Motors and Generators in Hybrid and Full Electric Vehicles (ETN-Demeter) also produces designs of electric motors from which the magnets can be easily removed for recycling of the rare-earth metals.
== Criticism and response ==
Criticism has been advanced on the fact that McDonough and Braungart previously kept C2C consultancy and certification in their inner circle. Critics argued that this lack of competition prevented the model from fulfilling its potential. Many critics pleaded for a public-private partnership overseeing the C2C concept, thus enabling competition and growth of practical applications and services.
McDonough and Braungart responded to this criticism by giving control of the certification protocol to a non-profit, independent Institute called the Cradle to Cradle Products Innovation Institute. McDonough said the new institute "will enable our protocol to become a public certification program and global standard". The new Institute announced the creation of a Certification Standards Board in June 2012. The new board, under the auspices of the Institute, will oversee the certification moving forward.
Experts in the field of environmental protection have questioned the practicability of the concept. Friedrich Schmidt-Bleek, head of the German Wuppertal Institute, called Braungart's assertion that the "old" environmental movement had hindered innovation with its pessimist approach "pseudo-psychological humbug". Schmidt-Bleek said of the Cradle-to-Cradle seat cushions Braungart developed for the Airbus 380: "I can feel very nice on Michael's seat covers in the airplane. Nevertheless I am still waiting for a detailed proposal for a design of the other 99.99 percent of the Airbus 380 after his principles."
In 2009 Schmidt-Bleek stated that it is out of the question that the concept can be realized on a bigger scale.
Some claim that C2C certification may not be entirely sufficient in all eco-design approaches. Quantitative methodologies (LCAs) and tools better adapted to the type of product considered could be used in tandem. The C2C concept ignores the use phase of a product: according to variants of life-cycle assessment (see: Life-cycle assessment § Variants), the entire life cycle of a product or service has to be evaluated, not only the material itself. For many goods, e.g. in transport, the use phase has the most influence on the environmental footprint; the more lightweight a car or a plane, the less fuel it consumes and consequently the less impact it has.
It is safe to say that every production step or resource-transformation step needs a certain amount of energy.
The C2C concept foresees its own certification of its analysis and is therefore in contradiction with the international standards for life-cycle assessment (ISO 14040 and ISO 14044), under which an independent external review is required in order to obtain comparable and resilient results.
== See also ==
Appropriate technology
Ellen MacArthur Foundation
List of environment topics
Modular construction systems
Planned obsolescence – the opposite of durable, no-waste design
The Blue Economy
Upcycling
== References ==
== External links ==
"Cradle to Cradle Products Innovation Institute". c2ccertified.org.
"William McDonough & Michael Braungart (1998): "The Next Industrial Revolution" (article)". theatlantic.com. October 1998.
"Cradle to Cradle – The Product-Life Institute". product-life.org.
"Platform for Accelerating the Circular Economy (PACE)". acceleratecirculareconomy.org. | Wikipedia/Cradle-to-cradle_design |
Responsive computer-aided design (also simplified to responsive design) is an approach to computer-aided design (CAD) that utilizes real-world sensors and data to modify a three-dimensional (3D) computer model. The concept is related to cyber-physical systems through its blurring of the virtual and physical worlds, but applies specifically to the initial digital design of an object prior to production.
The process begins with a designer creating a basic design of an object using CAD software with parametric or algorithmic relationships. These relationships are then linked to physical sensors, allowing them to drive changes to the CAD model within the established parameters. Reasons to allow sensors to modify a CAD model include customizing a design to fit a user's anthropometry, assisting people without CAD skills to personalize a design, or automating part of an iterative design process in similar fashion to generative design. Once the sensors have affected the design it may then be manufactured as a one-off piece using a digital fabrication technology, or go through further development by a designer.
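The sensor-driven parametric relationship described above can be illustrated with a minimal sketch. The parameter names, sensor values, and clamping behaviour here are assumptions for illustration, not any particular CAD system's API:

```python
# Minimal sketch of a responsive parametric relationship: a sensor reading
# drives one model parameter, but only within the bounds the designer set.
# All names and values are hypothetical.

from dataclasses import dataclass

@dataclass
class Parameter:
    value: float
    minimum: float
    maximum: float

    def drive(self, sensor_reading: float) -> float:
        """Update the parameter from a sensor, clamped to the design limits."""
        self.value = max(self.minimum, min(self.maximum, sensor_reading))
        return self.value

# e.g. a seat-width parameter driven by a hip-width measurement in mm
seat_width = Parameter(value=420.0, minimum=380.0, maximum=520.0)
seat_width.drive(455.3)   # within bounds: adopted directly
seat_width.drive(610.0)   # out of bounds: clamped to the maximum
```

The clamping step reflects the article's point that sensors may only change the model "within the established parameters"; the designer's bounds remain authoritative.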
== Context ==
Responsive computer-aided design is enabled by ubiquitous computing and the Internet of Things, concepts which describe the capacity for everyday objects to contain computing and sensing technologies. It is also enabled by the ability to directly manufacture one-off objects from digital data, using technologies such as 3D printing and computer numerical control (CNC) machines. Such digital fabrication technologies allow for customization, and are drivers of the mass-customization phenomenon. They also provide new opportunities for consumers to participate in the design process, known as co-design.
As these concepts mature, responsive design is emerging as an opportunity to reduce reliance on graphical user interfaces (GUIs) as the only method for designers and consumers to design products, aligning with claims by Golden Krishna that "the best design reduces work. The best computer is unseen. The best interaction is natural. The best interface is no interface." Calls to reduce reliance on GUIs and automate some of the design process connects with Mark Weiser's original vision of ubiquitous computing.
== Related concepts ==
A variety of similar research areas are based on gesture recognition, with many projects using motion capture to track the physical motions of a designer and translate them into three-dimensional geometry suitable for digital fabrication. While these share similarities to responsive design through their cyber-physical systems, they require direct intent to design an object and some level of skill. These are not considered responsive, as responsive design occurs autonomously and may even occur without the user being aware that they are designing at all.
This topic has some common traits with responsive web design and responsive architecture, with both fields focused on systems design and adaptation based on functional conditions.
== Current work ==
Responsive computer-aided design has been used to customize fashion, and is currently an active area of research in footwear by large companies such as New Balance, who are looking to customize shoe midsoles using foot pressure data from customers.
Sound waves have also been used to customize 3D models, producing sculptural forms of a baby's first cries or a favorite song.
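One plausible way sound data could shape such a model is to map sample amplitudes to the radii of a revolve profile, so that louder moments bulge the form outward. This mapping is an assumption for illustration, not the algorithm of any specific product:

```python
# Hedged sketch: audio samples in the range -1.0..1.0 become radii along
# the height of an object; revolving the profile yields the sculpture.

import math

def waveform_to_profile(samples, base_radius=10.0, gain=5.0):
    """Map audio samples (-1.0..1.0) to radii along the object's height."""
    return [base_radius + gain * abs(s) for s in samples]

# A toy 440 Hz "cry" sampled at 8 kHz, reduced to 16 points
cry = [math.sin(2 * math.pi * 440 * t / 8000) for t in range(16)]
profile = waveform_to_profile(cry)
```

The resulting list of radii could then be fed to a CAD revolve or lathe operation to generate the final solid.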
== See also ==
Design computing
Four-dimensional product
Industry 4.0
Product design
== References ==
== Further reading ==
Greenfield, Adam (2006). Everyware: The Dawning Age of Ubiquitous Computing. Berkeley, California USA: New Riders. ISBN 0-321-38401-6
Design fiction is a design practice aiming at exploring and criticising possible futures by creating speculative, and often provocative, scenarios narrated through designed artifacts. It is a way to facilitate and foster debates, as explained by futurist Scott Smith: "... design fiction as a communication and social object creates interactions and dialogues around futures that were missing before. It helps make it real enough for people that you can have a meaningful conversation with".
By inspiring new imaginaries about the future, design fiction opens new perspectives for innovation, as conveyed by author Bruce Sterling's own definition: "Design Fiction is the deliberate use of diegetic prototypes to suspend disbelief about change".
Reflecting the diversity of media used to create design fictions and the breadth of concepts that are prototyped in the associated fictional worlds, researchers Joseph Lindley and Paul Coulton propose that design fiction be defined as: "(1) something that creates a story world, (2) has something being prototyped within that story world, (3) does so in order to create a discursive space", where 'something' may mean 'anything'. Examples of the media used to create design fiction storyworlds include physical prototypes, prototypes of user manuals, digital applications, videos, short stories, comics, fictional crowdfunding videos, fictional documentaries, catalogues or newspapers and pastiches of academic papers and abstracts.
== History ==
Design fiction is part of the speculative design discipline, itself a relative of critical design. Although the term design fiction was coined by Bruce Sterling in 2005, where he says it is similar to science fiction but "makes more sense on the page", it was Julian Bleecker's 2009 essay that firmly established the idea. Bleecker brought together Sterling's original idea and combined it with David A. Kirby's notion of the diegetic prototype and a paper written by influential researchers Paul Dourish and Genevieve Bell which argued that reading science fiction alongside ubiquitous computing research would shed further light on both areas. Since Bleecker's essay was published, design fiction has become increasingly popular, as demonstrated by its adoption in a wide variety of academic research.
== Characteristics ==
Although design fiction shows a lot of overlap with other discursive design practices such as critical design, adversarial design, interrogative design, design for debate, reflective design, and contestational design, it is possible to identify some of its distinguishing features.
Design fiction draws its inspiration from weak signals of our everyday lives, such as innovations in new technologies or new cultural trends, and uses extrapolation to build disruptive visions of society. By challenging the status quo, this practice aims at making us question our current uses, norms, ethics or values, whether we are leading innovation or consuming it at the other end of the line.
Design fictions tend to stand aside from manichaean utopian/dystopian depictions, and rather dig into more ambiguous grey areas of the explored subjects. As explained by Fabien Girardin, co-founder of The Near Future Laboratory: "Design Fiction doesn't so much 'predict' the future. It is a way to consider the future differently; a way to tell stories about alternatives and unexpected trajectories; a way to discuss a different kind of future than the typical bifurcation into utopian and dystopian".
Design fictions focus on the everyday, exploring and questioning interactions between people or HCI, habits, social behaviors, casual failures or rituals. Fabien Girardin on this point: "To contrast with other similar design approaches, we think Design Fiction is a bit different from critical design, which is a bit more abstract and theoretical compared to our own interest in design happening outside of galleries or museums. Design Fiction is about exploring a future mundane".
Another approach to design fiction is through live action role-playing games (larps). Malthe Stavning Erslev argues that the research larp Civilisation's Waiting Room, which explores a future society run by an AI, is a form of design fiction using what he calls a mimetic method that is "making the technology appear" in "deeply embodied, ephemeral encounters of enactment".
In recent pop culture, design fiction is often associated with the Black Mirror anthology series, each episode of which portrays a disturbing alternative present or near future where characters have to deal with the unexpected consequences of emerging technologies.
== Methodology and process ==
Design fiction is an open and evolving practice, demonstrating a variety of approaches from designers and studios. However it is possible to draw some common lines:
"What if?" questions
Design fictions often rely on a question: "What if?", creating a provocative framework for speculation from the start. This questioning format stimulates the exploration of tensions and sticking points, leading to the construction of the new fictional universe, in an alternative present or near future, which includes a new set of morals and values: "The New Normal".
Diegetic prototypes
The speculative scenario and the fictional world in which it takes place are made tangible thanks to design tools and methods, to conceive what David A. Kirby was the first to call "diegetic prototypes". The term diegetic stands for their narrative attribute, made to be self-explanatory of the world they come from. At the same time, they purposely leave narrative spaces for the viewer's imagination to fill in: they "tell worlds rather than stories". As explained by Julian Bleecker: "Design fiction objects are totems through which a larger story can be told, or imagined or expressed. They are like artifacts from someplace else, telling stories about other worlds".
These prototypes are effective entry points into complex topics subject to socio-technological controversies such as digital technologies, Internet of things, ubiquitous computing, biotechnology, synthetic biology, transhumanism, artificial intelligence, data or algorithms. They "help make things visceral and real enough to jump to discussions and get to decisions".
Discussion and debate generation
Design fictions are meant to be displayed in order to create a space for discussion and debate. They can be exposed in various contexts depending on the targeted audiences: online – video platforms, social media, dedicated websites – or offline – galleries or museums, convenience stores, forums – unveiling their fictional nature or not. In 2013, the project 99¢ FUTURES, driven by the Extrapolation Factory studio, showed that such discussions and debates could happen successfully in non-institutional places such as a convenience store: the team shelved artefacts, previously imagined and conceived during a workshop, among "real" current consumption objects. Customers passing by started to discuss these pieces of futures, even purchasing the ones they liked the most for a few dollars.
== Application scope ==
Public policy-making
Design Fiction is a helpful tool used to discuss and move forward public policy-making processes. In 2015, ProtoPolicy, a co-design project led by the Design Friction studio, All-Party Parliamentary Design and Innovation Group (APDIG) and Age UK aimed at building a shared understanding of the constraints and opportunities of political issues around Ageing in Place and loneliness through design fictions. A series of creative workshops involving older people communities led to the conception of "Soulaje", a provocative self-administered euthanasia wearable designed to start discussion around the taboo issues of death and freedom of choice. A second scenario staged "The Smart Home Therapist", a new kind of therapist who, through human psychology and artificial intelligence expertise, facilitates and improves older people's relationship with their smart homes and eases their access to personalized domestic products and services.
Innovative companies
Design fiction can be a powerful tool for companies exploring prospective approaches or interests within changing or emerging industries. It can be used to inspire new imaginaries about the future; to collect insights and qualitative data that help formulate strategic directions and decisions; to anticipate risks and social and cultural obstacles; to enable discussion between stakeholders; to involve internal teams and external audiences in future orientations; and to bring out unexpected feedback, frictions, misuses, misappropriations or reappropriations of new technologies, highlighting their multiple impacts on potential users and, more broadly, on society.
The Near Future Laboratory on its approach of design fiction towards companies: "Design Fiction is one approach among others, but its contribution focus on the near future and is tangible. For instance, instead of participating to workshops of multidisciplinary experts with a powerpoint filled with ideas for a technology, we propose to create the user manual for the envisioned product or produce a video that showcases how an employee appropriates the technologies with its features and limitations. These artifacts are meant to materialize changes, opportunities and implication in the use of technologies. They particularly point out details in situations of use with the objective to avoid a "general discussion". ... For our clients a successful Design Fiction means that they can feel, touch and understand near future opportunities and with convincing material of potential changes of their customers, markets, technologies, or competition."
General public
Design fiction gets closer to activism when it comes to raising the awareness of the general public on emerging social, legal, political or economic issues. Oniria is a project developed by A Parede studio in 2016 in reaction to the Statute of the Unborn, a Brazilian law project settling the beginning of life at the stage of egg fertilisation, therefore prohibiting the Morning-After Pill in a country where abortion is already illegal under most circumstances. As a critique, designers imagined a scenario in which a company launches a contraceptive technology in line with this new measure. People were invited to share their own visions through various social media platforms on how their life would be affected and how they would bond to this new device in their everyday life.
== Publications ==
The Manual of Design Fiction by Julian Bleecker, Nick Foster, Fabien Girardin, and Nicolas Nova, 2022
Speculative Everything: Design, Fiction and Social Dreaming by Dunne and Raby, MIT Press, 2013
2050: "Designing our Tomorrow", Architectural Design, Volume 85, Issue 4, July/August 2015. Edited by Chris Luebkeman with contributions from Tim Maughan, Dan Hill, Liam Young, Mitchell Joachim, et al.
Ecotopia 2121: Visions of Our Future Green Utopia--in 100 Cities, written and illustrated by Alan Marshall, ISBN 978-1-62872-600-8, an outcome of the Ecotopia 2121 Project
"Design Fiction", A short essay on design, science fact and fiction Archived 2017-10-04 at the Wayback Machine by Julian Bleecker, 2009
Little Book of Design Fiction for the Internet of Things by Paul Coulton, Joe Lindley, and Rachel Cooper, 2018
== See also ==
Critical design
Critical making
Dystopia
Scenario-based design
Science fiction prototyping
Speculative design
Superfiction
Utopia
== References ==
Property design, commonly known as prop design, is the design of props (theatrical property) for use in theatre, film, television, etc. Designers of props work in liaison with the costume designers, set designers and sound designers, under the direction of the art director or technical director.
The term is also associated with home or interior design.
== History ==
As with much of theater, props originate in Ancient Greece, where urns and pebbles were used to represent ballot boxes and voting ballots during Aeschylus' Eumenides. This is possibly the most simplistic prop design, as these props were either found objects, in the case of the pebbles, or crafted from terracotta or possibly bronze, in the case of the urns.
Shakespeare's plays had many props, and in the case of Hamlet one of them was a skull. At the time, prop making was not advanced enough to build a convincing skull, so grave diggers were hired to find the needed skulls. Props that did not require the digging of graves were most likely hand-crafted from wood, metal or stone, or sewn from cloth. Shakespeare's plays also used natural props, such as the trees and moss banks brought on stage for A Midsummer Night's Dream.
== References ==
Scenic design, also known as stage design or set design, is the creation of scenery for theatrical productions including plays and musicals. The term can also be applied to film and television productions, where it may be referred to as production design. Scenic designers create sets and scenery to support the overall artistic goals of the production. Scenic design is an aspect of scenography, which includes theatrical set design as well as light and sound.
Modern scenic designers are increasingly taking on the role of co-creators in the artistic process, shaping not only the physical space of a production but also influencing its blocking, pacing, and tone. As Richard Foreman famously stated, scenic design is a way to "create the world through which you perceive things happening." These designers work closely with the director, playwright, and other members of the creative team to develop a visual concept that complements the narrative and emotional tone of the production. Notable scenic designers who have embraced this collaborative role include Robin Wagner, Eugene Lee, and Jim Clayburgh.
== History ==
The origins of scenic design may be found in the outdoor amphitheaters of ancient Greece, where acts were staged using basic props and scenery. Improvements in stage equipment and perspective drawing during the Renaissance allowed more complex and realistic sets to be created. Scenic design continued to evolve in conjunction with technological and theatrical improvements over the 19th and 20th centuries.
=== The New Stagecraft Movement ===
In the early 20th century, American scenic design underwent a dramatic transformation with the introduction of the New Stagecraft. Drawing inspiration from European pioneers like Adolphe Appia and Edward Gordon Craig, American designers began moving away from the overly detailed naturalism of the 19th century. Instead, they embraced simplified realism, abstraction, mood-driven environments, and symbolic imagery. Leaders of this movement, including Robert Edmond Jones, Lee Simonson, and Norman Bel Geddes, laid the foundation for a more interpretive and artistic approach to stage design in the United States.
=== Poetic Realism and Its Legacy ===
Following the New Stagecraft, designers like Jo Mielziner and Boris Aronson helped define a style known as poetic realism. Characterized by soft lighting, romantic imagery, scrims, and fragmented sets, this style prioritized the emotional tone of a production over strict realism. These designers often collaborated closely with playwrights and directors, shaping the mood and meaning of American theater classics like the early works of Arthur Miller and Tennessee Williams.
=== Modern Trends in Scenic Design ===
A key element of modern trends is the integration of spectacle. This movement towards larger-than-life visuals, mechanized scenery, and intricate special effects has reshaped both Broadway productions and regional theater. Designers like David Mitchell, known for his work on kinetic sets, exemplify the push towards spectacle that mirrors the influence of cinema on stage design. This trend emphasizes the audience's sensory experience, focusing on visual impact and technical prowess rather than traditional storytelling techniques alone.
At the same time, many designers are exploring minimalism and abstraction, moving away from overly realistic representations to create symbolic and suggestive environments that focus on mood rather than realism. The evolving role of the designer as a collaborator with directors and playwrights has also reinforced these trends, as designers today have a more equal voice in shaping the vision and narrative of a production.
== Elements of scenic design ==
Scenic design involves several key elements:
Set pieces: These are physical structures, such as platforms, walls, and furniture, that define the spatial environment of the performance. Set pieces are carefully constructed to reflect the time period, location, and atmosphere of the story.
Props: Objects used by actors during a performance, which help to establish the setting and enhance the narrative. Props can range from everyday objects to fantastical items, and they are integral to the story, helping to reveal character traits, advance the plot, or symbolize themes.
Backdrops: Painted or digitally projected backdrops and flat scenery that create the illusion of depth and perspective on stage. These elements help establish the overall mood of the scene and can be as detailed or abstract as the design requires. With advances in technology, projections and digital elements now allow for dynamic, evolving backdrops that enhance the visual storytelling.
Lighting: Setting the tone, ambiance, and focal point of the performance, lighting design is an essential component of scenic design. Advances in lighting technology have expanded the range of possibilities, enabling designers to control color, intensity, and movement.
Functionality: In order to meet the demands of the actors, crew, and technical specifications of the show, sets must be useful and practical. When building the set, designers have to take accessibility, perspectives, entrances, and exits into account. Functionality ensures that the set can support the physical actions of the actors, accommodate scene changes, and maintain safety standards. Finding a balance between artistic design and practical design is a fundamental part of the overall design.
Scenic Art and Painting: Scenic artistry involves creating highly detailed, realistic paintings that enhance the visual storytelling of a production. Scenic artists paint backdrops, textures, and other elements that bring a designer's vision to life. They use a range of traditional and modern techniques, including trompe l'oeil (fooling the eye), texture application, and faux finishes to create realistic or abstract environments on stage. As digital and mechanized techniques have advanced, scenic artists now also incorporate technologies such as computer-generated imagery (CGI) and digital projection into their work.
== Scenic designer ==
A scenic designer works with the theatre director and other members of the creative team to establish a visual concept for the production and to design the stage environment. They are responsible for developing a complete set of design drawings that include:
Basic floor plan showing all stationary scenic elements;
Composite floor plan showing all moving scenic elements, indicating both their onstage and storage positions;
Complete floor plan of the stage space incorporating all elements; and
Front elevations of every scenic element and additional elevations of sections of units as required.
In planning, scenic designers often make multiple scale models and renderings. Models are often made before final drawings are completed for construction. These precise drawings help the scenic designer effectively communicate with other production staff, especially the technical director, production manager, charge scenic artist, and prop master.
In Europe and Australia, many scenic designers are also responsible for costume design, lighting design and sound design. They are commonly referred to as theatre designers, scenographers, or production designers.
Scenic design often involves skills such as carpentry, architecture, textual analysis, and budgeting. In addition, successful scenic designers must have a strong understanding of theatrical collaboration, including the ability to communicate ideas clearly, engage with the director’s vision, and address technical challenges in the design.
Many modern scenic designers use 3D CAD models to produce design drawings that used to be done by hand. CAD tools have revolutionized the way designers create technical drawings, allowing for precise, scalable plans that are easier to adjust and communicate to the entire production team.
=== Influential Scenic designers ===
Some of the most influential scenic designers include:
Robin Wagner: Known for his work on Broadway musicals like A Chorus Line and The Producers, Wagner's designs often blur the boundaries between traditional and modern aesthetics. His sets are celebrated for their dramatic flair and innovative use of space, enhancing both the storytelling and the audience's emotional engagement.
Eugene Lee: A key figure in contemporary scenic design, Lee's work on Sweeney Todd and The Glass Menagerie showcases his ability to create immersive environments that serve as a vital part of the narrative. His work often integrates lighting design with set elements to create an emotional connection with the audience.
Jim Clayburgh: Clayburgh's sets for productions like The Red Shoes and Pippin have demonstrated his collaborative process with directors and designers, focusing on creating highly theatrical and dynamic spaces that support the narrative’s emotional core.
Bob Crowley: Recognized for his work on the Broadway musical The Lion King, Crowley’s designs are iconic for their ability to integrate traditional African aesthetics with a modern theatrical approach. His work has influenced the integration of puppetry and stagecraft, making the set an active part of the storytelling process.
== Cultural Differences in Scenic Design ==
Scenic design varies significantly across different cultures, reflecting diverse traditions, artistic sensibilities, and historical contexts. These differences are particularly evident when comparing European, American, and Australian scenic design practices, as well as in non-Western theater traditions.
Designers in countries like Germany and France are typically referred to as scenographers, a term that emphasizes their role in integrating set design, lighting, and costume design into a cohesive artistic vision. This approach to design is especially well known in European operas. American scenic design traditionally focuses more on set construction and the physical environment of a production. Designers are often responsible for creating the illusion of realism, particularly in Broadway musicals and dramatic plays.
In Australia, scenic designers frequently take on multi-disciplinary roles. Many Australian designers, especially in regional theater, are involved in the design of both the sets and costumes, and they often collaborate closely with lighting and sound designers from the early stages of production.
Non-Western theater traditions, such as Chinese, Indian, and Japanese theater, often employ vastly different scenic approaches, relying heavily on symbolic elements, minimalistic sets, and dynamic stage movements. For example, Kabuki theater in Japan uses elaborate costumes and stylized, symbolic sets to convey meaning, with a heavy focus on color symbolism and abstract designs rather than realistic representations. In Chinese opera, the use of large, symbolic backdrops and the minimalistic set serves to enhance the performance of actors and emphasize the gestural language and music.
== Notable scenic designers ==
Some notable scenic designers include: Adolphe Appia, Boris Aronson, Alexandre Benois, Alison Chitty, Antony McDonald, Barry Kay, Caspar Neher, Cyro Del Nero, Aleksandra Ekster, David Gallo, Edward Gordon Craig, Es Devlin, Ezio Frigerio, Christopher Gibbs, Franco Zeffirelli, George Tsypin, Howard Bay, Inigo Jones, Jean-Pierre Ponnelle, Jo Mielziner, John Lee Beatty, Josef Svoboda, Ken Adam, Léon Bakst, Luciano Damiani, Maria Björnson, Ming Cho Lee, Philip James de Loutherbourg, Natalia Goncharova, Nathan Altman, Nicholas Georgiadis, Oliver Smith, Ralph Koltai, Emanuele Luzzati, Neil Patel, Robert Wilson, Russell Patterson, Brian Sidney Bembridge, Santo Loquasto, Sean Kenny, Todd Rosenthal, Robin Wagner, Tony Walton, Louis Daguerre, Ralph Funicello, and Roger Kirk.
== See also ==
== References ==
== Further reading ==
Brockett, Oscar G., Margaret Mitchell, and Linda Hardberger. Making the Scene: A History of Stage Design and Technology in Europe and the United States, Tobin Theatre Arts Fund, distributed by University of Texas Press, 2010. Traces the history of scene design since the ancient Greeks.
Pecktal, Lynn. Designing and Painting for the Theater, McGraw-Hill, 1995. Details production design processes for theater, opera, and ballet. The foundational text provides a professional picture and comprehensive references to the design process. Well-illustrated with detailed lined drawings and photographs to convey the beauty and craft of scenic and production design.
Aronson, Arnold (1991). "Postmodern Design". Theatre Journal. 43 (1): 1–13. doi:10.2307/3207947. JSTOR 3207947.
Gaddy, Davin E. (2017). "Design Elements". Media Design and Technology for Live Entertainment. pp. 27–50. doi:10.4324/9781315442723-2. ISBN 978-1-315-44272-3.
Henke, Robert (2021). "Visual Experiences in Cinquecento Theatrical Spaces by Javier Berzal de Dios (review)". Theatre Journal. 73 (1): 111–112. doi:10.1353/tj.2021.0007. Project MUSE 787014 ProQuest 2507722208.
== External links ==
Media related to Scenography at Wikimedia Commons.
Prague Quadrennial of Performance Design and Space Largest scenography event in the world.
What is Scenography Article illustrating the differences between US and European theatre design practices.
Geodesign is a set of concepts and methods used to involve all stakeholders and various professions in collaboratively designing and realizing the optimal solution for spatial challenges in the built and natural environments, utilizing all available techniques and data in an integrated process. Originally, geodesign was mainly applied during the design and planning phase. "Geodesign is a design and planning method which tightly couples the creation of design proposals with impact simulations informed by geographic contexts." Now, it is also used during realization and maintenance phases and to facilitate the re-use of, for example, buildings or industrial areas. Geodesign includes project conceptualization, analysis, design specification, stakeholder participation and collaboration, design creation, simulation, and evaluation (among other stages).
== History ==
Geodesign builds greatly on a long history of work in geographic information science, computer-aided design, landscape architecture, and other environmental design fields. See for instance, the work of Ian McHarg and Carl Steinitz.
Members of the various disciplines and practices relevant to geodesign have held defining discussions at a workshop on Spatial Concepts in GIS and Design in December 2008 and the GeoDesign Summit in January 2010. GeoDesign Summit 2010 Conference Videos from Day 1 and Day 2 are an important resource to learn about the many different aspects of GeoDesign. ESRI co-founder Jack Dangermond has introduced each of the GeoDesign Summit meetings. Designer and technologist Bran Ferren, was the keynote speaker for the first and fourth Summit meetings in Redlands, California. During the fourth conference he presented a provocative view of how what is needed is a 250-year plan, and how GeoDesign was a key concept in making this a reality. Carl Steinitz was a presenter at both the 2010 and 2015 Summits.
The 2013 Geodesign Summit drew a record 260 attendees from the United States and abroad. That same year, a master's degree in Geodesign — the first of its kind in the nation — began at Philadelphia University. Claudia Goetz Phillips, director of Landscape Architecture and GeoDesign at Philadelphia University says "it is very exciting to be at the forefront of this exciting and relevant paradigm shift in how we address twenty-first-century global to local design and planning issues."
== Theory ==
The theory underpinning Geodesign derives from the work of Patrick Geddes in the first half of the twentieth century and Ian McHarg in its second half. They advocated a layered approach to regional planning, landscape planning and urban planning. McHarg drew the layers on translucent overlays. Through the work of Jack Dangermond, Carl Steinitz, Henk Scholten and others the layers were modeled with Geographical Information Systems (GIS). The three components of this term each say something about its character. 'Geographical' implies that the layers are geographical (geology, soils, hydrology, roads, land use etc.). 'Information' implies a positivist and scientific methodology. 'System' implies the use of computer technology for the information processing.
The scientific aspects of Geodesign contrast with the cultural emphasis of Landscape Urbanism but the two approaches to landscape planning share a concern for layered analysis which sits comfortably with postmodern and post-postmodern theory.
== Technologies ==
Nascent geodesign technology extends geographic information systems so that in addition to analyzing existing environments and geodata, users can synthesize new environments and modify geodata. See, for example, CommunityViz or marinemap.
"GeoDesign brings geographic analysis into the design process, where initial design sketches are instantly vetted for suitability against myriad database layers describing a variety of physical and social factors for the spatial extent of the project. This on-the-fly suitability analysis provides a framework for design, giving land-use planners, engineers, transportation planners, and others involved with design, the tools to leverage geographic information within their design workflows."
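The "on-the-fly suitability analysis" described above follows the layered-overlay idea going back to McHarg: each geographic factor becomes a raster of scores, and a weighted sum across layers ranks candidate sites. A minimal sketch in Python with NumPy, using invented layer names, scores, and weights purely for illustration (real geodesign tools operate on large geodata rasters, not 2x2 toy grids):

```python
import numpy as np

# Hypothetical raster layers (illustrative values, not real geodata):
# each grid cell holds a score from 0 (unsuitable) to 1 (ideal) for one factor.
slope       = np.array([[0.9, 0.4], [0.2, 0.8]])  # flatter terrain scores higher
soil        = np.array([[0.7, 0.6], [0.5, 0.9]])  # soil stability
road_access = np.array([[0.3, 0.8], [0.6, 0.7]])  # proximity to roads

layers  = {"slope": slope, "soil": soil, "road_access": road_access}
# Weights express each factor's relative importance; they sum to 1.
weights = {"slope": 0.5, "soil": 0.3, "road_access": 0.2}

# Weighted overlay: suitability = sum over layers of (weight * layer score).
suitability = sum(w * layers[name] for name, w in weights.items())

# The cell with the highest combined score is the best candidate site.
best = np.unravel_index(np.argmax(suitability), suitability.shape)
print(suitability.round(2))
print("best cell:", best)
```

Sketching an edit in this model is just changing a cell in one layer and recomputing the sum, which is what makes the instant "vetting" of design proposals against many database layers feasible.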
== See also ==
Environmental design
Landscape Architecture
Landscape urbanism
Landscape planning
Geographic Information System
Participatory GIS
Public Participation GIS
Spatial Decision Support System
== References ==
== Bibliography ==
Ian L. McHarg. 1969. Design With Nature. Garden City, NY: Doubleday/Natural History Press. ISBN 0-471-11460-X
Ian L. McHarg and Frederick Steiner, editors. 1998.To Heal the Earth: Selected Writings of Ian L. McHarg. Washington, D.C.: Island Press. ISBN 1-55963-573-8
Ian L. McHarg. 1996. A Quest for Life: An Autobiography. New York: John Wiley & Sons. ISBN 0-471-08628-2
Frederick Steiner, editor. 2006. The Essential Ian McHarg: Writings on Design and Nature. Washington, D.C.: Island Press. ISBN 1-59726-117-3
Frederick Steiner. 2008. The Living Landscape (paperback edition). Washington, D.C. Island Press. ISBN 978-1-59726-396-2
Carl Steinitz, Hector Arias, Scott Bassett, Michael Flaxman, Thomas Goode, Thomas Maddock, David Mouat, Richard Peiser, and Allan Shearer. 2003. Alternative Futures for Changing Landscapes: The Upper San Pedro River Basin In Arizona And Sonora. Washington, D.C.: Island Press.
Carl Steinitz. 2012. A framework for Geodesign - changing geography by design. Redlands: Esri Press. ISBN 9781589483330
Danbi J.Lee, Eduardo Dias, Henk J. Scholten. 2014. Geodesign by integrating design and geospatial sciences. Springer International Publishing Switzerland. ISBN 978-3-319-08298-1 DOI 10.1007/978-3-319-08299-8
Frank van der Hoeven, Steffen Nijhuis, Sisi Zlatanova, Eduardo Dias, Stefan van der Spek. 2016. Geo-Design: Advances in bridging geo-information technology, urban planning and landscape architecture. Research in Urbanism Series (RiUS), Volume 4, ISSN 1875-0192 (print), E-ISSN 1879-8217 (online) Delft: TU Delft Open, 2016 ISBN 978-94-92516-42-8.
Paul Cureton and Elliot Hartley, Geodesign, Urban Digital Twins, and Futures, Routledge. ISBN 978-1-032-74861-0.
== External links ==
Participatory Geodesign
GeoDesign: A Bibliography
All Points Blog Coverage of GeoDesign Summit
Placeways Blog on GeoDesign Summit
James Fee GIS Blog on GeoDesign Summit
Vector 1 Media Coverage of GeoDesign Summit
Sasaki Strategies
Directions Magazine - GeoDesign Summit Reflections by Adena Schutzberg
GeoDesign Knowledge Portal
GeoPlanIT - GeoDesign Posts
Geodesign Summit, Redlands
Geodesign Summit, Europe
Geodesign Summit, Beijing
Geodesign Conference, Copenhagen
Geodesign, Urban Digital Twins and Futures
Hostile architecture is an urban-design strategy that uses elements of the built environment to purposefully guide behavior. It often targets people who use or rely on public space more than others, such as youth, poor people, and homeless people, by restricting the physical behaviours they can engage in.
The term hostile architecture is often associated with items like "anti-homeless spikes" – studs embedded in flat surfaces to make sleeping on them uncomfortable and impractical. This form of architecture is most commonly found in densely populated and urban areas. Other measures include sloped window sills to stop people sitting; benches with armrests positioned to stop people lying on them; water sprinklers that spray intermittently; and public trash bins with inconveniently small mouths to prevent the insertion of bulky wastes. Hostile architecture is also employed to deter skateboarding, BMXing, inline skating, littering, loitering, public urination, and trespassing, and as a form of pest control.
== Background ==
Although the term hostile architecture is recent, the use of civil engineering to achieve social engineering is not: antecedents include 19th-century urine deflectors and urban planning in the United States designed for segregation. American urban planner Robert Moses designed a stretch of Long Island's Southern State Parkway with low stone bridges so that buses could not pass under them. This made it more difficult for people who relied on public transportation, mainly African Americans, to visit the beach that wealthier car-owners could visit. Outside of the United States, public space design change for the purpose of social control also has historic precedent: the narrow streets of 19th-century Paris, France were said to be widened to help the military quash protests.
Its modern form is derived from the design philosophy crime prevention through environmental design (CPTED), which aims to prevent crime or protect property through three strategies: natural surveillance, natural access control, and territorial enforcement. According to experts, exclusionary design is becoming increasingly common, not least in large cities such as Stockholm.
Consistent with the widespread implementation of defensible space guidelines in the 1970s, most implementations of CPTED as of 2004 were based solely upon the theory that the proper design and effective use of the built environment could reduce crime, reduce fear of crime, and improve quality of life. Built environment implementations of CPTED seek to dissuade offenders from committing crimes by manipulating the built environment in which those crimes proceed or occur. The six main concepts according to Moffat are territoriality, surveillance, access control, image/maintenance, activity support and target hardening. Applying all of these strategies is key when trying to prevent crime in any neighborhood, crime-ridden or not.
Beyond CPTED, scholarly research has also found that modern capitalist cities have a vested interest in eliminating signs of homelessness from their communal spaces, fearing that it might discourage investment from wealthier individuals. In England, much of their hostile architecture has been attributed to a desire by the government to combat an anti-social street scene, taking the form of begging and street drinking.
=== Design Apartheid ===
Design Apartheid, a term coined by architect Rob Imrie, can be described as the assumption of sameness and able-bodiedness in a population while designing built forms. The built forms are inscribed with the ableist values of the society. An example of design apartheid can be seen in Le Corbusier's diagram of the Modulor, which utilises proportions of the body as an anthropometric aid to help architects design buildings. But it presents an image of an upright male who is six feet tall, muscular, powerful, and showing no evidence of either physical or mental disability.
== Applications ==
=== Spikes ===
Hostile architecture can occur as spikes, bumps or other types of pointed structures. They are typically placed on ledges outside buildings, under roofs or other places where people seek rest or shelter, and also around shops. The property management company Jernhusen uses a variant by placing pipes instead of spikes in several places at Stockholm Central Station. In 2014, images circulated on the internet of a place in London where homeless people used to sleep. The ground had been fitted with sharp upward-pointing spikes to get rid of people who used to sleep there, but after widespread protests, the anti-homeless spikes were removed. There are also anti-homeless spikes which are intended to ensure that people do not, for example, sit against a house wall, or stand in a particular place. It is difficult to adequately assess how many different types exist, but it is certain that there are many forms of the phenomenon, including split bricks which form cracks, various forms of bent metal pipes, and plates welded upwards to form spikes. Former UK Prime Minister Boris Johnson has called the spikes "stupid".
=== Sleeping deterrents ===
In many large cities, for example Tokyo and London, benches have been designed to prevent people from sleeping on them. These benches have been constructed so that the seat slopes at an angle, which requires the user to support themselves entirely with their feet; such benches are ubiquitous on bus stops across the United Kingdom. Another deterrent design is to include armrests placed down the center of the bench, preventing the user from lying down across the seats.
Camden Borough Council in London commissioned concrete-block benches (dubbed "Camden benches") designed to discourage uses such as sleeping, skateboarding and placing stickers. There are other variants, in which level differences are absent but they tend to be either too short to lie on, or have iron pipes placed two-thirds of the way in, or multiple armrests placed along the entire length of the bench. Such benches are common in airports.
Other types of seats, such as Simme seats, are designed to be too small to accommodate lying down or sleeping. They are installed in locations where homeless camping is prevalent.
When the City Tunnel in Malmö, Sweden, was opened in 2010, the design of the benches on the new train platforms was reported to the Equality Ombudsman because the benches were tilted so much that they were difficult or impossible to use for sitting. The Swedish state-owned real estate company Jernhusen has also used so-called "homeless-proof" benches at the train station in Luleå, with seven iron bars at 47 cm (19 in) intervals per bench. Jernhusen's press officer maintained that they "put in the armrests primarily to make it easier for the elderly and disabled to sit and stand up" but admitted in an interview that the perceived orderliness problems at the station building influenced how the benches were designed. Another example of a company that has installed such benches is Berliner Verkehrsbetriebe, Berlin's local public transport company.
Some examples of sleeping deterrents take the form of temporary changes to buildings. An example of this occurred in a Liverpool building, previously the Bank of England headquarters, in December 2016. A blue sloping steel structure covered in oil was placed over the stairs at night, so that the homeless who used to sleep and rest on the stairs would not stay there.
=== Camping deterrents ===
In Seattle, Washington, United States, the city government installed bicycle racks to prevent homeless people from camping.
Since 2013, the Oregon Department of Transportation in Oregon, United States deployed large boulders at eight locations that had been the site of transient camps in Portland. These boulders were installed to deter illegal camping near the freeways.
=== Fences or grates ===
Fences or grates are a common form of exclusionary design, often used to prevent access to places where there is protection from the elements, for example under stairs, bridges, or near fan systems that blow out hot air.
In the spring of 2015, the City of Stockholm, Sweden, erected a 200,000 kr (~22,900 USD) fence to prevent homeless people from seeking shelter under a staircase in Kungsholmen.
=== Removal ===
Sometimes exclusionary design is not about adding features, but rather about taking them away. Fredrik Edin, who has written a book on exclusionary design, says that removal is the most common type of exclusionary design, where, for example, benches used by the public are removed precisely because they are used by the public. One example is when representatives of the New York City Subway announced via social media in 2021 that "benches were removed from stations to prevent the homeless from sleeping on them." The agency later said the tweet was a mistake. Benches at certain locations at Stockholm Central Station were removed in 2015 in favour of chairs, and benches were also removed at Luleå railway station. Their press officer stated that they had problems with the station being used as a warming shelter. In the UK, many public toilets have been removed in places considered to be untidy.
=== Security cameras ===
One of the most common forms of hostile architecture takes the form of surveillance. While security cameras do not physically prevent people from engaging in certain behaviors, they can restrict actions in public spaces through enabling remote oversight and increasing the fear of retaliation for socially taboo actions. In cities like Cincinnati, there has been a noted sharp increase in the number of CCTV cameras in public spaces since the 1990s.
=== Urination deterrent ===
=== Hostile architecture as art or embellishment ===
This type of exclusionary design may involve, for example, displaying a large flowerpot where homeless people previously used the pavement to sleep. Other examples that have occurred include a stone painted in rainbow colours, putting out blocking shrubbery on a sidewalk, and "fun" shaped seating.
In Sweden, loudspeakers in Finspång have played music in order to get addicts to leave certain places. In the UK and Germany, so-called anti-loitering devices (see The Mosquito) have been installed to ensure that young people do not stay in places where they are installed. The devices work by emitting a monotone sound at such a high frequency that most people after adolescence lose the ability to hear it. Critics have stated that the devices constitute a violation of human rights and also comment that the phenomenon would create a "dangerous gap" between young people exposed to it and older people who can avoid it. In Germany, classical music has been used in an attempt to keep drug users away. In Berlin, a plan to use atonal music at S-Bahn stations has been withdrawn after criticism.
=== Sprinklers ===
Sprinklers can be found in areas where spikes are considered too permanent; this solution involves spraying water on those staying in a particular place at a particular time.
The Strand Bookstore in New York used such a system in 2013 to deter homeless people sleeping outside the store at night. Bonhams in San Francisco was criticised for an external sprinkler system that it claimed was used to clean "building and perimeter sidewalks during non-business hours intermittently over a 48-hour period", and which was also a point where homeless people gathered.
== Reception ==
Opponents of hostile architecture in urban design argue that such architecture makes public spaces hostile to all people and especially targets transient and homeless populations.
Proponents say that clearly establishing a sense of ownership over the space helps maintain order and safety and deter crime and unwanted behaviors.
Examples of hostile architecture circulating within UK media have led to negative reception. Nonetheless, the use of hostile architecture has increased. For example, Selfridges in Manchester installed metal spikes outside their store for the purpose of reducing "litter and smoking," which suggests hostile architecture may be implemented for one reason but explained by another.
Often as part of a larger pattern of tactical urbanism, some opponents of hostile architecture have responded to it directly to undermine its intended effects. Where public seating is absent or inadequate, some have built and installed seating themselves in an act called "chair bombing". Others have removed or vandalized anti-homeless spikes and armrests in protest of anti-homelessness legislation. Some members of England's homeless community interviewed by researchers have noted that hostile design contributes to their displacement and feelings of insignificance, as it appears that local business interests are prioritized over their survival.
=== Identification ===
Some forms of hostile architecture are easy to identify, while others could be interpreted as either exclusionary or non-exclusionary, such as spaced-out singular chairs constructed at a playground in Sweden, which may appear intentionally designed to dissuade homeless sleeping, or as an acknowledgement that Swedes consider it impolite to sit near strangers. Some researchers have said that hostile architecture should be evaluated within the wider context of the community, and should recognize the social and political forces motivating a particular design choice, such as anti-homelessness legislation or sentiments.
== Evidence ==
As of March 2020, no wide-scale empirical study has been conducted to measure the impact of hostile architecture on the wellbeing of homeless people or other targeted populations.
== Gallery ==
== See also ==
Architecture terrible
Anti-trespass panels, spiky rubber and wooden mats meant to discourage trespass on or near rail tracks.
Bird control spike
Defensible space theory
Defensive design
Functionalism (architecture)
Natural surveillance
New Urbanism
Privately owned public space
Skatestopper
Social model of disability
Urban vitality
== Notes ==
== References ==
== External links ==
Cara Chellew, Bars, barriers and ghost amenities: Defensive urban design in Toronto Torontoist.
Lloyd Alter, Hostile design doesn't work for any age group Mother Nature Network.
Cara Chellew, Defensive Inequalities Spacing Magazine.
"When Design Is Hostile On Purpose". Popsci. 28 July 2016. Retrieved 16 August 2017.
HostileDesign.org, Project homepage of Stuart Semple sticker campaign.
Design management is a field of inquiry that uses design, strategy, project management and supply chain techniques to control a creative process, support a culture of creativity, and build a structure and organization for design. The objective of design management is to develop and maintain an efficient business environment in which an organization can achieve its strategic and mission goals through design. Design management is a comprehensive activity at all levels of business (operational to strategic), from the discovery phase to the execution phase. "Simply put, design management is the business side of design. Design management encompasses the ongoing processes, business decisions, and strategies that enable innovation and create effectively-designed products, services, communications, environments, and brands that enhance our quality of life and provide organizational success." The discipline of design management overlaps with marketing management, operations management, and strategic management.
Traditionally, design management was seen as limited to the management of design projects, but over time, it evolved to include other aspects of an organization at the functional and strategic level. A more recent debate concerns the integration of design thinking into strategic management as a cross-disciplinary and human-centered approach to management. This paradigm also focuses on a collaborative and iterative style of work and an abductive mode of inference, compared to practices associated with the more traditional management paradigm.
Design has become a strategic asset in brand equity, differentiation, and product quality for many companies. More and more organizations apply design management to improve design-relevant activities and to better connect design with corporate strategy.
== Extended definition ==
The multifaceted nature of design management leads to varied opinion, making it difficult to give an overall definition; furthermore, design managers have a broad range of roles and responsibilities. These factors combine with a multitude of other influences, such as the industry involved, company size, the market situation, and the importance of design within the organization's activities. As a result, design management is not restricted to a single design discipline and usually depends on the context of its application within an individual organization.
On an abstract level, design management plays three key roles in the interface of design, organization, and market. The three key roles are to:
Align design strategy with corporate or brand strategy, or both
Manage quality and consistency of design outcomes across and within different design disciplines (design classes)
Enhance new methods of user experience, create new solutions for user needs and differentiation from competitor's designs
=== Additional definitions ===
Design management is the effective deployment by line managers of the design resources available to an organization in the pursuance of its corporate objectives. It is therefore directly concerned with the organizational place of design, with the identification with specific design disciplines which are relevant to the resolution of key management issues, and with the training of managers to use design effectively.
Design management is a complex and multi-faceted activity that goes right to the heart of what a company is or does [...] it is not something susceptible to pat formulas, a few bullet points or a manual. Every company's structure and internal culture is different; design management is no exception. But the fact that every firm is different does not diminish the importance of managing design tightly and effectively.
== Definition of related terms ==
=== Design ===
Unlike exact sciences such as mathematics, the perspective, activity, or discipline of design has not been reduced to a generally accepted common denominator. The historical beginnings of design are complex, and the nature of design is still the subject of ongoing discussion. In design, there are strong differentiations between theory and practice. The fluid nature of the theory allows the designer to operate without being constrained by a rigid structure; in practice, design decisions are often attributed to intuition. In his Classification of Design (1976), Gorb divided design into three different classes. Design management operates in and across all three classes: product (e.g. industrial design, packaging design, service design), information (e.g. graphic design, branding, media design, web design), and environment (e.g. retail design, exhibition design, interior design).
=== Management ===
Management in all business and organizational activities is the act of getting people together to accomplish desired goals and objectives efficiently and effectively. Management comprises planning, organizing, staffing, leading or directing, and controlling an organization (a group of one or more people or entities), or effort for the purpose of accomplishing a goal. Resourcing encompasses the deployment and manipulation of human resources, financial resources, technological resources, and natural resources. Towards the end of the 20th century, business management came to consist of six separate branches, namely human resource management, operations management (or production management), strategic management, marketing management, financial management, and information technology management, which was responsible for management information systems. Although it is difficult to subdivide management into functional categories in this way, it helps in navigating the discipline of management. Design management overlaps mainly with the branches marketing management, operations management, and strategic management.
=== Design leadership ===
Design managers often operate in the area of design leadership; however, design management and design leadership are interdependent rather than interchangeable. Like management and leadership, they differ in their objectives, in how those objectives are achieved, and in their outcomes. Design leadership leads from the creation of a vision to changes, innovations, and the implementation of creative solutions. It stimulates communication and collaboration through motivation, sets ambitions, and points out future directions to achieve long-term objectives. In contrast, design management could be regarded as reactive, responding to a given business situation by using specific skills, tools, methods, and techniques. However, design management can also be viewed from proactive and creative perspectives, as found in research (see e.g. the research anthology "Management of Design Alliances", eds. Bruce & Jevnaker). Design management requires design leadership to know where to go, and design leadership requires design management to know how to get there.
== History ==
Difficulties arise in tracing the history of design management. Although design management as an expression was first mentioned in the literature in 1964, earlier contributions created the context in which the expression could arise. Throughout its history, design management has been influenced by a number of different disciplines (architecture, industrial design, management, software development, engineering) and by movements such as systems theory and design methodology. It cannot be attributed directly to either design or management.
=== Business ===
==== Managing product aesthetics and corporate design (early contributions) ====
Early contributions to design management show how different design disciplines were coordinated to achieve business objectives at a corporate level, and demonstrate the early understanding of design as a competitive force. In that context, design was merely understood as an aesthetic function, and the management of design was at the level of project planning.
The practice of managing design to achieve a business objective was first documented in 1907. The Deutscher Werkbund (German Work Federation) was established in Munich by twelve architects and twelve business firms as a state-sponsored effort to better compete with Great Britain and the United States by integrating traditional craft and industrial mass-production techniques. A German designer and architect, Peter Behrens, created the entire corporate identity (logotype, product design, marketing communications, company building architecture, etc.) of Allgemeine Elektrizitäts Gesellschaft (AEG), and is regarded as the first corporate design management leader in history. His work for AEG was the first large-scale demonstration of the viability and vitality of the Werkbund's initiatives and objectives, and can be considered the first contribution to design management.
In the following years, companies applied the principles of corporate identity and corporate design to increase awareness and recognition by consumers and differentiation from competitors. Olivetti became famous for its attention to design through its corporate design activities. In 1936 Olivetti hired Giovanni Pintori in its publicity department and promoted Marcello Nizzoli from the product design department to develop design into a comprehensive corporate philosophy. In 1956, inspired by the compelling brand character of Olivetti, Thomas Watson, Jr., CEO of IBM, retained American architect and industrial designer Eliot Noyes to develop a corporate-wide IBM Design Program consisting of a coherent brand-design strategy together with a design management system to guide and oversee the comprehensive brand identity elements: products, graphics, exhibits, architecture, interiors, and fine art. This seminal effort by Noyes, with his inclusion of Paul Rand and Charles Eames as consultants, is considered to be the first comprehensive corporate design program in America. Up to and during the 1960s, debates in the design community focused on ergonomics, functionalism, and corporate design, while debates in management addressed just-in-time production, total quality management, and product specification. The main proponents of design management at that time were AEG, Bauhaus, HfG Ulm, the British Design Council, the Deutscher Werkbund, Olivetti, IBM, Peter Behrens, and Walter Paepcke.
==== Managing design systematically (1960s–1970s) ====
The work of designers in the 1960s was influenced by industry, as the debate on design evolved from an aesthetic function into active cooperation with industry. Designers had to work in a team with engineers and marketers, and design was perceived as one part of the product development process. In the early years, design management was strongly influenced by system science and the emergence of a design science (e.g. the "blooming period of design methodologies" in Germany, the US, and Great Britain), as its main contributors had backgrounds in architecture. Early discussions on design management were strongly influenced by Anglo-Saxon literature (e.g. Farr and Horst Rittel), methodological studies in Design Research (e.g. HfG Ulm and Christopher Alexander), and theories in business studies. Design management dealt with two main issues:
how to develop corporate systems of planning aims
how to solve problems of methodological information processing
Instruments and checklists were developed to structure the processes and decisions of companies for successful corporate development. In this period the main contributors to design management were Michael Farr, Horst Rittel, the HfG Ulm, Christopher Alexander, James Pilditch, the London Business School, Peter Gorb, the Design Management Institute, and the Royal Society of Arts. Debates in the design disciplines focused on design science, design methodology, wicked problems, the Ulm methodology, the relationship of design and business, new German design, and semiotic and scenario techniques.
==== Managing design as a strategic asset (1980s–1990s) ====
In the 1980s several managers realized the economic effect of design, which increased the demand for design management. As companies were unsure how to manage design, a market for consultancy emerged, focused on helping organizations manage the product development process (including market research, product concepts, projects, communications, and market launch phases) as well as the positioning of products and companies.
Three important works were published in 1990: Design Management – A Handbook of Issues and Methods, edited by Mark Oakley; the book Design Management by French researcher Brigitte Borja de Mozota; and Design Management – Papers from the London Business School, edited by Peter Gorb. This new method-based design management approach helped improve communication between technical and marketing managers. Examples of the new methods included trend research, the product effect triad, style mapping, milieus, product screenings, empirical design methods, and service design, giving design a more communicative and central role within organizations.
In the management community the topics of management theory, positioning strategy, brand management, strategic management, advertisement, competitive strategy, leadership, business ethics, mass customization, core competencies, strategic intent, reputation management, and system theory were discussed. Main issues and debates in design management included the topics of design leadership, design thinking, and corporate identity; plus the involvement of design management at the operational, tactical, and strategic levels.
In 1980 Robert Blaich, the senior managing director of design at Philips, introduced a design management system that regards design, production, and marketing as a single unit. This was an important contribution to the definition of design as a core element in business. At Philips Design, Stefano Marzano became CEO and Chief Creative Director in 1991, continuing the work of Robert Blaich to align design processes with business processes and furthering design strategy as an important asset of the overall business strategy.
Upon being appointed corporate head of the IBM Design Program in 1989, Tom Hardy initiated a strategic design management effort, in collaboration with IBM design consultant Richard Sapper, to return to the roots of the IBM Design Program first established in 1956 by Eliot Noyes, Paul Rand, and Charles Eames. The intent was to revitalize IBM's brand image with customer experience-driven quality, approachability, and contemporary product innovation. The highly successful IBM ThinkPad was the first product to emerge from this strategy in 1992 and, together with other innovative, award-winning products that followed, served to position design as a strategic asset for IBM's brand turnaround efforts initiated in 1993 by newly appointed CEO Louis V. Gerstner, Jr.
As a consultant following his 22-year tenure at IBM, Hardy served as Corporate Design Advisor to Samsung from 1996 to 2003 where his introduction of a new brand-design ethos and guiding principles, together with a comprehensive design management system, became a strategic corporate asset that significantly helped elevate Samsung's image from follower to global brand-design leader and dramatically increased brand equity value.
==== Managing design for innovation (2000s–2010s) ====
Design management has taken a more strategic role within business since 2000, and more academic programs for design management have been established. Design management has been recognized (and subsidized) throughout the European Union as a function for corporate advantage of both companies and nations. The main issues and debates included the topics of design thinking, strategic design management, design leadership, and product service systems. Design management was influenced by the following design trends: sustainable design, inclusive design, interactive design, design probes, product clinics, and co-design. It was also influenced by the later management trends of open innovation and design thinking.
=== Notion of the term "design management" ===
In 1965 the term design management was first published in a series of articles in the Design Journal. The series included a pre-publication of the first chapter of the book Design Management by Michael Farr, which is considered the first comprehensive literature on design management. His thoughts on systems theory and project management led to a framework for dealing with design as a business function at the corporate management level, providing the language and methodology to manage it effectively.
The term "architectural management" was coined by the architects Brunton, Baden Hellard and Boobyer in 1964 where they highlighted the tension and synergy between the management of individual projects (job management) and the management of the business (office management). Although they did not use the term "design management", they stressed identical issues; while the design community discussed methodologies for design. Christopher Alexander's work played an important role in the development of the design methodology, where he devoted his attention to the problems of form and context; and focused on disassembling complex design challenges into constituent parts to approach a solution. His intention was to bring more rationalism and structure into the solving of design problems.
=== Design policy ===
Design policies have a history reaching back to the end of the 19th century, when design programs with roots in the crafts sector were implemented in Sweden (1845) and Finland (1875). In 1907 the Deutscher Werkbund (German Work Federation) was established in Munich to better compete with Great Britain and the United States. The success of the Deutscher Werkbund inspired a group of British designers, industrialists, and business people, after they had seen the Werkbund Exhibition in Cologne in 1914, to found the Design and Industries Association and campaign for greater involvement of government in the promotion of good design. In 1944 the British government put design management into practice through design policy: the British Design Council was founded by Hugh Dalton, president of the Board of Trade in the British wartime government, as the Council of Industrial Design, with the objective "to promote by all practicable means the improvement of design in the products of British industry".
Germany also realized the national importance of design during World War II. Between 1933 and 1945 Adolf Hitler used design, architecture, and propaganda to increase his power, as shown by the annual Reichsparteitage held in Nürnberg each September. Heinrich Himmler coordinated several design activities for Hitler, including the all-black SS uniform designed by Professor Karl Diebitsch and Walter Heck in 1933; the Dachau concentration camp, designed by Theodor Eicke and used as the prototype for other Nazi concentration camps; and the redesign of Wewelsburg castle, which Himmler commissioned in 1944.
Since the 1990s the practice of design promotion evolved, and governments have used policy management and design management to promote design as part of their efforts of fostering technology, manufacturing and innovation.
Today, most developed countries have some kind of design promotion programme. The Design Management Institute has dedicated three issues to design policy development. Although initiatives promote design in different complexities, scopes and focuses, specific targets tend to address the following objectives:
support business: increase use of design by companies, particularly by small and medium enterprises (SMEs), and grow the design sector (use dimension);
promote internationally: increase exports of design and attract international investment (international dimension);
educate designers: improve design education and research (academic dimension).
A comprehensive analysis of the state of design at the national level in the UK is the Cox Review. The then chairman of the Design Council, Sir George Cox, published the Cox Review of Creativity in Business in 2005 to communicate the competitive advantage of design for British industry.
Innovation policies have been excessively focused on the supply of technologies, neglecting the demand side (the user). In recent years there have been several initiatives by the European Commission to support and research design and design management. However, a Europe-wide policy to support design has never been established, owing to the inconsistencies and differences in design policies between nations. Nonetheless, there are currently plans to include design in EU innovation policy.
=== Promotion of Design Management ===
In America the Chicago industrialist Walter Paepcke, of the Container Corporation of America, founded the Aspen Design Conference after World War II as a way of bringing business and designers together – to the benefit of both. In 1951 the first conference topic, "Design as a function of management", was chosen to ensure the participation of the business community. After several years, business leaders stopped attending because the increased participation of designers changed the dialogue, focusing not on the need for collaboration between business and design, but rather on the business community's failure to understand the value of design.
The Royal Society of Arts (RSA) Presidential Medals for Design Management were instituted in June 1964. These were to recognize outstanding examples of design policy in organizations that maintained a consistently high standard in all aspects of design management, throughout all industries and disciplines. With these awards the RSA introduced the term design management. In 1965 the first medals were given to four companies; Conran & Co Ltd, Jaeger & Co Ltd, S. Hille & Co Ltd and W. & A. Gilbey Ltd. in the category "current achievements" and two companies London Transport and Heal and Son Ltd. in the category "long pioneering in the field of design management". The medal selection committee included representatives of the RSA council and the faculty of Royal Designers for Industry.
The Design Management Institute (DMI) was founded in 1975 at the Massachusetts College of Art in Boston; its first conference was organized one year later. Since the mid-1980s the DMI has been an international non-profit organization that seeks to heighten the awareness of design as an essential part of business strategy and to become the leading resource and international authority on design management. The DMI increased its international presence by establishing the "European International Conference on Design Management" in 1997 and a professional development program for design management.
In 2007 the European Commission funded the Award for Design Management Innovating and Reinforcing Enterprises (ADMIRE) project for two years, as part of the Pro Inno Europe Initiative, which is the EU's "focal point for innovation policy analysis, learning and development". The aim was to encourage companies – especially small and medium enterprises (SMEs) – to introduce design management procedures in order to improve their competitiveness and stimulate innovation; to establish a European knowledge-sharing platform; to organize the Design Management Europe Award; and to identify and test new activities to promote design management.
=== Education ===
Teaching design to managers was pioneered at the London Business School (LBS) in 1976 by Peter Gorb (1926–2013), the first Honorary Fellow of the DMI and a long-standing Fellow of the RSA. Gorb had previously embedded design management in the Burton Retail Group before joining LBS, where he later founded the Design Management Unit in 1982 (in collaboration with Charles Handy), which he led for over 20 years. In 1979 his talk at the RSA entitled Design and its Use by Managers provided a background introduction to the wide scope of design within industry and commerce, an appreciation of the power of design as a management resource, and advocated the teaching of design to managers. Gorb produced two books based on seminars at the Design Management Unit at LBS: Design Talks (1988), with Eric Schneider, and Design Management: Papers from the London Business School (1990). Gorb is also remembered for introducing the concept of Silent Design, design undertaken by non-designers, in an influential paper with Angela Dumas (1987).
While design management had its origins in business schools, it has increasingly become embedded in the curriculum in design schools, particularly at the postgraduate level. The first design management programmes at design schools were started in the UK in the 1980s at the Royal College of Art, and De Montfort, Middlesex and Staffordshire Universities. Although some of these design management courses have not been sustainable, other postgraduate courses have flourished including ones at Brunel, Lancaster and more recently the University of the Arts with each providing a specific point of view on design management. The Design Leadership Fellowship at the University of Oxford was founded in 2005.
In Europe, the University of Art and Design Helsinki founded the Institute of Design Leadership and Management and established an international training program in 1991; in the same year it organised the International Design Management Conference. In 1995 the Helsinki School of Economics (HSE), the University of Art and Design Helsinki (TaiK), and the Helsinki University of Technology (TKK) cooperated to create the International Design Business Management (IDBM) program, which aims to bring together experts from different fields within the concept of design business management. In 2010 these three universities merged to form the Finnish Aalto University, which continues the IDBM design management program. Since 2006 the Lucerne University of Applied Sciences and Arts in Switzerland has offered one of the few undergraduate programs in design management, taught entirely in English.
In the United States, the Hasso Plattner Institute of Design (the d.school) was founded at Stanford University in 2005 to advance multidisciplinary innovation. Design schools in the United States now offer graduate degrees in design management that focus on bridging the disciplines of design and business, leading organisations through the process of design thinking to create meaningful, human-centric value and business success through innovation. Among those offering M.A. and M.F.A. programs are:
Savannah College of Art and Design
Pratt Institute
University of Kansas
The New School
Design management education is also gaining importance in other countries, and awareness of the role of design in business is increasing. In India, several leading design schools have begun offering master's programs over the last decade:
MIT Institute of Design Pune - Master of Design in Design Management
National Institute of Design - Strategic Design Management
WE School - Business Design
ISDI - Strategic Design Management
World University of Design
BusinessWeek annually publishes a list of the best programmes that combine design thinking and business thinking (D-schools 2009 and D-school Programmes to Watch 2009). The article "Finland – World's Innovation Hot Spot" in the Harvard Business Review shows the interest of business leaders in the blended education of design and management. Business schools (such as the Rotman School of Management, the Wharton School of the University of Pennsylvania, and MIT Sloan Executive Education) have acted on this interest and developed new academic curricula.
Integrated education models are emerging in the academic world, referred to as T-shaped and π-shaped education. T-shaped professionals are taught general knowledge in a few disciplines (e.g. management and engineering) and specific, deep knowledge in a single domain (e.g. design). This model also applies to companies when they shift their focus from small-T innovations (innovations involving only one discipline, such as chemistry) to big-T innovations (innovations involving several disciplines, such as design, ethnography, and lead-user methods). As in education, this shift makes breaking down the silos between departments and disciplines essential.
=== Research ===
The first international research project on design management, the TRIAD research project, was initiated in 1989 by Earl Powell, then president of the DMI, together with the Harvard Business School. In the same year Earl Powell and Thomas Walton, Ph.D., developed the Design Management Review, and the DMI published its first issue. The publication focuses solely on design management and has become the flagship publication of the discipline.
Design and design management have experienced different generations of theories. In its first generation design focused on the object, in the second on the process, and in the third on the user. Similar shifts can be seen in management and design management in almost parallel steps. For design management this has been illustrated by Brigitte Borja de Mozota, using Findeli's Bremen Model as a framework. Design management research organised itself into:
Organisational studies: design in an economic sector or design in large firms, such as Philips or Olivetti
Descriptive studies of specific methods of design management
It is difficult to predict where design management research is heading.
== Different types ==
Different types of design management depend on the type and strategic orientation of the business.
=== Product design management ===
In product-focused companies, design management concentrates mainly on product design management, including strong interactions with product design, product marketing, research and development, and new product development. This perspective of design management focuses mainly on the aesthetic, semiotic, and ergonomic aspects of the product in order to express the product's qualities, and on managing diverse product groups and product design platforms. It can be applied together with a user-centered design perspective.
=== Brand design management ===
In market and brand focused companies, design management focuses mainly on brand design management, including corporate brand management and product brand management. Focusing on the brand as the core for design decisions results in a strong focus on the brand experience, customer touch points, reliability, recognition, and trust relations. The design is driven by the brand vision and strategy.
==== Corporate brand design management ====
Market and brand focused organizations are concerned with the expression and perception of the corporate brand. Corporate design management implements, develops, and maintains the corporate identity, or brand. This type of brand management is strongly anchored in the organization to control and influence corporate design activities. The design program plays the role of a quality program within many fields of the organization to achieve uniform internal branding. It is strongly linked to strategy, corporate culture, product development, marketing, organizational structure, and technological development. Achieving a consistent corporate brand requires the involvement of designers and a widespread design awareness among employees. A creative culture, knowledge sharing processes, determination, design leadership, and good work relations support the work of corporate brand management.
==== Product brand design management ====
The main focus of product brand management lies on the single product or product family. Product design management is linked to research and development, marketing, and brand management, and is present in the fast-moving consumer goods (FMCG) industry. It is responsible for the visual expressions of the individual product brand, with its diverse customer–brand touch points and the execution of the brand through design.
=== Service design management ===
Service design management deals with the newly emerging field of service design. It is the activity of planning and organizing people, infrastructure, communication, and material components of a service. The aim is to improve the quality of the service, the interaction between the service provider and its customers, and the customer's experience. The increasing importance and size of the service sector, in terms of people employed and economic weight, requires that services be well designed in order to remain competitive and to continue to attract customers. Design management traditionally focuses on the design and development of manufactured products; service design managers can apply many of the same theoretical and methodological approaches. Systematic and strategic management of service design helps a business gain competitive advantages and conquer new markets. Companies that proactively identify the interests of their customers, and use this information to develop services that create good experiences for the customer, will open up new and profitable business opportunities.
Companies in the service sector innovate by addressing the intangibility, heterogeneity, inseparability, and perishability of service (the IHIP challenge):
Services are intangible; they have no physical form and they cannot be seen before purchase or taken home.
Services are heterogeneous; unlike tangible products, no two service delivery experiences are alike.
Services are inseparable; the act of supplying a service is inseparable from the customer's act of consuming it.
Services are perishable; they cannot be inventoried.
Service design management differs in several ways from product design management. For example, the application of international trading strategies of services is difficult because the evolution of service 'from a craftsmanship attitude to industrialization of services' requires the development of new tools, approaches, and policies. Whereas goods can be manufactured centrally and delivered around the globe, services have to be performed at the place of consumption, which makes it difficult to achieve global quality consistency and effective cost control.
=== Business design management ===
Business design management deals with the newly emerging field of integrating design thinking into management. In organisation and management theory, design thinking forms part of the Architecture/Design/Anthropology (A/D/A) paradigm, which characterizes innovative, human-centered enterprises. This paradigm focuses on a collaborative and iterative style of work and an abductive mode of thinking, compared with the practices associated with the more traditional Mathematics/Economics/Psychology (M/E/P) management paradigm. Since 2006 the term Business Design has been trademarked by the Rotman School of Management, which defines business design as the application of design thinking principles to business practice. The designerly way of problem solving is an integrative way of thinking characterized by a deep understanding of the user, creative resolution of tensions, collaborative prototyping, and continuous modification and enhancement of ideas and solutions. This approach to problem solving can be applied to all components of a business, and the management of the problem-solving process forms the core of business design management activity. Universities other than the Rotman School of Management offer similar academic programs, including Aalto University in Finland, which initiated its International Design Business Management (IDBM) program in 1995.
=== Engineering design management ===
Engineering design management is a knowledge area within engineering management. It represents the adaptation and application of customary management practices to achieve a productive engineering design process. Engineering design management is primarily applied in the context of engineering design teams, whereby the activities, outputs, and influences of design teams are planned, guided, monitored, and controlled. The output of an engineering design process is ultimately a description of a technical system, which may be an artefact (technical object), a production facility, a process plant, or other infrastructure for the benefit of society. The domain of engineering design management therefore spans high-volume mass production as well as low-volume infrastructure projects.
=== Urban design management ===
Urban design management involves mediation among a range of self-interested stakeholders engaged in the production of the built environment. Such mediation can encourage a joint search for mutually beneficial outcomes or integrative development. Integrative development aims to produce sustainable solutions by increasing stakeholder satisfaction with the process and with the resulting urban development.
Conventional real estate development and urban planning activities are subject to conflicting interests and positional bargaining. The integrative negotiation approach emphasises mutual gains. The approach has been applied in land use planning and environmental management, but has not been used as a coordinated approach to real estate development, city design, and urban planning. Urban design management involves reordering the chain of events in the production of the built environment according to the principles of integrative negotiation. Such negotiation can be used in urban development and planning activities to reach more efficient agreements. This leads to integrative developments and more sustainable ways to produce the built environment.
Urban design management offers prescriptive advice for practitioners trying to organise city planning activities in a way that will increase sustainability by increasing satisfaction levels. Real estate development and urban planning often occur at very different decision-making levels. The practitioners involved may have diverse educational and professional backgrounds. They certainly have conflicting interests. Providing prescriptive advice for differing, possibly conflicting, groups requires construction of a framework that accommodates all of their daily activities and responsibilities. Urban design management provides a common framework to help bring together the conventional practices of urban and regional planning, real estate development, and urban design.
The work on integrative negotiation, consensus building, and the mutual gains approach provides a helpful theoretical framework for developing the theory of urban design management. Negotiation theory offers a useful framework for merging the perspectives of urban planning, city design, and real estate project proposals regarding production of the built environment. Interests, a key construct in negotiation theory, are an important variable that allows integrated development, as defined above, to occur. The path-breaking work of Roger Fisher and William Ury (1981), Getting to Yes, advises negotiators to focus on interests and mutual gains instead of bargaining over positions.
=== Architectural management ===
Architectural management can be defined as an ordered way of thinking which helps to realise a quality building for an acceptable cost, or as a process function with the aim of delivering greater architectural value to the client and society. Research by Kiran Gandhi describes architectural management as a set of practical techniques for an architect to successfully operate their practice. The term architectural management has been in use since the 1960s. The evolution of the field of architectural management has not been a smooth affair. Architectural practice was merely considered a business until after the Second World War, and even then practitioners appeared to be concerned about the conflict between art and commerce, demonstrating indifference to management. There was apparent conflict between the image of an architect and the need for professional management of the architectural business. Reluctance to embrace management and business as an inherent part of architectural practice could also be seen in architectural education programmes and publications. It appears that the management of architectural design, as well as architectural management in general, is still not given enough importance. Architectural management falls into two distinct parts: office or practice management and project management. Office management provides an overall framework within which many individual projects are commenced, managed, and completed. Architectural management extends from the management of the design process, construction, and project management through to facilities management of buildings in use. It is a powerful tool that can be applied to the benefit of professional service firms and the total building process, yet it continues to receive too little attention both in theory and in practice.
== Business ==
=== Value for business ===
Design plays a vital role in product and brand development, and is of great economic importance for organisations and companies. Creativity and design in particular (as an activity: design skills, methods, and processes) play a growing role in creating products and services with high added value for consumers. Design generates 50% of world export revenue in the creative industries' products (goods and services). The creative industry workforce accounts for 3.1% of total employment in the European Union (EU) and generates 2.6% of EU gross value added. Creative industries attained an unprecedented average annual growth rate of 8.7% across the EU between 2000 and 2005.
The increasing importance of creative industries (and especially design) in knowledge-intensive industries is reflected not only in policies and studies at the EU level, but has also initiated design and creativity policies and programmes in the most advanced economies. Furthermore, design and creativity have been recognised at regional and local levels as driving forces for competitiveness, economic growth, the job market, and citizens' satisfaction. Investment in creative and cultural industries is considered a significant component of EU growth in the Lisbon Strategy and the Europe 2020 strategy, and designers are increasingly involved in innovation issues.
To better understand the value of design and its role in innovation, the EU held a public consultation on the basis of its publication Design as a driver of user-centred innovation and published the mini-study Design as a tool for innovation. The report highlights the importance of design in user-centred innovation and recommends the integration of design into EU innovation policy. In addition to the design share in the export of all creative industry products, design can also have a positive impact on all business performance indicators, from turnover and profit to market share and competitiveness. Design management research results can be classified as follows:
Design improves the performance of the innovation policy and of the communications policy of the firm
Design improves the global performance of the firm; it is a profitable investment
Design is a profession that creates value on a macro economic level
Design improves the competitive edge of a country in international competition; it develops exports
Design can help the restructuring of an economic sector in regional economic policy
Whether and how design management is applied in a company correlates with the importance and integration of design in the company, but also depends on industry type, company size, ownership of design, and type of competitive competence. Research from the Danish Design Centre (DDC) led to the "Danish Design Ladder", which shows how companies interpret and apply design at differing depths:
Non-design: Companies that do not use design (15% in 2007).
Design as styling: Companies that use design only for styling and appearance (17% in 2007).
Design as process: Companies that integrate design into the development process (45% in 2007).
Design as innovation: Companies that consider design a key strategic element (21% in 2007).
The research showed that companies positioned higher on the ladder grew more consistently. Additionally, the Danish Design Centre published an Evaluation of the Importance of Design in 2006, which found that most companies considered design a promoter of innovation (71%), a growth potential for the company (79%), and a way to make products more user friendly (71%). As design becomes more important to the company, design management also becomes more important.
The value of design can be leveraged if it is managed well. Research by Chiva and Alegre shows that there is no link between the level of design investment and business success, but instead a strong correlation between design management skills and business success. This means that efficient and effective design management is crucial for maximising the value of design. Effective design management increases the efficiency of operations, has a significant positive impact on process management, improves quality performance (internal and external), and increases operating performance. To measure and communicate the value of design management, Borja de Mozota suggests adapting the Balanced Scorecard model and structuring the values in the following four categories:
Internal business processes: Design management as an innovation process, providing improvements in company performance and processes. These innovations and processes are largely invisible to outsiders.
Learning and growth: Design management as advanced knowledge. Explicit design knowledge is applied to the strategic focus and improves the quality of staff.
Customer and brand: Design management as perception and brand. Design knowledge is applied to corporate differentiation and strategic positioning.
Financial: The historic design management economic model. Design management as an explicit and measurable value for company reputation and stock market performance.
=== Relation to other disciplines and departments ===
Three different orientations for the choice of design management can be identified in companies. These orientations influence the perception of management and the responsibility of design managers within the organisation. The strategic orientations are market focus, product focus, and brand focus.
Product-driven organisations often have design responsibility in their research and development (R&D) departments.
Market-driven organisations often have design responsibility in their marketing departments.
Brand-driven organisations often have design responsibility in corporate communication.
Depending on the strategic orientation, design management overlaps with other management branches to differing extents:
Marketing management: The concepts and elements of brand management overlap with those of design management. In practice, design management can be part of the job profile of a marketing manager, though the discipline includes aspects that are not in the domain of marketing management. This intersection is called "brand design management" and consists of positioning, personality, purpose, personnel, project and practice, where the objective is to increase brand equity.
Operations management: At the operational level design management deals with the management of design projects. Processes and tools from operations management can be applied to design management in the execution of design projects.
Strategic management: Due to the increasing importance of design as a differentiator and its supporting role in brand equity, design management deals with strategic design issues and supports the strategic direction of the business or enterprise. The debate on design thinking suggests the integration of design thinking into strategic management. Design thinking and strategic thinking share some characteristics: both are synthetic, abductive, hypothesis-driven, opportunistic, dialectical, enquiring, and value-driven.
Innovation management: The value of the coordinating role of design in new product development has been well documented. Design management can help to improve innovation management, which can be measured by three variables: it reduces time-to-market by improving sources and communication skills and developing cross-functional innovation; it stimulates networking innovation by managing product and customer information flows with internal (e.g. teams) and external (e.g. suppliers, society) actors; and it improves learning by promoting a continuous learning process.
=== Hierarchy ===
Like the management of strategy, design can be managed on three levels: strategic (corporate level or enterprise wide), tactical (business level or individual business units), and operational (individual project level). These three levels have been termed differently by various authors over the last 50 years.
Operational level
Operational design management involves the management of individual design projects and design teams. Its goal is to achieve the objectives set by strategic design management. The success of design management can be measured by evaluating the quality of operational design management outcomes. Operational design management includes the selection and management of design suppliers and encompasses the documentation, supervision, and evaluation of design processes and results. It deals with personal leadership, emotional intelligence, and the cooperation with and management of internal communications. Regular management functions, tools, and concepts can often be applied to the management of design at the operational level. It is implemented to achieve specific design objectives and to manage the judgment of design proposals. It can help to build brand equity through the consistent creation and implementation of high-quality design solutions that best fit the brand identity and desired consumer experience, in the most efficient way. Depending on the type of company and industry, the following job titles are associated with this role: operational design manager, senior designer, team leader, visual communication manager, corporate design coordinator, and others.
Tactical level
Tactical design management addresses the organisation of design resources and design processes. Its goal is to create a structure for design in the company, bridging the gap between objectives set through strategic design management and the implementation of design on the operational level. It defines how design is organised within the company. This includes the use of a central body to coordinate different design projects and activities. It deals with defining activities, developing design skills and competencies, managing processes, systems and procedures, assigning of roles and responsibilities, developing innovative products and service concepts, and finding new market opportunities. Outcomes of tactical design management are related to the creation of a structure for design within the company, to build internal resources and competencies for the implementation of design. Depending on the type of company and industry, the following job titles are associated with this function: tactical design manager, design director, design & innovation manager, brand design manager, new product development (NPD) manager, visual identity manager, and others.
Strategic level
Strategic design management involves the creation of a strategic long-term vision and planning for design, and deals with defining the role of design within the company. The goal of strategic design management is to support and strengthen the corporate vision by creating a relationship between design and corporate strategy. It includes the creation of design, brand, and product strategies, ensuring that design management becomes a central element in the corporate strategy formulation process. Strategic design management is responsible for the development and implementation of a corporate design programme that influences the design vision, mission, and positioning. It allows design to interact with the needs of corporate management and focuses on the long-term capabilities of design. Where strategic design management is applied, there is often a strong belief in the potential to differentiate the company and gain competitive advantage through design. As a result, design thinking becomes integrated into the corporate culture. Depending on the type of company and industry, the following job titles are associated with this function: design strategist, strategic design manager, chief design officer, vice president design and innovation, chief creative officer, innovation design director, and others.
=== Role and responsibility ===
Design management is not a standard model that can be projected onto every enterprise, nor is there a specific way of applying it that leads to guaranteed success. Design management processes are carried out by humans with different responsibilities and backgrounds, who work in different industries and enterprises with different sizes and traditions, whilst having different target groups and markets to serve. Design management is multifaceted, and so are the different applications of and views on design management. The function of design management in an organisation depends on its tasks, authority, and practice.
Task
Similar tasks can be grouped into categories to describe the job profile of a design manager. Several authors have defined categories of management that encompass design; these tasks occur on all three design management levels (strategic, tactical, and operational).
Authority and position
The authority and position of the design management function has a large influence on what the design manager does in his or her daily job. Kootstra (2006) distinguishes design management types by organisational function: design management as a line function, as a staff function, and as a support function. Design management as a line function is directly responsible for design execution in the "primary" organisational process and can take place on all levels of the design management hierarchy. The main attributes of design managers in the line are authority over, and direct responsibility for, the result. Design management as a staff function is not directly responsible for design execution in the "primary" organisational process, but consults as a specialist on all levels of the design management hierarchy. The main attributes of design managers in this function are their limited authority and the need to consult line managers and staff. When the design process is defined as a "secondary" organisational process, design management is seen as a support function; in this role it has only a supportive character, positioning the design manager as a creative specialist serving product management, brand management, marketing, R&D, and communication. Various authors use different concepts to describe the authority and position of design management.
== See also ==
== Notes ==
== References ==
== Further reading ==
Books
Bruce, M.; Jevnaker, B. H. (Eds.) (1997/1998). Management of Design Alliances: Sustaining Competitive Advantage. Chichester: John Wiley & Sons. ISBN 978-0471974765
Computer-aided garden design describes the use of CAD packages to ease and improve the process of garden design.
Professional garden designers have used CAD packages designed for other professions, including architectural design software for drafting garden plans, 3-D software, and image-editing software for visual representation. Tailor-made computer-aided design software is also made for the amateur garden design market; it contains some of the functionality of the more advanced programs, packaged in an easy-to-use format.
Although designers still draw by hand, in the 2020s AI tools have also been used, including by non-professionals. Apps are widely available to the public, including for plant identification.
== See also ==
List of CAD companies
Virtual home design software
== References ==
Strategic design is the application of future-oriented design principles in order to increase an organization's innovative and competitive qualities. Its foundations lie in the analysis of external and internal trends and data, which enables design decisions to be made on the basis of facts rather than aesthetics or intuition. The discipline is mostly practiced by design agencies or by internal development departments.
== Definition ==
"Traditional definitions of design often focus on creating discrete solutions—be it a product, a building, or a service. Strategic design is about applying some of the principles of traditional design to 'big picture' systemic challenges like business growth, health care, education, and climate change. It redefines how problems are approached, identifies opportunities for action, and helps deliver more complete and resilient solutions." The traditional concept of design is mainly associated with artistic work. The addition of the term strategic expands this conception so that creativity is linked with innovation, allowing ideas to become practical and profitable applications "that can be managed effectively, acquired, used and/or consumed by target audiences." Strategic design draws from a body of literature that has emerged in recent years, which outlines strategic design principles and provides insights and new methods in the areas of merchandising, consuming, and ownership. At least four factors demonstrate the value of strategic design:
it affects consumer behavior through motivation by creating a perceptual value;
it offers a way for firms to differentiate their products and services from the competition;
it creates meaning, by effectively making the customer understand the product and its value; and,
it can be used to manage risks by providing a structure that offers opportunities for collaboration, innovation and the creation of a mechanism to meaningfully address problems.
== Applications ==
Businesses are the main consumers of strategic design, but the public, political and not-for-profit sectors are also making increasing use of the discipline. Its applications are varied, yet often aim to strengthen one of the following: product branding, product development, corporate identity, corporate branding, operating and business models, and service delivery.
Strategic design has become increasingly crucial in recent years, as businesses and organisations compete for a share of today's global and fast-paced marketplace.
"To survive in today's rapidly changing world, products and services must not only anticipate change, but drive it. Businesses that don't will lose market share to those that do. There have been many examples of strategic design breakthroughs over the years and, in an increasingly competitive global market with rapid product cycles, strategic design is becoming more important".
Examples
Strategic design can play a role in helping to resolve the following common problems:
Identifying the most important questions that a company's products and services should address (Example: John Rheinfrank of Fitch Design showed Kodak that its disposable cameras were not intended to replace traditional cameras, but instead to meet specific needs, like weddings, underwater photography and others)
Translating insights into actionable solutions (Example: Jump Associates helped Target turn an understanding of college students into a dorm room line designed by Todd Oldham)
Prioritizing the order in which a portfolio of products and services should be launched (Example: Apple Inc. laid out the iPod+iTunes ecosystem slowly over time, rather than launching all of its pieces at once)
Connecting design efforts to an organization's business strategy (Example: Hewlett-Packard's global design division is focused most intently on designs that simplify technology experiences, which leads to lower manufacturing costs at a time when CEO Mark Hurd is pushing for cost-cutting; Hurd has also discussed HP's design strategy for determining the environmental footprint of the company's supply chain.)
Integrating design as a fundamental aspect of strategic brand intent (Example: Tom Hardy, Design Strategist, developed the core brand-design principle "Balance of Reason & Feeling" for Samsung Electronics, together with rational and emotional attributes, to guide design language within a comprehensive brand-design program that inspired differentiation and elevated the company's global image.)
== See also ==
Experience design
Design management
Design methods
Design thinking
Industrial design
Instructional design
Product design
Service design
U.S. Army Strategist
User-centered design
== References ==
== External links ==
Strategic design as described by Tim Brown, CEO of IDEO
Definition of strategic design by INDEX:
Strategic Design MA course description, SRH Berlin University of Applied Science (former Design Akademie Berlin)
A European Union design is a unitary industrial design right that covers the European Union. It has both unregistered and registered forms. The unregistered Community design (UCD) came into effect on 6 March 2002 and the registered Community design (RCD) was available from 1 April 2003.
The name Community design was changed to European Union design (EU design) by Regulation (EU) 2024/2822. This change will take effect on 1 May 2025.
== Legal basis ==
Council Regulation (EC) No 6/2002, as implemented by Commission Regulation (EC) No 2245/2002, created both unregistered and registered European Community designs. The Community design is a unitary right that has equal effect across the European Union. The unregistered form of the right has existed since 6 March 2002 while the registered form came into effect on 1 April 2003.
== Definitions ==
A design is defined as "the appearance of the whole or a part of a product resulting from the features of, in particular, the lines, contours, colours, shape, texture and/or materials of the product itself and/or its ornamentation".
Designs may be protected if:
they are novel, that is if no identical design has been made available to the public;
they have individual character, that is the "informed user" would find it different from other designs which are available to the public. Where a design forms part of a more complex product, the novelty and individual character of the design are judged on the part of the design which is visible during normal use.
== Scope of protection ==
The scope of protection conferred by a Community design includes any design which does not produce a different overall impression on an informed user, taking the degree of freedom of the designer into consideration. A Community design further confers on its holder the exclusive right to use it and to prevent any third party not having his consent from using it. For an unregistered Community design, however, the contested use must have resulted from copying the protected design.
== Term ==
An unregistered Community design lasts for a period of 3 years from the date on which the design was first made available to the public within the Community. A design is deemed to have been made available to the public within the Community if "it has been published, exhibited, used in trade or otherwise disclosed in such a way that, in the normal course of business, these events could reasonably have become known to the circles specialised in the sector concerned, operating within the Community". The design shall not, however, be deemed to have been made available to the public "for the sole reason that it has been disclosed to a third person under explicit or implicit conditions of confidentiality".
A registered Community design (RCD) lasts for an initial period of five years from the date on which the application for registration was filed, renewable in five-year increments up to a maximum of 25 years, subject to the payment of renewal fees. The registration process is administered by the EUIPO in Alicante.
== Effects ==
The unregistered Community design provides useful, short-term protection for items of short market duration. The registered Community design provides substantial cost savings compared to obtaining national registrations in individual European countries. The Community design also permits those having business in a number of European countries to protect their designs in all of those countries more simply.
== References ==
Garden design is the art and process of designing and creating plans for layout and planting of gardens and landscapes. Garden design may be done by the garden owner themselves, or by professionals of varying levels of experience and expertise. Most professional garden designers have some training in horticulture and the principles of design. Some are also landscape architects, a more formal level of training that usually requires an advanced degree and often a state license. Amateur gardeners may also attain a high level of experience from extensive hours working in their own gardens, through casual study, serious study in Master gardener programs, or by joining gardening clubs.
== Elements ==
Whether gardens are designed by a professional or an amateur, certain principles form the basis of effective garden design, resulting in the creation of gardens to meet the needs, goals, and desires of the users or owners of the gardens.
Elements of garden design include the layout of hardscape such as paths, walls, water features, sitting areas and decking, and the softscape, that is, the plants themselves, with consideration for their horticultural requirements, their season-to-season appearance, lifespan, growth habit, size, speed of growth, and combinations with other plants and landscape features. Consideration is also given to the maintenance needs of the garden, including the time or funds available for regular maintenance, which can affect the choice of plants in terms of speed of growth, spreading or self-seeding of the plants, whether annual or perennial, bloom-time, and many other characteristics.
Important considerations in the garden design include how the garden will be used, the desired stylistic genre (formal or informal, modern or traditional, etc.), and the way the garden space will connect to the home or other structures in the surrounding areas. All of these considerations are subject to the limitations of the prescribed budget.
=== Location ===
A garden's location can have a substantial influence on its design. Topographical landscape features such as steep slopes, vistas, hills, and outcrops may suggest or determine aspects of design such as layout and can be used and augmented to create a particular impression. The soils of the site will affect what types of plant may be grown, as will the garden's climate zone and various microclimates. The locational context of the garden can also influence its design. For example, an urban setting may require a different design style in contrast to a rural one. Similarly, a windy coastal location may necessitate a different treatment compared to a sheltered inland site.
=== Soil ===
The quality of a garden's soil can have a significant influence on a garden's design and its subsequent success. Soil influences the availability of water and nutrients, the activity of soil micro-organisms, and temperature within the root zone, and thus may have a determining effect on the types of plants which will grow successfully in the garden. However, soils may be replaced or improved to make them more suitable.
Traditionally, garden soil is improved by amendment, the process of adding beneficial materials to the native subsoil and particularly the topsoil. The added materials, which may consist of compost, peat, sand, mineral dust, or manure, among others, are mixed with the soil to the preferred depth. The amount and type of amendment may depend on many factors, including the amount of existing soil humus, the soil structure (clay, silt, sand, loam, etc.), the soil acidity/alkalinity, and the choice of plants to be grown. One source states that, "conditioning the soil thoroughly before planting enables the plants to establish themselves quickly and so play their part in the design." However, not all gardens are, or should be, amended in this manner, since many plants prefer an impoverished soil. In this case, poor soil is better than a rich soil that has been artificially enriched.
=== Boundaries ===
The design of a garden can be affected by the nature of its boundaries, both external and internal. In turn, the design can influence the boundaries, including via creation of new ones. Planting can be used to modify an existing boundary line by softening or widening it. Introducing internal boundaries can help divide or break up a garden into smaller areas.
The main types of boundary within a garden are hedges, walls and fences. A hedge may be evergreen or deciduous, formal or informal, short or tall, depending on the style of the garden and purpose of the boundary. A wall has a strong foundation beneath it at all points, and is usually – but not always – built from brick, stone or concrete blocks. A fence differs from a wall in that it is anchored only at intervals, and is usually constructed using wood or metal (such as iron or wire mesh).
Boundaries may be constructed for several reasons: to keep out livestock or intruders, to provide privacy, to create shelter from strong winds and provide micro-climates, to screen unattractive structures or views, and to create an element of surprise.
=== Surfaces ===
In temperate western gardens, a smooth expanse of lawn is often considered essential to a garden. However, garden designers may use other surfaces, for example those "made up of loose gravel, small pebbles, or wood chips" to create a different appearance and feel. Designers may also use the contrast in texture and color between different surfaces to create an overall pattern in the design.
Surfaces for paths and access points are chosen for practical as well as aesthetic reasons. Issues such as safety, maintenance and durability may need to be considered by the designer. Gardens designed for public access need to cope with heavier foot traffic and hence may use surfaces – such as resin-bonded gravel – that are rarely used in private gardens.
=== Planting design ===
Planting design requires design talent and aesthetic judgement combined with a good level of horticultural, ecological and cultural knowledge. It encompasses two major traditions: the formal rectilinear planting design of Persia and Europe, and the formal asymmetrical and naturalistic planting design of Asia.
==== History ====
Persian gardens are credited with originating aesthetic and diverse planting design. A traditional Persian garden is divided into four sectors, with water being central to both irrigation and aesthetics. The four sectors symbolize the Zoroastrian elements of sky, earth, water and plants. Planting in ancient and Medieval European gardens was often a mix of herbs for medicinal use, vegetables for consumption, and flowers for decoration. Purely aesthetic planting layouts developed after the medieval period in Renaissance gardens, as shown in late-Renaissance paintings and plans. The designs of the Italian Renaissance garden were geometrical, and plants were used to form spaces and patterns. The gardens of the French Renaissance and the Baroque jardin à la française continued the formal garden planting aesthetic.
In Asia the asymmetrical traditions of planting design in Chinese gardens and Japanese gardens originated in the Jin dynasty (266–420) of China. The gardens' plantings have a controlled but naturalistic aesthetic. In Europe the arrangement of plants in informal groups developed as part of the English Landscape Garden style, and subsequently the French landscape garden, and was strongly influenced by the picturesque art movement.
==== Application ====
A planting plan gives specific instructions, often for a contractor, about how the soil is to be prepared, what species are to be planted, what size and spacing are to be used, and what maintenance operations are to be carried out under the contract. Owners of private gardens may also use planting plans, though not for contractual purposes, as an aid to thinking through a design and as a record of what has been planted. A planting strategy is a long-term plan for the design, establishment and management of different types of vegetation in a landscape or garden.
Planting can be established by directly employed gardeners and horticulturalists or it can be established by a landscape contractor (also known as a landscape gardener). Landscape contractors work to drawings and specifications prepared by garden designers or landscape architects.
=== Garden furniture ===
Garden furniture may range from a patio set consisting of a table, four or six chairs and a parasol, through benches, swings and various lighting, to stunning artifacts in brutal concrete or weathered oak. Patio heaters, which run on bottled butane or propane, are often used to enable people to sit outside at night or in cold weather. A picnic table is used for eating a meal outdoors, such as in a garden room. The materials used to manufacture modern patio furniture include stones, metals, vinyl, plastics, resins, glass, and treated woods.
=== Lighting ===
Garden lighting can be an important aspect of garden design. Lighting techniques are commonly classified by height: safety lighting, uplighting, and downlighting. Safety lighting is the most practical application, but it is equally important to determine the type of lamps and fittings needed to create the desired effects.
Light regulates three major plant processes: photosynthesis, phototropism, and photoperiodism. Photosynthesis provides plants with the chemical energy they need to grow. Phototropism is the effect of light on plant growth that causes the plant to grow toward or away from the light. Photoperiodism is a plant's response, or capacity to respond, to photoperiod: a recurring cycle of light and dark periods of constant length.
=== Sunlight ===
While sunlight is not always easily controlled by the gardener, it is an important element of garden design. The amount of available light is a critical factor in determining what plants may be grown. Sunlight will, therefore, have a substantial influence on the character of the garden. For example, a rose garden is generally not successful in full shade, while a garden of hostas may not thrive in hot sun. As another example, a vegetable garden may need to be placed in a sunny location, and if that location is not ideal for the overall garden design goals, the designer may need to change other aspects of the garden.
In some cases, the amount of available sunlight can be influenced by the gardener. The location of trees, other shade plants, garden structures, or, when designing an entire property, even buildings, might be selected or changed based on their influence in increasing or reducing the amount of sunlight provided to various areas of the property. In other cases, the amount of sunlight is not under the gardener's control. Nearby buildings, plants on other properties, or simply the climate of the local area, may limit the available sunlight. Or, substantial changes in the light conditions of the garden may not be within the gardener's means. In this case, it is important to plan a garden that is compatible with the existing light conditions.
== Notable garden designers ==
== Types of gardens ==
=== Islamic gardens ===
Garden design and the Islamic garden tradition began with the creation of the Paradise garden in Ancient Persia, in Western Asia. It evolved over the centuries and across the different cultures that Islamic dynasties came to rule in Asia, the Near East, North Africa, and the Iberian Peninsula.
==== Examples ====
Some styles and examples include:
Persian gardens
Eram Garden
Fin Garden
Mughal gardens
Nishat Bagh
Shalimar Gardens (Lahore)
Yadavindra Gardens (Pinjore)
Charbagh
Taj Mahal
Tomb of Humayun gardens
Bagh (garden)
Bagh-e Babur
Shalimar Bagh (Srinagar)
Al-Andalus—Moorish architecture and gardens
Alcázar of Seville
Alhambra
Generalife
=== Mediterranean gardens ===
Garden design history and precedents from the Mediterranean region include:
Ancient Greek and Hellenistic gardens
Ancient Roman gardens
Peristyle gardens – evolved into Monastic gardens
House of the Vettii – in Pompeii
Horti Sallustiani
Byzantine gardens
Spanish gardens
Andalusian patio
=== Renaissance formal gardens ===
A formal garden in the Persian and European garden design traditions is rectilinear and axial in design. An equally formal but asymmetrical garden, without axial symmetry or other imposed geometries, is the design tradition of Chinese and Japanese gardens; the Zen garden of rocks, moss and raked gravel is an example. The Western model is an ordered garden laid out in carefully planned geometric and often symmetrical lines. Lawns and hedges in a formal garden need to be kept neatly clipped for maximum effect. Trees, shrubs, subshrubs and other foliage are carefully arranged, shaped and continually maintained.
A French formal garden, or jardin à la française, is a specific kind of formal garden laid out in the manner of André Le Nôtre; it is centered on the façade of a building, with radiating avenues and paths of gravel, lawns, parterres and pools (bassins) of reflective water enclosed in geometric shapes by stone coping, with fountains and sculpture. The French formal garden style has its origins in the Italian Renaissance garden, exemplified by the Villa d'Este, the Boboli Gardens, and the Villa Lante in Italy. The style was brought to France and expressed in the gardens of the French Renaissance. Some of the earliest formal parterres of clipped evergreens were those laid out at Anet by Claude Mollet, the founder of a dynasty of nurserymen-designers that lasted deep into the 18th century. The Gardens of Versailles, composed of many distinct gardens and designed by André Le Nôtre, are the ultimate example of the jardin à la française.
English Renaissance gardens in a rectilinear formal design were a feature of the stately homes; the parterre was introduced at Wilton House in the 1630s. In the early eighteenth century, Dezallier d'Argenville's La théorie et la pratique du jardinage (1709) was translated into English and German and became the central document for the later formal gardens of Continental Europe.
Traditional formal Spanish garden design evolved with Persian garden and European Renaissance garden influences. The internationally renowned Alhambra and Generalife in Granada, built in the Moorish Al-Andalus era, have influenced design for centuries. The Ibero-American Exposition of 1929 World's Fair in Seville, Spain was located in the celebrated Maria Luisa Park (Parque de Maria Luisa) designed by Jean-Claude Nicolas Forestier.
Formal gardening in the Italian and French manners was reintroduced at the turn of the twentieth century. Beatrix Farrand's formal Italian garden areas at Dumbarton Oaks in Washington, D.C., and Achille Duchêne's restored French water parterre at Blenheim Palace in England are examples of the modern formal garden. The Conservatory Garden in Central Park of New York City features a formal garden, as do many other parks and estates such as Filoli in California.
The simplest formal garden would be a box-trimmed hedge lining or enclosing a carefully laid out flowerbed or garden bed of simple geometric shape, such as a knot garden. The more developed and elaborate formal gardens contain statuary and fountains.
=== English Landscape and Naturalistic gardens ===
The English landscape garden style practically swept away the geometries of earlier English and European Renaissance formal gardens. William Kent and Lancelot "Capability" Brown were leading proponents, among many other designers. The naturalistic English garden style (French: Jardin anglais, Italian: Giardino all'inglese, German: Englischer Landschaftsgarten) of the 1730s and on transformed private and civic garden design across Europe. The French landscape garden subsequently continued the style's development on the Continent.
=== Cottage gardens ===
A cottage garden uses an informal design, traditional materials, dense plantings, and a mixture of ornamental and edible plants. Cottage gardens go back many centuries, but their popularity grew in 1870s England in reaction to the more structured Victorian estate gardens, which used restrained designs with massed beds of brilliantly colored greenhouse annuals. Cottage gardens are more casual by design, depending on grace and charm rather than grandeur and formal structure. Influential British garden authors and designers included William Robinson at Gravetye Manor in Sussex and Gertrude Jekyll at Munstead Wood in Surrey. Jekyll's series of thematic gardening books, which emphasized the importance and value of natural plantings, were an influence in Europe and the United States. Also influential half a century later was Margery Fish, whose surviving garden at East Lambrook Manor emphasizes, among other things, native plant life and the natural patterns produced by self-spreading and self-seeding.
The earliest cottage gardens were far more practical than modern versions—with an emphasis on vegetables and herbs, along with fruit trees, beehives, and even livestock if land allowed. Flowers were used to fill any spaces in between. Over time, flowers became more dominant. Modern day cottage gardens include countless regional and personal variations of the more traditional English cottage garden.
=== Kitchen garden or potager ===
The traditional kitchen garden, also known as a potager, is a seasonally used space separate from the rest of the residential garden – the ornamental plants and lawn areas. Most vegetable gardens are still miniature versions of old family farm plots with square or rectangular beds, but the kitchen garden is different not only in its history, but also its design.
The kitchen garden may be a landscape design feature that can be the central feature of an ornamental, all-season landscape, but can be little more than a humble vegetable plot. It is a source of herbs, vegetables, fruits, and flowers, but it is also a structured garden space, a design based on repetitive geometric patterns.
The kitchen garden has year-round visual appeal and can incorporate permanent perennials or woody plantings around (or among) the annual plants.
=== Shakespeare garden ===
A Shakespeare garden is a themed garden that cultivates plants mentioned in the works of William Shakespeare. In English-speaking countries, particularly the United States, these are often public gardens associated with parks, universities, and Shakespeare festivals. Shakespeare gardens are sites of cultural, educational, and romantic interest and can be locations for outdoor weddings.
Signs near the plants usually provide relevant quotations. A Shakespeare garden usually includes several dozen species, either in herbaceous profusion or in a geometric layout with boxwood dividers. Typical amenities are walkways and benches and a weather-resistant bust of Shakespeare. Shakespeare gardens may accompany reproductions of Elizabethan architecture. Some Shakespeare gardens also grow species typical of the Elizabethan period but not mentioned in Shakespeare's plays or poetry.
=== Rock garden ===
A rock garden, also known as rockery or alpine garden, is a type of garden that features extensive use of rocks and stones, along with plants native to rocky or alpine environments. Rock garden plants tend to be small, both because many of the species are naturally small, and so as not to cover up the rocks. They may be grown in troughs (containers), or in the ground. The plants will usually be types that prefer well-drained soil and less water.
The usual form of a rock garden is a pile of rocks, large and small, aesthetically arranged and with small gaps between, where the plants are rooted. Some rock gardens are designed and built to look like natural outcrops of bedrock. Stones are aligned to suggest a bedding plane and plants are used to conceal the joints between the stones. This type of rock garden was popular in Victorian times, often designed and built by professional landscape architects. The same approach is sometimes used in modern campus or commercial landscaping, but can also be applied in smaller private gardens.
The Japanese rock garden, in the west often referred to as "Zen garden", is a special kind of rock garden which contains few plants. Some rock gardens incorporate bonsai.
Rock gardens have become increasingly popular as landscape features in tropical countries such as Thailand. The combination of wet weather and heavy shade trees, along with the use of heavy weed mats to stop unwanted plant growth, has made this type of arrangement ideal for both residential and commercial gardens due to its easier maintenance and drainage.
=== Native garden ===
Natural landscaping, also called native gardening, is the use of native plants, including trees, shrubs, groundcover, and grasses which are indigenous to the geographic area of the garden.
Natural landscaping is adapted to the local climate, geography and hydrology, and should require no pesticides, fertilizers or supplemental watering to maintain, given that native plants have adapted and evolved to local conditions over thousands of years. However, these measures may still be necessary for the preventive care of trees and other vegetation in areas of degraded or weedy landscapes.
Native plants suit today's interest in low-maintenance gardening and landscaping, with many species vigorous and hardy and able to survive winter cold and summer heat. Once established, they can flourish without irrigation or fertilization, and are resistant to most pests and diseases. Many municipalities, facing budget constraints and reductions, have quickly recognized the benefits of natural landscaping, and the general public now benefits from natural landscaping techniques that save water and free up personal time.
Native plants provide suitable habitat for native species of butterflies, birds, pollinators, and other wildlife. They provide more variety in gardens by offering myriad alternatives to the often planted introduced species, cultivars, and invasive species. The indigenous plants have co-evolved with animals, fungi and microbes, to form a complex network of relationships. They are the foundation of their native habitats and ecosystems, or natural communities.
Such gardens often benefit from the plants being evolved and habituated to the local climate, pests and herbivores, and soil conditions, and so may require fewer to no soil amendments, irrigation, pesticides, and herbicides for a lower maintenance, more sustainable landscape.
=== Contemporary garden ===
The contemporary style garden has gained popularity in the UK in the last ten years, partly due to the increase of modern housing with small gardens and partly to a cultural shift towards contemporary design. This style of garden is defined by its use of "clean" design lines, with a focus on hard landscaping materials such as stone, hardwood and rendered walls.
Planting style is bold but simple with the use of drifts of one or two plants that repeat throughout the design. Grasses are a very popular choice for this style of design.
Garden lighting plays an integral role in modern garden design. Subtle lighting effects can be achieved with the use of carefully placed low voltage LED lights incorporated into paving and walls. With the combination of increasing demand for more efficient lighting, increasing availability of sustainable designs, light pollution considerations, and aesthetic and safety concerns, the methods and equipment of outdoor illumination have been evolving. The increasing use of LEDs, solar power, low voltage fixtures, energy efficient lamps, and energy-saving lighting design are examples of innovation in the field.
=== Residential gardens ===
A residential or private domestic garden such as the front garden or back garden is the most common form of garden. The front garden may be a formal and semi-public space and so subject to the constraints of convention and local laws. While typically found in the yard of the residence, a garden may also be established on a roof, in an atrium or courtyard, on a balcony, in windowboxes, or on a patio. Residential gardens are typically designed at human scale, as they are most often intended for private use. However, the garden of a great house or a large estate may be larger than a public park, and may contain specialized gardens (such as those for exhibiting one particular type of plant) and eyecatchers.
An early example of a modern residential garden is the Donnell Garden in Sonoma, California, designed by landscape architect Thomas Church with Lawrence Halprin and architect George T. Rockrise, and completed in 1948. The garden is now regarded as a modernist icon, recognized for the unique and organic forms that represented a modern Californian style. It sits atop a hillside overlooking the northern part of San Francisco Bay.
=== East Asian gardens ===
Japanese and Korean gardens, originally influenced by Chinese gardens, can be found at private homes, in neighbourhood or city parks, and at historical landmarks such as Buddhist temples. Some of the Japanese gardens most famous in the Western world and Japan are Japanese gardens in the karesansui tradition. The Ryōan-ji temple garden is a well-known example. There are Japanese gardens of various styles, with plantings often evoking wabi-sabi simplicity. In Japanese culture, garden-making is a high art, intimately linked to the arts of calligraphy and ink painting.
== See also ==
== Further reading ==
Blomfield, Reginald Theodore. The Formal Garden in England. Internet Archive
Gang Chen, Landscape Architecture: Planting Design Illustrated (ArchiteG, Inc. 2012)
Gertrude Jekyll, Colour Schemes for the Flower Garden (1914)
Richard L. Austin Elements of Planting Design (Wiley 2001)
Nick Robinson, Jia-Hua Wu, The Planting Design Handbook (Ashgate 2004)
Piet Oudolf, Noel Kingsbury Planting Design: Gardens in Time and Space (Timber Press 2005)
Weishan, Michael. The New Traditional Garden: A Practical Guide to Creating and Restoring Authentic American Gardens for Homes of All Ages. ISBN 0-345-42041-1 | Wikipedia/Garden_design |
Service design is the activity of planning and arranging people, infrastructure, communication and material components of a service in order to improve its quality, and the interaction between the service provider and its users. Service design may function as a way to inform changes to an existing service or create a new service entirely.
The purpose of service design methodologies is to establish the most effective practices for designing services, according to both the needs of users and the competencies and capabilities of service providers. If a successful method of service design is adopted, the service will be user-friendly and relevant to its users, while being sustainable and competitive for the service provider. For this purpose, service design uses methods and tools derived from different disciplines, ranging from ethnography to information and management science to interaction design.
Service design concepts and ideas are typically portrayed visually, using different representation techniques according to the culture, skill and level of understanding of the stakeholders involved in the service processes (Krucken and Meroni, 2006). With the advent of emerging technologies from the Fourth Industrial Revolution, the significance of Service Design has increased, as it is believed to facilitate a more feasible productization of these new technologies into the market.
== Definition ==
Service design practice is the specification and construction of processes which deliver valuable capacities for action to a particular user. Service design practice can be both tangible and intangible, and can involve artifacts or other elements such as communication, environment and behaviour. Several authors of service design theory, including Pierre Eiglier, Richard Normann, and Nicola Morelli, propose that services come into existence at the same moment they are provided and used. In contrast, products are created and "exist" before being purchased and used. While a designer can prescribe the exact configuration of a product, they cannot prescribe in the same way the result of the interaction between users and service providers, nor can they prescribe the form and characteristics of any emotional value produced by the service.
Consequently, service design is an activity that, among other things, suggests behavioural patterns or "scripts" for the actors interacting in the service. Understanding how these patterns interweave and support each other is an important aspect of the character of design and service. This allows greater user freedom, and better provider adaptability to the users' needs.
In short, service design is the process of creating and improving services to meet the needs and expectations of customers. It involves creating a service concept that defines the customer's experience, as well as the physical, human, and technological resources required to deliver the service, with a focus on the whole experience: customer interactions, service delivery, and support processes.
== History ==
=== Early service design and theory ===
Early contributions to service design were made by G. Lynn Shostack, a bank and marketing manager and consultant, in the form of written articles and books. The activity of designing a service was considered to be part of the domain of marketing and management disciplines in the early years. For instance, in 1982 Shostack proposed the integration of the design of material components (products) and immaterial components (services). This design process, according to Shostack, can be documented and codified using a "service blueprint" to map the sequence of events in a service and its essential functions in an objective and explicit manner. A service blueprint is an extension of a user journey map, and this document specifies all the interactions a user has with an organisation throughout their user lifecycle.
Servicescape is a model developed by B.H. Booms and Mary Jo Bitner to focus upon the impact of the physical environment in which a service process takes place and to explain the actions of people within the service environment, with a view to designing environments which accomplish organisational goals in terms of achieving desired responses.
=== Service design education and practice ===
In 1991, service design was first introduced as a design discipline by professors Michael Erlhoff and Brigit Mager at Köln International School of Design (KISD). In 2004, the Service Design Network was launched by Köln International School of Design, Carnegie Mellon University, Linköpings Universitet, Politecnico di Milano and Domus Academy in order to create an international network for service design academics and professionals.
In 2001, Livework, the first service design and innovation consultancy, opened for business in London. In 2003, Engine, initially founded in 2000 in London as an ideation company, positioned themselves as a service design consultancy.
== Service design principles ==
The 2018 book, This Is Service Design Doing: Applying Service Design Thinking in the Real World, by Adam Lawrence, Jakob Schneider, Marc Stickdorn, and Markus Edgar Hormess, proposes six service design principles:
Human-centred: Consider the experience of all the people affected by the service.
Collaborative: Stakeholders of various backgrounds and functions should be actively engaged in the service design process.
Iterative: Service design is an exploratory, adaptive, and experimental approach, iterating toward implementation.
Sequential: The service should be visualized and orchestrated as a sequence of interrelated actions.
Real: Needs should be researched in reality, ideas prototyped in reality, and intangible values evidenced as physical or digital reality.
Holistic: Services should sustainably address the needs of all stakeholders through the entire service and across the business.
In the 2011 book This Is Service Design Thinking: Basics, Tools, Cases, the first principle is "user-centred", where "user" refers to any user of the service system, including customers and employees. The authors revised "user-centred" to "human-centred" in This Is Service Design Doing to clarify that "human" includes service providers, customers, and all other relevant stakeholders. For instance, service design must consider not only the customer experience, but also the interests of all relevant people in retailing.
"Collaborative" and "iterative" both derive from the principle "co-creative" in This Is Service Design Thinking: a service exists only with the participation of users, and is created by a group of people from different backgrounds. In practice, people tend to focus only on the meaning of "collaborative", stressing the co-operative and interdisciplinary nature of service design, while overlooking the caveat that a service only exists with the participation of a user. In the newer formulation, "co-creative" is therefore split into two principles: "collaborative", indicating creation by stakeholders from different backgrounds, and "iterative", describing service design as an iterative process that keeps evolving to adapt to changes in the business.
"Sequential" means that services need to be displayed logically, rhythmically and visually. Service design is a dynamic process that unfolds over time, and the timeline is important for users in the service system. For example, when a customer shops on an online website, the first information shown should be the regions to which the products can be delivered; if the customer finds that the products cannot be delivered to their region, they will not continue browsing the products on the website.
Service is often invisible and occurs in a state that the user cannot perceive. "Real" means that the intangible service needs to be displayed in a tangible way. For example, when people order food in a restaurant, they cannot perceive the various attributes of the food. If the restaurant displays the cultivation and picking of its vegetables, customers can perceive the intangible backstage services, such as the cultivation of organic vegetables, and get a quality service experience. Such a display also helps the restaurant present a natural and organic brand image to customers.
Thinking in a holistic way is the cornerstone of service design. Holistic thinking considers both intangible and tangible aspects of a service, and ensures that every moment a user interacts with the service (each such moment is known as a touchpoint) is considered and optimised. Holistic thinking also recognises that users may follow multiple paths through an experience. A service designer should therefore think about each aspect from different perspectives to ensure that no needs are left unattended to.
== Methodology ==
Together with the most traditional methods used for product design, service design requires methods and tools to control new elements of the design process, such as time and the interaction between actors. An overview of the methodologies for designing services was proposed by Nicola Morelli in 2006, who suggests three main directions:
Identification of the actors involved in the definition of the service by means of appropriate analytical tools
Definition of possible service scenarios, verifying use cases, and sequences of actions and actors’ roles in order to define the requirements for the service and its logical and organisational structure
Representation of the service by means of techniques that illustrate all the components of the service, including physical elements, interactions, logical links and temporal sequences
Analytical tools refer to anthropology, social studies, ethnography and social construction of technology. Appropriate elaborations of those tools have been proposed with video-ethnography and different observation techniques to gather data about users’ actions. Other methods, such as cultural probes, have been developed in the design discipline, which aim to capture information on users in their context of use (Gaver, Dunne et al. 1999; Lindsay and Rocchi 2003).
Design tools aim at producing a blueprint of the service, which describes the nature and characteristics of the interaction in the service. Design tools include service scenarios (which describe the interaction) and use cases (which illustrate the detail of time sequences in a service encounter). Both techniques are already used in software and systems engineering to capture the functional requirements of a system. However, when used in service design, they have been adequately adapted to include more information concerning material and immaterial components of a service, as well as time sequences and physical flows. Crowdsourced information has been shown to be highly beneficial in providing such information for service design purposes, particularly when the information has either a very low or very high monetary value. Other techniques, such as IDEF0, just in time and total quality management are used to produce functional models of the service system and to control its processes. However, it is important to note that such tools may prove too rigid to describe services in which users are supposed to have an active role, because of the high level of uncertainty related to the user's behaviour.
Because of the need for communication between inner mechanisms of services and actors (such as final users), representation techniques are critical in service design. For this reason, storyboards are often used to illustrate the interaction of the front office. Other representation techniques have been used to illustrate the system of interactions or a "platform" in a service (Manzini, Collina et al. 2004). Recently, video sketching (Jegou 2009, Keitsch et al. 2010) and prototypes (Blomkvist 2014) have also been used to produce quick and effective tools to stimulate users' participation in the development of the service and their involvement in the value production process.
== Standards ==
In the United Kingdom, British Standard BS 7000-3:1994, part of the BS 7000 - Design management systems series, covers service design.
== Public sector service design ==
Public sector service design is associated with civic technology, open government, e-government, and can constitute either government-led or citizen-led initiatives. The public sector is the part of the economy composed of public services and public enterprises. Public services include public goods and governmental services such as the military, police, infrastructure (public roads, bridges, tunnels, water supply, sewers, electrical grids, telecommunications, etc.), public transit, public education, along with health care and those working for the government itself, such as elected officials. Due to new investments in hospitals, schools, cultural institutions and security infrastructures in the last few years, the public sector has expanded in many countries. The number of jobs in public services has also grown; such growth can be associated with the large and rapid social change that is in itself a trigger for fresh design. In this context, some governments are considering service design as a means to bring about better-designed public services.
=== Denmark ===
In 2002, MindLab, an innovation public sector service design group was established by the Danish ministries of Business and Growth, Employment, and Children and Education. MindLab was one of the world's first public sector design innovation labs and their work inspired the proliferation of similar labs and user-centred design methodologies deployed in many countries worldwide. The design methods used at MindLab are typically an iterative approach of prototyping and testing, to evolve not just their government projects, but also the government's organisational structure using ethnographic-inspired user research, creative ideation processes, and visualisation and modelling of service prototypes. In Denmark, design within the public sector has been applied to a variety of projects including rethinking Copenhagen's waste management, improving social interactions between convicts and guards in Danish prisons, transforming services in Odense for mentally disabled adults and more.
=== United Kingdom ===
In 2007 and 2008, documents from the British government explored the concept of "user-driven public services" and scenarios of highly personalised public services. The documents proposed a new view of the role of service providers and users in the development of new and highly customised public services, employing user involvement methods. While this approach has been explored through an early initiative in the UK, the possibilities of service design for the public sector are also being researched, picked up, and promoted in European Union countries including Belgium.
The Behavioural Insights Team (BIT) were originally established under the auspices of the Cabinet Office in 2010, in order to apply nudge theory to try to improve UK government policy interventions and save money. In 2014 BIT was 'spun-out' to become a company allied to Nesta (charity), BIT employees and the UK government each owning a third of this new business. That same year a Nudge unit was added to the United States government under President Obama, referred to as the ‘US Nudge Unit,’ working within the White House Office of Science and Technology Policy.
=== New Zealand ===
In recent years New Zealand has seen a significant increase in the use of service design approaches and methods applied to challenges faced by the public sector. One instance is the Family 100 project, which focused on the experiences of families living in urban poverty in Auckland. A report, "Speaking for Ourselves", and a companion empathy tool, "Demonstrating the complexities of being poor", were released in July 2014. They were the result of a collective service design effort by the Auckland Council, Auckland City Mission and ThinkPlace (a service design consultancy), as well as researchers from Waikato University, Massey University, and the University of Auckland. Since its release the report has seen extensive use, assisting both the engagement of stakeholders and the development of public services focused on achieving better outcomes for those experiencing urban poverty.
== Private sector service design ==
In practice, real-world service design work can be experienced as a new and useful approach, but it also entails challenges, as identified in field research (see e.g. Jevnaker et al., 2015).
A practical example of service design thinking can be found at the Myyrmanni shopping mall in Vantaa, Finland. Management wanted to improve customer flow to the second floor, as there were queues at the landscape lifts while the KONE steel-car lifts were ignored. To improve customer flow to the second floor of the mall (2010), KONE implemented its 'People Flow' service design thinking by turning the elevators into a hall of fame for the 'Incredibles' comic-strip characters. Making the elevators more attractive to the public solved the people-flow problem. This case of service design thinking by KONE is used in the literature as an example of extending products into services.
== Service design in health care ==
Clinical service redesign is an approach to improving quality and productivity in health care. A redesign is ideally clinically led and involves all stakeholders (e.g. primary and secondary care clinicians, senior management, patients, commissioners etc.) to ensure national and local clinical standards are set and communicated across the care settings. By following the patient's journey or pathway, the team can focus on improving both the patient experience and the outcomes of care.
== See also ==
Chief experience officer
Operations management
Service recovery
Service science, management and engineering
Service-dominant logic
== References ==
== Further reading ==
Bechmann, Søren (2010): "Servicedesign", Gyldendal Akademisk.
Curedale, Robert (2018). Service Design Process & Methods, 3rd Edition. Design Community College Inc. ISBN 978-1940805368.
Gaver B., Dunne T., Pacenti E., (1999). "Design: Cultural Probes." Interaction 6(1): 21–29.
Hollins, G., Hollins, Bill (1991). Total Design : Managing the design process in the service sector. London, Pitman.
Jegou, F. 2009. Co-design Approaches for Early Phases of Augmented Environments. In: LALOU, S. (ed.) Designing User Friendly Augmented Work Environments: From Meeting Rooms to Digital Collaborative Spaces, Computer Supported Cooperative Work. London: Springer.
Krucken, L. & Meroni, A. 2006. "Building Stakeholder Networks to Develop and Deliver Product-Service-Systems: Practical Experiences on Elaborating Pro-Active Materials for Communication". Journal of Cleaner Production, vol 14 (17)
Løvlie, L., Polaine, A., Reason, B. (2013). Service Design: From Insight to Implementation. New York: Rosenfeld Media. ISBN 1-933820-33-0.
Moritz, S. (2005). Service Design: Practical access to an evolving field. London.
Normann, R. and R. Ramirez (1994). Designing Interactive Strategy. From Value Chain to Value Constellation. New York, John Wiley and Sons.
Ramaswamy, R. (1996). Design and management of service processes. Reading, Mass.: Addison–Wesley Pub. Co.
Design for Six Sigma (DFSS) is a collection of best-practices for the development of new products and processes. It is sometimes deployed as an engineering design process or business process management method. DFSS originated at General Electric to build on the success they had with traditional Six Sigma; but instead of process improvement, DFSS was made to target new product development. It is used in many industries, like finance, marketing, basic engineering, process industries, waste management, and electronics. It is based on the use of statistical tools like linear regression and enables empirical research similar to that performed in other fields, such as social science. While the tools and order used in Six Sigma require a process to be in place and functioning, DFSS has the objective of determining the needs of customers and the business, and driving those needs into the product solution so created. It is used for product or process design in contrast with process improvement. Measurement is the most important part of most Six Sigma or DFSS tools, but whereas in Six Sigma measurements are made from an existing process, DFSS focuses on gaining a deep insight into customer needs and using these to inform every design decision and trade-off.
There are different options for the implementation of DFSS. Unlike Six Sigma, which is commonly driven via DMAIC (Define - Measure - Analyze - Improve - Control) projects, DFSS has spawned a number of stepwise processes, all in the style of the DMAIC procedure.
DMADV, define – measure – analyze – design – verify, is sometimes synonymously referred to as DFSS, although alternatives such as IDOV (Identify, Design, Optimize, Verify) are also used. The traditional DMAIC Six Sigma process, as it is usually practiced, which is focused on evolutionary and continuous improvement manufacturing or service process development, usually occurs after initial system or product design and development have been largely completed. DMAIC Six Sigma as practiced is usually consumed with solving existing manufacturing or service process problems and removal of the defects and variation associated with defects. It is clear that manufacturing variations may impact product reliability. So, a clear link should exist between reliability engineering and Six Sigma (quality). In contrast, DFSS (or DMADV and IDOV) strives to generate a new process where none existed, or where an existing process is deemed to be inadequate and in need of replacement. DFSS aims to create a process with the end in mind of optimally building the efficiencies of Six Sigma methodology into the process before implementation; traditional Six Sigma seeks for continuous improvement after a process already exists.
== DFSS as an approach to design ==
DFSS seeks to avoid manufacturing/service process problems by using advanced techniques to avoid process problems at the outset (e.g., fire prevention). When combined, these methods obtain the proper needs of the customer, and derive engineering system parameter requirements that increase product and service effectiveness in the eyes of the customer and all other people. This yields products and services that provide great customer satisfaction and increased market share. These techniques also include tools and processes to predict, model and simulate the product delivery system (the processes/tools, personnel and organization, training, facilities, and logistics to produce the product/service). In this way, DFSS is closely related to operations research (e.g. solving the knapsack problem) and workflow balancing. DFSS is largely a design activity requiring tools including: quality function deployment (QFD), axiomatic design, TRIZ, Design for X, design of experiments (DOE), Taguchi methods, tolerance design, robustification and response surface methodology for single- or multiple-response optimization. While these tools are sometimes used in the classic DMAIC Six Sigma process, they are uniquely used by DFSS to analyze new and unprecedented products and processes. DFSS is thus a concurrent analysis directed at manufacturing optimization related to the design.
=== Criticism ===
Response surface methodology and other DFSS tools use statistical (often empirical) models, and therefore practitioners need to be aware that even the best statistical model is an approximation to reality. In practice, both the models and the parameter values are unknown, and subject to uncertainty on top of ignorance. An estimated optimum point need not be optimal in reality, because of the errors of the estimates and the inadequacies of the model. These uncertainties can be handled via a Bayesian predictive approach, which treats the uncertainty in the model parameters as part of the optimization. The optimization is then based not on a fitted model for the mean response, E[Y], but on maximizing, according to the available experimental data, the posterior probability that the responses satisfy the given specifications.
Nonetheless, response surface methodology has an effective track-record of helping researchers improve products and services: For example, George Box's original response-surface modeling enabled chemical engineers to improve a process that had been stuck at a saddle-point for years.
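The response-surface workflow described above can be sketched numerically: fit a second-order model to data from a designed experiment, then solve for the stationary point of the fitted surface. The design and response values below are purely illustrative (the response is simulated from a known quadratic so the recovered optimum can be checked), not from any published study.

```python
import numpy as np

# Hypothetical central composite design in two coded factors.
a = np.sqrt(2)
pts = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1],
                [a, 0], [-a, 0], [0, a], [0, -a], [0, 0]])
# Simulated response with a known optimum at (0.3, -0.2).
y = 10 - (pts[:, 0] - 0.3) ** 2 - 2 * (pts[:, 1] + 0.2) ** 2

# Fit the second-order model y = c + b'x + x'Bx by least squares.
x1, x2 = pts[:, 0], pts[:, 1]
D = np.column_stack([np.ones(len(pts)), x1, x2, x1**2, x2**2, x1 * x2])
beta, *_ = np.linalg.lstsq(D, y, rcond=None)
c, b1, b2, b11, b22, b12 = beta

# Stationary point: set the gradient b + 2*B*x to zero.
B = np.array([[b11, b12 / 2], [b12 / 2, b22]])
b = np.array([b1, b2])
x_star = np.linalg.solve(-2 * B, b)
print(x_star)  # ≈ [0.3, -0.2], the optimum of the simulated surface
```

As the critics quoted above note, the fitted stationary point inherits the estimation error of the coefficients; with real, noisy data it is an estimate of the optimum, not the optimum itself.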
== Distinctions from DMAIC ==
Proponents of DMAIC, DDICA (Design Develop Initialize Control and Allocate) and Lean techniques might claim that DFSS falls under the general rubric of Six Sigma or Lean Six Sigma (LSS). Both methodologies focus on meeting customer needs and business priorities as the starting-point for analysis.
It is often seen that the tools used for DFSS techniques vary widely from those used for DMAIC Six Sigma. In particular, DMAIC and DDICA practitioners often use new or existing mechanical drawings and manufacturing process instructions as the originating information for their analysis, while DFSS practitioners often use simulations and parametric system design/analysis tools to predict both cost and performance of candidate system architectures. While it can be claimed that the two processes are similar, in practice the working medium differs enough that DFSS requires different tool sets to perform its design tasks. DMAIC, IDOV and Six Sigma may still be used during depth-first plunges into the system architecture analysis and for "back end" Six Sigma processes; DFSS provides system design processes for front-end complex system design. Hybrid back-end/front-end approaches are also used. Done well, this yields 3.4 defects per million design opportunities.
Traditional six sigma methodology, DMAIC, has become a standard process optimization tool for the chemical process industries.
However, it has become clear that the promise of Six Sigma, specifically 3.4 defects per million opportunities (DPMO), is simply unachievable after the fact. Consequently, there has been a growing movement to implement Six Sigma design, usually called Design for Six Sigma (DFSS), together with DDICA tools. This methodology begins with defining customer needs and leads to the development of robust processes to deliver those needs.
Design for Six Sigma emerged from the Six Sigma and the Define-Measure-Analyze-Improve-Control (DMAIC) quality methodologies, which were originally developed by Motorola to systematically improve processes by eliminating defects. Unlike its traditional Six Sigma/DMAIC predecessors, which are usually focused on solving existing manufacturing issues (i.e., "fire fighting"), DFSS aims at avoiding manufacturing problems by taking a more proactive approach to problem solving and engaging the company efforts at an early stage to reduce problems that could occur (i.e., "fire prevention"). The primary goal of DFSS is to achieve a significant reduction in the number of nonconforming units and production variation. It starts from an understanding of the customer expectations, needs and Critical to Quality issues (CTQs) before a design can be completed. Typically in a DFSS program, only a small portion of the CTQs are reliability-related (CTR), and therefore, reliability does not get center stage attention in DFSS. DFSS rarely looks at the long-term (after manufacturing) issues that might arise in the product (e.g. complex fatigue issues or electrical wear-out, chemical issues, cascade effects of failures, system level interactions).
== Similarities with other methods ==
Arguments about what makes DFSS different from Six Sigma demonstrate the similarities between DFSS and other established engineering practices such as probabilistic design and design for quality. In general Six Sigma with its DMAIC roadmap focuses on improvement of an existing process or processes. DFSS focuses on the creation of new value with inputs from customers, suppliers and business needs. While traditional Six Sigma may also use those inputs, the focus is again on improvement and not design of some new product or system. It also shows the engineering background of DFSS. However, like other methods developed in engineering, there is no theoretical reason why DFSS cannot be used in areas outside of engineering.
== Software engineering applications ==
Historically, the first successful Design for Six Sigma projects, in 1989 and 1991, predate the establishment of the DMAIC process-improvement methodology. Design for Six Sigma (DFSS) gained acceptance in part because Six Sigma organisations found that they could not optimise products past three or four sigma without fundamentally redesigning the product, and because improving a process or product after launch is considered less efficient and effective than designing quality in. 'Six Sigma' levels of performance have to be 'built in'.
DFSS for software is essentially a non-superficial modification of "classical DFSS", since the character and nature of software differ from those of other fields of engineering. The methodology describes the detailed process for successfully applying DFSS methods and tools throughout software product design, covering the overall software development life cycle: requirements, architecture, design, implementation, integration, optimization, verification and validation (RADIOV). The methodology explains how to build predictive statistical models for software reliability and robustness, and shows how simulation and analysis techniques can be combined with structural design and architecture methods to produce software and information systems at Six Sigma levels.
DFSS in software acts as a glue to blend the classical modelling techniques of software engineering such as object-oriented design or Evolutionary Rapid Development with statistical, predictive models and simulation techniques. The methodology provides Software Engineers with practical tools for measuring and predicting the quality attributes of the software product and also enables them to include software in system reliability models.
== Data mining and predictive analytics application ==
Many tools used in DFSS consulting, such as response surface methodology, transfer functions via linear and nonlinear modeling, axiomatic design, and simulation, have their origins in inferential statistics, so statistical modeling may overlap with data analytics and mining.
DFSS has nonetheless been used successfully as an end-to-end framework for analytics and data-mining projects, and domain experts have observed that in this role it is somewhat similar to CRISP-DM.
DFSS is claimed to be better suited to encapsulating and effectively handling larger numbers of uncertainties, including missing and uncertain data, both in the acuteness of their definition and in their absolute numbers, with respect to analytics and data-mining tasks. Six Sigma approaches to data mining are popularly known as "DFSS over CRISP" (CRISP-DM being the data-mining application framework methodology associated with SPSS).
With DFSS, data-mining projects have been observed to have a considerably shortened development life cycle. This is typically achieved by conducting data analysis against pre-designed template match tests via a techno-functional approach, using multilevel quality function deployment on the data set.
Practitioners claim that progressively complex KDD templates are created through multiple DOE runs on simulated complex multivariate data, and that the templates, along with logs, are extensively documented via a decision-tree-based algorithm.
DFSS uses quality function deployment and SIPOC for feature engineering of known independent variables, thereby aiding the techno-functional computation of derived attributes.
Once the predictive model has been computed, DFSS studies can also be used to provide stronger probabilistic estimates of predictive-model rank in a real-world scenario.
The DFSS framework has been applied successfully to predictive analytics in the field of HR analytics, which has traditionally been considered very challenging due to the peculiar complexities of predicting human behavior.
== References ==
== Further reading ==
Brue, Greg; Launsby, Robert G. (2003). Design for Six Sigma. New York: McGraw-Hill. ISBN 9780071413763. OCLC 51235576.
Yang, Kai; El-Haik, Basem (2003). Design for Six Sigma: A Roadmap for Product Development. New York: McGraw-Hill. ISBN 9780071412087. OCLC 51861987.
Cavanagh, Roland R.; Neuman, Robert P.; Pande, Peter S. (2005). What Is Design for Six Sigma?. New York: McGraw-Hill. ISBN 9780071423892. OCLC 57465690.
Chowdhury, Subir (2002). Design for Six Sigma. Chicago: Dearborn Trade Publishing. ISBN 9780793152247. OCLC 48796250.
Hasenkamp, Torben (2010). "Engineering Design for Six Sigma". Quality and Reliability Engineering International. 26 (4): 317–324. doi:10.1002/qre.1090. S2CID 35364939.
Del Castillo, E. (2007). Process Optimization: A Statistical Approach. New York: Springer. https://link.springer.com/book/10.1007/978-0-387-71435-6
In theoretical physics, topological string theory is a version of string theory. Topological string theory appeared in papers by theoretical physicists, such as Edward Witten and Cumrun Vafa, by analogy with Witten's earlier idea of topological quantum field theory.
== Overview ==
There are two main versions of topological string theory: the topological A-model and the topological B-model. The results of the calculations in topological string theory generically encode all holomorphic quantities within the full string theory whose values are protected by spacetime supersymmetry. Various calculations in topological string theory are closely related to Chern–Simons theory, Gromov–Witten invariants, mirror symmetry, geometric Langlands Program, and many other topics.
The operators in topological string theory represent the algebra of operators in the full string theory that preserve a certain amount of supersymmetry. Topological string theory is obtained by a topological twist of the worldsheet description of ordinary string theory: the operators are given different spins. The operation is fully analogous to the construction of topological field theory which is a related concept. Consequently, there are no local degrees of freedom in topological string theory.
== Admissible spacetimes ==
The fundamental strings of string theory are two-dimensional surfaces. A quantum field theory known as the N = (1,1) sigma model is defined on each surface. This theory consists of maps from the surface to a supermanifold. Physically the supermanifold is interpreted as spacetime and each map is interpreted as the embedding of the string in spacetime.
Only special spacetimes admit topological strings. Classically, one must choose a spacetime such that the theory respects an additional pair of supersymmetries, making the spacetime an N = (2,2) sigma model. A particular case of this is if the spacetime is a Kähler manifold and the H-flux is identically equal to zero. Generalized Kähler manifolds can have a nontrivial H-flux.
=== Topological twist ===
Ordinary strings on special backgrounds are never topological. To make these strings topological, one needs to modify the sigma model via a procedure called a topological twist which was invented by Edward Witten in 1988. The central observation is that these theories have two U(1) symmetries known as R-symmetries, and the Lorentz symmetry may be modified by mixing rotations and R-symmetries. One may use either of the two R-symmetries, leading to two different theories, called the A model and the B model. After this twist, the action of the theory is BRST exact, and as a result the theory has no dynamics. Instead, all observables depend on the topology of a configuration. Such theories are known as topological theories.
Classically this procedure is always possible.
Quantum mechanically, the U(1) symmetries may be anomalous, making the twist impossible. For example, in the Kähler case with H = 0 the twist leading to the A-model is always possible but that leading to the B-model is only possible when the first Chern class of the spacetime vanishes, implying that the spacetime is Calabi–Yau. More generally (2,2) theories have two complex structures and the B model exists when the first Chern classes of associated bundles sum to zero whereas the A model exists when the difference of the Chern classes is zero. In the Kähler case the two complex structures are the same and so the difference is always zero, which is why the A model always exists.
There is no restriction on the number of dimensions of spacetime, other than that it must be even because spacetime is generalized Kähler. However, all correlation functions with worldsheets that are not spheres vanish unless the complex dimension of the spacetime is three, and so spacetimes with complex dimension three are the most interesting. This is fortunate for phenomenology, as phenomenological models often use a physical string theory compactified on a 3 complex-dimensional space. The topological string theory is not equivalent to the physical string theory, even on the same space, but certain supersymmetric quantities agree in the two theories.
== Objects ==
=== A-model ===
The topological A-model comes with a target space which is a 6 real-dimensional generalized Kähler spacetime. In the case in which the spacetime is Kähler, the theory describes two objects. There are fundamental strings, which wrap two real-dimensional holomorphic curves. Amplitudes for the scattering of these strings depend only on the Kähler form of the spacetime, and not on the complex structure. Classically these correlation functions are determined by the cohomology ring. There are quantum mechanical instanton effects which correct these and yield Gromov–Witten invariants, which measure the cup product in a deformed cohomology ring called the quantum cohomology. The string field theory of the A-model closed strings is known as Kähler gravity, and was introduced by Michael Bershadsky and Vladimir Sadov in Theory of Kähler Gravity.
In addition, there are D2-branes which wrap Lagrangian submanifolds of spacetime. These are submanifolds whose dimensions are one half that of space time, and such that the pullback of the Kähler form to the submanifold vanishes. The worldvolume theory on a stack of N D2-branes is the string field theory of the open strings of the A-model, which is a U(N) Chern–Simons theory.
The fundamental topological strings may end on the D2-branes. While the embedding of a string depends only on the Kähler form, the embeddings of the branes depend entirely on the complex structure. In particular, when a string ends on a brane the intersection will always be orthogonal, as the wedge product of the Kähler form and the holomorphic 3-form is zero. In the physical string this is necessary for the stability of the configuration, but here it is a property of Lagrangian and holomorphic cycles on a Kähler manifold.
There may also be coisotropic branes in dimensions other than the half-dimension of Lagrangian submanifolds. These were first introduced by Anton Kapustin and Dmitri Orlov in Remarks on A-Branes, Mirror Symmetry, and the Fukaya Category.
=== B-model ===
The B-model also contains fundamental strings, but their scattering amplitudes depend entirely upon the complex structure and are independent of the Kähler structure. In particular, they are insensitive to worldsheet instanton effects and so can often be calculated exactly. Mirror symmetry then relates them to A model amplitudes, allowing one to compute Gromov–Witten invariants. The string field theory of the closed strings of the B-model is known as the Kodaira–Spencer theory of gravity and was developed by Michael Bershadsky, Sergio Cecotti, Hirosi Ooguri and Cumrun Vafa in Kodaira–Spencer Theory of Gravity and Exact Results for Quantum String Amplitudes.
The B-model also comes with D(-1), D1, D3 and D5-branes, which wrap holomorphic 0, 2, 4 and 6-submanifolds respectively. The 6-submanifold is a connected component of the spacetime. The theory on a D5-brane is known as holomorphic Chern–Simons theory. The Lagrangian density is the wedge product of that of ordinary Chern–Simons theory with the holomorphic (3,0)-form, which exists in the Calabi–Yau case. The Lagrangian densities of the theories on the lower-dimensional branes may be obtained from holomorphic Chern–Simons theory by dimensional reductions.
=== Topological M-theory ===
Topological M-theory, which enjoys a seven-dimensional spacetime, is not a topological string theory, as it contains no topological strings. However topological M-theory on a circle bundle over a 6-manifold has been conjectured to be equivalent to the topological A-model on that 6-manifold.
In particular, the D2-branes of the A-model lift to points at which the circle bundle degenerates, or more precisely Kaluza–Klein monopoles. The fundamental strings of the A-model lift to membranes named M2-branes in topological M-theory.
One special case that has attracted much interest is topological M-theory on a space with G2 holonomy and the A-model on a Calabi–Yau. In this case, the M2-branes wrap associative 3-cycles. Strictly speaking, the topological M-theory conjecture has only been made in this context, as in this case functions introduced by Nigel Hitchin in The Geometry of Three-Forms in Six and Seven Dimensions and Stable Forms and Special Metrics provide a candidate low energy effective action.
These functions are called Hitchin functionals, and topological string theory is closely related to Hitchin's ideas on generalized complex structures, Hitchin systems, and the ADHM construction.
== Observables ==
=== The topological twist ===
The 2-dimensional worldsheet theory is an N = (2,2) supersymmetric sigma model. The (2,2) supersymmetry means that the fermionic generators of the supersymmetry algebra, called supercharges, may be assembled into a single Dirac spinor, which consists of two Majorana–Weyl spinors of each chirality. This sigma model is topologically twisted, which means that the Lorentz symmetry generators that appear in the supersymmetry algebra simultaneously rotate the physical spacetime and also rotate the fermionic directions via the action of one of the R-symmetries. The R-symmetry group of a 2-dimensional N = (2,2) field theory is U(1) × U(1); twists by the two different factors lead to the A and B models respectively. The topologically twisted construction of topological string theories was introduced by Edward Witten in his 1988 paper.
=== What do the correlators depend on? ===
The topological twist leads to a topological theory because the stress–energy tensor may be written as an anticommutator of a supercharge and another field. As the stress–energy tensor measures the dependence of the action on the metric tensor, this implies that all correlation functions of Q-invariant operators are independent of the metric. In this sense, the theory is topological.
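Schematically, writing the stress–energy tensor as a Q-anticommutator, T_{μν} = {Q, G_{μν}} for some field G_{μν}, the metric independence of correlators of Q-closed operators follows from:

```latex
\frac{\delta}{\delta g^{\mu\nu}}\,
\big\langle \mathcal{O}_1 \cdots \mathcal{O}_n \big\rangle
\;\sim\;
\big\langle \,\{Q, G_{\mu\nu}\}\, \mathcal{O}_1 \cdots \mathcal{O}_n \big\rangle
\;=\; 0
```

since Q can be moved past the Q-invariant operators and annihilates the Q-invariant vacuum.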
More generally, any D-term in the action, which is any term which may be expressed as an integral over all of superspace, is an anticommutator of a supercharge and so does not affect the topological observables. Yet more generally, in the B-model any term which may be written as an integral over the fermionic {\displaystyle {\overline {\theta }}^{\pm }} coordinates does not contribute, whereas in the A-model any term which is an integral over {\displaystyle \theta ^{-}} or over {\displaystyle {\overline {\theta }}^{+}} does not contribute. This implies that A-model observables are independent of the superpotential (as it may be written as an integral over just {\displaystyle {\overline {\theta }}^{\pm }}) but depend holomorphically on the twisted superpotential, and vice versa for the B-model.
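For orientation, in (2,2) superspace the action schematically decomposes into a D-term, an F-term built from the superpotential W, and a twisted F-term built from the twisted superpotential W̃ (the measure conventions vary between references):

```latex
S \;=\; \int \mathrm{d}^2x\,\mathrm{d}^4\theta\; K(\Phi,\bar{\Phi})
\;+\;\Big( \int \mathrm{d}^2x\,\mathrm{d}\theta^{+}\mathrm{d}\theta^{-}\; W(\Phi) + \mathrm{c.c.} \Big)
\;+\;\Big( \int \mathrm{d}^2x\,\mathrm{d}\theta^{+}\mathrm{d}\bar{\theta}^{-}\; \widetilde{W} + \mathrm{c.c.} \Big)
```

Only the F-term survives in one model and only the twisted F-term in the other, which is the statement made above.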
== Dualities ==
=== Dualities between TSTs ===
A number of dualities relate the above theories. The A-model and B-model on two mirror manifolds are related by mirror symmetry, which has been described as a T-duality on a three-torus. The A-model and B-model on the same manifold are conjectured to be related by S-duality, which implies the existence of several new branes, called NS branes by analogy with the NS5-brane, which wrap the same cycles as the original branes but in the opposite theory. Also, a combination of the A-model and a sum of the B-model and its conjugate is related to topological M-theory by a kind of dimensional reduction. Here the degrees of freedom of the A-model and the B-models appear not to be simultaneously observable, but rather to have a relation similar to that between position and momentum in quantum mechanics.
==== The holomorphic anomaly ====
The sum of the B-model and its conjugate appears in the above duality because it is the theory whose low energy effective action is expected to be described by Hitchin's formalism. This is because the B-model suffers from a holomorphic anomaly, which states that the dependence on complex quantities, while classically holomorphic, receives nonholomorphic quantum corrections. In Quantum Background Independence in String Theory, Edward Witten argued that this structure is analogous to a structure that one finds geometrically quantizing the space of complex structures. Once this space has been quantized, only half of the dimensions simultaneously commute and so the number of degrees of freedom has been halved. This halving depends on an arbitrary choice, called a polarization. The conjugate model contains the missing degrees of freedom, and so by tensoring the B-model and its conjugate one reobtains all of the missing degrees of freedom and also eliminates the dependence on the arbitrary choice of polarization.
=== Geometric transitions ===
There are also a number of dualities that relate configurations with D-branes, which are described by open strings, to configurations with the branes replaced by flux and with the geometry described by the near-horizon geometry of the lost branes. The latter are described by closed strings.
Perhaps the first such duality is the Gopakumar–Vafa duality, which was introduced by Rajesh Gopakumar and Cumrun Vafa in On the Gauge Theory/Geometry Correspondence. This relates a stack of N D6-branes on a 3-sphere in the A-model on the deformed conifold to the closed string theory of the A-model on a resolved conifold with a B field equal to N times the string coupling constant.
The open strings in the A-model are described by a U(N) Chern–Simons theory, while the closed string theory of the A-model is described by Kähler gravity.
Although the conifold is said to be resolved, the area of the blown-up two-sphere is zero; it is only the B-field, which is often considered to be the complex part of the area, that is nonvanishing. In fact, as the Chern–Simons theory is topological, one may shrink the volume of the deformed three-sphere to zero and so arrive at the same geometry as in the dual theory.
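The identification of parameters on the two sides is governed by the 't Hooft coupling of the Chern–Simons theory; schematically (factors of i and 2π depend on conventions):

```latex
g_{s} \;=\; \frac{2\pi}{k+N},
\qquad
t \;=\; i\, g_{s} N
```

where k is the Chern–Simons level and t is the complexified Kähler parameter of the blown-up two-sphere, consistent with the B-field being N times the string coupling as stated above.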
The mirror dual of this duality is another duality, which relates open strings in the B-model on a brane wrapping the 2-cycle in the resolved conifold to closed strings in the B-model on the deformed conifold. Open strings in the B-model are described by dimensional reductions of holomorphic Chern–Simons theory on the branes on which they end, while closed strings in the B-model are described by Kodaira–Spencer gravity.
=== Dualities with other theories ===
==== Crystal melting, quantum foam and U(1) gauge theory ====
In the paper Quantum Calabi–Yau and Classical Crystals, Andrei Okounkov, Nicolai Reshetikhin and Cumrun Vafa conjectured that the quantum A-model is dual to a classical melting crystal at a temperature equal to the inverse of the string coupling constant. This conjecture was interpreted in Quantum Foam and Topological Strings, by Amer Iqbal, Nikita Nekrasov, Andrei Okounkov and Cumrun Vafa. They claim that the statistical sum over melting crystal configurations is equivalent to a path integral over changes in spacetime topology supported in small regions with area of order the product of the string coupling constant and α'.
Such configurations, with spacetime full of many small bubbles, date back to John Archibald Wheeler in 1964, but have rarely appeared in string theory, as they are notoriously difficult to make precise. However, in this duality the authors are able to cast the dynamics of the quantum foam in the familiar language of a topologically twisted U(1) gauge theory, whose field strength is linearly related to the Kähler form of the A-model. In particular, this suggests that the A-model Kähler form should be quantized.
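The crystal-melting statement has a sharp combinatorial prototype: for the A-model on C^3, the Okounkov–Reshetikhin–Vafa partition function is MacMahon's generating function M(q) = ∏_{n≥1}(1−q^n)^{−n} for plane partitions, with q related to the string coupling by q = e^{−g_s} up to sign conventions. A minimal Python sketch of its coefficients:

```python
def macmahon_coeffs(N):
    """Coefficients of MacMahon's function prod_{n>=1} (1-q^n)^{-n} up to q^N."""
    series = [0] * (N + 1)
    series[0] = 1
    for n in range(1, N + 1):
        # multiply the truncated series by (1 - q^n)^{-n},
        # i.e. by n copies of the geometric series 1/(1 - q^n)
        for _ in range(n):
            for k in range(n, N + 1):
                series[k] += series[k - n]
    return series

print(macmahon_coeffs(10))  # [1, 1, 3, 6, 13, 24, 48, 86, 160, 282, 500]
```

The coefficient of q^n counts plane partitions of n boxes (1, 1, 3, 6, 13, 24, ...), matching the statistical sum over melting-crystal configurations at fixed energy.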
== Applications ==
A-model topological string theory amplitudes are used to compute prepotentials in N = 2 supersymmetric gauge theories in four and five dimensions. The amplitudes of the topological B-model, with fluxes and/or branes, are used to compute superpotentials in N = 1 supersymmetric gauge theories in four dimensions. Perturbative A-model calculations also count BPS states of spinning black holes in five dimensions.
== See also ==
Quantum topology
Topological defect
Topological entropy in physics
Topological order
Topological quantum field theory
Topological quantum number
Introduction to M-theory
== References ==
Neitzke, Andrew; Vafa, Cumrun (2004). "Topological strings and their physical applications". arXiv:hep-th/0410178.
Dijkgraaf, Robbert; Gukov, Sergei; Neitzke, Andrew; Vafa, Cumrun (2005). "Topological M-theory as Unification of Form Theories of Gravity". Adv. Theor. Math. Phys. 9 (4): 603–665. arXiv:hep-th/0411073. Bibcode:2004hep.th...11073D. doi:10.4310/ATMP.2005.v9.n4.a5. S2CID 1204839.
Topological string theory on arxiv.org
Naqvi, Asad (2006). "Topological Strings" (PDF-Microsoft PowerPoint). Asad Naqvi - University of Wales, Swansea, United Kingdom. National Center for Physics. | Wikipedia/Topological_A-model |
In theoretical physics, topological string theory is a version of string theory. Topological string theory appeared in papers by theoretical physicists, such as Edward Witten and Cumrun Vafa, by analogy with Witten's earlier idea of topological quantum field theory.
== Overview ==
There are two main versions of topological string theory: the topological A-model and the topological B-model. The results of the calculations in topological string theory generically encode all holomorphic quantities within the full string theory whose values are protected by spacetime supersymmetry. Various calculations in topological string theory are closely related to Chern–Simons theory, Gromov–Witten invariants, mirror symmetry, geometric Langlands Program, and many other topics.
The operators in topological string theory represent the algebra of operators in the full string theory that preserve a certain amount of supersymmetry. Topological string theory is obtained by a topological twist of the worldsheet description of ordinary string theory: the operators are given different spins. The operation is fully analogous to the construction of topological field theory, which is a related concept. Consequently, there are no local degrees of freedom in topological string theory.
== Admissible spacetimes ==
The fundamental strings of string theory are two-dimensional surfaces. A quantum field theory known as the N = (1,1) sigma model is defined on each surface. This theory consists of maps from the surface to a supermanifold. Physically the supermanifold is interpreted as spacetime, and each map is interpreted as the embedding of the string in spacetime.
Only special spacetimes admit topological strings. Classically, one must choose a spacetime such that the theory respects an additional pair of supersymmetries, making the spacetime an N = (2,2) sigma model. A particular case of this is if the spacetime is a Kähler manifold and the H-flux is identically equal to zero. Generalized Kähler manifolds can have a nontrivial H-flux.
=== Topological twist ===
Ordinary strings on special backgrounds are never topological. To make these strings topological, one needs to modify the sigma model via a procedure called a topological twist which was invented by Edward Witten in 1988. The central observation is that these theories have two U(1) symmetries known as R-symmetries, and the Lorentz symmetry may be modified by mixing rotations and R-symmetries. One may use either of the two R-symmetries, leading to two different theories, called the A model and the B model. After this twist, the action of the theory is BRST exact, and as a result the theory has no dynamics. Instead, all observables depend on the topology of a configuration. Such theories are known as topological theories.
Classically this procedure is always possible.
Quantum mechanically, the U(1) symmetries may be anomalous, making the twist impossible. For example, in the Kähler case with H = 0 the twist leading to the A-model is always possible but that leading to the B-model is only possible when the first Chern class of the spacetime vanishes, implying that the spacetime is Calabi–Yau. More generally (2,2) theories have two complex structures and the B model exists when the first Chern classes of associated bundles sum to zero whereas the A model exists when the difference of the Chern classes is zero. In the Kähler case the two complex structures are the same and so the difference is always zero, which is why the A model always exists.
There is no restriction on the number of dimensions of spacetime, other than that it must be even because spacetime is generalized Kähler. However, all correlation functions with worldsheets that are not spheres vanish unless the complex dimension of the spacetime is three, and so spacetimes with complex dimension three are the most interesting. This is fortunate for phenomenology, as phenomenological models often use a physical string theory compactified on a 3 complex-dimensional space. The topological string theory is not equivalent to the physical string theory, even on the same space, but certain supersymmetric quantities agree in the two theories.
== Objects ==
=== A-model ===
The topological A-model comes with a target space which is a 6 real-dimensional generalized Kähler spacetime. In the case in which the spacetime is Kähler, the theory describes two objects. There are fundamental strings, which wrap two real-dimensional holomorphic curves. Amplitudes for the scattering of these strings depend only on the Kähler form of the spacetime, and not on the complex structure. Classically these correlation functions are determined by the cohomology ring. There are quantum mechanical instanton effects which correct these and yield Gromov–Witten invariants, which measure the cup product in a deformed cohomology ring called the quantum cohomology. The string field theory of the A-model closed strings is known as Kähler gravity, and was introduced by Michael Bershadsky and Vladimir Sadov in Theory of Kähler Gravity.
In addition, there are D2-branes which wrap Lagrangian submanifolds of spacetime. These are submanifolds whose dimensions are one half that of spacetime, and such that the pullback of the Kähler form to the submanifold vanishes. The worldvolume theory on a stack of N D2-branes is the string field theory of the open strings of the A-model, which is a U(N) Chern–Simons theory.
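For a Lagrangian submanifold L wrapped by the N D2-branes, this open-string field theory is ordinary Chern–Simons theory with the standard action at level k (normalization as usual):

```latex
S_{\mathrm{CS}} \;=\; \frac{k}{4\pi}\int_{L}
\operatorname{Tr}\!\left( A \wedge \mathrm{d}A
\;+\; \frac{2}{3}\, A \wedge A \wedge A \right)
```

Since L is three-dimensional and the action involves no metric, this is consistent with the topological nature of the A-model.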
== References ==
Neitzke, Andrew; Vafa, Cumrun (2004). "Topological strings and their physical applications". arXiv:hep-th/0410178.
Dijkgraaf, Robbert; Gukov, Sergei; Neitzke, Andrew; Vafa, Cumrun (2005). "Topological M-theory as Unification of Form Theories of Gravity". Adv. Theor. Math. Phys. 9 (4): 603–665. arXiv:hep-th/0411073. Bibcode:2004hep.th...11073D. doi:10.4310/ATMP.2005.v9.n4.a5. S2CID 1204839.
Topological string theory on arxiv.org
Naqvi, Asad (2006). "Topological Strings" (PDF-Microsoft PowerPoint). Asad Naqvi - University of Wales, Swansea, United Kingdom. National Center for Physics. | Wikipedia/Topological_B-model |
In mathematics, specifically algebraic geometry, Donaldson–Thomas theory is the theory of Donaldson–Thomas invariants. Given a compact moduli space of sheaves on a Calabi–Yau threefold, its Donaldson–Thomas invariant is the virtual number of its points, i.e., the integral of the cohomology class 1 against the virtual fundamental class. The Donaldson–Thomas invariant is a holomorphic analogue of the Casson invariant. The invariants were introduced by Simon Donaldson and Richard Thomas (1998). Donaldson–Thomas invariants have close connections to Gromov–Witten invariants of algebraic three-folds and the theory of stable pairs due to Rahul Pandharipande and Thomas.
Donaldson–Thomas theory is physically motivated by certain BPS states that occur in string and gauge theory. This is because the invariants depend on a stability condition on the derived category {\displaystyle D^{b}({\mathcal {M}})} of the moduli spaces being studied. Essentially, these stability conditions correspond to points in the Kähler moduli space of a Calabi–Yau manifold, as considered in mirror symmetry, and the resulting subcategory {\displaystyle {\mathcal {P}}\subset D^{b}({\mathcal {M}})} is the category of BPS states for the corresponding SCFT.
== Definition and examples ==
The basic idea of Gromov–Witten invariants is to probe the geometry of a space by studying pseudoholomorphic maps from Riemann surfaces to a smooth target. The moduli stack of all such maps admits a virtual fundamental class, and intersection theory on this stack yields numerical invariants that can often contain enumerative information. In a similar spirit, the approach of Donaldson–Thomas theory is to study curves in an algebraic three-fold by their equations, that is, by studying ideal sheaves on the space. This moduli space also admits a virtual fundamental class and yields numerical invariants that are enumerative.
Whereas in Gromov–Witten theory maps are allowed to be multiple covers and to have collapsed components of the domain curve, Donaldson–Thomas theory allows for nilpotent information contained in the sheaves; however, the resulting invariants are integer-valued. There are deep conjectures due to Davesh Maulik, Andrei Okounkov, Nikita Nekrasov and Rahul Pandharipande, proved in increasing generality, that Gromov–Witten and Donaldson–Thomas theories of algebraic three-folds are actually equivalent. More concretely, their generating functions are equal after an appropriate change of variables. For Calabi–Yau threefolds, the Donaldson–Thomas invariants can be formulated as a weighted Euler characteristic of the moduli space. There have also been recent connections between these invariants, the motivic Hall algebra, and the ring of functions on the quantum torus.
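For a Calabi–Yau threefold X and a curve class β, the conjectured correspondence takes the schematic form (with reduced partition functions, i.e. normalized by the degree-zero contributions; normalizations follow the MNOP papers):

```latex
Z'_{\mathrm{GW}}(X; u)_{\beta} \;=\; Z'_{\mathrm{DT}}(X; q)_{\beta},
\qquad q = -e^{iu}
```

Here u is the genus-expansion parameter on the Gromov–Witten side and q counts the holomorphic Euler characteristic on the Donaldson–Thomas side; the substitution q = −e^{iu} is the "appropriate change of variables" referred to above.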
The moduli space of lines on the quintic threefold is a discrete set of 2875 points. The virtual number of points is the actual number of points, and hence the Donaldson–Thomas invariant of this moduli space is the integer 2875.
Similarly, the Donaldson–Thomas invariant of the moduli space of conics on the quintic is 609250.
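The count of 2875 lines is a classical Schubert-calculus computation: lines on a quintic are the zeros of a section of Sym^5 S* over the Grassmannian G(2,5), so their number is the integral of the top Chern class c_6(Sym^5 S*). A self-contained Python sketch, using the fact that integration over G(2,5) extracts the coefficient of the Schur class s_(3,3) of the Chern roots (obtained, after multiplying by the Vandermonde a − b, as the coefficient of a^4 b^3):

```python
def mul(p, q):
    # polynomials in two variables as dicts {(i, j): coeff} for monomials a^i b^j
    out = {}
    for (i1, j1), c1 in p.items():
        for (i2, j2), c2 in q.items():
            key = (i1 + i2, j1 + j2)
            out[key] = out.get(key, 0) + c1 * c2
    return out

# With Chern roots a, b of S*, the bundle Sym^5 S* has roots
# 5a, 4a+b, 3a+2b, 2a+3b, a+4b, 5b; its top Chern class is their product.
factors = [(5, 0), (4, 1), (3, 2), (2, 3), (1, 4), (0, 5)]
c_top = {(0, 0): 1}
for m, n in factors:
    c_top = mul(c_top, {(1, 0): m, (0, 1): n})

# Multiply by the Vandermonde (a - b); the coefficient of a^4 b^3 is then
# the coefficient of the Schur polynomial s_(3,3), i.e. the integral over G(2,5).
g = mul(c_top, {(1, 0): 1, (0, 1): -1})
print(g[(4, 3)])  # 2875
```

This reproduces the classical count of lines; the analogous computation for conics is more involved, since the space of conics is not a single vector-bundle zero locus.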
=== Definition ===
For a Calabi–Yau threefold {\displaystyle Y} and a fixed cohomology class {\displaystyle \alpha \in H^{\text{even}}(Y,\mathbb {Q} )} there is an associated moduli stack {\displaystyle {\mathcal {M}}(Y,\alpha )} of coherent sheaves with Chern character {\displaystyle c({\mathcal {E}})=\alpha }. In general, this is a non-separated Artin stack of infinite type, on which it is difficult to define numerical invariants. Instead, there are open substacks {\displaystyle {\mathcal {M}}^{\sigma }(Y,\alpha )} parametrizing those coherent sheaves {\displaystyle {\mathcal {E}}} which satisfy a stability condition {\displaystyle \sigma }, i.e. {\displaystyle \sigma }-stable sheaves. These moduli stacks have much nicer properties, such as being separated and of finite type. The only technical difficulty is that they can have bad singularities due to the existence of obstructions to deformations of a fixed sheaf. In particular,

{\displaystyle {\begin{aligned}T_{[{\mathcal {E}}]}{\mathcal {M}}^{\sigma }(Y,\alpha )&\cong {\text{Ext}}^{1}({\mathcal {E}},{\mathcal {E}})\\{\text{Ob}}_{[{\mathcal {E}}]}({\mathcal {M}}^{\sigma }(Y,\alpha ))&\cong {\text{Ext}}^{2}({\mathcal {E}},{\mathcal {E}})\end{aligned}}}

Now because {\displaystyle Y} is Calabi–Yau, Serre duality implies

{\displaystyle {\text{Ext}}^{2}({\mathcal {E}},{\mathcal {E}})\cong {\text{Ext}}^{1}({\mathcal {E}},{\mathcal {E}}\otimes \omega _{Y})^{\vee }\cong {\text{Ext}}^{1}({\mathcal {E}},{\mathcal {E}})^{\vee }}

which gives a perfect obstruction theory of dimension 0. In particular, this implies that the associated virtual fundamental class

{\displaystyle [{\mathcal {M}}^{\sigma }(Y,\alpha )]^{vir}\in H_{0}({\mathcal {M}}^{\sigma }(Y,\alpha ),\mathbb {Z} )}

is in homological degree {\displaystyle 0}. We can then define the DT invariant as

{\displaystyle \int _{[{\mathcal {M}}^{\sigma }(Y,\alpha )]^{vir}}1}

which depends upon the stability condition {\displaystyle \sigma } and the cohomology class {\displaystyle \alpha }. It was proved by Thomas that for a smooth family {\displaystyle Y_{t}} the invariant defined above does not change. At the outset researchers chose the Gieseker stability condition, but other DT-invariants have since been studied based on other stability conditions, leading to wall-crossing formulas.
== Facts ==
The Donaldson–Thomas invariant of the moduli space M is equal to the weighted Euler characteristic of M. The weight function associates to every point in M an analogue of the Milnor number of a hypersurface singularity.
== Generalizations ==
Instead of moduli spaces of sheaves, one considers moduli spaces of derived category objects. That gives the Pandharipande–Thomas invariants, which count stable pairs on a Calabi–Yau 3-fold.
Instead of integer valued invariants, one considers motivic invariants.
== See also ==
Enumerative geometry
Gromov–Witten invariant
Hilbert scheme
Quantum cohomology
== References ==
Donaldson, Simon K.; Thomas, Richard P. (1998), "Gauge theory in higher dimensions", in Huggett, S. A.; Mason, L. J.; Tod, K. P.; Tsou, S. T.; Woodhouse, N. M. J. (eds.), The geometric universe (Oxford, 1996), Oxford University Press, pp. 31–47, ISBN 978-0-19-850059-9, MR 1634503
Kontsevich, Maxim (2007), Donaldson–Thomas invariants (PDF), Mathematische Arbeitstagung, Bonn
In theoretical physics, type II string theory is a unified term that includes both the type IIA and type IIB string theories. Type II string theory accounts for two of the five consistent superstring theories in ten dimensions. Both theories have $\mathcal{N}=2$ extended supersymmetry, the maximal amount of supersymmetry (namely 32 supercharges) in ten dimensions. Both theories are based on oriented closed strings. On the worldsheet, they differ only in the choice of GSO projection. They were first discovered by Michael Green and John Henry Schwarz in 1982, with the terminology of type I and type II coined to classify the three string theories known at the time.
== Type IIA string theory ==
At low energies, type IIA string theory is described by type IIA supergravity in ten dimensions which is a non-chiral theory (i.e. left–right symmetric) with (1,1) d=10 supersymmetry; the fact that the anomalies in this theory cancel is therefore trivial.
In the 1990s it was realized by Edward Witten (building on previous insights by Michael Duff, Paul Townsend, and others) that the limit of type IIA string theory in which the string coupling goes to infinity becomes a new 11-dimensional theory called M-theory. Consequently the low energy type IIA supergravity theory can also be derived from the unique maximal supergravity theory in 11 dimensions (low energy version of M-theory) via a dimensional reduction.
The content of the massless sector of the theory (which is relevant in the low energy limit) is given by the $(8_{v}\oplus 8_{s})\otimes(8_{v}\oplus 8_{c})$ representation of SO(8), where $8_{v}$ is the irreducible vector representation, and $8_{c}$ and $8_{s}$ are the irreducible representations with odd and even eigenvalues of the fermionic parity operator, often called the co-spinor and spinor representations. These three representations enjoy a triality symmetry, which is evident from the Dynkin diagram. The four sectors of the massless spectrum after GSO projection and decomposition into irreducible representations are
$$\text{NS-NS}:~8_{v}\otimes 8_{v}=1\oplus 28\oplus 35=\Phi\oplus B_{\mu\nu}\oplus G_{\mu\nu}$$
$$\text{NS-R}:~8_{v}\otimes 8_{c}=8_{s}\oplus 56_{c}=\lambda^{+}\oplus\psi_{m}^{-}$$
$$\text{R-NS}:~8_{c}\otimes 8_{s}=8_{s}\oplus 56_{s}=\lambda^{-}\oplus\psi_{m}^{+}$$
$$\text{R-R}:~8_{s}\otimes 8_{c}=8_{v}\oplus 56_{t}=C_{n}\oplus C_{nmp}$$
where R and NS stand for the Ramond and Neveu–Schwarz sectors respectively. The numbers denote the dimension of the irreducible representation and, equivalently, the number of components of the corresponding fields. The various massless fields obtained are the graviton $G_{\mu\nu}$ with two superpartner gravitinos $\psi_{m}^{\pm}$, which give rise to local spacetime supersymmetry; a scalar dilaton $\Phi$ with two superpartner spinors, the dilatinos $\lambda^{\pm}$; a 2-form gauge field $B_{\mu\nu}$ often called the Kalb–Ramond field; a 1-form $C_{n}$; and a 3-form $C_{nmp}$. Since the $p$-form gauge fields naturally couple to extended objects with a $(p+1)$-dimensional world-volume, type IIA string theory naturally incorporates various extended objects: the D0, D2, D4 and D6 branes (using Hodge duality) among the D-branes (which are R–R charged), and the F1 string and NS5 brane among other objects.
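As a quick sanity check on the decompositions above, the dimensions on the two sides of each sector equation must agree. A minimal arithmetic sketch (this tracks only the dimensions of the SO(8) representations, not the full group theory; the dictionary labels are just the names used above):

```python
# Dimension bookkeeping for the type IIA massless spectrum.
# Each SO(8) irreducible representation is tracked only by its dimension.
dim = {"8v": 8, "8s": 8, "8c": 8, "1": 1, "28": 28, "35": 35,
       "56c": 56, "56s": 56, "56t": 56}

sectors = {
    "NS-NS": (("8v", "8v"), ("1", "28", "35")),   # dilaton, B-field, graviton
    "NS-R":  (("8v", "8c"), ("8s", "56c")),       # dilatino, gravitino
    "R-NS":  (("8c", "8s"), ("8s", "56s")),       # dilatino, gravitino
    "R-R":   (("8s", "8c"), ("8v", "56t")),       # 1-form and 3-form potentials
}

for name, (factors, summands) in sectors.items():
    lhs = dim[factors[0]] * dim[factors[1]]   # tensor product multiplies dims
    rhs = sum(dim[s] for s in summands)       # direct sum adds dims
    assert lhs == rhs, name
    print(f"{name}: {lhs} = {rhs}")
# total massless degrees of freedom: 4 * 64 = 256
```

Note that the 56 components of the 3-form match the binomial count C(8,3) = 56 of an antisymmetric 3-index tensor in eight transverse dimensions.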
The mathematical treatment of type IIA string theory belongs to symplectic topology and algebraic geometry, particularly Gromov–Witten invariants.
== Type IIB string theory ==
At low energies, type IIB string theory is described by type IIB supergravity in ten dimensions which is a chiral theory (left–right asymmetric) with (2,0) d=10 supersymmetry; the fact that the anomalies in this theory cancel is therefore nontrivial.
In the 1990s it was realized that type IIB string theory with the string coupling constant g is equivalent to the same theory with the coupling 1/g. This equivalence is known as S-duality.
Orientifold of type IIB string theory leads to type I string theory.
The mathematical treatment of type IIB string theory belongs to algebraic geometry, specifically the deformation theory of complex structures originally studied by Kunihiko Kodaira and Donald C. Spencer.
In 1997 Juan Maldacena gave some arguments indicating that type IIB string theory is equivalent to N = 4 supersymmetric Yang–Mills theory in the 't Hooft limit; it was the first suggestion concerning the AdS/CFT correspondence.
== Relationship between the type II theories ==
In the late 1980s, it was realized that type IIA string theory is related to type IIB string theory by T-duality.
== See also ==
Superstring theory
Type I string
Heterotic string
== References ==
Digital Science (or Digital Science & Research Solutions Ltd) is a technology company with its headquarters in London, United Kingdom. The company focuses on strategic investments into startup companies that support the research lifecycle.
Overleaf is a part of Digital Science
== History ==
Digital Science was founded in 2010. It was initially the technical division of Nature Publishing Group/Macmillan and is now operated as an independent company by Holtzbrinck Publishing Group. It is one of the organizers of Science Foo Camp along with Nature, Google and O'Reilly.
Since 2013, Digital Science has released a number of collaborative reports using data generated from their portfolio companies featured in media outlets. The company worked with HEFCE and King's College London in 2015, following the inclusion of Research Impact in the Research Excellence Framework (REF), to analyse the results and provide access to the case studies to the public.
Digital Science launched a Global Research Identifier Database (GRID) for identifying research institutions around the world in 2015. Through the Digital Science Catalyst Grant the company has supported a number of early-stage ideas such as Nutonian, TetraScience and Penelope as well as community schemes including Ada Lovelace Day.
In 2013 it invested in UberResearch, which launched "Dimensions", a searchable database of research funding, in 2016.
On 15 January 2018, Digital Science re-launched an extended version of Dimensions, a commercial scholarly search platform that allows users to search publications, datasets, grants, patents and clinical trials. The free version of the platform allows searching for publications and datasets only.
Several studies published in 2021 compared Dimensions with its subscription-based commercial competitors, and unanimously found that Dimensions.ai provides broader temporal and publication source coverage than Scopus and Web of Science in most subject areas, and that Dimensions is closer in its coverage to free aggregation databases, such as The Lens and Google Scholar.
As of October 2021, Dimensions.ai covers nearly 106 million publications with over 1.2 billion citations.
== Key people ==
From 2010 to 2015, Timo Hannay was Managing Director
From 2013 to 2015, Amy Brand held the role of VP academic & research relations before moving to become Director of MIT Press.
From 2015 to present, Daniel W. Hook has served as Chief Executive Officer.
== Catalyst Grant Winners ==
== See also ==
List of academic databases and search engines
== References ==
In graph theory, a quotient graph Q of a graph G is a graph whose vertices are blocks of a partition of the vertices of G and where block B is adjacent to block C if some vertex in B is adjacent to some vertex in C with respect to the edge set of G. In other words, if G has edge set E and vertex set V and R is the equivalence relation induced by the partition, then the quotient graph has vertex set V/R and edge set {([u]R, [v]R) | (u, v) ∈ E(G)}.
More formally, a quotient graph is a quotient object in the category of graphs. The category of graphs is concretizable – mapping a graph to its set of vertices makes it a concrete category – so its objects can be regarded as "sets with additional structure", and a quotient graph corresponds to the graph induced on the quotient set V/R of its vertex set V. Further, there is a graph homomorphism (a quotient map) from a graph to a quotient graph, sending each vertex or edge to the equivalence class that it belongs to. Intuitively, this corresponds to "gluing together" (formally, "identifying") vertices and edges of the graph.
== Examples ==
A graph is trivially a quotient graph of itself (each block of the partition is a single vertex), and the graph consisting of a single point is the quotient graph of any non-empty graph (the partition consisting of a single block of all vertices). The simplest non-trivial quotient graph is one obtained by identifying two vertices (vertex identification); if the vertices are connected, this is called edge contraction.
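The definition above translates directly into code. A minimal sketch (the representation of blocks as frozensets and the dropping of self-loops on blocks are simplifying assumptions of this example):

```python
def quotient_graph(edges, partition):
    """Blocks B and C of the partition are adjacent iff some vertex u in B
    is adjacent to some vertex v in C with respect to the edge set.
    Self-loops on blocks (edges inside one block) are dropped here."""
    block = {v: frozenset(b) for b in partition for v in b}
    return {(block[u], block[v]) for u, v in edges if block[u] != block[v]}

# Identifying the two endpoints of the edge a-b (edge contraction)
# in the path a-b-c leaves a single edge from the merged block to {c}:
q = quotient_graph([("a", "b"), ("b", "c")], [{"a", "b"}, {"c"}])
print(q)
```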
== Special types of quotient ==
The condensation of a directed graph is the quotient graph where the strongly connected components form the blocks of the partition. This construction can be used to derive a directed acyclic graph from any directed graph.
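The condensation can be computed by any strongly-connected-components algorithm. A hedged sketch in pure Python using Kosaraju's two-pass algorithm (the adjacency-list representation and recursive DFS, fine for small graphs, are assumptions of this example):

```python
from collections import defaultdict

def condensation(vertices, edges):
    """Quotient graph whose blocks are the strongly connected components."""
    adj, radj = defaultdict(list), defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        radj[v].append(u)

    # Kosaraju pass 1: record vertices in order of DFS finishing time.
    order, seen = [], set()
    def dfs1(u):
        seen.add(u)
        for v in adj[u]:
            if v not in seen:
                dfs1(v)
        order.append(u)
    for u in vertices:
        if u not in seen:
            dfs1(u)

    # Pass 2: DFS on the reversed graph, in reverse finishing order,
    # yields one strongly connected component per root.
    comp = {}
    def dfs2(u, c):
        comp[u] = c
        for v in radj[u]:
            if v not in comp:
                dfs2(v, c)
    c = 0
    for u in reversed(order):
        if u not in comp:
            dfs2(u, c)
            c += 1

    # Block B is adjacent to block C if some edge of G crosses them.
    qedges = {(comp[u], comp[v]) for u, v in edges if comp[u] != comp[v]}
    return comp, qedges

# Example: a 3-cycle feeding into a 2-cycle condenses to two blocks
# joined by a single edge (a directed acyclic graph, as promised).
comp, qe = condensation("abcde", [("a","b"), ("b","c"), ("c","a"),
                                  ("c","d"), ("d","e"), ("e","d")])
print(qe)  # one edge between the two blocks
```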
The result of one or more edge contractions in an undirected graph G is a quotient of G, in which the blocks are the connected components of the subgraph of G formed by the contracted edges. However, for quotients more generally, the blocks of the partition giving rise to the quotient do not need to form connected subgraphs.
If G is a covering graph of another graph H, then H is a quotient graph of G. The blocks of the corresponding partition are the inverse images of the vertices of H under the covering map. However, covering maps have an additional requirement that is not true more generally of quotients, that the map be a local isomorphism.
== Computational complexity ==
Given an n-vertex cubic graph G and a parameter k, the computational complexity of determining whether G can be obtained as a quotient of a planar graph with n + k vertices is NP-complete.
== References ==
Alain Bretto, Alain Faisant and François Hennecart, Elements of Graph Theory: From Basic Concepts to Modern Theory, European Mathematical Society Press, 2022.
In topology, a covering or covering projection is a map between topological spaces that, intuitively, locally acts like a projection of multiple copies of a space onto itself. In particular, coverings are special types of local homeomorphisms. If $p:\tilde{X}\to X$ is a covering, $(\tilde{X},p)$ is said to be a covering space or cover of $X$, and $X$ is said to be the base of the covering, or simply the base. By abuse of terminology, $\tilde{X}$ and $p$ may sometimes be called covering spaces as well. Since coverings are local homeomorphisms, a covering space is a special kind of étalé space.
Covering spaces first arose in the context of complex analysis (specifically, the technique of analytic continuation), where they were introduced by Riemann as domains on which naturally multivalued complex functions become single-valued. These spaces are now called Riemann surfaces.
Covering spaces are an important tool in several areas of mathematics. In modern geometry, covering spaces (or branched coverings, which have slightly weaker conditions) are used in the construction of manifolds, orbifolds, and the morphisms between them. In algebraic topology, covering spaces are closely related to the fundamental group: for one, since all coverings have the homotopy lifting property, covering spaces are an important tool in the calculation of homotopy groups. A standard example in this vein is the calculation of the fundamental group of the circle by means of the covering of $S^{1}$ by $\mathbb{R}$ (see below). Under certain conditions, covering spaces also exhibit a Galois correspondence with the subgroups of the fundamental group.
== Definition ==
Let $X$ be a topological space. A covering of $X$ is a continuous map $\pi:\tilde{X}\rightarrow X$ such that for every $x\in X$ there exists an open neighborhood $U_{x}$ of $x$ and a discrete space $D_{x}$ such that
$$\pi^{-1}(U_{x})=\bigsqcup_{d\in D_{x}}V_{d}$$
and $\pi|_{V_{d}}:V_{d}\rightarrow U_{x}$ is a homeomorphism for every $d\in D_{x}$.
The open sets $V_{d}$ are called sheets, which are uniquely determined up to homeomorphism if $U_{x}$ is connected. For each $x\in X$ the discrete set $\pi^{-1}(x)$ is called the fiber of $x$. If $X$ is connected (and $\tilde{X}$ is non-empty), it can be shown that $\pi$ is surjective, and the cardinality of $D_{x}$ is the same for all $x\in X$; this value is called the degree of the covering. If $\tilde{X}$ is path-connected, then the covering $\pi:\tilde{X}\rightarrow X$ is called a path-connected covering. This definition is equivalent to the statement that $\pi$ is a locally trivial fiber bundle.
Some authors also require that $\pi$ be surjective in the case that $X$ is not connected.
== Examples ==
For every topological space $X$, the identity map $\operatorname{id}:X\rightarrow X$ is a covering. Likewise, for any discrete space $D$ the projection $\pi:X\times D\rightarrow X$ taking $(x,i)\mapsto x$ is a covering. Coverings of this type are called trivial coverings; if $D$ has finitely many (say $k$) elements, the covering is called the trivial $k$-sheeted covering of $X$.
The map $r:\mathbb{R}\to S^{1}$ with $r(t)=(\cos(2\pi t),\sin(2\pi t))$ is a covering of the unit circle $S^{1}$. The base of the covering is $S^{1}$ and the covering space is $\mathbb{R}$. For any point $x=(x_{1},x_{2})\in S^{1}$ such that $x_{1}>0$, the set $U:=\{(x_{1},x_{2})\in S^{1}\mid x_{1}>0\}$ is an open neighborhood of $x$. The preimage of $U$ under $r$ is
$$r^{-1}(U)=\bigsqcup_{n\in\mathbb{Z}}\left(n-{\frac{1}{4}},n+{\frac{1}{4}}\right)$$
and the sheets of the covering are $V_{n}=(n-1/4,n+1/4)$ for $n\in\mathbb{Z}$. The fiber of $x$ is
$$r^{-1}(x)=\{t\in\mathbb{R}\mid(\cos(2\pi t),\sin(2\pi t))=x\}.$$
Another covering of the unit circle is the map $q:S^{1}\to S^{1}$ with $q(z)=z^{n}$ for some positive $n\in\mathbb{N}$. For an open neighborhood $U$ of an $x\in S^{1}$, one has
$$q^{-1}(U)=\bigsqcup_{i=1}^{n}U.$$
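Both circle coverings can be checked numerically. A small sketch (the sample point $t_0 = 0.3$ and the sheet count $n = 5$ are arbitrary choices for illustration):

```python
import cmath
import math

# The covering r(t) = (cos 2πt, sin 2πt): the fiber over a point of S^1
# is a translate of the integers, t0 + Z — every lift maps to the same point.
t0 = 0.3
fiber = [t0 + n for n in range(-3, 4)]          # a finite window of the fiber
pts = [(math.cos(2*math.pi*t), math.sin(2*math.pi*t)) for t in fiber]
assert all(abs(p[0] - pts[0][0]) < 1e-9 and abs(p[1] - pts[0][1]) < 1e-9
           for p in pts)

# The covering q(z) = z^n on S^1: every point has exactly n preimages,
# namely the n-th roots of the target point.
n, w = 5, cmath.exp(2j*math.pi*0.3)
roots = [cmath.exp(2j*math.pi*(0.3 + k)/n) for k in range(n)]
assert all(abs(z**n - w) < 1e-9 for z in roots)
print(len({round(z.real, 6) + 1j*round(z.imag, 6) for z in roots}))  # 5
```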
A map which is a local homeomorphism but not a covering of the unit circle is $p:\mathbb{R}_{+}\to S^{1}$ with $p(t)=(\cos(2\pi t),\sin(2\pi t))$. There is a sheet of an open neighborhood of $(1,0)$ which is not mapped homeomorphically onto $U$.
== Properties ==
=== Local homeomorphism ===
Since a covering $\pi:E\rightarrow X$ maps each of the disjoint open sets of $\pi^{-1}(U)$ homeomorphically onto $U$, it is a local homeomorphism: $\pi$ is a continuous map and for every $e\in E$ there exists an open neighborhood $V\subset E$ of $e$ such that $\pi|_{V}:V\rightarrow\pi(V)$ is a homeomorphism.
It follows that the covering space $E$ and the base space $X$ locally share the same properties.
If $X$ is a connected and non-orientable manifold, then there is a covering $\pi:\tilde{X}\rightarrow X$ of degree $2$, whereby $\tilde{X}$ is a connected and orientable manifold.
If $X$ is a connected Lie group, then there is a covering $\pi:\tilde{X}\rightarrow X$ which is also a Lie group homomorphism, where $\tilde{X}:=\{\gamma:\gamma\text{ is a path in }X\text{ with }\gamma(0)=1_{X}\text{, modulo homotopy with fixed ends}\}$ is a Lie group.
If $X$ is a graph, then it follows for a covering $\pi:E\rightarrow X$ that $E$ is also a graph.
If $X$ is a connected manifold, then there is a covering $\pi:\tilde{X}\rightarrow X$, whereby $\tilde{X}$ is a connected and simply connected manifold.
If $X$ is a connected Riemann surface, then there is a covering $\pi:\tilde{X}\rightarrow X$ which is also a holomorphic map, whereby $\tilde{X}$ is a connected and simply connected Riemann surface.
=== Factorisation ===
Let $X,Y$ and $E$ be path-connected, locally path-connected spaces, and let $p,q$ and $r$ be continuous maps such that the diagram
commutes.
If $p$ and $q$ are coverings, so is $r$.
If $p$ and $r$ are coverings, so is $q$.
=== Product of coverings ===
Let $X$ and $X'$ be topological spaces and $p:E\rightarrow X$ and $p':E'\rightarrow X'$ be coverings. Then $p\times p':E\times E'\rightarrow X\times X'$ with $(p\times p')(e,e')=(p(e),p'(e'))$ is a covering. However, coverings of $X\times X'$ are not all of this form in general.
=== Equivalence of coverings ===
Let $X$ be a topological space and $p:E\rightarrow X$ and $p':E'\rightarrow X$ be coverings. Both coverings are called equivalent if there exists a homeomorphism $h:E\rightarrow E'$ such that the diagram
commutes. If such a homeomorphism exists, then the covering spaces $E$ and $E'$ are called isomorphic.
=== Lifting property ===
All coverings satisfy the lifting property, i.e.:
Let $I$ be the unit interval and $p:E\rightarrow X$ be a covering. Let $F:Y\times I\rightarrow X$ be a continuous map and $\tilde{F}_{0}:Y\times\{0\}\rightarrow E$ be a lift of $F|_{Y\times\{0\}}$, i.e. a continuous map such that $p\circ\tilde{F}_{0}=F|_{Y\times\{0\}}$. Then there is a uniquely determined, continuous map $\tilde{F}:Y\times I\rightarrow E$ for which $\tilde{F}(y,0)=\tilde{F}_{0}$ and which is a lift of $F$, i.e. $p\circ\tilde{F}=F$.
If $X$ is a path-connected space, then for $Y=\{0\}$ the map $\tilde{F}$ is a lift of a path in $X$, and for $Y=I$ it is a lift of a homotopy of paths in $X$.
As a consequence, one can show that the fundamental group $\pi_{1}(S^{1})$ of the unit circle is an infinite cyclic group, which is generated by the homotopy class of the loop $\gamma:I\rightarrow S^{1}$ with $\gamma(t)=(\cos(2\pi t),\sin(2\pi t))$.
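The isomorphism $\pi_{1}(S^{1})\cong\mathbb{Z}$ can be made concrete by lifting: following a loop around the circle and accumulating the continuously chosen angle increments produces the endpoint of the lift in $\mathbb{R}$, whose integer value is the winding number. A numerical sketch (the discretization into 1000 steps is an arbitrary choice; it only needs to be fine enough that successive angle jumps stay below π):

```python
import math

def winding_number(loop, steps=1000):
    """Lift a loop in S^1 to R along r(t) = (cos 2πt, sin 2πt) by angle
    unwrapping; the endpoint of the lift is the winding number."""
    lift = 0.0
    x, y = loop(0.0)
    prev = math.atan2(y, x)
    for i in range(1, steps + 1):
        x, y = loop(i / steps)
        ang = math.atan2(y, x)
        d = ang - prev
        # choose the continuous branch of the lift (local homeomorphism)
        if d > math.pi:
            d -= 2 * math.pi
        elif d < -math.pi:
            d += 2 * math.pi
        lift += d
        prev = ang
    return round(lift / (2 * math.pi))

# gamma(n) is the loop winding n times around the circle.
gamma = lambda n: (lambda t: (math.cos(2*math.pi*n*t), math.sin(2*math.pi*n*t)))
print([winding_number(gamma(n)) for n in (-2, 0, 1, 3)])  # [-2, 0, 1, 3]
```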
Let $X$ be a path-connected space and $p:E\rightarrow X$ be a connected covering. Let $x,y\in X$ be any two points which are connected by a path $\gamma$, i.e. $\gamma(0)=x$ and $\gamma(1)=y$. Let $\tilde{\gamma}$ be the unique lift of $\gamma$; then the map $L_{\gamma}:p^{-1}(x)\rightarrow p^{-1}(y)$ with $L_{\gamma}(\tilde{\gamma}(0))=\tilde{\gamma}(1)$ is bijective.
If $X$ is a path-connected space and $p:E\rightarrow X$ a connected covering, then the induced group homomorphism $p_{\#}:\pi_{1}(E)\rightarrow\pi_{1}(X)$ with $p_{\#}([\gamma])=[p\circ\gamma]$ is injective, and the subgroup $p_{\#}(\pi_{1}(E))$ of $\pi_{1}(X)$ consists of the homotopy classes of loops in $X$ whose lifts are loops in $E$.
== Branched covering ==
=== Definitions ===
==== Holomorphic maps between Riemann surfaces ====
Let $X$ and $Y$ be Riemann surfaces, i.e. one-dimensional complex manifolds, and let $f:X\rightarrow Y$ be a continuous map. $f$ is holomorphic in a point $x\in X$ if, for any charts $\phi_{x}:U_{1}\rightarrow V_{1}$ of $x$ and $\phi_{f(x)}:U_{2}\rightarrow V_{2}$ of $f(x)$ with $\phi_{x}(U_{1})\subset U_{2}$, the map $\phi_{f(x)}\circ f\circ\phi_{x}^{-1}:\mathbb{C}\rightarrow\mathbb{C}$ is holomorphic.
If $f$ is holomorphic at all $x\in X$, we say $f$ is holomorphic.
The map $F=\phi_{f(x)}\circ f\circ\phi_{x}^{-1}$ is called the local expression of $f$ in $x\in X$.
If $f:X\rightarrow Y$ is a non-constant, holomorphic map between compact Riemann surfaces, then $f$ is surjective and an open map, i.e. for every open set $U\subset X$ the image $f(U)\subset Y$ is also open.
==== Ramification point and branch point ====
Let $f:X\rightarrow Y$ be a non-constant, holomorphic map between compact Riemann surfaces. For every $x\in X$ there exist charts for $x$ and $f(x)$, and there exists a uniquely determined $k_{x}\in\mathbb{N}_{>0}$, such that the local expression $F$ of $f$ in $x$ is of the form $z\mapsto z^{k_{x}}$. The number $k_{x}$ is called the ramification index of $f$ in $x$, and the point $x\in X$ is called a ramification point if $k_{x}\geq 2$. If $k_{x}=1$ for an $x\in X$, then $x$ is unramified. The image point $y=f(x)\in Y$ of a ramification point is called a branch point.
==== Degree of a holomorphic map ====
Let $f:X\rightarrow Y$ be a non-constant, holomorphic map between compact Riemann surfaces. The degree $\operatorname{deg}(f)$ of $f$ is the cardinality of the fiber of an unramified point $y=f(x)\in Y$, i.e. $\operatorname{deg}(f):=|f^{-1}(y)|$.
This number is well-defined, since for every $y\in Y$ the fiber $f^{-1}(y)$ is discrete, and for any two unramified points $y_{1},y_{2}\in Y$ one has $|f^{-1}(y_{1})|=|f^{-1}(y_{2})|$. It can be calculated by:
$$\sum_{x\in f^{-1}(y)}k_{x}=\operatorname{deg}(f)$$
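For instance, the polynomial $f(z)=z^{3}-3z$, viewed as a degree-3 map of the Riemann sphere, has an unramified fiber of three distinct points over a generic value, while over the branch point $-2=f(1)$ the fiber is $\{1,-2\}$ with ramification indices $k_{1}=2$ and $k_{-2}=1$, and the formula still gives $2+1=3$. A numerical sketch (the pure-Python Durand–Kerner root finder and the clustering tolerance are illustrative assumptions, not part of the theory):

```python
def poly_roots(coeffs, iters=200):
    """Durand-Kerner root finder for a monic polynomial
    p(z) = sum(coeffs[k] * z**k) with coeffs[-1] == 1."""
    n = len(coeffs) - 1
    p = lambda z: sum(c * z**k for k, c in enumerate(coeffs))
    roots = [(0.4 + 0.9j)**k for k in range(n)]  # standard starting guesses
    for _ in range(iters):
        if max(abs(p(z)) for z in roots) < 1e-12:
            break                                 # converged; avoid 0/0
        new = []
        for i, z in enumerate(roots):
            d = 1
            for j, zj in enumerate(roots):
                if i != j:
                    d *= z - zj
            new.append(z - p(z) / d)
        roots = new
    return roots

def fiber(w):
    """Distinct preimages of w under f(z) = z**3 - 3*z, with their
    ramification indices k_x (root multiplicities, found by clustering)."""
    ks = {}
    for z in poly_roots([-w, -3, 0, 1]):          # z^3 - 3z - w = 0
        key = next((r for r in ks if abs(r - z) < 1e-4), None)
        if key is None:
            ks[z] = 1
        else:
            ks[key] += 1
    return ks

generic = fiber(0.123 + 0.456j)                   # generic (unramified) point
assert sum(generic.values()) == 3 and len(generic) == 3
branched = fiber(-2.0)                            # -2 = f(1) is a branch point
assert sum(branched.values()) == 3 and len(branched) == 2  # k = 2 + 1 = 3
```

In both cases the multiplicities sum to the degree 3, as the displayed formula requires.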
=== Branched covering ===
==== Definition ====
A continuous map $f:X\rightarrow Y$ is called a branched covering if there exists a closed set with dense complement $E\subset Y$, such that $f_{|X\smallsetminus f^{-1}(E)}:X\smallsetminus f^{-1}(E)\rightarrow Y\smallsetminus E$ is a covering.
==== Examples ====
Let $n\in\mathbb{N}$ with $n\geq 2$. Then $f:\mathbb{C}\rightarrow\mathbb{C}$ with $f(z)=z^{n}$ is a branched covering of degree $n$, whereby $z=0$ is a branch point.
Every non-constant, holomorphic map between compact Riemann surfaces $f:X\rightarrow Y$ of degree $d$ is a branched covering of degree $d$.
== Universal covering ==
=== Definition ===
Let $p:\tilde{X}\rightarrow X$ be a simply connected covering. If $\beta:E\rightarrow X$ is another simply connected covering, then there exists a uniquely determined homeomorphism $\alpha:\tilde{X}\rightarrow E$ such that the diagram
commutes.
This means that $p$ is, up to equivalence, uniquely determined and, because of that universal property, is denoted as the universal covering of the space $X$.
=== Existence ===
A universal covering does not always exist. The following theorem guarantees its existence for a certain class of base spaces.
Let $X$ be a connected, locally simply connected topological space. Then there exists a universal covering $p:\tilde{X}\rightarrow X$.
The set $\tilde{X}$ is defined as
$$\tilde{X}=\{\gamma:\gamma\text{ is a path in }X\text{ with }\gamma(0)=x_{0}\}/\text{homotopy with fixed ends},$$
where $x_{0}\in X$ is any chosen base point. The map $p:\tilde{X}\rightarrow X$ is defined by $p([\gamma])=\gamma(1)$.
The topology on $\tilde{X}$ is constructed as follows: Let $\gamma:I\rightarrow X$ be a path with $\gamma(0)=x_{0}$. Let $U$ be a simply connected neighborhood of the endpoint $x=\gamma(1)$. Then, for every $y\in U$, there is a path $\sigma_{y}$ inside $U$ from $x$ to $y$ that is unique up to homotopy. Now consider the set
$$\tilde{U}=\{\gamma\sigma_{y}:y\in U\}/\text{homotopy with fixed ends}.$$
The restriction $p|_{\tilde{U}}:\tilde{U}\rightarrow U$ with $p([\gamma\sigma_{y}])=\gamma\sigma_{y}(1)=y$ is a bijection, and $\tilde{U}$ can be equipped with the final topology of $p|_{\tilde{U}}$.
The fundamental group $\pi_{1}(X,x_{0})=\Gamma$ acts freely on $\tilde{X}$ by $([\gamma],[\tilde{x}])\mapsto[\gamma\tilde{x}]$, and the orbit space $\Gamma\backslash\tilde{X}$ is homeomorphic to $X$ through the map $[\Gamma\tilde{x}]\mapsto\tilde{x}(1)$.
=== Examples ===
The map $r:\mathbb{R}\to S^{1}$ with $r(t)=(\cos(2\pi t),\sin(2\pi t))$ is the universal covering of the unit circle $S^{1}$.
The map $p:S^{n}\to\mathbb{R}P^{n}\cong\{+1,-1\}\backslash S^{n}$ with $p(x)=[x]$ is the universal covering of the projective space $\mathbb{R}P^{n}$ for $n>1$.
The map $q:\mathrm{SU}(n)\ltimes\mathbb{R}\to U(n)$ with
$$q(A,t)={\begin{bmatrix}\exp(2\pi it)&0\\0&I_{n-1}\end{bmatrix}}A$$
is the universal covering of the unitary group $U(n)$.
Since $\mathrm{SU}(2)\cong S^{3}$, it follows that the quotient map $f:\mathrm{SU}(2)\rightarrow\mathrm{SU}(2)/\mathbb{Z}_{2}\cong\mathrm{SO}(3)$ is the universal covering of $\mathrm{SO}(3)$.
A topological space which has no universal covering is the Hawaiian earring:
$$X=\bigcup _{n\in \mathbb {N} }\left\{(x_{1},x_{2})\in \mathbb {R} ^{2}:{\Bigl (}x_{1}-{\frac {1}{n}}{\Bigr )}^{2}+x_{2}^{2}={\frac {1}{n^{2}}}\right\}$$
One can show that no neighborhood of the origin $(0,0)$ is simply connected.: 487, Example 1
== G-coverings ==
Let G be a discrete group acting on the topological space X. This means that each element g of G is associated to a homeomorphism $H_{g}$ of X onto itself, in such a way that $H_{gh}$ is always equal to $H_{g}\circ H_{h}$ for any two elements g and h of G. (In other words, a group action of the group G on the space X is just a group homomorphism of G into the group Homeo(X) of self-homeomorphisms of X.) It is natural to ask under what conditions the projection from X to the orbit space X/G is a covering map. This is not always true, since the action may have fixed points. An example is the cyclic group of order 2 acting on a product X × X by the twist action, where the non-identity element acts by (x, y) ↦ (y, x). Thus the study of the relation between the fundamental groups of X and X/G is not so straightforward.
However the group G does act on the fundamental groupoid of X, and so the study is best handled by considering groups acting on groupoids, and the corresponding orbit groupoids. The theory for this is set down in Chapter 11 of Ronald Brown's book Topology and Groupoids. The main result is that for discontinuous actions of a group G on a Hausdorff space X which admits a universal cover, the fundamental groupoid of the orbit space X/G is isomorphic to the orbit groupoid of the fundamental groupoid of X, i.e. the quotient of that groupoid by the action of the group G. This leads to explicit computations, for example of the fundamental group of the symmetric square of a space.
== Smooth coverings ==
Let E and M be smooth manifolds with or without boundary. A covering $\pi :E\to M$ is called a smooth covering if it is a smooth map and the sheets are mapped diffeomorphically onto the corresponding open subset of M. (This is in contrast to the definition of a covering, which merely requires that the sheets are mapped homeomorphically onto the corresponding open subset.)
== Deck transformation ==
=== Definition ===
Let $p:E\rightarrow X$ be a covering. A deck transformation is a homeomorphism $d:E\rightarrow E$ such that the diagram of continuous maps
commutes. Together with the composition of maps, the set of deck transformations forms a group $\operatorname {Deck} (p)$, which is the same as $\operatorname {Aut} (p)$.
Now suppose $p:C\to X$ is a covering map and $C$ (and therefore also $X$) is connected and locally path connected. The action of $\operatorname {Aut} (p)$ on each fiber is free. If this action is transitive on some fiber, then it is transitive on all fibers, and we call the cover regular (or normal or Galois). Every such regular cover is a principal $G$-bundle, where $G=\operatorname {Aut} (p)$ is considered as a discrete topological group.
Every universal cover $p:D\to X$ is regular, with deck transformation group isomorphic to the fundamental group $\pi _{1}(X)$.
=== Examples ===
Let $q:S^{1}\to S^{1}$ be the covering $q(z)=z^{n}$ for some $n\in \mathbb {N} $; then the map $d_{k}:S^{1}\rightarrow S^{1}:z\mapsto z\,e^{2\pi ik/n}$ for $k\in \mathbb {Z} $ is a deck transformation, and $\operatorname {Deck} (q)\cong \mathbb {Z} /n\mathbb {Z} $.
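The defining property of a deck transformation, $q\circ d_{k}=q$, can be checked numerically (a small sketch; the sample points and the choice $n=5$ are arbitrary):

```python
import cmath

n = 5

def q(z):
    return z ** n  # the covering q(z) = z^n

def d(k, z):
    return z * cmath.exp(2j * cmath.pi * k / n)  # rotation by 2*pi*k/n

# Each d_k covers the identity downstairs: q(d_k(z)) == q(z).
for k in range(n):
    for j in range(8):
        z = cmath.exp(2j * cmath.pi * j / 8)  # sample point on the unit circle
        assert abs(q(d(k, z)) - q(z)) < 1e-9

# Composition of deck transformations matches addition of k modulo n,
# reflecting Deck(q) = Z/nZ.
z = cmath.exp(0.7j)
assert abs(d(2, d(4, z)) - d((2 + 4) % n, z)) < 1e-9
```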
Let $r:\mathbb {R} \to S^{1}$ be the covering $r(t)=(\cos(2\pi t),\sin(2\pi t))$; then the map $d_{k}:\mathbb {R} \rightarrow \mathbb {R} :t\mapsto t+k$ for $k\in \mathbb {Z} $ is a deck transformation, and $\operatorname {Deck} (r)\cong \mathbb {Z} $.
As another important example, consider $\mathbb {C} $ the complex plane and $\mathbb {C} ^{\times }$ the complex plane minus the origin. Then the map $p:\mathbb {C} ^{\times }\to \mathbb {C} ^{\times }$ with $p(z)=z^{n}$ is a regular cover. The deck transformations are multiplications by $n$-th roots of unity, and the deck transformation group is therefore isomorphic to the cyclic group $\mathbb {Z} /n\mathbb {Z} $. Likewise, the map $\exp :\mathbb {C} \to \mathbb {C} ^{\times }$ with $\exp(z)=e^{z}$ is the universal cover.
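For the exponential cover, translations by integer multiples of $2\pi i$ play the role of deck transformations (the source states only that $\exp$ is the universal cover; that these translations are its deck transformations follows because $e^{z+2\pi ik}=e^{z}$, checked numerically in this sketch):

```python
import cmath

def p(z):
    return cmath.exp(z)  # the universal cover exp: C -> C^x

def d(k, z):
    return z + 2j * cmath.pi * k  # translation by 2*pi*i*k

z = 0.3 - 1.2j
for k in (-2, -1, 0, 1, 3):
    # p(d_k(z)) == p(z): all translates of z lie over the same point.
    assert abs(p(d(k, z)) - p(z)) < 1e-9
```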
=== Properties ===
Let $X$ be a path-connected space and $p:E\rightarrow X$ be a connected covering. Since a deck transformation $d:E\rightarrow E$ is bijective, it permutes the elements of a fiber $p^{-1}(x)$ with $x\in X$ and is uniquely determined by where it sends a single point. In particular, only the identity map fixes a point in the fiber.: 70 Because of this property every deck transformation defines a group action on $E$, i.e. let $U\subset X$ be an open neighborhood of an $x\in X$ and ${\tilde {U}}\subset E$ an open neighborhood of an $e\in p^{-1}(x)$; then $\operatorname {Deck} (p)\times E\rightarrow E:(d,{\tilde {U}})\mapsto d({\tilde {U}})$ is a group action.
=== Normal coverings ===
==== Definition ====
A covering $p:E\rightarrow X$ is called normal if $\operatorname {Deck} (p)\backslash E\cong X$. This means that for every $x\in X$ and any two $e_{0},e_{1}\in p^{-1}(x)$ there exists a deck transformation $d:E\rightarrow E$ such that $d(e_{0})=e_{1}$.
==== Properties ====
Let $X$ be a path-connected space and $p:E\rightarrow X$ be a connected covering. Let $H=p_{\#}(\pi _{1}(E))$ be a subgroup of $\pi _{1}(X)$; then $p$ is a normal covering iff $H$ is a normal subgroup of $\pi _{1}(X)$.
If $p:E\rightarrow X$ is a normal covering and $H=p_{\#}(\pi _{1}(E))$, then $\operatorname {Deck} (p)\cong \pi _{1}(X)/H$.
If $p:E\rightarrow X$ is a path-connected covering and $H=p_{\#}(\pi _{1}(E))$, then $\operatorname {Deck} (p)\cong N(H)/H$, where $N(H)$ is the normaliser of $H$.: 71
Let $E$ be a topological space. A group $\Gamma $ acts discontinuously on $E$ if every $e\in E$ has an open neighborhood $V\subset E$ with $V\neq \emptyset $, such that for every $d_{1},d_{2}\in \Gamma $ with $d_{1}V\cap d_{2}V\neq \emptyset $ one has $d_{1}=d_{2}$.
If a group $\Gamma $ acts discontinuously on a topological space $E$, then the quotient map $q:E\rightarrow \Gamma \backslash E$ with $q(e)=\Gamma e$ is a normal covering.: 72 Here $\Gamma \backslash E=\{\Gamma e:e\in E\}$ is the quotient space and $\Gamma e=\{\gamma (e):\gamma \in \Gamma \}$ is the orbit of the group action.
==== Examples ====
The covering $q:S^{1}\to S^{1}$ with $q(z)=z^{n}$ is a normal covering for every $n\in \mathbb {N} $.
Every simply connected covering is a normal covering.
=== Calculation ===
Let $\Gamma $ be a group which acts discontinuously on a topological space $E$, and let $q:E\rightarrow \Gamma \backslash E$ be the normal covering.
If $E$ is path-connected, then $\operatorname {Deck} (q)\cong \Gamma $.: 72
If $E$ is simply connected, then $\operatorname {Deck} (q)\cong \pi _{1}(\Gamma \backslash E)$.: 71
==== Examples ====
Let $n\in \mathbb {N} $. The antipodal map $g:S^{n}\rightarrow S^{n}$ with $g(x)=-x$ generates, together with the composition of maps, a group $D(g)\cong \mathbb {Z} /2\mathbb {Z} $ and induces a group action $D(g)\times S^{n}\rightarrow S^{n},(g,x)\mapsto g(x)$, which acts discontinuously on $S^{n}$. Because of $\mathbb {Z} _{2}\backslash S^{n}\cong \mathbb {R} P^{n}$ it follows that the quotient map $q:S^{n}\rightarrow \mathbb {Z} _{2}\backslash S^{n}\cong \mathbb {R} P^{n}$ is a normal covering and, for $n>1$, a universal covering, hence $\operatorname {Deck} (q)\cong \mathbb {Z} /2\mathbb {Z} \cong \pi _{1}(\mathbb {R} P^{n})$ for $n>1$.
Let $\mathrm {SO} (3)$ be the special orthogonal group; then the map $f:\mathrm {SU} (2)\rightarrow \mathrm {SO} (3)\cong \mathbb {Z} _{2}\backslash \mathrm {SU} (2)$ is a normal covering and, because of $\mathrm {SU} (2)\cong S^{3}$, it is the universal covering, hence $\operatorname {Deck} (f)\cong \mathbb {Z} /2\mathbb {Z} \cong \pi _{1}(\mathrm {SO} (3))$.
With the group action $(z_{1},z_{2})*(x,y)=(z_{1}+(-1)^{z_{2}}x,z_{2}+y)$ of $\mathbb {Z} ^{2}$ on $\mathbb {R} ^{2}$, whereby $(\mathbb {Z} ^{2},*)$ is the semidirect product $\mathbb {Z} \rtimes \mathbb {Z} $, one gets the universal covering $f:\mathbb {R} ^{2}\rightarrow (\mathbb {Z} \rtimes \mathbb {Z} )\backslash \mathbb {R} ^{2}\cong K$ of the Klein bottle $K$, hence $\operatorname {Deck} (f)\cong \mathbb {Z} \rtimes \mathbb {Z} \cong \pi _{1}(K)$.
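The semidirect-product structure can be verified directly (a sketch; the group law on $\mathbb {Z} ^{2}$ used below is the one implied by the action formula, and the sample elements are arbitrary): acting by $g$ after $h$ agrees with acting by the product $g\cdot h$, where $(z_{1},z_{2})\cdot (w_{1},w_{2})=(z_{1}+(-1)^{z_{2}}w_{1},z_{2}+w_{2})$.

```python
def act(g, v):
    """Action of (Z^2, *) on R^2: (z1, z2) * (x, y) = (z1 + (-1)^z2 * x, z2 + y)."""
    (z1, z2), (x, y) = g, v
    return (z1 + (-1) ** z2 * x, z2 + y)

def mul(g, h):
    """Group law of the semidirect product Z x| Z (read off from the action)."""
    (z1, z2), (w1, w2) = g, h
    return (z1 + (-1) ** z2 * w1, z2 + w2)

# Action axiom: g * (h * v) == (g . h) * v on a few samples.
samples = [((1, 0), (2, 1)), ((3, 1), (-1, 1)), ((0, 1), (5, 2))]
v = (0.25, 0.5)
for g, h in samples:
    assert act(g, act(h, v)) == act(mul(g, h), v)

# The product is noncommutative, matching the fact that pi_1(K) is nonabelian.
assert mul((1, 0), (0, 1)) != mul((0, 1), (1, 0))
```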
Let $T=S^{1}\times S^{1}$ be the torus, embedded in $\mathbb {C} ^{2}$. Then one gets a homeomorphism $\alpha :T\rightarrow T:(e^{ix},e^{iy})\mapsto (e^{i(x+\pi )},e^{-iy})$, which induces a discontinuous group action $G_{\alpha }\times T\rightarrow T$, whereby $G_{\alpha }\cong \mathbb {Z} /2\mathbb {Z} $. It follows that the map $f:T\rightarrow G_{\alpha }\backslash T\cong K$ is a normal covering of the Klein bottle, hence $\operatorname {Deck} (f)\cong \mathbb {Z} /2\mathbb {Z} $.
Let $S^{3}$ be embedded in $\mathbb {C} ^{2}$. Since the group action $S^{3}\times \mathbb {Z} /p\mathbb {Z} \rightarrow S^{3}:((z_{1},z_{2}),[k])\mapsto (e^{2\pi ik/p}z_{1},e^{2\pi ikq/p}z_{2})$ is discontinuous, whereby $p,q\in \mathbb {N} $ are coprime, the map $f:S^{3}\rightarrow \mathbb {Z} _{p}\backslash S^{3}=:L_{p,q}$ is the universal covering of the lens space $L_{p,q}$, hence $\operatorname {Deck} (f)\cong \mathbb {Z} /p\mathbb {Z} \cong \pi _{1}(L_{p,q})$.
== Galois correspondence ==
Let $X$ be a connected and locally simply connected space; then for every subgroup $H\subseteq \pi _{1}(X)$ there exists a path-connected covering $\alpha :X_{H}\rightarrow X$ with $\alpha _{\#}(\pi _{1}(X_{H}))=H$.: 66
Let $p_{1}:E\rightarrow X$ and $p_{2}:E'\rightarrow X$ be two path-connected coverings; then they are equivalent iff the subgroups $H=p_{1\#}(\pi _{1}(E))$ and $H'=p_{2\#}(\pi _{1}(E'))$ are conjugate to each other.: 482
Let $X$ be a connected and locally simply connected space; then, up to equivalence between coverings, there is a bijection:
$${\begin{matrix}\{{\text{Subgroup of }}\pi _{1}(X)\}&\longleftrightarrow &\{{\text{path-connected covering }}p:E\rightarrow X\}\\H&\longrightarrow &\alpha :X_{H}\rightarrow X\\p_{\#}(\pi _{1}(E))&\longleftarrow &p\\\{{\text{normal subgroup of }}\pi _{1}(X)\}&\longleftrightarrow &\{{\text{normal covering }}p:E\rightarrow X\}\end{matrix}}$$
For a sequence of subgroups $\{e\}\subset H\subset G\subset \pi _{1}(X)$ one gets a sequence of coverings
$${\tilde {X}}\longrightarrow X_{H}\cong H\backslash {\tilde {X}}\longrightarrow X_{G}\cong G\backslash {\tilde {X}}\longrightarrow X\cong \pi _{1}(X)\backslash {\tilde {X}}.$$
For a subgroup $H\subset \pi _{1}(X)$ with index $[\pi _{1}(X):H]=d$, the covering $\alpha :X_{H}\rightarrow X$ has degree $d$.
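As a worked instance of the index-degree correspondence, take the circle covering from the examples above: $X=S^{1}$ with $\pi _{1}(S^{1})\cong \mathbb {Z} $, and the subgroup $H=n\mathbb {Z} $.

```latex
[\pi_1(S^1) : H] = [\mathbb{Z} : n\mathbb{Z}] = n
\quad\Longrightarrow\quad
\deg\bigl(q : S^1 \to S^1,\; q(z) = z^n\bigr) = n,
\qquad
\operatorname{Deck}(q) \cong \mathbb{Z}/n\mathbb{Z}.
```

Since $n\mathbb {Z} $ is normal in $\mathbb {Z} $, this covering is also normal, consistent with the lower row of the correspondence.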
== Classification ==
=== Definitions ===
==== Category of coverings ====
Let $X$ be a topological space. The objects of the category ${\boldsymbol {Cov(X)}}$ are the coverings $p:E\rightarrow X$ of $X$, and the morphisms between two coverings $p:E\rightarrow X$ and $q:F\rightarrow X$ are continuous maps $f:E\rightarrow F$ such that the diagram
commutes.
==== G-Set ====
Let $G$ be a topological group. The category ${\boldsymbol {G{\text{-}}Set}}$ is the category of sets which are G-sets. The morphisms are G-maps $\phi :X\rightarrow Y$ between G-sets; they satisfy the condition $\phi (gx)=g\,\phi (x)$ for every $g\in G$.
=== Equivalence ===
Let $X$ be a connected and locally simply connected space, $x\in X$, and $G=\pi _{1}(X,x)$ be the fundamental group of $X$. Since $G$ defines, by lifting of paths and evaluating at the endpoint of the lift, a group action on the fiber of a covering, the functor $F:{\boldsymbol {Cov(X)}}\longrightarrow {\boldsymbol {G{\text{-}}Set}}:p\mapsto p^{-1}(x)$ is an equivalence of categories.: 68–70
== Applications ==
An important practical application of covering spaces occurs in charts on SO(3), the rotation group. This group occurs widely in engineering, because 3-dimensional rotations are heavily used in navigation, nautical engineering, and aerospace engineering, among many other uses. Topologically, SO(3) is the real projective space RP3, with fundamental group Z/2; its only non-trivial covering space is the hypersphere S3, which is the group Spin(3) and is represented by the unit quaternions. Thus quaternions are a preferred method for representing spatial rotations – see quaternions and spatial rotation.
However, it is often desirable to represent rotations by a set of three numbers, known as Euler angles (in numerous variants), both because this is conceptually simpler for someone familiar with planar rotation, and because one can build a combination of three gimbals to produce rotations in three dimensions. Topologically this corresponds to a map from the 3-torus T3 of three angles to the real projective space RP3 of rotations, and the resulting map has imperfections because it cannot be a covering map. Specifically, the failure of the map to be a local homeomorphism at certain points is referred to as gimbal lock, and is demonstrated in the animation at the right – at some points (when the axes are coplanar) the rank of the map is 2, rather than 3, meaning that only 2 dimensions of rotations can be realized from that point by changing the angles. This causes problems in applications, and is formalized by the notion of a covering space.
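The rank drop at gimbal lock can be observed numerically (a sketch, assuming the common Z-Y-X Euler convention R = Rz(a)·Ry(b)·Rx(c); the finite-difference step and sample angles are arbitrary). Away from gimbal lock the Jacobian of the angles-to-rotation map has rank 3; at b = π/2 it drops to 2:

```python
import numpy as np

def rot(a, b, c):
    """Rotation matrix from Euler angles, Z-Y-X convention (an assumed, common choice)."""
    ca, sa = np.cos(a), np.sin(a)
    cb, sb = np.cos(b), np.sin(b)
    cc, sc = np.cos(c), np.sin(c)
    Rz = np.array([[ca, -sa, 0], [sa, ca, 0], [0, 0, 1]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rx = np.array([[1, 0, 0], [0, cc, -sc], [0, sc, cc]])
    return Rz @ Ry @ Rx

def jacobian_rank(angles, h=1e-6):
    """Numerical rank of the 9x3 Jacobian of rot at the given angles."""
    base = rot(*angles).ravel()
    cols = []
    for i in range(3):
        d = list(angles)
        d[i] += h
        cols.append((rot(*d).ravel() - base) / h)  # forward difference in angle i
    J = np.column_stack(cols)
    return np.linalg.matrix_rank(J, tol=1e-3)

print(jacobian_rank((0.3, 0.5, 0.8)))        # generic point: rank 3
print(jacobian_rank((0.3, np.pi / 2, 0.8)))  # gimbal lock (axes coplanar): rank 2
```

At b = π/2 the rotation depends only on the difference of the other two angles, so the columns for a and c become linearly dependent, which is exactly the rank-2 behavior described above.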
== See also ==
Bethe lattice is the universal cover of a Cayley graph
Covering graph, a covering space for an undirected graph, and its special case the bipartite double cover
Covering group
Galois connection
Quotient space (topology)
== Literature ==
Hatcher, Allen (2002). Algebraic topology. Cambridge: Cambridge University Press. ISBN 0-521-79160-X. OCLC 45420394.
Forster, Otto (1981). Lectures on Riemann surfaces. New York. ISBN 0-387-90617-7. OCLC 7596520.{{cite book}}: CS1 maint: location missing publisher (link)
Munkres, James R. (2018). Topology. New York, NY. ISBN 978-0-13-468951-7. OCLC 964502066.{{cite book}}: CS1 maint: location missing publisher (link)
Kühnel, Wolfgang (2011). Matrizen und Lie-Gruppen Eine geometrische Einführung (in German). Wiesbaden: Vieweg+Teubner Verlag. doi:10.1007/978-3-8348-9905-7. ISBN 978-3-8348-9905-7. OCLC 706962685.
== References == | Wikipedia/Deck_transformation |
In photography and cinematography, a filter is a camera accessory consisting of an optical filter that can be inserted into the optical path. The filter can be of a square or oblong shape and mounted in a holder accessory, or, more commonly, a glass or plastic disk in a metal or plastic ring frame, which can be screwed into the front of or clipped onto the camera lens.
Filters modify the images recorded. Sometimes they are used to make only subtle changes to images; other times the image would simply not be possible without them. In monochrome photography, coloured filters affect the relative brightness of different colours; red lipstick may be rendered as anything from almost white to almost black with different filters. Others change the colour balance of images, so that photographs under incandescent lighting show colours as they are perceived, rather than with a reddish tinge. There are filters that distort the image in a desired way, diffusing an otherwise sharp image, adding a starry effect, etc. Linear and circular polarising filters reduce oblique reflections from non-metallic surfaces.
== Overview ==
Many filters absorb part of the light available, necessitating longer exposure. As the filter is in the optical path, any imperfections – non-flat or non-parallel surfaces, reflections (minimised by optical coating), scratches, dirt – affect the image.
In digital photography the majority of filters used with film cameras have been rendered redundant by digital filters applied either in-camera or during post processing. Exceptions include the ultraviolet (UV) filter typically used to protect the front surface of the lens, the neutral density (ND) filter, the polarising filter, color-enhancing filters, and the infrared (IR) filter. The neutral density filter permits effects requiring wide apertures or long exposures to be applied to brightly lit scenes, while the graduated neutral density filter is useful in situations where the scene's dynamic range exceeds the capability of the sensor. Not using optical filters in front of the lens has the advantage of avoiding the reduction of image quality caused by the presence of an extra optical element in the light path and may be necessary to avoid vignetting when using wide-angle lenses.
=== Nomenclature ===
There is no universal or reliably standard naming or labelling system for filters. The Wratten numbers adopted in the early twentieth century by Kodak, then a dominant force in film photography, are used by several manufacturers, including B+W,: 18–21 but the actual spectral characteristics of a filter may vary by manufacturer, despite having the same Wratten number. In addition, the Wratten numbers are sometimes used interchangeably with alternative names; for example, the Wratten filter number 6 is also named K1, while #11 is also named X1.: 22
Some manufacturers use a combination of Wratten numbers and wavelengths to identify filters. For example, Nikon offers four UV / skylight filters: L1A, L1B, L37, and L39; the L1A and L1B correspond to Wratten numbers 1A and 1B, while L37 and L39 include the wavelength cutoffs of 370 nm and 390 nm, respectively. Colored filters used to enhance contrast for black and white photography include a letter (Y, O, or R) and a similar wavelength cutoff: for example, R60 is a red filter with a step-like transmission function at 600 nm. For other filters, the alternate Wratten name is used (for example, X0 and X1 for green filters).
Many colour correction filters are identified by a code of the form CCaab, for example, CC50Y:: 38–39, 49
CC = type (for colour correction)
aa = strength or density of the filter (50 = 50%)
b = color (in this case, Y for yellow)
While the same information may be present, the specific sequence of colour and density may vary by manufacturer.: 22–23
== Scientific uses ==
Optical filters are used in various areas of science, including in particular astronomy; photographic filters are roughly the same as "optical" filters, but in practice optical filters often need far more accurately controlled optical properties and precisely defined transmission curves than filters only made for general photography. Photographic filters sell in larger quantities, at correspondingly lower prices, than many laboratory filters. The article on optical filters has information relevant to photographic filters, particularly special-purpose photographic filters like color enhancing filters and high-quality photographic filters, like sharp cut-off UV filters.
== Photographic uses ==
Filters in photography can be classified according to their visible color and use:
Colorless / Neutral
Clear and ultraviolet
Infrared
Polarizing
Neutral density, including the graduated neutral density filter and solar filter
Color
Color conversion (or color balance)
Color correction
Color separation, also called color subtraction
Contrast enhancement
Special effects of various kinds, including
Graduated color, called color grads
Cross screen and star diffractors
Diffusion and contrast reduction
Close-up or macro diopters, and split diopters or split focus
Multi-image
Spot
=== Colorless / Neutral ===
==== Clear and ultraviolet ====
Clear filters, also known as window glass filters or optical flats, are transparent and (ideally) perform no filtering of incoming light. The only use of a clear filter is to protect the front of a lens.
Clear glass will absorb some UV.
UV filters are used to block invisible ultraviolet light, to which most photographic sensors and film are at least slightly sensitive. The UV is typically recorded as if it were blue light, so this non-human UV sensitivity can result in an unwanted exaggeration of the bluish tint of atmospheric haze or, even more unnaturally, of subjects in open shade lit by the ultraviolet-rich sky.
Normally, the glass or plastic of a camera lens is practically opaque to short-wavelength UV, but transparent to long-wavelength (near-visible) UV. A UV filter passes all or nearly all of the visible spectrum but blocks virtually all ultraviolet radiation. (Most spectral manipulation filters are named for the radiation they pass; green and infrared filters pass their named colors, but a UV filter blocks UV.) It can be left on the lens for nearly all shots: UV filters are often used mainly for lens protection in the same way as clear filters. A strong UV filter, such as a Haze-2A or UV17, cuts off some visible light in the violet part of the spectrum, and has a pale yellow color; these strong filters are more effective at cutting haze, reduce purple fringing in digital cameras, and can subtly darken pale blue skies – which improves contrast between sky and clouds. Strong UV filters are also sometimes used for warming color photos taken in shade with daylight-type film. They were originally developed to increase contrast in airborne surveillance photography, and were adopted by mountaineering photographers to remedy the strong UV at high altitude.
While in certain cases, such as harsh environments, a protection filter may be necessary, there are also downsides to this practice. Arguments for the use of protection filters include:
If the lens is dropped, the filter may well suffer scratches or breakage instead of the front lens element.
The filter can be cleaned frequently without damage to the lens surface or coatings; a filter scratched by cleaning is much less expensive to replace than a lens.
If there is blowing sand the filter will protect the front of the lens from abrasion and nicks.
A few lenses, such as some of Canon's L series lenses, require the use of a filter to complete their weather sealing.
Arguments against their use include:
Adding another element may degrade image quality if its surfaces are less than perfectly flat and parallel. Filters from reputable makers are very unlikely to cause any problems, but some "bargain" products are optically inferior.
The two additional reflections at air-glass interfaces inevitably result in some light loss – at least four percent at each interface, if the surfaces are uncoated; they also increase the potential for lens flare problems.
Low-quality filters may cause problems with autofocus.
A filter may be incompatible with the use of a lens hood, since not all filters have the required threading for a screw-in hood or will allow a clip-on hood to be attached. Adding a lens hood on top of one or more filters may space the hood away from the lens enough to cause some vignetting.
There is a wide variation in the spectral UV blocking by filters described as ultraviolet.
==== Infrared ====
Unlike ultraviolet filters, which are suitable for general photography as they are designed to attenuate shorter ultraviolet wavelengths and pass visible wavelengths, filters for infrared photography are designed to block portions of the visible spectrum while passing longer wavelengths of light in the infrared spectrum, and hence they may appear dark red to black in color.
Historically, the Wratten number has been used to describe the spectral absorption characteristics of filters used with infrared photography.: 28–29 : 64–65 Common types include filters in the Wratten #87, 88, and 89 series; since Wratten numbers were assigned sequentially, there is no consistent logic (for instance, the #89B filter has a transition wavelength, where the filter achieves 50% transmittance, at approximately 720 nm, while #87 has its transition wavelength at approximately 795 nm). Because black-and-white infrared film retains significant sensitivity to blue wavelengths, red and orange filters are sometimes used to decrease contrast.
Other manufacturers may embed the transition wavelength in the name of the filter. For example, the Hoya R72 (720 nm) and RM90 (900 nm) are intended for infrared photography, corresponding to Wratten No. 89B and 87B, respectively.: 62 For use with color infrared film, some manufacturers advise filters which restrict blue and green visible wavelengths, but pass most of the red spectrum, with a transition wavelength around 550 nm.: 28–29
==== Polarizer ====
A polarizing filter, used for both color and black-and-white photography, is colourless and does not affect colour balance, but filters out light with a particular direction of polarisation. This reduces oblique reflections from non-metallic surfaces, can darken the sky in colour photography (in monochrome photography colour filters are more effective), and can saturate the image more by eliminating unwanted reflections.
Linear polarising filters, while effective, can interfere with metering and auto-focus mechanisms when mirrors or beam-splitters are in the light path, as in the digital single lens reflex camera; a circular polarizer is also effective, and does not affect metering or auto-focus.
==== Neutral density ====
A neutral density filter (ND filter) is a filter of uniform density which attenuates light of all colors equally. It is used to allow a longer exposure (to create blur) or larger aperture (for selective focus) than otherwise required for correct exposure in the prevailing light conditions, without changing the tonal balance of the photograph.
A graduated neutral density filter is a neutral density filter with different attenuation at different points, typically clear in one half shading into a higher density in the other. It can be used, for example, to photograph a scene with part in deep shadow and part brightly lit, where otherwise either the shadows would have no detail or the highlights would be burnt out.: 50–51
=== Color filters ===
==== Color conversion ====
Appropriate color conversion filters are used to compensate for colour casts caused by lighting not balanced for the film stock's rated color temperature, which is usually 3200–3400 K for use with professional incandescent light sources and 5500–5700 K for daylight. Color conversion filters attenuate a range of visible wavelengths to shift the perceived color temperature.: 7 : 61–62 The need for these filters has been greatly reduced by the widespread adoption of digital photography, since color balance may be corrected with camera settings as the image is captured, or by software manipulation afterwards.
These color conversion filters are identified by non-standardised numbers which vary from manufacturer to manufacturer. Many filter manufacturers use the Wratten number or make reference to it. The Wratten numbers were assigned sequentially as applications were created (80x and 82x for blue cooling filters, 81x and 85x for amber warming filters), so there is no systematic logic that ties the number to its effect: for example, the 80A filter has the strongest "cooling" effect, followed by the 80B, and both are stronger than the 82C, which is stronger in turn than the 82B. The 80/85 series are regarded as "color conversion" filters, while the corresponding 82/81 series are "light balancing filters" which generally have a weaker effect than the 80/85 series.: 35–36 Typically, the 80A blue filter used with film for daylight use corrects the perceived orange/reddish cast of incandescent photographic photoflood lights, and significantly improves the stronger cast produced by lower-temperature household incandescent lighting, while the 85B amber filter will correct the bluish cast of daylight photographs on tungsten film.: 4
To avoid confusion and to assist photographers in selecting the appropriate filter, some manufacturers, including B+W, Rodenstock, and Hoya, include or use the mired shift to name their filters, which quantifies the effect of a color conversion filter. The mired value associated with a given color temperature is computed as the reciprocal of the color temperature, in kelvin, multiplied by 10^6:
{\displaystyle M={\frac {10^{6}}{T}}}
The shift is the difference in the mired values of the film and light source. Sometimes the decamired is used, where 10 mired = 1 decamired, as a shift of about 10 mired is the smallest perceptible color temperature change.
{\displaystyle \Delta M=M_{film}-M_{light}={\frac {10^{6}}{T_{film}}}-{\frac {10^{6}}{T_{light}}}}
From the equation, when the film has a higher color temperature than the light source, a negative mired shift is required, which calls for a "cooling" filter; these have a perceptible blue color, and the more saturated the color, the stronger the cooling effect. Likewise, when the film has a lower color temperature than the light source, a positive mired shift is required, which calls for an amber "warming" filter.
Stacking color conversion filters creates an additive mired shift: for example, stacking a Wratten 80A (−130 mired) with a Wratten 82C (−60 mired) results in a total mired shift of −190. A typical set of color conversion filters has a geometric sequence, e.g. ±15, ±30, ±60, and ±120 mired, which corresponds approximately to the pattern of the Wratten filters, and allows intermediate values to be obtained by stacking.
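Since the mired calculation is simple arithmetic, it can be sketched in a few lines of Python; the function names below are illustrative only, not drawn from any photographic library:

```python
def mired(kelvin):
    """Mired value of a color temperature: 10^6 divided by T in kelvin."""
    return 1_000_000 / kelvin

def mired_shift(film_kelvin, light_kelvin):
    """Filter shift needed: a negative result calls for a blue "cooling"
    filter, a positive result for an amber "warming" filter."""
    return mired(film_kelvin) - mired(light_kelvin)

# Daylight-balanced film (5500 K) under tungsten photoflood light (3200 K):
print(round(mired_shift(5500, 3200)))  # -131, close to a Wratten 80A (-130 mired)
```

Because stacked filters simply add their mired values, the same function predicts the effect of a combination such as an 80A plus an 82C.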
==== Color correction ====
Color conversion and light balancing (LB) filters must be distinguished from color correction filters (CC filters), which filter out a particular color cast that may have various causes, including reflections from colored surfaces, fluorescent lighting (which has an unbalanced spectrum), underwater photography, or the Schwarzschild effect (also known as reciprocity failure).
In general, CC filters are supplied in densities varying between 5 and 50% in primary colors, both additive (red, green, and blue) and subtractive (cyan, magenta, and yellow). They may be used for graphic effect or to compensate for differences in color balance between film batches for critical work. Fluorescent filters generally have a magenta hue, selectively absorbing excessive green light, and have a name which includes the letters FL, such as FL-D for use with daylight balanced film.
==== Color subtraction ====
Color subtraction filters work by absorbing certain colors of light, letting the remaining colors through. They can be used to demonstrate the primary colors that make up an image. They are perhaps most frequently used in the printing industry for color separations, and again, use has diminished as digital solutions have become more advanced and abundant.
Didymium filters, sold as "color enhancement" or "fall color" filters, act similarly: they remove a narrow (or broad) band of color in the yellow part of the spectrum (589 nm). Some astronomical filters similarly use didymium in heavier concentration, and even astronomical filters that do not use didymium are typically some kind of narrow pass-band color filter.
==== Contrast enhancement ====
Colored filters are commonly used in black and white photography to alter the effect of different colors in the scene, changing the contrast with which the different colors are recorded. The standard rule of thumb is that a colored filter will selectively lighten its own color while darkening other colors, especially the complementary color, since the filter passes its own color while attenuating others.
For example, a yellow filter or, more dramatically, an orange or red filter, will enhance the contrast between clouds and sky by darkening the blue sky while leaving the clouds bright (after exposure compensation). A deep green filter will also darken the sky, and additionally lighten green foliage, making it stand out against the sky. Light yellowish-green filters were used as standard portrait filters for panchromatic film, since they render skin-tones as light to dark grey, while darkening deep reds and blues to nearly black.
A sky-blue filter (cyan) mimics the effect of older orthochromatic film – or with a "true blue" filter, even older film only sensitive to blue light – rendering blue as light and red and green as dark, showing blue skies the same as overcast, with no contrast between sky and clouds, darkening blond hair, making blue eyes nearly white, and red lips nearly black.
Diffusion filters have the opposite, contrast-reducing effect; in addition they "soften" focus, making small blemishes invisible.
=== Special effects ===
==== Cross ====
A cross screen filter, also known as a star filter, creates a star pattern, in which lines radiate outward from bright objects. The star pattern is generated by a very fine diffraction grating embedded in the filter, or sometimes by the use of prisms in the filter. The number of stars varies by the construction of the filter, as does the number of points each star has. The pattern of the diffraction grating can affect the shape of the resulting highlights as well.
==== Diffusion ====
A diffusion filter (also called a softening filter) softens subjects and generates a dreamy haze (see photon diffusion). This is most often used for portraits, providing an effect similar to that of a dedicated soft focus lens. It also has the effect of reducing contrast, and the filters are designed, labeled, sold, and used for that purpose too. There are many ways of accomplishing this effect, and thus filters from different manufacturers vary significantly. The two primary approaches are to use some form of grid or netting in the filter, or to use something which is transparent but not optically sharp.
Both effects can be achieved in software, which can in principle provide a very precise degree of control of the level of effect, however the "look" may be noticeably different. If there is too much contrast in a scene, the dynamic range of the digital image sensor or film may be exceeded, which post-processing cannot compensate for, so contrast reduction at the time of image capture may be called for.
==== Close-up and split diopter lenses ====
A close-up lens is not technically a filter but an accessory lens that attaches to a lens like a filter, hence the alternative but misleading term "close-up filter". They are often sold by filter manufacturers as part of their product lines, using the same holders and attachment systems. A close-up lens is a single or two-element converging lens used for close-up and macro photography, and works in the same way as spectacles used for reading. The insertion of a converging lens in front of the taking lens reduces the focal length of the combination.
Close-up lenses are usually specified by their optical power, the reciprocal of the focal length in meters. Several close-up lenses may be used in combination; the optical power of the combination is the sum of the optical powers of the component lenses; a set of lenses of +1, +2, and +4 diopters can be combined to provide a range from +1 to +7 in steps of one.
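Because the powers of stacked thin lenses in contact simply add, and power is the reciprocal of focal length in metres, the combinations are easy to compute; this short Python sketch uses illustrative names only:

```python
def combined_power(*diopters):
    """Optical power of thin lenses in contact is the sum of their powers."""
    return sum(diopters)

def focal_length_mm(power_diopters):
    """Focal length in millimetres from power in diopters (1/f in metres)."""
    return 1000 / power_diopters

print(combined_power(1, 2, 4))       # 7 diopters from a +1/+2/+4 set
print(round(focal_length_mm(7), 1))  # 142.9 mm
```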
A split diopter has just a semicircular half of a close-up lens in a normal filter holder. It can be used to photograph a close object and a much more distant background, with everything in sharp focus; with any non-split lens the depth of field would be far too shallow.
==== Multi-image ====
A multi-image filter, sometimes called multiple image or kaleidoscopic, uses a faceted lens which generally repeats the central subject one or more times in the periphery; the images may be repeated with a radial or parallel layout.
== Physical design ==
=== Materials and construction ===
Photo filters are commonly made from glass, resin plastics similar to those used for eyeglasses (such as CR-39), polyester and polycarbonate; sometimes acetate is used. Historically, filters were often made from gelatin, as color gels still are. While some filters are still described as gelatin or gel filters, they are no longer actually made from gelatin but from one of the plastics mentioned above.
Sometimes the filter is dyed in the mass, in other cases the filter is a thin sheet of material sandwiched between two pieces of clear glass or plastic.
Certain kinds of filters use other materials inside a glass sandwich; for example, polarizers often use various special films, netting filters have nylon netting, and so forth.
The rings on screw-on filters are often made of aluminum, though brass is used in more expensive filters. Aluminum filter rings are much lighter in weight, but can "bind" to the aluminum lens threads they are screwed into, requiring the use of a filter wrench to get the filter off the lens. Aluminum also dents or deforms more easily.
High quality filters are multi-coated, with multiple-layer optical coatings to reduce reflections. Uncoated filters can reflect up to 12% of the light, single-coated filters can reduce this considerably, and multi-coated filters can allow up to 99.8% of the light to pass through (0.2% unwanted reflection); the loss of light itself is not important, but part of the light is reflected inside the camera, producing flare and reducing the contrast of the image.
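Transmission losses multiply when filters are stacked, which is why coating quality matters more as filters are added. A quick sketch using the figures quoted above (illustrative arithmetic, not measured data):

```python
def stack_transmission(per_filter_transmission, count):
    """Fraction of incident light that survives `count` identical filters
    in series; each filter passes the same fraction of what reaches it."""
    return per_filter_transmission ** count

# Three uncoated filters (~88% each) versus three multi-coated ones (~99.8% each):
print(round(stack_transmission(0.88, 3), 3))   # 0.681 -- nearly a third of the light lost
print(round(stack_transmission(0.998, 3), 3))  # 0.994
```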
=== Filter sizes and mountings ===
Manufacturers of lenses and filters have standardized on several different sets of sizes over the years.
==== Threaded round filters ====
The most common standard filter sizes for circular filters include 30.5 mm, 35.5 mm, 37 mm, 39 mm, 40.5 mm, 43 mm, 46 mm, 49 mm, 52 mm, 55 mm, 58 mm, 62 mm, 67 mm, 72 mm, 77 mm, 82 mm, 86 mm, 95 mm, 105 mm, 112 mm, 122 mm and 127 mm. Filter diameters increase in steady steps of 3 mm from 43 to 58 mm and of 5 mm from 62 to 82 mm. Other filter sizes within this range may be hard to find since the filter size may be non-standard or may be rarely used on camera lenses. The specified diameter of the filter in millimeters indicates the diameter of the male threads on the filter housing. The thread pitch is 0.5 mm, 0.75 mm or 1.0 mm, depending on the ring size, and a few sizes (e.g. 30.5 mm) come in more than one pitch. Most filters have a 0.75 mm thread pitch, though some manufacturers use a 1.0 mm pitch; a filter cannot be mounted on a lens with a different thread pitch.
The filter diameter for a particular lens is commonly identified on the lens face by the ⌀ symbol. For example, a lens marking may indicate: “⌀55 mm” or “55⌀” meaning it would accept a 55 mm filter or lens hood.
==== Square filters ====
For square filters, 2" × 2", 3" × 3" and 4" × 4" were historically very common and are still made by some manufacturers. 100 mm × 100 mm is very close to 4" × 4", allowing use of many of the same holders, and is one of the more popular sizes currently (2006) in use; it is virtually a standard in the motion picture industry. 75 mm × 75 mm is very close to 3" × 3" and, while less common today, was much in vogue in the 1990s.
The French manufacturer Cokin makes a wide range of filters and holders in three sizes, collectively known as the Cokin System. "A" (amateur) size is 67 mm wide, "P" (professional) size is 84 mm wide, and "X Pro" is 130 mm wide. Many other manufacturers make filters to fit Cokin holders. Cokin also makes a filter holder for 100 mm filters, which they call the "Z" size. Most of Cokin's filters are made of optical resins such as CR-39. A few round filter elements may be attached to the square/rectangular filter holders, usually polarizers and gradient filters, which both need to be rotated and are more expensive to manufacture.
Cokin formerly (1980s through mid-1990s) had competition from Hoya's 'Hoyarex' system (75 mm × 75 mm filters mostly made from resin) and also a range made by Ambico, but both have withdrawn from the market. A small (84 mm) "system" range is still made (as of 2012) by Formatt Hitech. In general, square (and sometimes rectangular) filters from one system could be used in another system's holders if the size was correct, but each maker produced a different system of filter holder, and the holders could not be used together. Lee, Tiffen, Formatt Hitech and Singh Ray also make square/rectangular filters in the 100 mm × 100 mm and Cokin "P" sizes.
Gel filters are very common in square form, rarely being used in circular form. These are thin flexible sheets of gelatin or plastic which must be held in rigid frames to prevent them from sagging. Gels are made not only for use as photo filters, but also in a wide range of colors for use in lighting applications, particularly for theatrical lighting. Gel holders are available from all of the square "system" makers, but are additionally provided by many camera manufacturers, by manufacturers of gel filters, and by makers of expensive professional camera accessories (particularly those manufacturers which target the movie and television camera markets).
Square filter systems often have lens shades available to attach to the filter holders.
==== Rectangular filters ====
Graduated filters of a given width (67 mm, 84 mm, 100 mm, etc.) are often made oblong, rather than square, in order to allow the position of the gradation to be moved up or down in the picture. This allows, for example, the red part of a sunset filter to be placed at the horizon. These are used with the "system" holders described above.
==== Bayonet round filters ====
Certain manufacturers, most notably Rollei and Hasselblad, have created their own systems of bayonet mount for filters. Each design comes in several sizes, such as Bay I through Bay VIII for Rollei, and Bay 50 through Bay 104 for Hasselblad.
==== Series filters ====
Starting in the 1930s, filters were also made in a sizing system known as a Series mount. The Series filters are round pieces of glass (or occasionally other materials) with no threads. Very early Series filters had no rims around the glass, but the more common later production Series filters had the glass mounted in metal rims. The Series size designations are generally written as Roman numerals, I to IX, though there are a few sizes not written that way, such as Series 4.5 and Series 5.5.
Most Series filter sizes are now obsolete, production having ceased by the late 1970s. However, Series 9 (IX) became a standard of the motion picture industry and Series 9 filters are still produced and sold today, particularly for professional motion picture cinematography.
To mount Series filters on a camera lens, first an appropriate adapter is mounted to the lens, either by threading onto the lens, pushing into the lens, or clamping on to the lens barrel. Then the filter is placed in the adapter, and finally, a retaining ring is threaded into the adapter to secure the filter. In some cases, additional accessories, such as a lens hood or a second filter, can be accommodated in the adapter, or the hood itself may act as the retaining ring. Lenses designed for Series filters have a suitable adapter built-in to the front, and generally require only a retaining ring.
== See also ==
Color gel
List of photographic equipment makers
Optical filter
== Footnotes ==
== References ==
== External links ==
Photography Filters
UV filters test - Description of the results and summary - Lenstip.com
Polarizing filters test - Results and summary - Lenstip.com
Analysis of Camera Filters | Camera Filters.biz
"Forced Perspective" is the tenth episode of the fourth season of the Fox science-fiction drama television series Fringe, and the series' 75th episode overall.
The episode was written by Ethan Gross and directed by David Solomon.
== Plot ==
Olivia (Anna Torv) contemplates the warning from the bald man—known to the viewer as the Observer September (Michael Cerveris)—about how she appears destined to be killed. Broyles (Lance Reddick) cautions her about taking unnecessary risks until they learn more about this man, but Olivia agrees to continue to perform her job.
A man is killed when a girder from a nearby construction site accidentally falls and impales him. The Fringe division is called in when they find the man was given a piece of paper from a young girl (Alexis Raich) moments before the accident, a drawing of his death in perfect detail. The Fringe team uses nearby security footage to determine the identity of the girl, Emily Mallum. They approach her father, Jim (Currie Graham), who initially lies about Emily, but eventually lets them in. Jim is aware that Emily has a gift for seeing the future, and he has been moving his family across country and changing their identities, trying to stay ahead of people who he believes are agents of Massive Dynamic using black panel vans, trying to take Emily and experiment on her ability to see the future. Jim refuses to allow the Fringe team to help out any further, but Olivia leaves him her card. Later, Olivia talks to Nina Sharp (Blair Brown) about Emily, who admits that Massive Dynamic has an interest in the girl, but only to study her abilities. Olivia begins to compare Emily's case to her own as a child in the Cortexiphan experiments, but is interrupted by a call from Emily, who wants to meet privately.
At a park bench near a lake, Emily shows Olivia her latest picture that she drew after encountering a man on a bus but wasn't able to hand to him: a pile of dead bodies. She explains her "gift", that when she is near someone that will die she gets flashes of their death in her mind. These visions have never failed to come to fruition, and she worries for the apparent death toll in this latest drawing. The drawing does not give enough information to guess where it may occur, so Olivia takes Emily to Walter's (John Noble) lab, where Walter believes that Emily's brain is picking up on the vibrations of traumatic events as they flow backwards in time. After obtaining Jim's approval, they hook Emily up to Walter's equipment, to allow her to explore her own mind under hypnosis. Within her vision of the forthcoming event, Emily recognizes that it is a result of an explosion, and enough of a sign to pinpoint the location, a nearby courthouse. The team is also able to identify the man aboard the bus, Albert Duncan. They conclude Duncan is about to blow up the courthouse. The team races there to discover Duncan targeting a judge that ruled against him in a child custody case, ruining his life. With Peter's (Joshua Jackson) help, the radio-controlled bomb is disabled, but Duncan further reveals he has a bomb strapped to himself. Olivia is able to talk him out of detonating it, saving everyone's lives, and taking Duncan into custody.
As they close the case, Olivia contacts Emily to pass on thanks, but gets Jim instead. Jim finds Emily missing from her bedroom, and the Fringe team sets off to follow a black van that Jim had spotted believing Emily was kidnapped; it turns out this was only a dry cleaning delivery van. Olivia quickly realizes where Emily has gone and directs Jim to meet her at the park bench by the lake. Emily is there in the bitter cold, and Jim sits down next to her coaxing her to get help, but she refuses. Olivia spots Emily's most recent drawing, of her and her father on the bench with her watching, and realizes that Emily is dying. Jim holds onto Emily as she dies from an overload of electrical activity in her brain caused by her ability.
Meanwhile, Walter and Peter continue to bond as they work to understand the principles of the Machine to allow Peter to return to his original timeline. Peter later explains to Olivia who the Observers are and their ability to be aware of the consequences of time. That evening, Nina drops by to visit Olivia, and upon hearing that she is still suffering from migraines at night, promises to send her new medicine that may help her. At exactly the same moment, outside of her home, an Observer is watching.
== Production ==
"Forced Perspective" was written by executive story editor Ethan Gross and directed by Buffy the Vampire Slayer veteran David Solomon.
== Reception ==
=== Ratings ===
"Forced Perspective" was first broadcast on January 27, 2012 in the United States on Fox. An estimated 3.33 million viewers watched the episode, an increase in overall viewership from the previous episode and the series' highest ratings of the season since the premiere.
=== Reviews ===
The A.V. Club gave the episode a B and called it "a solid case-of-the-week." The Los Angeles Times found Emily's story unengaging and called the episode "a textbook example of anticlimactic."
== See also ==
Speculative fiction portal
Television portal
== References ==
== External links ==
"Forced Perspective" Archived 2012-01-24 at the Wayback Machine at Fox.com
"Forced Perspective" at IMDb
Forced Perspective: The Art and Life of Derek Hess is a 2015 documentary about American artist Derek Hess. The film tells the story of Hess's life and how his struggles with alcoholism and bipolar disorder have shaped his life and career.
== Accolades ==
Forced Perspective has taken home multiple awards from several film festivals including:
Local Heroes Award - 2015 Cleveland International Film Festival
Best Cinematography Award - 2015 Beverly Hills Film Festival
Best Art Documentary - 2015 Atlanta International Documentary Film Festival
Platinum Award Winner - 2015 Spotlight Awards
Best Documentary Award - 2015 Kingston Film Festival
Best Feature Film Award - 2015 Reel Indie Film Fest in Toronto.
Forced Perspective was also an official selection for "Excellence in title design" at the 2015 SXSW film festival, and was an official selection at the 2015 Indy Film Fest and 2015 Blue Whiskey Independent Film Festival.
== References ==
Computer-generated imagery (CGI) is a specific technology or application of computer graphics for creating or improving images in art, printed media, simulators, videos and video games. These images are either static (i.e. still images) or dynamic (i.e. moving images). CGI both refers to 2D computer graphics and (more frequently) 3D computer graphics with the purpose of designing characters, virtual worlds, or scenes and special effects (in films, television programs, commercials, etc.). The application of CGI for creating/improving animations is called computer animation, or CGI animation.
== History ==
The first feature film to use CGI, as well as the first to composite live-action footage with CGI, was Vertigo (1958), which used abstract computer graphics by John Whitney in the opening credits of the film. The first feature film to make use of CGI with live action in the storyline of the film was the 1973 film Westworld. The first feature film to present a fully CGI character was the 1985 film Young Sherlock Holmes, showcasing a fully animated stained glass knight character. Other early films that incorporated CGI include Demon Seed (1977), Star Wars (1977), Tron (1982), Star Trek II: The Wrath of Khan (1982), Golgo 13: The Professional (1983), The Last Starfighter (1984), The Abyss (1989), Terminator 2: Judgment Day (1991), and Jurassic Park (1993). The first music video to use CGI was Will Powers' "Adventures in Success" (1983). In 1995, Pixar’s Toy Story became the first fully CGI feature film, marking a historic milestone for both animation and film-making.
Prior to CGI being prevalent in film, virtual reality, personal computing and gaming, one of the early practical applications of CGI was for aviation and military training, namely the flight simulator. Visual systems developed in flight simulators were also an important precursor to today's three-dimensional computer graphics and computer-generated imagery (CGI) systems, largely because the object of flight simulation was to reproduce on the ground the behavior of an aircraft in flight, and much of this reproduction had to do with believable visual synthesis that mimicked reality. The Link Digital Image Generator (DIG) by the Singer Company (Singer-Link) was considered one of the world's first generation CGI systems. It was a real-time, 3D-capable, day/dusk/night system that was used for NASA shuttles, F-111s, the Black Hawk and the B-52. Link's Digital Image Generator had an architecture designed to provide a visual system that realistically corresponded with the view of the pilot. The basic architecture of the DIG and subsequent improvements contained a scene manager followed by a geometric processor, video processor and display, with the end goal of a visual system that processed realistic texture, shading and translucency capabilities, free of aliasing.
Combined with the need to pair virtual synthesis with military level training requirements, CGI technologies applied in flight simulation were often years ahead of what would have been available in commercial computing or even in high budget film. Early CGI systems could depict only objects consisting of planar polygons. Advances in algorithms and electronics in flight simulator visual systems and CGI in the 1970s and 1980s influenced many technologies still used in modern CGI adding the ability to superimpose texture over the surfaces as well as transition imagery from one level of detail to the next one in a smooth manner.
The evolution of CGI led to the emergence of virtual cinematography in the 1990s, where the vision of the simulated camera is not constrained by the laws of physics. Availability of CGI software and increased computer speeds have allowed individual artists and small companies to produce professional-grade films, games, and fine art from their home computers.
== Static images and landscapes ==
Not only do animated images form part of computer-generated imagery; natural looking landscapes (such as fractal landscapes) are also generated via computer algorithms. A simple way to generate fractal surfaces is to use an extension of the triangular mesh method, relying on the construction of some special case of a de Rham curve, e.g., midpoint displacement. For instance, the algorithm may start with a large triangle, then recursively zoom in by dividing it into four smaller Sierpinski triangles, then interpolate the height of each point from its nearest neighbors. The creation of a Brownian surface may be achieved not only by adding noise as new nodes are created but by adding additional noise at multiple levels of the mesh. Thus a topographical map with varying levels of height can be created using relatively straightforward fractal algorithms. Some typical, easy-to-program fractals used in CGI are the plasma fractal and the more dramatic fault fractal.
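A one-dimensional analogue of the midpoint-displacement idea gives the flavor of these fractal algorithms. The article describes the triangular-mesh (surface) version; this simplified Python sketch, which generates a jagged height profile along a line, is illustrative only:

```python
import random

def midpoint_displacement(left, right, depth, roughness=0.5, seed=None):
    """Recursively split each segment at its midpoint and perturb the new
    point by noise whose amplitude halves at every subdivision level."""
    rng = random.Random(seed)
    heights = [left, right]
    scale = roughness
    for _ in range(depth):
        refined = []
        for a, b in zip(heights, heights[1:]):
            refined.append(a)
            refined.append((a + b) / 2 + rng.uniform(-scale, scale))
        refined.append(heights[-1])
        heights = refined
        scale /= 2  # finer detail gets smaller displacements
    return heights

profile = midpoint_displacement(0.0, 0.0, depth=6, seed=42)
print(len(profile))  # 65 points: 2**6 segments plus the two endpoints
```

The two-dimensional triangular-mesh and diamond-square variants apply the same displacement rule across a surface instead of a line, yielding the fractal landscapes described above.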
Many specific techniques have been researched and developed to produce highly focused computer-generated effects — e.g., the use of specific models to represent the chemical weathering of stones to model erosion and produce an "aged appearance" for a given stone-based surface.
== Architectural scenes ==
Modern architects use services from computer graphic firms to create 3-dimensional models for both customers and builders. These computer generated models can be more accurate than traditional drawings. Architectural animation (which provides animated movies of buildings, rather than interactive images) can also be used to see the possible relationship a building will have in relation to the environment and its surrounding buildings. The processing of architectural spaces without the use of paper and pencil tools is now a widely accepted practice with a number of computer-assisted architectural design systems.
Architectural modeling tools allow an architect to visualize a space and perform "walk-throughs" in an interactive manner, thus providing "interactive environments" both at the urban and building levels. Specific applications in architecture not only include the specification of building structures (such as walls and windows) and walk-throughs but the effects of light and how sunlight will affect a specific design at different times of the day.
Architectural modeling tools have now become increasingly internet-based. However, the quality of internet-based systems still lags behind sophisticated in-house modeling systems.
In some applications, computer-generated images are used to "reverse engineer" historical buildings. For instance, a computer-generated reconstruction of the monastery at Georgenthal in Germany was derived from the ruins of the monastery, yet provides the viewer with a "look and feel" of what the building would have looked like in its day.
== Anatomical models ==
Computer generated models used in skeletal animation are not always anatomically correct. However, organizations such as the Scientific Computing and Imaging Institute have developed anatomically correct computer-based models. Computer generated anatomical models can be used both for instructional and operational purposes. To date, a large body of artist produced medical images continue to be used by medical students, such as images by Frank H. Netter, e.g. Cardiac images. However, a number of online anatomical models are becoming available.
A single patient X-ray is not a computer generated image, even if digitized. However, in applications which involve CT scans a three-dimensional model is automatically produced from many single-slice x-rays, producing a "computer generated image". Applications involving magnetic resonance imaging also bring together a number of "snapshots" (in this case via magnetic pulses) to produce a composite, internal image.
In modern medical applications, patient-specific models are constructed in 'computer assisted surgery'. For instance, in total knee replacement, the construction of a detailed patient-specific model can be used to carefully plan the surgery. These three-dimensional models are usually extracted from multiple CT scans of the appropriate parts of the patient's own anatomy. Such models can also be used for planning aortic valve implantations, one of the common procedures for treating heart disease. Given that the shape, diameter, and position of the coronary openings can vary greatly from patient to patient, the extraction (from CT scans) of a model that closely resembles a patient's valve anatomy can be highly beneficial in planning the procedure.
== Cloth and skin images ==
Models of cloth generally fall into three groups:
The geometric-mechanical structure at yarn crossing
The mechanics of continuous elastic sheets
The geometric macroscopic features of cloth.
To date, making the clothing of a digital character automatically fold in a natural way remains a challenge for many animators.
In addition to their use in film, advertising and other modes of public display, computer generated images of clothing are now routinely used by top fashion design firms.
The challenge in rendering human skin images involves three levels of realism:
Photo realism in resembling real skin at the static level
Physical realism in resembling its movements
Function realism in resembling its response to actions.
The finest visible features such as fine wrinkles and skin pores are the size of about 100 μm or 0.1 millimetres. Skin can be modeled as a 7-dimensional bidirectional texture function (BTF) or a collection of bidirectional scattering distribution function (BSDF) over the target's surfaces.
When animating a texture like hair or fur for a computer generated model, individual base hairs are first created and later duplicated to demonstrate volume. The initial hairs are often different lengths and colors, to each cover several different sections of a model. This technique was notably used in Pixar’s Monsters, Inc. (2001) for the character Sulley, who had approximately 1,000 initial hairs generated that were later duplicated 2,800 times. The quantity of duplications can range from thousands to millions, depending on the level of detail sought after.
== Interactive simulation and visualization ==
Interactive visualization is the rendering of data that may vary dynamically, allowing a user to view the data from multiple perspectives. The application areas vary significantly, ranging from the visualization of flow patterns in fluid dynamics to specific computer-aided design applications. The data rendered may correspond to specific visual scenes that change as the user interacts with the system; simulators, such as flight simulators, make extensive use of CGI techniques for representing the world.
At the abstract level, an interactive visualization process involves a "data pipeline" in which the raw data is managed and filtered to a form that makes it suitable for rendering. This is often called the "visualization data". The visualization data is then mapped to a "visualization representation" that can be fed to a rendering system. This is usually called a "renderable representation". This representation is then rendered as a displayable image. As the user interacts with the system (e.g. by using joystick controls to change their position within the virtual world) the raw data is fed through the pipeline to create a new rendered image, often making real-time computational efficiency a key consideration in such applications.
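The pipeline just described can be sketched in a few lines. This is a schematic illustration only: the stage names follow the text, while the function names, data, and ASCII "rendering" are invented for the example:

```python
def filter_raw_data(raw):
    """Raw data -> "visualization data": keep only the samples of interest."""
    return [x for x in raw if x >= 0]

def map_to_representation(vis_data):
    """Visualization data -> "renderable representation" (points with intensities)."""
    peak = max(vis_data, default=1) or 1
    return [{"pos": i, "intensity": v / peak} for i, v in enumerate(vis_data)]

def render(representation):
    """Renderable representation -> displayable image (here, an ASCII strip)."""
    shades = " .:-=+*#"
    return "".join(
        shades[int(p["intensity"] * (len(shades) - 1))] for p in representation
    )

# Each user interaction (e.g. a joystick movement) re-feeds new raw data
# through the same pipeline to produce the next frame.
frame = render(map_to_representation(filter_raw_data([3, -1, 7, 2, 0, 5])))
print(frame)
```

In a real system the filtering and mapping stages are where most of the computation lives, which is why real-time efficiency dominates the design of such pipelines.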
== Computer animation ==
While computer-generated images of landscapes may be static, the term computer animation applies only to dynamic images that resemble a movie. In general, computer animation refers to dynamic images that do not allow user interaction, while the term virtual world is used for interactive animated environments.
Computer animation is essentially a digital successor to the art of stop-motion animation of 3D models and frame-by-frame animation of 2D illustrations. Computer-generated animations are more controllable than other, more physically based processes, such as constructing miniatures for effects shots or hiring extras for crowd scenes, and they allow the creation of images that would not be feasible using any other technology. Computer animation can also allow a single graphic artist to produce such content without the use of actors, expensive set pieces, or props.
To create the illusion of movement, an image is displayed on the computer screen and repeatedly replaced by a new image which is similar to the previous image, but advanced slightly in the time domain (usually at a rate of 24 or 30 frames/second). This technique is identical to how the illusion of movement is achieved with television and motion pictures.
== Text-to-image models ==
== Virtual worlds ==
A virtual world is an agent-based, simulated environment that allows users to interact with artificially animated characters (e.g. software agents) or with other physical users through the use of avatars. Virtual worlds are intended for their users to inhabit and interact in, and the term today has become largely synonymous with interactive 3D virtual environments, where users take the form of avatars visible to others graphically. These avatars are usually depicted as textual, two-dimensional, or three-dimensional graphical representations, although other forms are possible (auditory and touch sensations, for example). Some, but not all, virtual worlds allow for multiple users.
== In courtrooms ==
Computer-generated imagery has been used in courtrooms, primarily since the early 2000s, though some experts have argued that it is prejudicial. Such exhibits are used to help judges or the jury better visualize a sequence of events, evidence, or a hypothesis. However, a 1997 study showed that people are poor intuitive physicists and easily influenced by computer-generated images. Thus it is important that jurors and other legal decision-makers be made aware that such exhibits are merely a representation of one potential sequence of events.
== Broadcast and live events ==
Weather visualizations were the first application of CGI in television. One of the first companies to offer computer systems for generating weather graphics was ColorGraphics Weather Systems in 1979 with the "LiveLine", based around an Apple II computer, with later models from ColorGraphics using Cromemco computers fitted with their Dazzler video graphics card.
It has now become common in weather casting to display full motion video of images captured in real-time from multiple cameras and other imaging devices. Coupled with 3D graphics symbols and mapped to a common virtual geospatial model, these animated visualizations constitute the first true application of CGI to TV.
CGI has become common in sports telecasting. Sports and entertainment venues are provided with see-through and overlay content through tracked camera feeds for enhanced viewing by the audience. Examples include the yellow "first down" line seen in television broadcasts of American football games showing the line the offensive team must cross to receive a first down. CGI is also used in association with football and other sporting events to show commercial advertisements overlaid onto the view of the playing area. Sections of rugby fields and cricket pitches also display sponsored images. Swimming telecasts often add a line across the lanes to indicate the position of the current record holder as a race proceeds to allow viewers to compare the current race to the best performance. Other examples include hockey puck tracking and annotations of racing car performance and snooker ball trajectories. Sometimes CGI on TV with correct alignment to the real world has been referred to as augmented reality.
== Motion capture ==
Computer-generated imagery is often used in conjunction with motion capture to compensate for the shortcomings of CGI and animation alone. Computer-generated imagery is limited in its practical application by how realistic it can look. Unrealistic or badly managed computer-generated imagery can result in the uncanny valley effect. This effect refers to the human ability to recognize things that look eerily like humans but are slightly off. Because of the complex anatomy of the human body, computer-generated imagery alone often fails to replicate it perfectly, triggering this recognition. Artists can use motion capture to record a human performing an action and then replicate it with computer-generated imagery so that it looks natural.
In many instances, motion capture is needed to accurately mimic an actor's full body movements while slightly changing their appearance with de-aging. De-aging is a visual effect used to alter the appearance of an actor, often through facial scanning technologies, motion capture, and photo references. It is commonly used for flashback scenes and cameos to have an actor appear younger. Marvel's X-Men: The Last Stand was the first film to publicly incorporate de-aging, which was used on actors Patrick Stewart and Ian McKellen for flashback scenes featuring their characters at a younger age. The visual effects were done by the company Lola VFX, which used photos taken of the actors at a younger age as references to smooth out the wrinkles on their faces with CGI. Over time, de-aging technologies have advanced, with films such as Here (2024) portraying actors at younger ages through the use of digital AI techniques, scanning millions of facial features and incorporating a number of them onto actors' faces to alter their appearance.
The lack of anatomically correct digital models contributes to the necessity of motion capture as it is used with computer-generated imagery. Because computer-generated imagery reflects only the outside, or skin, of the object being rendered, it fails to capture the infinitesimally small interactions between interlocking muscle groups used in fine motor skills like speaking. The constant motion of the face as it makes sounds with shaped lips and tongue movement, along with the facial expressions that go along with speaking are difficult to replicate by hand. Motion capture can catch the underlying movement of facial muscles and better replicate the visual that goes along with the audio.
== See also ==
== References ==
=== Citations ===
=== Sources ===
== External links ==
A Critical History of Computer Graphics and Animation – a course page at Ohio State University that includes all the course materials and extensive supplementary materials (videos, articles, links).
CG101: A Computer Graphics Industry Reference (ISBN 073570046X) – unique and personal histories of early computer graphics production, plus a comprehensive foundation of the industry for all reading levels.
F/X Gods, by Anne Thompson, Wired, February 2005.
"History Gets A Computer Graphics Make-Over" Tayfun King, Click, BBC World News (2004-11-19)
NIH Visible Human Gallery
The Big Bang Theory is an American television sitcom created by Chuck Lorre and Bill Prady for CBS. It aired from September 24, 2007, to May 16, 2019, running for 12 seasons and 279 episodes.
The show originally centered on five characters living in Pasadena, California: Leonard Hofstadter (Johnny Galecki) and Sheldon Cooper (Jim Parsons), both physicists at Caltech, who share an apartment; Penny (Kaley Cuoco), a waitress and aspiring actress who lives across the hall; and Leonard and Sheldon's similarly geeky and socially awkward friends and coworkers, aerospace engineer Howard Wolowitz (Simon Helberg) and astrophysicist Raj Koothrappali (Kunal Nayyar). Over time, supporting characters were promoted to starring roles, including neuroscientist Amy Farrah Fowler (Mayim Bialik), microbiologist Bernadette Rostenkowski (Melissa Rauch), and comic book store owner Stuart Bloom (Kevin Sussman).
The show was filmed in front of a live audience and produced by Chuck Lorre Productions, with Warner Bros. Television handling distribution. It received mixed reviews throughout its first season, but reception was more favorable in the second and third seasons. Despite the early mixed reviews, seven of its seasons ranked within the top ten of the final season ratings, and it ultimately reached the No. 1 spot in its eleventh season. It was nominated for the Emmy Award for Outstanding Comedy Series from 2011 to 2014 and won the Emmy Award for Outstanding Lead Actor in a Comedy Series four times for Parsons, totaling seven Emmy Awards from 46 nominations. Parsons also won the Golden Globe for Best Actor in a Television Comedy Series in 2011.
The series' success launched a multimedia franchise. A prequel series based on Parsons' character Sheldon Cooper, Young Sheldon, aired from 2017 to 2024, with Parsons as the narrating adult Sheldon. The third series in the franchise, a sequel series to Young Sheldon titled Georgie & Mandy's First Marriage, premiered in October 2024 and follows Sheldon's older brother, Georgie, and his wife, Mandy. A fourth series, following Stuart, his girlfriend Denise, and geologist Bert Kibbler, is in development for Max.
== Plot ==
=== Seasons 1–4 ===
The series centers on the evolving relationships between socially awkward physicists Leonard Hofstadter and Sheldon Cooper, their neighbor Penny, and their friends Howard Wolowitz and Raj Koothrappali. The central romantic storyline begins when Leonard becomes immediately attracted to Penny, an aspiring actress and waitress who moves in across the hall. Throughout the first season, Leonard attempts various schemes to win Penny's affection while she dates a series of conventionally attractive but intellectually incompatible men.
The group's friendship dynamics are established as they navigate their shared interests in science fiction, comic books, and video games, often clashing with their limited social skills. Sheldon's rigid personality and numerous quirks create ongoing conflicts with his roommate Leonard, while Howard's inappropriate behavior toward women and his unhealthy relationship with his mother provide additional comedic tension. Raj's selective mutism around women becomes a recurring obstacle to his romantic pursuits.
Leonard and Penny's relationship experiences its first major development when Leonard returns from a three-month Arctic expedition in the season three premiere, leading to their first serious romantic relationship. However, the relationship becomes strained when Leonard prematurely declares his love for Penny, who cannot reciprocate the sentiment. Their subsequent breakup leads Leonard to pursue a relationship with Raj's sister Priya during much of season four, creating tension within the friend group and jealousy from Penny.
=== Seasons 5–8 ===
The series expands its core cast with the introduction of Amy Farrah Fowler, a neurobiologist matched with Sheldon through online dating, and Bernadette Rostenkowski, a microbiologist who begins dating Howard. These additions create new relationship dynamics and storylines while allowing for character development among the original cast members.
Sheldon and Amy's relationship develops slowly from a purely intellectual connection to a romantic partnership, with Sheldon gradually overcoming his aversion to physical contact and emotional intimacy. Their relationship is marked by formal agreements and scientific approaches to romance, reflecting both characters' analytical personalities.
Howard's relationship with Bernadette leads to significant character growth as he learns to become more mature and less dependent on his mother. Their relationship culminates in marriage during the fifth season finale, coinciding with Howard's departure for a space mission to the International Space Station.
Leonard and Penny reconcile and resume their romantic relationship, though it faces various challenges including Leonard's insecurities and Penny's career struggles. Leonard's multiple attempts to propose marriage are initially rejected by Penny, who feels unprepared for such commitment. The relationship dynamics continue to evolve as both characters mature and better understand each other's needs and perspectives.
=== Seasons 9–12 ===
The final seasons focus on the progression of established relationships toward marriage and long-term commitment. Leonard and Penny finally marry in the season nine premiere, though their wedding is preceded by Leonard's confession of infidelity during his Arctic expedition. Despite this obstacle, they successfully navigate married life and eventually move into their own apartment when Sheldon relocates.
Sheldon and Amy's relationship reaches several major milestones, including their first sexual encounter on Amy's birthday and eventual cohabitation. After a temporary breakup caused by Sheldon's fear of commitment, they reunite and become engaged. Their wedding in the season eleven finale represents the culmination of Sheldon's character development from an emotionally closed individual to someone capable of love and partnership.
The series concludes with the characters achieving personal and professional fulfillment. The final season reveals Penny's pregnancy, suggesting future family expansion for her and Leonard. The series finale focuses on Sheldon and Amy receiving the Nobel Prize in Physics, bringing the characters' scientific careers full circle while emphasizing the importance of their personal relationships and friendships that have sustained them throughout the series.
== Cast and characters ==
Johnny Galecki as Leonard Hofstadter: An experimental physicist with an IQ of 173, who received his Ph.D. when he was 24 years old. Leonard is a nerd who loves video games, comic books, and Dungeons & Dragons. Leonard is the straight man of the series, sharing an apartment in Pasadena, California, with Sheldon Cooper. Leonard is smitten with his new neighbor Penny when they first meet, and they eventually marry.
Jim Parsons as Sheldon Cooper: Originally from Galveston, Texas, Sheldon was a child prodigy with an eidetic memory who began college at the age of eleven and earned a Ph.D. at age sixteen. He is a theoretical physicist researching quantum mechanics and string theory, and, despite his IQ of 187, he finds many routine aspects of social situations difficult to grasp. He is determined to have his own way, continually boasts of his intelligence, and has an extremely ritualized way of living. Despite these quirks, he begins a relationship with Amy Farrah Fowler, and they eventually marry.
Kaley Cuoco as Penny: An aspiring actress from Omaha, Nebraska. Penny moves in across the hall from Sheldon and Leonard. She waits tables and occasionally tends the bar at The Cheesecake Factory. After giving up hope of becoming a successful actress, Penny becomes a pharmaceutical sales representative. Penny becomes friends with Bernadette and Amy, and they often hang out in each other's apartments. Penny and Leonard form a relationship and eventually marry.
Simon Helberg as Howard Wolowitz: An aerospace engineer who got his master's degree at the Massachusetts Institute of Technology. Howard is Jewish and lived with his mother, Debbie (Carol Ann Susi). Unlike Sheldon, Leonard, Raj, Bernadette, and Amy, Howard does not hold a doctorate. He trains as an astronaut and goes into space as a payload specialist on the International Space Station. Howard initially fancies himself a ladies' man, but he later starts dating Bernadette, and they get engaged and married. Howard also has a tendency to waste money on toys and argues with Bernadette over his comparatively low income as an engineer and her high income as a pharmaceutical biochemist.
Kunal Nayyar as Rajesh Koothrappali: A particle astrophysicist originally from New Delhi, India. Initially, Raj had selective mutism, rendering him unable to talk to or be around women unless under the influence of alcohol. Raj also has very feminine tastes and often takes on a stereotypical female role in his friendship with Howard as well as in the group of four men. Raj later dates Lucy (Kate Micucci), who also suffers from social anxiety, but it eventually ends. He later speaks to Penny without alcohol, overcoming his selective mutism. He begins dating Emily Sweeney, and their relationship later becomes exclusive. In the series' final season, Raj has an on-again, off-again engagement with a fellow Indian, a hotel concierge named Anu (Rati Gupta). He also has a Yorkshire Terrier named Cinnamon, given by Howard and Bernadette.
Sara Gilbert as Leslie Winkle (recurring season 1, starring season 2, guest seasons 3, 9): A physicist who works in the same lab as Leonard. In appearance, she is essentially Leonard's female counterpart and has conflicting scientific theories with Sheldon. Leslie has casual sex with Leonard and later Howard. Gilbert was promoted to a main cast member during the second season but resumed guest star status because producers could not come up with enough material for the character. Gilbert returned to The Big Bang Theory for its 200th episode.
Melissa Rauch as Bernadette Rostenkowski-Wolowitz (recurring season 3, starring seasons 4–12): A young woman who initially is a co-worker at The Cheesecake Factory with Penny to pay her way through graduate school, where she is studying microbiology. Bernadette is introduced to Howard by Penny; at first, they do not get along, apparently having nothing in common. They date and later get engaged and married. Although generally a sweet and good-natured person, Bernadette has a short fuse and can be vindictive and lash out when provoked.
Mayim Bialik as Amy Farrah Fowler (guest star season 3, starring seasons 4–12): A woman selected by an online dating site as Sheldon's perfect mate, Amy is from Glendale, California. While she and Sheldon initially share social cluelessness, after befriending Penny and Bernadette, she eventually becomes more interested in social and romantic interaction. Her relationship with Sheldon slowly progresses to the point where Sheldon considers her his girlfriend, and eventually, they get married. Amy believes she and Penny are best friends, a sentiment that Penny does not initially share. Amy has a Ph.D. in neurobiology.
Kevin Sussman as Stuart Bloom (recurring seasons 2–5, 7, starring seasons 6, 8–12): A mild-mannered, under-confident owner of a comic book store. A competent artist, Stuart is a graduate of the prestigious Rhode Island School of Design. Though he is socially awkward, he possesses slightly better social skills than the rest of the group. Stuart implies he is in financial trouble and that the comic book store is now also his home. He is later invited to join the guys' group while Howard is in space. Stuart later gets a new job caring for Howard's mother. After Mrs. Wolowitz's death, Stuart continues to live in her home, along with Howard and Bernadette, until he finds a place of his own.
Laura Spencer as Emily Sweeney (recurring seasons 7–8, 10, starring season 9): A dermatologist at Huntington Hospital. Emily went to Harvard and delights in the macabre, and she states that she likes her job because she can cut things with knives. Prior to meeting Raj, Emily was set up on a blind date with Howard. After finding Emily's online dating profile, Raj has Amy contact her as his wingman instead. Their relationship becomes exclusive, but Raj later breaks up with Emily when he becomes infatuated with Claire (Alessandra Torresani), a bartender and children's author.
== Episodes ==
== Production ==
The show's pilot episode premiered on September 24, 2007. This was the second pilot produced for the show. A different pilot was produced for the 2006–07 television season but never aired. The structure of the original unaired pilot was different from the series' current form. The only main characters retained in both pilots were Leonard (Johnny Galecki) and Sheldon (Jim Parsons), who are named after Sheldon Leonard, a longtime figure in episodic television as a producer, director, and actor. A minor character, Althea (Vernee Watson), appeared in the first scene of both pilots, a scene that was retained generally as-is. The first pilot included two female lead characters: Katie, "a street-hardened, tough-as-nails woman with a vulnerable interior" (played by Canadian actress Amanda Walsh), and Gilda, a scientist colleague and friend of the male characters (played by Iris Bahr). Sheldon and Leonard meet Katie after she breaks up with a boyfriend, and they invite her to share their apartment. Gilda is threatened by Katie's presence. Test audiences reacted negatively to Katie, but they liked Sheldon and Leonard. The original pilot used Thomas Dolby's hit "She Blinded Me with Science" as its theme song.
Although the original pilot was not picked up, its creators were given an opportunity to retool it and produce a second pilot. They brought in the remaining cast and retooled the show to its final format. Katie was replaced by Penny (Kaley Cuoco). The original unaired pilot has never been officially released, but it has circulated on the Internet. On the evolution of the show, Chuck Lorre said, "We did the 'Big Bang Pilot' about two and a half years ago, and it sucked ... but there were two remarkable things that worked perfectly, and that was Johnny and Jim. We rewrote the thing entirely, and then we were blessed with Kaley and Simon and Kunal." As to whether the world will ever see the original pilot on a future DVD release, Lorre said, "Wow, that would be something. We will see. Show your failures..."
The first and second pilots of The Big Bang Theory were directed by James Burrows, who did not continue with the show. The reworked second pilot led to a 13-episode order by CBS on May 14, 2007. Prior to its airing on CBS, the pilot episode was distributed on iTunes free of charge. The show premiered on September 24, 2007, and was picked up for a full 22-episode season on October 19, 2007. The show is filmed in front of a live audience, and it is produced by Chuck Lorre Productions and Warner Bros. Television. Production was halted on November 6, 2007, due to the Writers Guild of America strike. Nearly three months later, on February 4, 2008, the series was temporarily replaced by a short-lived sitcom, Welcome to The Captain. The series returned on March 17, 2008, in an earlier time slot, and ultimately only 17 episodes were produced for the first season.
After the strike ended, the show was picked up for a second season, airing in the 2008–2009 season, premiering in the same time slot on September 22, 2008. With increasing ratings, the show received a two-year renewal through the 2010–11 season in 2009. In 2011, the show was picked up for three more seasons. In March 2014, the show was renewed again for three more years through the 2016–17 season. This marked the second time the series gained a three-year renewal. In March 2017, the series was renewed for two additional seasons, bringing its total to 12, and running through the 2018–19 television season.
Several of the actors on The Big Bang Theory previously worked together on the sitcom Roseanne, including Johnny Galecki, Sara Gilbert, Laurie Metcalf (who plays Sheldon's mother, Mary Cooper), and Meagen Fay (who plays Bernadette's mother). Additionally, Lorre was a writer on the series for several seasons.
=== Science consultants ===
David Saltzberg, a professor of physics and astronomy at the University of California, Los Angeles, checked scripts and provided dialogue, mathematics equations, and diagrams used as props. According to series co-creator Bill Prady, Sheldon was given an actual equation to be worked on throughout the first season, with the actual progress displayed on whiteboards in Sheldon and Leonard's apartment. Saltzberg, who has a Ph.D. in physics, served as the science consultant for the show for six seasons and attended every taping. He saw early versions of scripts that needed scientific information added to them, and he also pointed out where the writers, despite their knowledge of science, had made a mistake. He was usually not needed during a taping unless a lot of science, and especially the whiteboard, was involved.
Saltzberg sometimes consulted with Mayim Bialik, who has a Ph.D. in neuroscience, on the subject of biology.
=== Theme song ===
The Canadian alternative rock band Barenaked Ladies wrote and recorded the show's theme song, which describes the history and formation of the universe and the Earth. Co-lead singer Ed Robertson was asked by Lorre and Prady to write a theme song for the show after the producers attended one of the band's concerts in Los Angeles. Coincidentally, Robertson had recently read Simon Singh's 2004 book Big Bang, and at the concert he improvised a freestyle rap about the origins of the universe. Lorre and Prady phoned him shortly thereafter and asked him to write the theme song. Having been asked to write songs for other films and shows, but ending up being rejected because producers favored songs by other artists, Robertson agreed to write the theme only after learning that Lorre and Prady had not asked anyone else.
On October 9, 2007, a full-length (1 minute and 45 seconds) version of the song was released commercially. Although some unofficial pages identify the song title as "History of Everything," the cover art for the single identifies the title as "Big Bang Theory Theme." A music video was also released via the special features on The Complete Fourth Season DVD and Blu-ray set. The theme was included on the band's greatest hits album, Hits from Yesterday & the Day Before, released on September 27, 2011. In September 2015, TMZ uncovered court documents showing that Steven Page had sued former bandmate Robertson over the song, alleging that he was promised 20 percent of the proceeds but that Robertson had kept that money for himself.
=== Actors' salaries ===
For the first three seasons, Galecki, Parsons, and Cuoco, the three main stars of the show, received up to $60,000 per episode. Their salaries rose to $200,000 per episode for the fourth season, then went up an additional $50,000 in each of the following three seasons, culminating in $350,000 per episode in the seventh season. In September 2013, Bialik and Rauch renegotiated the contracts they had held since they were introduced to the series in 2010. On their old contracts, each was making $20,000–$30,000 per episode; the new contracts doubled that, beginning at $60,000 per episode and increasing steadily to $100,000 per episode by the end of the contract, as well as adding another year for both.
By season seven, Galecki, Parsons, and Cuoco were also receiving 0.25 percent of the series' back-end money. Before production began on the eighth season, the three plus Helberg and Nayyar looked to renegotiate new contracts, with Galecki, Parsons, and Cuoco seeking around $1 million per episode, as well as more back-end money. Contracts were signed in the beginning of August 2014, giving the three principal actors an estimated $1 million per episode for three years, with the possibility to extend for a fourth year. The deals also include larger pieces of the show, signing bonuses, production deals, and advances towards the back-end. Helberg and Nayyar were also able to renegotiate their contracts, giving them a per-episode pay in the "mid-six-figure range", up from around $100,000 per episode they each received in years prior. The duo, who were looking to have salary parity with Parsons, Galecki, and Cuoco, signed their contracts after the studio and producers threatened to write the characters out of the series if a deal could not be reached before the start of production on season eight. By season 10, Helberg and Nayyar reached the $1 million per episode parity with Galecki, Parsons, and Cuoco, due to a clause in their deals signed in 2014.
In March 2017, the main cast members (Galecki, Parsons, Cuoco, Helberg, and Nayyar) took a 10 percent pay cut to allow Bialik and Rauch an increase in their earnings. This put Galecki, Parsons, Cuoco, Helberg and Nayyar at $900,000 per episode, with Parsons, Galecki, and Helberg also receiving overall deals with Warner Bros. Television. By the end of April, Bialik and Rauch had signed deals to earn $500,000 per episode each, with the deals also including a separate development component for both actors. The deal was an increase from the $175,000–$200,000 the duo had been making per episode.
== Recurring themes and elements ==
=== Science ===
Much of the series focuses on science, particularly physics. The four main male characters are employed at Caltech and have science-related occupations, as do Bernadette and Amy. The characters frequently banter about scientific theories or news (notably around the start of the show) and make science-related jokes.
Science has also interfered with the characters' romantic lives. Leslie breaks up with Leonard when he sides with Sheldon in his support for string theory rather than loop quantum gravity. When Leonard joins Sheldon, Raj, and Howard on a three-month Arctic research trip, it separates Leonard and Penny at a time when their relationship is budding. When Bernadette takes an interest in Leonard's work, it makes both Penny and Howard envious and results in Howard confronting Leonard and Penny asking Sheldon to teach her physics. Sheldon and Amy also briefly end their relationship after an argument over which of their fields is superior.
As the theme of the show revolves around science, many distinguished and high-profile scientists have appeared as guest stars on the show. Astrophysicist and Nobel laureate George Smoot had a cameo appearance in the second season. Chemical engineer and Nobel laureate Frances Arnold portrayed herself in the 12th season. Theoretical physicist Brian Greene appeared in the fourth season, as well as astrophysicist, science popularizer, and physics outreach specialist Neil deGrasse Tyson, who also appeared in the twelfth season. Cosmologist Stephen Hawking made a short guest appearance in a fifth-season episode; in the eighth season, Hawking video conferences with Sheldon and Leonard, and he makes another appearance in the 200th episode. In the fifth and sixth seasons, NASA astronaut Michael J. Massimino played himself multiple times in the role of Howard's fellow astronaut. In the sixth season, NASA astronaut Buzz Aldrin had a cameo appearance. Bill Nye appeared in the seventh and twelfth seasons.
=== "Nerd" media ===
The four main male characters are all avid fans of nerd culture. Among their shared interests are science fiction, fantasy, comic books, and collecting memorabilia.
Star Trek in particular is referred to frequently, and Sheldon identifies strongly with the character of Spock, so much so that when he is given a used napkin signed by Leonard Nimoy as a Christmas gift from Penny, he is overwhelmed with excitement and gratitude ("I possess the DNA of Leonard Nimoy?!"). Star Trek: The Original Series cast members William Shatner and George Takei have made cameos, and Leonard Nimoy made a cameo as the voice of Sheldon's vintage Mr. Spock action figure. Star Trek: The Next Generation cast members Brent Spiner and LeVar Burton have had cameos as themselves, while Wil Wheaton has a recurring role as a fictionalized version of himself. Leonard and Sheldon have had conversations in Klingon.
They are also fans of Star Wars, Battlestar Galactica, and Doctor Who. James Earl Jones, Carrie Fisher and Mark Hamill made guest appearances. In the episode "The Ornithophobia Diffusion", when there is a delay in watching Star Wars on Blu-ray, Howard complains, "If we don't start soon, George Lucas is going to change it again" (referring to Lucas' controversial alterations to the films). In "The Hot Troll Deviation", Katee Sackhoff of Battlestar Galactica appeared as Howard's fantasy dream girl. The characters have different tastes in franchises, with Sheldon praising Firefly but disapproving of Leonard's enjoyment of Babylon 5. With regard to fantasy, the four make frequent references to The Lord of the Rings and Harry Potter novels and movies. Additionally, Howard can speak Sindarin, one of the two Elvish languages from The Lord of the Rings.
Wednesday night is the group's designated "comic book night" because that is the day of the week when new comic books are released. The comic book store is run by fellow geek and recurring character Stuart. On a number of occasions, the group members have dressed up as pop culture characters, including The Flash, Aquaman, Frodo Baggins, Superman, Batman, Spock, The Doctor, Green Lantern, and Thor. As a consequence of losing a bet to Stuart and Wil Wheaton, the group members are forced to visit the comic book store dressed as Catwoman, Wonder Woman, Batgirl, and Supergirl. DC Comics announced that, to promote its comics, the company would sponsor Sheldon wearing Green Lantern T-shirts.
Various games have been featured, as well as referred to, on the series (e.g. World of Warcraft, Halo, Mario, Donkey Kong, etc.), including fictional games like Mystic Warlords of Ka'a (which became a reality in 2011) and Rock-paper-scissors-lizard-Spock.
=== Leonard and Penny's relationship ===
One of the recurring plot lines is the relationship between Leonard and Penny. Leonard becomes attracted to Penny in the pilot episode, and his need to do favors for her is a frequent point of humor in the first season. Meanwhile, Penny dates a series of muscular, stereotypically "attractive," unintelligent, and insensitive jocks. Their first long-term relationship begins when Leonard returns from a three-month expedition to the North Pole in the season 3 premiere. However, when Leonard tells Penny that he loves her, she realizes she cannot say it back, and they break up. Both Leonard and Penny go on to date other people, most notably with Leonard dating Raj's sister Priya for much of season 4. This relationship is jeopardized when Leonard mistakenly comes to believe that Raj has slept with Penny, and it ultimately ends when Priya sleeps with a former boyfriend in "The Good Guy Fluctuation".
Penny, who admits to missing Leonard in "The Roommate Transmogrification", accepts his request to renew their relationship in "The Beta Test Initiation". After Penny suggests having sex in "The Launch Acceleration", Leonard breaks the mood by proposing to her. Penny says "no" but does not break up with him. She stops a proposal a second time in "The Tangible Affection Proof". In the sixth-season episode, "The 43 Peculiarity", Penny finally tells Leonard that she loves him. Although they both feel jealousy when the other receives significant attention from the opposite sex, Penny is secure in their relationship, even when he leaves on a four-month expedition to the North Sea in "The Bon Voyage Reaction". After he returns, the relationship blossoms over the seventh season. In the penultimate episode "The Gorilla Dissolution", Penny admits that they should marry and when Leonard realizes that she is serious, he proposes with a ring that he has been carrying for years. Leonard and Penny decide to elope to Las Vegas in the season 8 finale, but beforehand, wanting no secrets, Leonard admits to kissing another woman, Mandy Chow (Melissa Tang) while on the expedition. Despite this, Leonard and Penny finally marry in the season 9 premiere and remain happy. By the Season 9 finale, Penny and Leonard decide to have a second wedding ceremony for their family and friends, to make up for eloping. In season 10, Sheldon moves into Penny's old apartment with Amy, allowing Penny and Leonard to finally live on their own as husband and wife.
In season 12, Penny announces that she does not want to have any children and Leonard reluctantly supports her decision. Later, her old boyfriend Zack and his new wife want Leonard to be a surrogate father to their kid since Zack is infertile. Penny reluctantly agrees to let Leonard donate his sperm. However, when she tries to seduce Leonard despite knowing he has to be abstinent for a few days, her visiting father, Wyatt, points out to Penny that her own actions suggest she is more conflicted over having kids than she lets on, and she admits she feels bad about letting him and Leonard down if she never has children. He says that despite her flaws, parenthood is the best thing that ever happened to him, and he does not want her to miss out, but that he will support her no matter what she does. Leonard eventually changes his mind, not wanting a child in the world that he cannot raise. In the series finale, Penny is pregnant with Leonard's baby, and she has changed her mind about not wanting children.
=== Sheldon and Amy's relationship ===
In the third-season finale, Raj and Howard sign Sheldon up for online dating to find a woman compatible with Sheldon, and they discover neurobiologist Amy Farrah Fowler. Like Sheldon, she has a history of social ineptitude and participates in online dating only to fulfill an agreement with her mother. This spawns a story line in which Sheldon and Amy communicate daily while insisting to Leonard and Penny that they are not romantically involved. In "The Agreement Dissection", Sheldon and Amy talk in her apartment after a night of dancing, and she kisses him on the lips. Instead of getting annoyed, Sheldon says "fascinating" and later asks Amy to be his girlfriend in "The Flaming Spittoon Acquisition". The same night he draws up "The Relationship Agreement" to verify the ground rules of him as her boyfriend and vice versa (similar to his "Roommate Agreement" with Leonard). Amy agrees but later regrets not having had a lawyer read through it.
In "The Launch Acceleration", Amy tries to use her "neurobiology bag of tricks" to increase the attraction between herself and Sheldon. Her efforts appear to be working because Sheldon is not happy, but he makes no attempt to stop her. In the fifth-season finale, "The Countdown Reflection", Sheldon takes Amy's hand as Howard is launched into space. In the sixth-season premiere, "The Date Night Variable", after a dinner in which Sheldon fails to live up to this expectation, Amy gives Sheldon an ultimatum that their relationship is over unless he tells her something from his heart. Amy accepts Sheldon's romantic speech even after learning that it is a line from the first Spider-Man movie. In "The Cooper/Kripke Inversion", Sheldon states that he has been working on his discomfort about physical contact and admits that "it's a possibility" that he could one day have sex with Amy. Amy is revealed to have similar feelings in "The Love Spell Potential". Sheldon explains that he never thought about intimacy with anyone before Amy.
"The Locomotive Manipulation" is the first episode in which Sheldon initiates a kiss with Amy. Although initially done in a fit of sarcasm, he discovers that he enjoys the feeling. Consequently, Sheldon slowly starts to open up over the rest of the season, and he starts a more intimate relationship with Amy. However, in the season finale, Sheldon leaves town temporarily to cope with several changes and Amy becomes distraught. However, 45 days into the trip, Sheldon gets mugged and calls for Leonard to drive him home, only to be confronted by Amy, who is upset over not being contacted by him in weeks. When Sheldon admits he did not call her because he was too embarrassed to admit that he could not make it on his own, Amy accepts that he is not perfect. In "The Prom Equivalency", Sheldon hides in his room to avoid going to a mock prom reenactment with her. In the resulting standoff, Amy is about to confess that she loves Sheldon, but he surprises her by saying that he loves her too. This prompts Amy to have a panic attack.
In the season-eight finale, Sheldon and Amy get into a fight about commitment on their fifth anniversary. Amy tells Sheldon that she needs to think about the future of their relationship, unaware that Sheldon was about to propose to her. Season nine sees Sheldon harassing Amy about making up her mind until she breaks up with him. Both struggle with singlehood and trying to be friends for the next few weeks until they reunite in episode ten and have sex for the first time on Amy's birthday. In season ten, Amy's apartment is flooded, and she and Sheldon decide to move in together into Penny's apartment as part of a five-week experiment to determine compatibility with each other's living habits. It goes well and they decide to make the arrangement permanent.
In the Season 11 premiere, Sheldon proposes to Amy, and she accepts. The two get married in the eleventh-season finale.
=== "Soft Kitty" ===
The song "Soft Kitty" is described by Sheldon as a song sung by his mother when he was ill. Its repeated use in the series popularized the song. A scene showing the origin of the song in Sheldon's childhood is depicted in an episode of Young Sheldon, which aired on February 1, 2018. It shows Sheldon's mother, Mary, singing the song to her son, who has the flu.
=== Howard's mother ===
In scenes set at Howard's home, he interacts with his rarely seen mother (voiced by Carol Ann Susi until her death) by shouting from room to room in the house. She similarly interacts with other characters in this manner. She reflects the Jewish mother stereotype in some ways, such as being overly controlling of Howard's adult life and sometimes trying to make him feel guilty about causing her trouble. She is dependent on Howard, as she requires him to help her with her wig and makeup in the morning. Howard, in turn, is attached to his mother to the point where she still cuts his meat for him, takes him to the dentist, does his laundry and "grounds" him when he returns home after briefly moving out. Until Howard's marriage to Bernadette in the fifth-season finale, Howard's former living situation led Leonard's psychiatrist mother to speculate that he may suffer from some type of pathology and Sheldon to refer to their relationship as Oedipal. In season 8, Howard's mother dies in her sleep while in Florida, which devastates Howard and Stuart, who briefly lived with Mrs. Wolowitz.
=== Apartment building elevator ===
In the apartment building where Sheldon, Leonard, and Penny (and later Amy) live, the elevator has been out of order throughout most of the series, forcing characters to have to use the stairs. Stairway conversations between the characters as they walk up the three flights to their apartments occur in almost every episode, often serving as a transition between longer scenes. The Season 3 episode, "The Staircase Implementation" reveals that the elevator was broken when Leonard was experimenting with rocket fuel. In the penultimate episode of the series, the elevator is returned to an operational state, causing Sheldon some angst, until he realizes that the fixed elevator reverted things to the "status quo".
=== Vanity cards ===
Like most shows created by Chuck Lorre, The Big Bang Theory ends by showing for one second a vanity card written by Lorre after the credits, followed by the Warner Bros. Television closing logo. These cards are archived on Lorre's website. The series' final vanity card reads simply "The End".
== Release ==
=== Broadcast ===
The Big Bang Theory premiered in the United States on September 24, 2007, on CBS. The series debuted in Canada on CTV in September 2007. On February 14, 2008, the series debuted in the United Kingdom on channels E4 and Channel 4. In Australia, the Seven Network and 7mate began airing the first seven seasons in October 2015 and gained the rights to season 8 in 2016, although the Nine Network held the rights to seasons nine and ten. On January 22, 2018, it was announced that Nine had acquired the rights to seasons 1–8.
=== Syndication and streaming ===
In May 2010, it was reported that the show had been picked up for syndication, mainly among Fox's owned and operated stations and other local stations, with Warner Bros. Television's sister cable network TBS holding the show's cable syndication rights. Although details of the syndication deal have not been revealed, it was reported the deal "set a record price for a cable off-network sitcom purchase".
On September 17, 2019, as part of an extension of the TBS agreement through 2028, Warner Bros.' then-upcoming streaming service HBO Max (now Max) acquired the exclusive American streaming rights to the series. In December 2024, it was announced that CBS parent company Paramount Global had acquired non-exclusive cable rights to The Big Bang Theory for Nick at Nite and MTV, beginning December 24, 2024 and January 1, 2025 respectively; Deadline Hollywood reported that the current contract with TBS had made the linear television rights non-exclusive, allowing them to be shared with other broadcasters. In an unprecedented move, beginning in 2025, The Big Bang Theory was made available on Disney+ in certain regions via the Star hub, alongside its spin-off Young Sheldon.
=== Home media ===
The first and second seasons were available only on DVD at their time of release in 2008 and 2009. Starting with the release of the third season in 2010 and continuing every year with every new season, a Blu-ray disc set has also been released in conjunction with the DVD. In 2012, Warner Bros. released the first two seasons on Blu-ray, marking the first time that all episodes were available on the Blu-ray disc format.
== Reception ==
=== Critical response ===
Although the initial reception was mixed, the show went on to receive a more positive reception. The review aggregation website Rotten Tomatoes reports an 81% approval rating from critics. On Metacritic, the series holds a score of 61 out of 100, based on reviews from 27 critics, indicating generally favorable reviews. In 2013, TV Guide ranked the series #52 on its list of the 60 Best Series of All Time.
=== American ratings ===
The Big Bang Theory started off slowly in the ratings, failing to make the top 50 in its first season (ranking 68th), and ranking 40th in its second season. When the third season premiered on September 21, 2009, however, The Big Bang Theory ranked as CBS's highest-rated show of that evening in the adults 18–49 demographic (4.6/10), along with a then-series-high 12.83 million viewers. After the first three seasons aired at different times on Monday nights, CBS moved the show to Thursdays at 8:00 ET for the 2010–2011 schedule, placing it in direct competition with NBC's comedy block and Fox's American Idol (then American television's top-rated primetime show, a position it had held since 2004). During its fourth season, it became television's highest-rated comedy, just barely beating out Two and a Half Men (which had held the position for the previous eight years). However, in the 18–49 demographic (the show's target age range), it was the second highest-rated comedy, behind ABC's Modern Family. The fifth season opened with viewing figures of over 14 million.
The sixth season includes some of the highest-rated episodes of the show, with "The Bakersfield Expedition" setting a then-new series high of 20 million viewers, a first for the series; along with NCIS, this made CBS the first network to have two scripted series reach that large an audience in the same week since 2007. In the sixth season, the show became the highest-rated and most-viewed scripted show in the 18–49 demographic, trailing only the live regular NBC Sunday Night Football coverage, and was third in total viewers, trailing NCIS and Sunday Night Football. Season seven of the series opened strong, continuing the success gained in season six, with the second episode of the premiere, "The Deception Verification", setting a new series high in viewers with 20.44 million.
Showrunner Steve Molaro, who took over from Bill Prady with the sixth season, credits some of the show's success to the sitcom's exposure in off-network syndication, particularly on TBS, while Michael Schneider of TV Guide attributes it to the timeslot move two seasons earlier. Chuck Lorre and CBS Entertainment president Nina Tassler also credit the success to the influence of Molaro, in particular the deepening exploration of the firmly established regular characters and their interpersonal relationships, such as the on-again, off-again relationship between Leonard and Penny. Throughout much of the 2012–13 season, The Big Bang Theory placed first in all of the syndication ratings, with formidable competition only from Judge Judy and Wheel of Fortune (first-run syndication programs). By the end of the 2012–13 television season, The Big Bang Theory had dethroned Judge Judy as the ratings leader among all syndicated programming with a 7.1 rating; Judge Judy descended to second place for that season with a 7.0. The Big Bang Theory did not place first in syndication ratings for the 2013–14 television season, beaten out by Judge Judy.
=== UK distribution and ratings ===
The show made its United Kingdom debut on Channel 4 on February 14, 2008. The show was also shown as a 'first-look' on Channel 4's digital offshoot E4 prior to the main channel's airing. While the show's ratings were not deemed strong enough to warrant broadcast on the main channel, they were considered the opposite for E4. For each following season, all episodes were shown first-run on E4, with episodes only aired on the main channel in a repeat capacity, usually on a weekend morning. From the third season, the show aired in two parts, being split so that it could air new episodes for longer throughout the year. This was due to rising ratings. The first part began airing on December 17, 2009, at 9:00 p.m. while the second part, containing the remaining eleven episodes, began airing in the same time period from May 6, 2010. The first half of the fourth season began airing on November 4, 2010, at 9:00 p.m., drawing 877,000 viewers, with a further 256,000 watching on the E4+1 hour service. This gave the show an overall total of 1.13 million viewers, making it E4's most-watched programme for that week. The increased ratings continued over subsequent weeks.
The fourth season's second half began on June 30, 2011. Season 5 began airing on November 3, 2011, at 8:00 p.m. as part of E4's Comedy Thursdays, acting as a lead-in to the channel's newest comedy, Perfect Couples. Episode 19, the highest-viewed episode of the season, attracted 1.4 million viewers. Season 6 premiered on November 15, 2012, with 1.89 million viewers and a further 469,000 on the time shift channel, bringing the total to 2.31 million, E4's highest viewing ratings of 2012, and the highest the channel had received since June 2011. The sixth season returned in mid-2013 to finish airing the remaining episodes. Season 7 premiered on E4 on October 31, 2013, at 8:30 pm and hit multiple ratings records this season. The second half of season seven aired in mid 2014. The eighth season premiered on E4 on October 23, 2014, at 8:30 pm. During its eighth season, The Big Bang Theory shared its 8:30 pm time period with fellow CBS comedy, 2 Broke Girls. Following the airing of the first eight episodes of that show's fourth season, The Big Bang Theory returned to finish airing its eighth season on March 19, 2015.
Netflix UK & Ireland announced on February 13, 2016, that seasons 1–8 would be available to stream from February 15, 2016.
=== Canadian ratings ===
The Big Bang Theory started off quietly in Canada, but managed to garner major success in later seasons. The Big Bang Theory is telecast throughout Canada via the CTV Television Network in simultaneous substitution with cross-border CBS affiliates. Now immensely popular in Canada, The Big Bang Theory is also rerun daily on the Canadian cable channel The Comedy Network.
The season 4 premiere garnered an estimated 3.1 million viewers across Canada. This was the largest audience for a sitcom since the series finale of Friends. The show later increased in viewership and became the most-watched entertainment television show in Canada.
=== Accolades ===
In August 2009, the sitcom won the best comedy series TCA award and Jim Parsons (Sheldon) won the award for individual achievement in comedy. In 2010, the show won the People's Choice Award for Favorite Comedy, while Parsons won a Primetime Emmy Award for Outstanding Lead Actor in a Comedy Series. On January 16, 2011, Parsons was awarded a Golden Globe for Best Performance by an Actor in a Television Series – Comedy or Musical, an award that was presented by co-star Kaley Cuoco. On September 18, 2011, Parsons was again awarded an Emmy for Best Actor in a Comedy Series. On January 9, 2013, the show won the People's Choice Award for Favorite Comedy for the second time. On August 25, 2014, Parsons was awarded an Emmy for Best Actor in a Comedy Series. The Big Bang Theory also won the 2016 People's Choice Awards for Favorite TV Show and Favorite Network TV Comedy, with Parsons winning Favorite Comedic TV Actor. On January 20, 2016, The Big Bang Theory also won the International category at the UK's National Television Awards.
== Merchandise ==
On March 16, 2014, a Lego Ideas project portraying the living room scene in Lego style with the main cast as mini-figures reached 10,000 supporters on the platform, which qualified it to be considered as an official set by the Lego Ideas review board. On November 7, 2014, Lego Ideas approved the design and began refining it. The set was released in August 2015, with an exclusive pre-sale taking place at San Diego Comic-Con.
== Offshoots ==
=== Plagiarized series ===
Through the use of his vanity cards at the end of episodes, Lorre alleged that the program had been plagiarized by a show produced and aired in Belarus in 2010. Officially titled Теоретики (The Theorists), the show features "clones" of the main characters, a similar opening sequence, and what appears to be a very close Russian translation of the scripts. Lorre expressed annoyance and described his inquiry with the Warner Bros. legal department about options. Because the television production company and station operate in close relationship with the Belarus government, which itself runs the company copying the episodes, any attempt to claim copyright infringement was expected to be in vain.
However, no legal action was required to end production of the other show: as soon as it became known that the show was unlicensed, the actors quit and the producers canceled it. Dmitriy Tankovich (who plays Leonard's counterpart, "Seva") said in an interview: "I'm upset. At first, the actors were told all legal issues were resolved. We didn't know it wasn't the case, so when the creators of The Big Bang Theory started talking about the show, I was embarrassed. I can't understand why our people first do, and then think. I consider this to be the rock bottom of my career. And I don't want to take part in a stolen show."
=== Spin-offs ===
==== Young Sheldon ====
In November 2016, it was reported that CBS was in negotiations to create a spin-off of The Big Bang Theory centered on Sheldon as a young boy. The prequel series, described as "a Malcolm in the Middle-esque single-camera family comedy" would be executive-produced by Lorre and Molaro, with Prady expected to be involved in some capacity, and intended to air in the 2017–18 season alongside The Big Bang Theory. The initial idea for the series came from Parsons, who passed it along to The Big Bang Theory producers. In early March 2017, Iain Armitage was cast as the younger Sheldon, as well as Zoe Perry as his mother, Mary Cooper. Perry is the real-life daughter of Laurie Metcalf, who portrays Mary Cooper on The Big Bang Theory.
On March 13, 2017, CBS ordered the spin-off Young Sheldon series. Jon Favreau directed and executive produced the pilot. Created by Lorre and Molaro, the series follows 9-year-old Sheldon Cooper as he attends high school in East Texas. Alongside Armitage as 9-year-old Sheldon Cooper and Perry as Mary Cooper, Lance Barber stars as George Cooper, Sheldon's father; Raegan Revord stars as Missy Cooper, Sheldon's twin sister; and Montana Jordan stars as George Cooper Jr., Sheldon's older brother. Jim Parsons reprises his role as adult Sheldon Cooper, as narrator for the series. Parsons, Lorre, Molaro and Todd Spiewak also serve as executive producers on the series, for Chuck Lorre Productions and Warner Bros. Television. The show's pilot episode premiered on September 25, 2017. Subsequent weekly episodes began airing on November 2, 2017, following the broadcast of the 237th episode of The Big Bang Theory.
Armitage appeared on the series' 265th episode, "The VCR Illumination", by way of a videotape recorded by the younger Sheldon and viewed by the current-day Sheldon.
On January 6, 2018, the show was renewed for a second season. On February 22, 2019, CBS renewed the series for both the third and fourth seasons. On March 30, 2021, CBS renewed the series for a fifth, sixth, and seventh season.
The prequel series came to an end on May 16, 2024, with an hour-long episode that included George Cooper's funeral and a cameo from Parsons and Mayim Bialik as their older characters. The audience learns that Young Sheldon has been a memoir of Sheldon's life all along.
==== Georgie & Mandy's First Marriage ====
In January 2024, it was announced that a spin-off series of Young Sheldon focused on Georgie Cooper and Mandy McAllister was slated for the 2024–25 season on CBS.
==== Stuart Fails to Save the Universe ====
On April 12, 2023, it was announced that a spin-off of the original series was in development. On October 10, 2024, it was announced that the third spin-off will feature Stuart Bloom, Denise, and Bert Kibbler, with Kevin Sussman, Lauren Lapkus, and Brian Posehn reprising their roles. On March 19, 2025, it was announced that the title of the show will be Stuart Fails to Save the Universe.
=== Television special ===
On May 16, 2019, a television special titled Unraveling the Mystery: A Big Bang Farewell aired following the series finale of The Big Bang Theory. It is a backstage retrospective featuring Johnny Galecki and Kaley Cuoco.
== Lawsuit ==
In March 2023, political analyst Mithun Vijay Kumar filed a court case in Mumbai against Netflix, alleging that the series insulted Madhuri Dixit in a season 2 episode by calling her a "leprous prostitute".
== References ==
== External links ==
Official website
The Big Bang Theory at IMDb
The Big Bang Theory at Rotten Tomatoes
The Big Bang Theory at Discogs (list of releases)
In mathematics, a reflection (also spelled reflexion) is a mapping from a Euclidean space to itself that is an isometry with a hyperplane as the set of fixed points; this set is called the axis (in dimension 2) or plane (in dimension 3) of reflection. The image of a figure by a reflection is its mirror image in the axis or plane of reflection. For example, the mirror image of the small Latin letter p for a reflection with respect to a vertical axis (a vertical reflection) would look like q. Its image by reflection in a horizontal axis (a horizontal reflection) would look like b. A reflection is an involution: when applied twice in succession, every point returns to its original location, and every geometrical object is restored to its original state.
The term reflection is sometimes used for a larger class of mappings from a Euclidean space to itself, namely the non-identity isometries that are involutions. The set of fixed points (the "mirror") of such an isometry is an affine subspace, but is possibly smaller than a hyperplane. For instance a reflection through a point is an involutive isometry with just one fixed point; the image of the letter p under it would look like a d. This operation is also known as a central inversion (Coxeter 1969, §7.2), and exhibits Euclidean space as a symmetric space. In a Euclidean vector space, the reflection in the point situated at the origin is the same as vector negation. Other examples include reflections in a line in three-dimensional space. Typically, however, unqualified use of the term "reflection" means reflection in a hyperplane.
Some mathematicians use "flip" as a synonym for "reflection".
== Construction ==
In plane (or, respectively, three-dimensional) geometry, to find the reflection of a point, drop a perpendicular from the point to the line (plane) used for reflection, and extend it the same distance on the other side. To find the reflection of a figure, reflect each point in the figure.
To reflect point P through the line AB using compass and straightedge, proceed as follows (see figure):
Step 1 (red): construct a circle with center at P and some fixed radius r to create points A′ and B′ on the line AB, which will be equidistant from P.
Step 2 (green): construct circles centered at A′ and B′ having radius r. P and Q will be the points of intersection of these two circles.
Point Q is then the reflection of point P through line AB.
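The drop-a-perpendicular construction described above can be sketched in coordinates: project P onto the line through A and B to find the foot of the perpendicular, then extend the same distance past it. This is a minimal Python sketch; the function name is illustrative, not from any library.

```python
# Reflect a point P across the line through points A and B by finding the
# foot F of the perpendicular from P and extending the same distance beyond.

def reflect_point(P, A, B):
    """Reflect 2D point P across the line through points A and B."""
    ax, ay = A
    dx, dy = B[0] - ax, B[1] - ay                  # direction of line AB
    px, py = P[0] - ax, P[1] - ay                  # P relative to A
    t = (px * dx + py * dy) / (dx * dx + dy * dy)  # projection parameter
    fx, fy = ax + t * dx, ay + t * dy              # foot of the perpendicular
    return (2 * fx - P[0], 2 * fy - P[1])          # extend past the foot

print(reflect_point((3, 4), (0, 0), (1, 0)))  # across the x-axis: (3.0, -4.0)
```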
== Properties ==
The matrix for a reflection is orthogonal with determinant −1 and eigenvalues −1, 1, 1, ..., 1. The product of two such matrices is a special orthogonal matrix that represents a rotation. Every rotation is the composition of an even number of reflections in hyperplanes through the origin, and every improper rotation is the composition of an odd number. Thus reflections generate the orthogonal group, and this result is known as the Cartan–Dieudonné theorem.
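The claim that two reflections compose to a rotation can be checked numerically: in the plane, reflecting across lines at angles 0 and π/4 composes to a rotation by π/2. A small sketch, with helper names chosen for illustration:

```python
import math

def reflection_matrix_2d(theta):
    """Matrix reflecting across the line through the origin at angle theta."""
    c, s = math.cos(2 * theta), math.sin(2 * theta)
    return [[c, s], [s, -c]]

def matmul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Reflect across the x-axis, then across the line at angle pi/4:
# the composition is a rotation by 2*(pi/4 - 0) = pi/2, i.e. [[0, -1], [1, 0]].
R = matmul(reflection_matrix_2d(math.pi / 4), reflection_matrix_2d(0))
```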
Similarly the Euclidean group, which consists of all isometries of Euclidean space, is generated by reflections in affine hyperplanes. In general, a group generated by reflections in affine hyperplanes is known as a reflection group. The finite groups generated in this way are examples of Coxeter groups.
== Reflection across a line in the plane ==
Reflection across an arbitrary line through the origin in two dimensions can be described by the following formula
  \operatorname{Ref}_l(v) = 2 \frac{v \cdot l}{l \cdot l} l - v,

where v denotes the vector being reflected, l denotes any vector in the line across which the reflection is performed, and v \cdot l denotes the dot product of v with l. Note the formula above can also be written as

  \operatorname{Ref}_l(v) = 2 \operatorname{Proj}_l(v) - v,

saying that a reflection of v across l is equal to 2 times the projection of v on l, minus the vector v. Reflections in a line have the eigenvalues 1 and −1.
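The formula can be evaluated directly. A short Python sketch (the vectors chosen below are illustrative):

```python
def reflect_across_line(v, l):
    """Ref_l(v) = 2 (v.l / l.l) l - v: reflection of v across the line spanned by l."""
    c = 2 * (v[0] * l[0] + v[1] * l[1]) / (l[0] * l[0] + l[1] * l[1])
    return (c * l[0] - v[0], c * l[1] - v[1])

l = (1.0, 1.0)                        # reflect across the line y = x
v = (2.0, 0.0)
r = reflect_across_line(v, l)
print(r)                              # (0.0, 2.0): the coordinates swap
print(reflect_across_line(l, l))      # (1.0, 1.0): vectors on the line are fixed (eigenvalue 1)
```

Applying the reflection twice returns the original vector, consistent with the eigenvalues ±1.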
== Reflection through a hyperplane in n dimensions ==
Given a vector v in Euclidean space \mathbb{R}^n, the formula for the reflection in the hyperplane through the origin, orthogonal to a, is given by

  \operatorname{Ref}_a(v) = v - 2 \frac{v \cdot a}{a \cdot a} a,
where v \cdot a denotes the dot product of v with a. Note that the second term in the above equation is just twice the vector projection of v onto a. One can easily check that \operatorname{Ref}_a(v) = -v if v is parallel to a, and \operatorname{Ref}_a(v) = v if v is perpendicular to a.
Using the geometric product, the formula is

  \operatorname{Ref}_a(v) = -\frac{a v a}{a^2}.
Since these reflections are isometries of Euclidean space fixing the origin they may be represented by orthogonal matrices. The orthogonal matrix corresponding to the above reflection is the matrix
  R = I - 2 \frac{a a^{\mathsf{T}}}{a^{\mathsf{T}} a},

where I denotes the n \times n identity matrix and a^{\mathsf{T}} is the transpose of a. Its entries are
  R_{ij} = \delta_{ij} - 2 \frac{a_i a_j}{\lVert a \rVert^2},
where δij is the Kronecker delta.
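The matrix can be built entry-by-entry from this formula. A minimal Python sketch (the vector a is an illustrative choice):

```python
def householder(a):
    """R = I - 2 a a^T / (a^T a): the reflection through the hyperplane orthogonal to a."""
    n = len(a)
    norm2 = sum(x * x for x in a)
    return [[(1.0 if i == j else 0.0) - 2.0 * a[i] * a[j] / norm2
             for j in range(n)] for i in range(n)]

a = [1.0, 0.0, 0.0]
R = householder(a)
print(R)  # reflection through the yz-plane: diag(-1, 1, 1)
```

The resulting matrix is symmetric and orthogonal, with Ra = −a and Rv = v for v orthogonal to a.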
The formula for the reflection in the affine hyperplane v \cdot a = c not through the origin is

  \operatorname{Ref}_{a,c}(v) = v - 2 \frac{v \cdot a - c}{a \cdot a} a.
== See also ==
Additive inverse
Coordinate rotations and reflections
Householder transformation
Inversive geometry
Plane of rotation
Reflection mapping
Reflection group
Reflection symmetry
== Notes ==
== References ==
Coxeter, Harold Scott MacDonald (1969), Introduction to Geometry (2nd ed.), New York: John Wiley & Sons, ISBN 978-0-471-50458-0, MR 0123930
Popov, V.L. (2001) [1994], "Reflection", Encyclopedia of Mathematics, EMS Press
Weisstein, Eric W. "Reflection". MathWorld.
== External links ==
Reflection in Line at cut-the-knot
Understanding 2D Reflection and Understanding 3D Reflection by Roger Germundsson, The Wolfram Demonstrations Project. | Wikipedia/Reflection_(linear_algebra) |
In algebra, a transformation semigroup (or composition semigroup) is a collection of transformations (functions from a set to itself) that is closed under function composition. If it includes the identity function, it is a monoid, called a transformation (or composition) monoid. This is the semigroup analogue of a permutation group.
A transformation semigroup of a set has a tautological semigroup action on that set. Such actions are characterized by being faithful, i.e., if two elements of the semigroup have the same action, then they are equal.
An analogue of Cayley's theorem shows that any semigroup can be realized as a transformation semigroup of some set.
In automata theory, some authors use the term transformation semigroup to refer to a semigroup acting faithfully on a set of "states" different from the semigroup's base set. There is a correspondence between the two notions.
== Transformation semigroups and monoids ==
A transformation semigroup is a pair (X,S), where X is a set and S is a semigroup of transformations of X. Here a transformation of X is just a function from a subset of X to X, not necessarily invertible, and therefore S is simply a set of transformations of X which is closed under composition of functions. The set of all partial functions on a given base set, X, forms a regular semigroup called the semigroup of all partial transformations (or the partial transformation semigroup on X), typically denoted by
\mathcal{PT}_X.
If S includes the identity transformation of X, then it is called a transformation monoid. Any transformation semigroup S determines a transformation monoid M by taking the union of S with the identity transformation. A transformation monoid whose elements are invertible is a permutation group.
The set of all transformations of X is a transformation monoid called the full transformation monoid (or semigroup) of X. It is also called the symmetric semigroup of X and is denoted by TX. Thus a transformation semigroup (or monoid) is just a subsemigroup (or submonoid) of the full transformation monoid of X.
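For a small set, the full transformation monoid can be enumerated exhaustively and its closure under composition checked directly. A Python sketch (the three-element set is an illustrative choice):

```python
from itertools import product

# Full transformation monoid of a three-element set X = {0, 1, 2}: every
# function f : X -> X, encoded as the tuple (f(0), f(1), f(2)).
n = 3
X = range(n)
T_X = list(product(X, repeat=n))
print(len(T_X))  # n**n = 27

def compose(f, g):
    """(f o g)(x) = f(g(x)): composition of two transformations."""
    return tuple(f[g[x]] for x in X)

T_set = set(T_X)
closed = all(compose(f, g) in T_set for f in T_X for g in T_X)
print(closed)  # True: T_X is closed under composition, hence a monoid
```

The identity transformation (0, 1, 2) is among the 27 elements, so this is indeed a transformation monoid and not merely a semigroup.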
If (X,S) is a transformation semigroup then X can be made into a semigroup action of S by evaluation:
  s \cdot x = s(x) \quad \text{for } s \in S,\ x \in X.
This is a monoid action if S is a transformation monoid.
The characteristic feature of transformation semigroups, as actions, is that they are faithful, i.e., if
  s \cdot x = t \cdot x \quad \text{for all } x \in X,
then s = t. Conversely if a semigroup S acts on a set X by T(s,x) = s • x then we can define, for s ∈ S, a transformation Ts of X by
  T_s(x) = T(s, x).
The map sending s to Ts is injective if and only if (X, T) is faithful, in which case the image of this map is a transformation semigroup isomorphic to S.
== Cayley representation ==
In group theory, Cayley's theorem asserts that any group G is isomorphic to a subgroup of the symmetric group of G (regarded as a set), so that G is a permutation group. This theorem generalizes straightforwardly to monoids: any monoid M is a transformation monoid of its underlying set, via the action given by left (or right) multiplication. This action is faithful because if ax = bx for all x in M, then by taking x equal to the identity element, we have a = b.
For a semigroup S without a (left or right) identity element, we take X to be the underlying set of the monoid corresponding to S to realise S as a transformation semigroup of X. In particular any finite semigroup can be represented as a subsemigroup of transformations of a set X with |X| ≤ |S| + 1, and if S is a monoid, we have the sharper bound |X| ≤ |S|, as in the case of finite groups.: 21
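The Cayley representation can be made concrete. In the sketch below (the monoid of integers mod 4 under multiplication is an illustrative choice, not from the text), each element s is realized as the transformation T_s given by left multiplication, and the map s ↦ T_s is checked to be an injective homomorphism:

```python
# Cayley representation of the monoid (Z_4, * mod 4): each element s acts on the
# underlying set by left multiplication, giving a transformation T_s.
M = [0, 1, 2, 3]

def T(s):
    """The transformation of M induced by left multiplication by s."""
    return tuple((s * x) % 4 for x in M)

transformations = {s: T(s) for s in M}
print(transformations[2])  # (0, 2, 0, 2)

# Faithful: evaluating at x = 1 (the identity) recovers s, so s -> T_s is injective.
assert len(set(transformations.values())) == len(M)
# Homomorphism: T_{s*t} = T_s composed with T_t.
assert all(T((s * t) % 4) == tuple(T(s)[x] for x in T(t)) for s in M for t in M)
```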
=== In computer science ===
In computer science, Cayley representations can be applied to improve the asymptotic efficiency of semigroups by reassociating multiple composed multiplications. The action given by left multiplication results in right-associated multiplication, and vice versa for the action given by right multiplication. Despite having the same results for any semigroup, the asymptotic efficiency will differ. Two examples of useful transformation monoids given by an action of left multiplication are the functional variation of the difference list data structure, and the monadic Codensity transformation (a Cayley representation of a monad, which is a monoid in a particular monoidal functor category).
== Transformation monoid of an automaton ==
Let M be a deterministic automaton with state space S and alphabet A. The words in the free monoid A∗ induce transformations of S giving rise to a monoid morphism from A∗ to the full transformation monoid TS. The image of this morphism is the transformation semigroup of M.: 78
For a regular language, the syntactic monoid is isomorphic to the transformation monoid of the minimal automaton of the language.: 81
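The transformation monoid of a small automaton can be computed by brute force over short words. In the Python sketch below, the two-state automaton and its letter actions are illustrative assumptions, not from the text:

```python
from itertools import product

# A 2-state automaton over the alphabet {a, b}: 'a' swaps the two states,
# 'b' sends both states to state 0 (an illustrative choice).
letters = {"a": (1, 0), "b": (0, 0)}

def act(word):
    """Transformation of the state set {0, 1} induced by reading `word` left to right."""
    f = (0, 1)                       # identity: the empty word
    for c in word:
        g = letters[c]
        f = tuple(g[s] for s in f)   # apply c after the prefix read so far
    return f

# The image of A* in the full transformation monoid T_S:
monoid = {act("")}
for n in range(1, 4):
    monoid |= {act("".join(w)) for w in product("ab", repeat=n)}
print(sorted(monoid))  # [(0, 0), (0, 1), (1, 0), (1, 1)]
```

Here words of length at most three already generate all four transformations of the two-element state set, so the transformation monoid of this automaton is the full transformation monoid.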
== See also ==
Semiautomaton
Krohn–Rhodes theory
Symmetric inverse semigroup
Biordered set
Special classes of semigroups
Composition ring
== References ==
Clifford, A.H.; Preston, G.B. (1961). The algebraic theory of semigroups. Vol. I. Mathematical Surveys. Vol. 7. Providence, R.I.: American Mathematical Society. ISBN 978-0-8218-0272-4. Zbl 0111.03403. {{cite book}}: ISBN / Date incompatibility (help)
Howie, John M. (1995). Fundamentals of Semigroup Theory. London Mathematical Society Monographs. New Series. Vol. 12. Oxford: Clarendon Press. ISBN 978-0-19-851194-6. Zbl 0835.20077.
Mati Kilp, Ulrich Knauer, Alexander V. Mikhalev (2000), Monoids, Acts and Categories: with Applications to Wreath Products and Graphs, Expositions in Mathematics 29, Walter de Gruyter, Berlin, ISBN 978-3-11-015248-7. | Wikipedia/Transformation_semigroup |
In mathematics, a measure-preserving dynamical system is an object of study in the abstract formulation of dynamical systems, and ergodic theory in particular. Measure-preserving systems obey the Poincaré recurrence theorem, and are a special case of conservative systems. They provide the formal, mathematical basis for a broad range of physical systems, and, in particular, many systems from classical mechanics (in particular, most non-dissipative systems) as well as systems in thermodynamic equilibrium.
== Definition ==
A measure-preserving dynamical system is defined as a probability space and a measure-preserving transformation on it. In more detail, it is a system
  (X, \mathcal{B}, \mu, T)

with the following structure:

X is a set,
\mathcal{B} is a σ-algebra over X,
\mu : \mathcal{B} \to [0,1] is a probability measure, so that \mu(X) = 1 and \mu(\varnothing) = 0,
T : X \to X is a measurable transformation which preserves the measure \mu, i.e., for all A \in \mathcal{B}, \mu(T^{-1}(A)) = \mu(A).
== Discussion ==
One may ask why the measure-preserving transformation is defined in terms of the inverse, \mu(T^{-1}(A)) = \mu(A), instead of the forward transformation, \mu(T(A)) = \mu(A). This can be understood intuitively.
Consider the typical measure on the unit interval [0, 1], and the map

  Tx = 2x \bmod 1 = \begin{cases} 2x, & \text{if } x < 1/2, \\ 2x - 1, & \text{if } x \geq 1/2. \end{cases}

This is the Bernoulli map. Now, distribute an even layer of paint on the unit interval [0, 1], and then map the paint forward. The paint on the [0, 1/2] half is spread thinly over all of [0, 1], and the paint on the [1/2, 1] half as well. The two layers of thin paint, layered together, recreate the exact same paint thickness.
More generally, the paint that would arrive at a subset A \subset [0, 1] comes from the subset T^{-1}(A). For the paint thickness to remain unchanged (measure-preserving), the mass of incoming paint should be the same: \mu(A) = \mu(T^{-1}(A)).
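The paint argument can be checked empirically for the Bernoulli map: the fraction of uniformly distributed points lying in a set A should equal the fraction whose image lies in A, i.e. the fraction lying in T⁻¹(A). A Python sketch (the interval A = [0.2, 0.7) and the sample size are illustrative choices):

```python
import random

random.seed(0)

def T(x):
    """The Bernoulli (doubling) map on [0, 1)."""
    return (2 * x) % 1.0

# Empirical check that T preserves Lebesgue measure: push 100000 uniform points
# forward and compare the fraction landing in A = [0.2, 0.7) before and after.
pts = [random.random() for _ in range(100_000)]
in_A_before = sum(0.2 <= x < 0.7 for x in pts) / len(pts)
in_A_after = sum(0.2 <= T(x) < 0.7 for x in pts) / len(pts)
print(round(in_A_before, 2), round(in_A_after, 2))  # both approximately 0.5
```

The second fraction is exactly the empirical measure of T⁻¹(A), so both numbers estimate μ(A) = 0.5.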
Consider a mapping \mathcal{T} of power sets:

  \mathcal{T} : P(X) \to P(X)

Consider now the special case of maps \mathcal{T} which preserve intersections, unions and complements (so that it is a map of Borel sets) and also send X to X (because we want it to be conservative). Every such conservative, Borel-preserving map can be specified by some surjective map T : X \to X by writing \mathcal{T}(A) = T^{-1}(A). Of course, one could also define \mathcal{T}(A) = T(A), but this is not enough to specify all such possible maps \mathcal{T}. That is, conservative, Borel-preserving maps \mathcal{T} cannot, in general, be written in the form \mathcal{T}(A) = T(A).
The quantity \mu(T^{-1}(A)) has the form of a pushforward, whereas \mu(T(A)) is generically called a pullback. Almost all properties and behaviors of dynamical systems are defined in terms of the pushforward. For example, the transfer operator is defined in terms of the pushforward of the transformation map T; the measure \mu can now be understood as an invariant measure; it is just the Frobenius–Perron eigenvector of the transfer operator (recall, the FP eigenvector is the largest eigenvector of a matrix; in this case it is the eigenvector which has the eigenvalue one: the invariant measure).
There are two classification problems of interest. One, discussed below, fixes (X, \mathcal{B}, \mu) and asks about the isomorphism classes of a transformation map T. The other, discussed in transfer operator, fixes (X, \mathcal{B}) and T, and asks about maps \mu that are measure-like. Measure-like, in that they preserve the Borel properties, but are no longer invariant; they are in general dissipative and so give insights into dissipative systems and the route to equilibrium.
In terms of physics, the measure-preserving dynamical system (X, \mathcal{B}, \mu, T) often describes a physical system that is in equilibrium, for example, thermodynamic equilibrium. One might ask: how did it get that way? Often, the answer is by stirring, mixing, turbulence, thermalization or other such processes. If a transformation map T describes this stirring, mixing, etc., then the system (X, \mathcal{B}, \mu, T) is all that is left after all of the transient modes have decayed away. The transient modes are precisely those eigenvectors of the transfer operator that have eigenvalue less than one; the invariant measure \mu is the one mode that does not decay away. The rate of decay of the transient modes is given by (the logarithm of) their eigenvalues; the eigenvalue one corresponds to infinite half-life.
== Informal example ==
The microcanonical ensemble from physics provides an informal example. Consider, for example, a fluid, gas or plasma in a box of width, length and height w \times l \times h, consisting of N atoms. A single atom in that box might be anywhere, having arbitrary velocity; it would be represented by a single point in w \times l \times h \times \mathbb{R}^3. A given collection of N atoms would then be a single point somewhere in the space (w \times l \times h)^N \times \mathbb{R}^{3N}. The "ensemble" is the collection of all such points, that is, the collection of all such possible boxes (of which there are an uncountably infinite number). This ensemble of all possible boxes is the space X above.
In the case of an ideal gas, the measure \mu is given by the Maxwell–Boltzmann distribution. It is a product measure, in that if p_i(x, y, z, v_x, v_y, v_z)\, d^3x\, d^3p is the probability of atom i having position and velocity x, y, z, v_x, v_y, v_z, then, for N atoms, the probability is the product of N of these. This measure is understood to apply to the ensemble. So, for example, one of the possible boxes in the ensemble has all of the atoms on one side of the box. One can compute the likelihood of this, in the Maxwell–Boltzmann measure. It will be enormously tiny, of order \mathcal{O}(2^{-3N}). Of all possible boxes in the ensemble, this is a ridiculously small fraction.
The only reason that this is an "informal example" is that writing down the transition function T is difficult, and, even if written down, it is hard to perform practical computations with it. Difficulties are compounded if there are interactions between the particles themselves, like a van der Waals interaction or some other interaction suitable for a liquid or a plasma; in such cases, the invariant measure is no longer the Maxwell–Boltzmann distribution. The art of physics is finding reasonable approximations.
This system does exhibit one key idea from the classification of measure-preserving dynamical systems: two ensembles, having different temperatures, are inequivalent. The entropy for a given canonical ensemble depends on its temperature; as physical systems, it is "obvious" that when the temperatures differ, so do the systems. This holds in general: systems with different entropy are not isomorphic.
== Examples ==
Unlike the informal example above, the examples below are sufficiently well-defined and tractable that explicit, formal computations can be performed.
μ could be the normalized angle measure dθ/2π on the unit circle, and T a rotation. See equidistribution theorem;
the Bernoulli scheme;
the interval exchange transformation;
with the definition of an appropriate measure, a subshift of finite type;
the base flow of a random dynamical system;
the flow of a Hamiltonian vector field on the tangent bundle of a closed connected smooth manifold is measure-preserving (using the measure induced on the Borel sets by the symplectic volume form) by Liouville's theorem (Hamiltonian);
for certain maps and Markov processes, the Krylov–Bogolyubov theorem establishes the existence of a suitable measure to form a measure-preserving dynamical system.
== Generalization to groups and monoids ==
The definition of a measure-preserving dynamical system can be generalized to the case in which T is not a single transformation that is iterated to give the dynamics of the system, but instead is a monoid (or even a group, in which case we have the action of a group upon the given probability space) of transformations Ts : X → X parametrized by s ∈ Z (or R, or N ∪ {0}, or [0, +∞)), where each transformation Ts satisfies the same requirements as T above. In particular, the transformations obey the rules:
T_0 = \mathrm{id}_X : X \to X, the identity function on X;
T_s \circ T_t = T_{t+s}, whenever all the terms are well-defined;
T_s^{-1} = T_{-s}, whenever all the terms are well-defined.
The earlier, simpler case fits into this framework by defining T_s = T^s for s ∈ N.
== Homomorphisms ==
The concept of a homomorphism and an isomorphism may be defined.
Consider two dynamical systems (X, \mathcal{A}, \mu, T) and (Y, \mathcal{B}, \nu, S). Then a mapping \varphi : X \to Y is a homomorphism of dynamical systems if it satisfies the following three properties:
The map \varphi is measurable.
For each B \in \mathcal{B}, one has \mu(\varphi^{-1}B) = \nu(B).
For \mu-almost all x \in X, one has \varphi(Tx) = S(\varphi x).
The system (Y, \mathcal{B}, \nu, S) is then called a factor of (X, \mathcal{A}, \mu, T).
The map \varphi is an isomorphism of dynamical systems if, in addition, there exists another mapping \psi : Y \to X that is also a homomorphism, which satisfies

for \mu-almost all x \in X, one has x = \psi(\varphi x);
for \nu-almost all y \in Y, one has y = \varphi(\psi y).
Hence, one may form a category of dynamical systems and their homomorphisms.
== Generic points ==
A point x ∈ X is called a generic point if the orbit of the point is distributed uniformly according to the measure.
== Symbolic names and generators ==
Consider a dynamical system (X, \mathcal{B}, T, \mu), and let Q = \{Q_1, \ldots, Q_k\} be a partition of X into k measurable pairwise disjoint sets. Given a point x \in X, clearly x belongs to only one of the Q_i. Similarly, the iterated point T^n x can belong to only one of the parts as well. The symbolic name of x, with regard to the partition Q, is the sequence of integers \{a_n\} such that

  T^n x \in Q_{a_n}.
The set of symbolic names with respect to a partition is called the symbolic dynamics of the dynamical system. A partition Q is called a generator or generating partition if μ-almost every point x has a unique symbolic name.
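For the doubling map with the two-interval partition, the symbolic name of a point is just its binary expansion, which is why that partition is a generator: almost every point has a unique name. A minimal Python sketch:

```python
def symbolic_name(x, n):
    """First n symbols of x under T x = 2x mod 1 with the partition
    Q_0 = [0, 1/2), Q_1 = [1/2, 1): the binary expansion of x."""
    name = []
    for _ in range(n):
        name.append(0 if x < 0.5 else 1)
        x = (2 * x) % 1.0
    return name

print(symbolic_name(0.625, 4))  # [1, 0, 1, 0]: 0.625 = 0.101 in binary
```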
== Operations on partitions ==
Given a partition Q = \{Q_1, \ldots, Q_k\} and a dynamical system (X, \mathcal{B}, T, \mu), define the T-pullback of Q as

  T^{-1}Q = \{T^{-1}Q_1, \ldots, T^{-1}Q_k\}.
Further, given two partitions Q = \{Q_1, \ldots, Q_k\} and R = \{R_1, \ldots, R_m\}, define their refinement as

  Q \vee R = \{Q_i \cap R_j \mid i = 1, \ldots, k,\ j = 1, \ldots, m,\ \mu(Q_i \cap R_j) > 0\}.
With these two constructs, the refinement of an iterated pullback is defined as

  \bigvee_{n=0}^{N} T^{-n}Q = \left\{Q_{i_0} \cap T^{-1}Q_{i_1} \cap \cdots \cap T^{-N}Q_{i_N} \;\middle|\; i_\ell = 1, \ldots, k,\ \ell = 0, \ldots, N,\ \mu\left(Q_{i_0} \cap T^{-1}Q_{i_1} \cap \cdots \cap T^{-N}Q_{i_N}\right) > 0\right\},

which plays a crucial role in the construction of the measure-theoretic entropy of a dynamical system.
== Measure-theoretic entropy ==
The entropy of a partition \mathcal{Q} is defined as

  H(\mathcal{Q}) = -\sum_{Q \in \mathcal{Q}} \mu(Q) \log \mu(Q).
The measure-theoretic entropy of a dynamical system (X, \mathcal{B}, T, \mu) with respect to a partition Q = \{Q_1, \ldots, Q_k\} is then defined as

  h_\mu(T, \mathcal{Q}) = \lim_{N \to \infty} \frac{1}{N} H\left(\bigvee_{n=0}^{N} T^{-n}\mathcal{Q}\right).
Finally, the Kolmogorov–Sinai metric or measure-theoretic entropy of a dynamical system (X, \mathcal{B}, T, \mu) is defined as

  h_\mu(T) = \sup_{\mathcal{Q}} h_\mu(T, \mathcal{Q}),
where the supremum is taken over all finite measurable partitions. A theorem of Yakov Sinai in 1959 shows that the supremum is actually obtained on partitions that are generators. Thus, for example, the entropy of the Bernoulli process is log 2, since almost every real number has a unique binary expansion. That is, one may partition the unit interval into the intervals [0, 1/2) and [1/2, 1]. Every real number x is either less than 1/2 or not; and likewise so is the fractional part of 2nx.
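The Bernoulli computation can be sketched numerically: for the doubling map with the binary generating partition, the N-fold refinement consists of the 2^{N+1} dyadic intervals of length 2^{-(N+1)}, so H/N tends to log 2. A Python sketch:

```python
import math

# Entropy of the doubling map with the generating partition {[0,1/2), [1/2,1)}.
# The N-fold refined partition consists of 2**(N+1) dyadic intervals of
# length 2**-(N+1), so H = (N+1) log 2 and H/N -> log 2.
def H_refined(N):
    cells = 2 ** (N + 1)
    mu = 1.0 / cells
    return -cells * mu * math.log(mu)

for N in (1, 10, 100):
    print(N, H_refined(N) / N)  # approaches log 2, about 0.693
```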
If the space X is compact and endowed with a topology, or is a metric space, then the topological entropy may also be defined.
If T is ergodic, piecewise expanding, and Markov on X \subset \mathbb{R}, and \mu is absolutely continuous with respect to the Lebesgue measure, then we have the Rokhlin formula (section 4.3 and section 12.3):

  h_\mu(T) = \int \ln |dT/dx| \, \mu(dx).
This allows calculation of entropy of many interval maps, such as the logistic map.
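As a numerical illustration of the Rokhlin formula for the logistic map T x = 4x(1−x), the space integral can be replaced by a Birkhoff time average along a typical orbit (justified by ergodicity); the orbit average of ln|T′(x)| approaches the entropy ln 2. The seed and iteration count below are arbitrary choices:

```python
import math
import random

# Rokhlin formula via a Birkhoff (time) average for the logistic map
# T x = 4x(1-x), with derivative T'(x) = 4 - 8x.
random.seed(1)
x = random.random()
total, n = 0.0, 200_000
for _ in range(n):
    total += math.log(abs(4 - 8 * x))   # ln |T'(x)|
    x = 4 * x * (1 - x)
print(total / n)  # approximately ln 2, about 0.693
```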
Ergodic means that T^{-1}(A) = A implies A has full measure or zero measure. Piecewise expanding and Markov means that there is a partition of X into finitely many open intervals, such that for some \epsilon > 0, |T'| \geq 1 + \epsilon on each open interval. Markov means that for each I_i from those open intervals, either T(I_i) \cap I_i = \emptyset or T(I_i) \cap I_i = I_i.
== Classification and anti-classification theorems ==
One of the primary activities in the study of measure-preserving systems is their classification according to their properties. That is, let (X, \mathcal{B}, \mu) be a measure space, and let U be the set of all measure-preserving systems (X, \mathcal{B}, \mu, T). An isomorphism S \sim T of two transformations S, T defines an equivalence relation \mathcal{R} \subset U \times U. The goal is then to describe the relation \mathcal{R}. A number of classification theorems have been obtained; but quite interestingly, a number of anti-classification theorems have been found as well. The anti-classification theorems state that there are more than a countable number of isomorphism classes, and that a countable amount of information is not sufficient to classify isomorphisms.
The first anti-classification theorem, due to Hjorth, states that if U is endowed with the weak topology, then the set \mathcal{R} is not a Borel set. There are a variety of other anti-classification results. For example, replacing isomorphism with Kakutani equivalence, it can be shown that there are uncountably many non-Kakutani-equivalent ergodic measure-preserving transformations of each entropy type.
These stand in contrast to the classification theorems. These include:
Ergodic measure-preserving transformations with a pure point spectrum have been classified.
Bernoulli shifts are classified by their metric entropy. See Ornstein theory for more.
== See also ==
Krylov–Bogolyubov theorem on the existence of invariant measures
Poincaré recurrence theorem – Certain dynamical systems will eventually return to (or approximate) their initial state
== References ==
== Further reading ==
Michael S. Keane, "Ergodic theory and subshifts of finite type", (1991), appearing as Chapter 2 in Ergodic Theory, Symbolic Dynamics and Hyperbolic Spaces, Tim Bedford, Michael Keane and Caroline Series, Eds. Oxford University Press, Oxford (1991). ISBN 0-19-853390-X (Provides expository introduction, with exercises, and extensive references.)
Lai-Sang Young, "Entropy in Dynamical Systems" (pdf; ps), appearing as Chapter 16 in Entropy, Andreas Greven, Gerhard Keller, and Gerald Warnecke, eds. Princeton University Press, Princeton, NJ (2003). ISBN 0-691-11338-6
T. Schürmann and I. Hoffmann, The entropy of strange billiards inside n-simplexes. J. Phys. A 28(17), page 5033, 1995. PDF-Document (gives a more involved example of measure-preserving dynamical system.) | Wikipedia/Kolmogorov–Sinai_entropy |
In mathematics, symbolic dynamics is the study of dynamical systems defined on a discrete space consisting of infinite sequences of abstract symbols. The evolution of the dynamical system is defined as a simple shift of the sequence.
Because of their explicit, discrete nature, such systems are often relatively easy to characterize and understand. They form a key tool for studying topological or smooth dynamical systems, because in many important cases it is possible to reduce the dynamics of a more general dynamical system to a symbolic system. To do so, a Markov partition is used to provide a finite cover for the smooth system; each set of the cover is associated with a single symbol, and the sequences of symbols result as a trajectory of the system moves from one covering set to another.
== History ==
The idea goes back to Jacques Hadamard's 1898 paper on the geodesics on surfaces of negative curvature. It was applied by Marston Morse in 1921 to the construction of a nonperiodic recurrent geodesic. Related work was done by Emil Artin in 1924 (for the system now called Artin billiard), Pekka Myrberg, Paul Koebe, Jakob Nielsen, G. A. Hedlund.
The first formal treatment was developed by Morse and Hedlund in their 1938 paper. George Birkhoff, Norman Levinson and the pair Mary Cartwright and J. E. Littlewood have applied similar methods to qualitative analysis of nonautonomous second order differential equations.
Claude Shannon used symbolic sequences and shifts of finite type in his 1948 paper A mathematical theory of communication that gave birth to information theory.
During the late 1960s the method of symbolic dynamics was developed to hyperbolic toral automorphisms by Roy Adler and Benjamin Weiss, and to Anosov diffeomorphisms by Yakov Sinai who used the symbolic model to construct Gibbs measures. In the early 1970s the theory was extended to Anosov flows by Marina Ratner, and to Axiom A diffeomorphisms and flows by Rufus Bowen.
A spectacular application of the methods of symbolic dynamics is Sharkovskii's theorem about periodic orbits of a continuous map of an interval into itself (1964).
== Examples ==
Consider the set of two-sided infinite sequences on two symbols, 0 and 1. A typical element in this set looks like: (..., 0, 1, 0, 0, 1, 0, 1, ... )
There will be exactly two fixed points under the shift map: the sequence of all zeroes, and the sequence of all ones. A periodic sequence will have a periodic orbit. For instance, the sequence (..., 0, 1, 0, 1, 0, 1, 0, 1, ...) will have period two.
More complex concepts such as heteroclinic orbits and homoclinic orbits also have simple descriptions in this system. For example, any sequence that has only a finite number of ones will have a homoclinic orbit, tending to the sequence of all zeros in forward and backward iterations.
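These objects are easy to experiment with: a periodic two-sided sequence can be represented by its repeating block, with the shift acting as a rotation of the block. A minimal Python sketch:

```python
def shift(block):
    """Shift map on a periodic sequence represented by its repeating block."""
    return block[1:] + block[:1]

# the two fixed points: the all-zeros and all-ones sequences
print(shift("0"), shift("1"))       # 0 1

# the sequence ...010101... has period two under the shift
s = "01"
print(shift(s), shift(shift(s)))    # 10 01
```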
=== Itinerary ===
The itinerary of a point with respect to a partition is the resulting sequence of symbols. It describes the dynamics of the point.
== Applications ==
Symbolic dynamics originated as a method to study general dynamical systems; now its techniques and ideas have found significant applications in data storage and transmission, linear algebra, the motions of the planets and many other areas. The distinct feature in symbolic dynamics is that time is measured in discrete intervals. So at each time interval the system is in a particular state. Each state is associated with a symbol and the evolution of the system is described by an infinite sequence of symbols—represented effectively as strings. If the system states are not inherently discrete, then the state vector must be discretized, so as to get a coarse-grained description of the system.
=== Recent developments ===
Recent work has generalized symbolic dynamics to layered dynamical systems, introducing the concept of symbolic conditional entropy, thus expanding symbolic dynamics to more abstract informational structures and deeper symbolic architectures.
== See also ==
Measure-preserving dynamical system
Combinatorics and dynamical systems
Shift space
Shift of finite type
Complex dynamics
Arithmetic dynamics
== References ==
== Further reading ==
Hao, Bailin (1989). Elementary Symbolic Dynamics and Chaos in Dissipative Systems. World Scientific. ISBN 9971-5-0682-3. Archived from the original on 2009-12-05. Retrieved 2009-12-02.
Bruce Kitchens, Symbolic dynamics. One-sided, two-sided and countable state Markov shifts. Universitext, Springer-Verlag, Berlin, 1998. x+252 pp. ISBN 3-540-62738-3 MR1484730
Lind, Douglas; Marcus, Brian (1995). An introduction to symbolic dynamics and coding. Cambridge University Press. ISBN 0-521-55124-2. MR 1369092. Zbl 1106.37301. Archived from the original on 2016-06-22. Retrieved 2005-06-03.
G. A. Hedlund, Endomorphisms and automorphisms of the shift dynamical system. Math. Systems Theory, Vol. 3, No. 4 (1969) 320–375
Teschl, Gerald (2012). Ordinary Differential Equations and Dynamical Systems. Providence: American Mathematical Society. ISBN 978-0-8218-8328-0.
"Symbolic dynamics". Scholarpedia.
== External links ==
ChaosBook.org Chapter "Transition graphs"
A simulation of the three-bumper billiard system and its symbolic dynamics, from Chaos V: Duhem's Bull | Wikipedia/Symbolic_dynamics |
In mathematics, the topological entropy of a topological dynamical system is a nonnegative extended real number that is a measure of the complexity of the system. Topological entropy was first introduced in 1965 by Adler, Konheim and McAndrew. Their definition was modelled after the definition of the Kolmogorov–Sinai, or metric entropy. Later, Dinaburg and Rufus Bowen gave a different, weaker definition reminiscent of the Hausdorff dimension. The second definition clarified the meaning of the topological entropy: for a system given by an iterated function, the topological entropy represents the exponential growth rate of the number of distinguishable orbits of the iterates. An important variational principle relates the notions of topological and measure-theoretic entropy.
== Definition ==
A topological dynamical system consists of a Hausdorff topological space X (usually assumed to be compact) and a continuous self-map f : X → X. Its topological entropy is a nonnegative extended real number that can be defined in various ways, which are known to be equivalent.
=== Definition of Adler, Konheim, and McAndrew ===
Let X be a compact Hausdorff topological space. For any finite open cover C of X, let H(C) be the logarithm (usually to base 2) of the smallest number of elements of C that cover X. For two covers C and D, let {\displaystyle C\vee D} be their (minimal) common refinement, which consists of all the non-empty intersections of a set from C with a set from D, and similarly for multiple covers.
For any continuous map f: X → X, the following limit exists:
{\displaystyle H(f,C)=\lim _{n\to \infty }{\frac {1}{n}}H(C\vee f^{-1}C\vee \ldots \vee f^{-n+1}C).}
Then the topological entropy of f, denoted h(f), is defined to be the supremum of H(f,C) over all possible finite covers C of X.
==== Interpretation ====
The parts of C may be viewed as symbols that (partially) describe the position of a point x in X: all points x ∈ Ci are assigned the symbol Ci . Imagine that the position of x is (imperfectly) measured by a certain device and that each part of C corresponds to one possible outcome of the measurement.
{\displaystyle H(C\vee f^{-1}C\vee \ldots \vee f^{-n+1}C)}
then represents the logarithm of the minimal number of "words" of length n needed to encode the points of X according to the behavior of their first n − 1 iterates under f, or, put differently, the total number of "scenarios" of the behavior of these iterates, as "seen" by the partition C. Thus the topological entropy is the average (per iteration) amount of information needed to describe long iterations of the map f.
=== Definition of Bowen and Dinaburg ===
This definition uses a metric on X (actually, a uniform structure would suffice). This is a narrower definition than that of Adler, Konheim, and McAndrew, as it requires the additional metric structure on the topological space (but is independent of the choice of metrics generating the given topology). However, in practice, the Bowen-Dinaburg topological entropy is usually much easier to calculate.
Let (X, d) be a compact metric space and f: X → X be a continuous map. For each natural number n, a new metric dn is defined on X by the formula
{\displaystyle d_{n}(x,y)=\max\{d(f^{i}(x),f^{i}(y)):0\leq i<n\}.}
Given any ε > 0 and n ≥ 1, two points of X are ε-close with respect to this metric if their first n iterates are ε-close. This metric allows one to distinguish in a neighborhood of an orbit the points that move away from each other during the iteration from the points that travel together. A subset E of X is said to be (n, ε)-separated if each pair of distinct points of E is at least ε apart in the metric dn.
Denote by N(n, ε) the maximum cardinality of an (n, ε)-separated set. The topological entropy of the map f is defined by
{\displaystyle h(f)=\lim _{\epsilon \to 0}\left(\limsup _{n\to \infty }{\frac {1}{n}}\log N(n,\epsilon )\right).}
==== Interpretation ====
Since X is compact, N(n, ε) is finite and represents the number of distinguishable orbit segments of length n, assuming that we cannot distinguish points within ε of one another. A straightforward argument shows that the limit defining h(f) always exists in the extended real line (but could be infinite). This limit may be interpreted as the measure of the average exponential growth of the number of distinguishable orbit segments. In this sense, it measures complexity of the topological dynamical system (X, f). Rufus Bowen extended this definition of topological entropy in a way which permits X to be non-compact under the assumption that the map f is uniformly continuous.
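The counting of (n, ε)-separated sets can be illustrated numerically. The following Python sketch (an illustration, not part of the formal theory) estimates the entropy of the doubling map x ↦ 2x mod 1 on the circle, whose topological entropy is log 2; the grid size, the value of ε, and the greedy construction of a maximal separated set are all choices of this sketch:

```python
import math

# Estimate the Bowen-Dinaburg entropy of the doubling map x -> 2x (mod 1).
# We grid the circle, compute the metric d_n, and grow a maximal
# (n, eps)-separated set greedily; (1/n) log of its size approximates h(f).

def circle_dist(x, y):
    d = abs(x - y) % 1.0
    return min(d, 1.0 - d)

def d_n(x, y, n):
    """Bowen metric: largest distance between the first n iterates."""
    return max(circle_dist((2 ** i * x) % 1.0, (2 ** i * y) % 1.0) for i in range(n))

def entropy_estimate(n, eps, grid=1024):
    points = [i / grid for i in range(grid)]
    separated = []
    for p in points:  # greedy maximal (n, eps)-separated set
        if all(d_n(p, q, n) >= eps for q in separated):
            separated.append(p)
    return math.log(len(separated)) / n
```

For fixed ε the estimate overshoots log 2 by roughly (log 1/ε)/n, and so decreases toward log 2 as n grows, as the limit in the definition suggests.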
== Properties ==
Topological entropy is an invariant of topological dynamical systems, meaning that it is preserved by topological conjugacy.
Let f be an expansive homeomorphism of a compact metric space X and let C be a topological generator. Then the topological entropy of f relative to C is equal to the topological entropy of f, i.e.
{\displaystyle h(f)=H(f,C).}
Let f : X → X be a continuous transformation of a compact metric space X, let hμ(f) be the measure-theoretic entropy of f with respect to μ, and let M(X, f) be the set of all f-invariant Borel probability measures on X. Then the variational principle for entropy states that
{\displaystyle h(f)=\sup _{\mu \in M(X,f)}h_{\mu }(f).}
In general the supremum of the quantities hμ over the set M(X, f) is not attained, but if additionally the entropy map μ ↦ hμ(f), from M(X, f) to R, is upper semicontinuous, then a measure of maximal entropy (meaning a measure μ in M(X, f) with hμ(f) = h(f)) exists.
If f has a unique measure of maximal entropy μ, then f is ergodic with respect to μ.
== Examples ==
Let σ : Σk → Σk, given by xn ↦ xn−1, denote the full two-sided k-shift on the symbols {1, …, k}. Let C = {[1], …, [k]} denote the partition of Σk into cylinders of length 1. Then
{\displaystyle \bigvee _{j=0}^{n-1}\sigma ^{-j}(C)}
is a partition of Σk for all n ∈ N, and the number of its sets is k^n. The partitions are open covers and C is a topological generator. Hence
{\displaystyle h(\sigma )=H(\sigma ,C)=\lim _{n\rightarrow \infty }{\frac {1}{n}}\log k^{n}=\log k.}
The measure-theoretic entropy of the Bernoulli (1/k, …, 1/k)-measure is also log k. Hence it is a measure of maximal entropy. It can further be shown that no other measures of maximal entropy exist.
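The cylinder count above can be checked mechanically; the following Python sketch (illustrative only) enumerates the n-cylinders of the full k-shift and compares (1/n) log of the count with log k:

```python
import itertools
import math

# Illustration only: in the full k-shift every word over {1, ..., k} is
# admissible, so the number of n-cylinders is k**n, and the quantity
# (1/n) * log(k**n) equals log k exactly, for every n.

def count_words(k, n):
    """Count the admissible words of length n in the full k-shift."""
    return sum(1 for _ in itertools.product(range(1, k + 1), repeat=n))

k, n = 3, 5
assert math.isclose(math.log(count_words(k, n)) / n, math.log(k))
```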
Let A be an irreducible k × k matrix with entries in {0, 1} and let σ : ΣA → ΣA be the corresponding subshift of finite type. Then h(σ) = log λ, where λ is the largest positive eigenvalue of A.
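As a concrete illustration (the choice of the golden mean shift, which forbids the word 11, is an assumption of this sketch), the following Python code counts admissible words using powers of the transition matrix and compares the growth rate with the logarithm of the largest eigenvalue, here the golden ratio:

```python
import math

# Golden mean shift: transition matrix A = [[1, 1], [1, 0]] forbids 11.
# The number of admissible words of length n is the sum of the entries of
# A**(n-1), and (1/n) log of that count approaches log(lambda) where
# lambda is the largest eigenvalue of A, the golden ratio.

def mat_mult(a, b):
    return [[sum(a[i][t] * b[t][j] for t in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def word_count(A, n):
    """Sum of the entries of A^(n-1): admissible words of length n."""
    power = [[1, 0], [0, 1]]  # 2x2 identity
    for _ in range(n - 1):
        power = mat_mult(power, A)
    return sum(sum(row) for row in power)

A = [[1, 1], [1, 0]]
phi = (1 + math.sqrt(5)) / 2  # largest eigenvalue of A
growth = math.log(word_count(A, 30)) / 30  # close to log(phi)
```

The word counts here are Fibonacci numbers (2, 3, 5, 8, …), whose growth rate is the golden ratio, so h(σ) = log φ ≈ 0.4812 for this subshift.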
== Notes ==
== See also ==
Milnor–Thurston kneading theory
For the measure of correlations in systems with topological order see Topological entanglement entropy
Mean dimension
=== Recent developments ===
Recent studies have extended topological entropy to symbolic conditional entropy in layered dynamical systems, generalizing classical entropy measures to more abstract symbolic and informational structures.
== References ==
Adler, R.L.; Konheim, Allan G.; McAndrew, M.H. (1965). "Topological entropy". Transactions of the American Mathematical Society. 114 (2): 309–319. doi:10.2307/1994177. JSTOR 1994177. Zbl 0127.13102.
Dmitri Anosov (2001) [1994], "Topological entropy", Encyclopedia of Mathematics, EMS Press
Roy Adler, Tomasz Downarowicz, Michał Misiurewicz, Topological entropy at Scholarpedia
Walters, Peter (1982). An introduction to ergodic theory. Graduate Texts in Mathematics. Vol. 79. Springer-Verlag. ISBN 0-387-95152-0. Zbl 0475.28009.
== External links ==
http://www.scholarpedia.org/article/Topological_entropy
This article incorporates material from Topological Entropy on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License. | Wikipedia/Topological_entropy |
In mathematics, a continuous function is a function such that a small variation of the argument induces a small variation of the value of the function. This implies there are no abrupt changes in value, known as discontinuities. More precisely, a function is continuous if arbitrarily small changes in its value can be assured by restricting to sufficiently small changes of its argument. A discontinuous function is a function that is not continuous. Until the 19th century, mathematicians largely relied on intuitive notions of continuity and considered only continuous functions. The epsilon–delta definition of a limit was introduced to formalize the definition of continuity.
Continuity is one of the core concepts of calculus and mathematical analysis, where arguments and values of functions are real and complex numbers. The concept has been generalized to functions between metric spaces and between topological spaces. The latter are the most general continuous functions, and their definition is the basis of topology.
A stronger form of continuity is uniform continuity. In order theory, especially in domain theory, a related concept of continuity is Scott continuity.
As an example, the function H(t) denoting the height of a growing flower at time t would be considered continuous. In contrast, the function M(t) denoting the amount of money in a bank account at time t would be considered discontinuous since it "jumps" at each point in time when money is deposited or withdrawn.
== History ==
A form of the epsilon–delta definition of continuity was first given by Bernard Bolzano in 1817. Augustin-Louis Cauchy defined continuity of y = f(x) as follows: an infinitely small increment α of the independent variable x always produces an infinitely small change f(x + α) − f(x) of the dependent variable y (see e.g. Cours d'Analyse, p. 34). Cauchy defined infinitely small quantities in terms of variable quantities, and his definition of continuity closely parallels the infinitesimal definition used today (see microcontinuity). The formal definition and the distinction between pointwise continuity and uniform continuity were first given by Bolzano in the 1830s, but the work was not published until the 1930s. Like Bolzano, Karl Weierstrass denied continuity of a function at a point c unless it was defined at and on both sides of c, but Édouard Goursat allowed the function to be defined only at and on one side of c, and Camille Jordan allowed it even if the function was defined only at c. All three of those nonequivalent definitions of pointwise continuity are still in use. Eduard Heine provided the first published definition of uniform continuity in 1872, but based these ideas on lectures given by Peter Gustav Lejeune Dirichlet in 1854.
== Real functions ==
=== Definition ===
A real function that is a function from real numbers to real numbers can be represented by a graph in the Cartesian plane; such a function is continuous if, roughly speaking, the graph is a single unbroken curve whose domain is the entire real line. A more mathematically rigorous definition is given below.
Continuity of real functions is usually defined in terms of limits. A function f with variable x is continuous at the real number c if the limit of f(x), as x tends to c, is equal to f(c).
There are several different definitions of the (global) continuity of a function, which depend on the nature of its domain.
A function is continuous on an open interval if the interval is contained in the function's domain and the function is continuous at every point of the interval. A function that is continuous on the interval (−∞, +∞) (the whole real line) is often called simply a continuous function; one also says that such a function is continuous everywhere. For example, all polynomial functions are continuous everywhere.
A function is continuous on a semi-open or a closed interval if the interval is contained in the domain of the function, the function is continuous at every interior point of the interval, and the value of the function at each endpoint that belongs to the interval is the limit of the values of the function as the variable tends to the endpoint from the interior of the interval. For example, the function f(x) = √x is continuous on its whole domain, which is the closed interval [0, +∞).
Many commonly encountered functions are partial functions that have a domain formed by all real numbers, except some isolated points. Examples include the reciprocal function x ↦ 1/x and the tangent function x ↦ tan x. When they are continuous on their domain, one says, in some contexts, that they are continuous, although they are not continuous everywhere. In other contexts, mainly when one is interested in their behavior near the exceptional points, one says they are discontinuous.
A partial function is discontinuous at a point if the point belongs to the topological closure of its domain, and either the point does not belong to the domain of the function or the function is not continuous at the point. For example, the functions x ↦ 1/x and x ↦ sin(1/x) are discontinuous at 0, and remain discontinuous whichever value is chosen for defining them at 0. A point where a function is discontinuous is called a discontinuity.
Using mathematical notation, several ways exist to define continuous functions in the three senses mentioned above.
Let f : D → R be a function whose domain D is contained in the set R of real numbers.
Some (but not all) possibilities for D are:
D is the whole real line; that is, D = R
D is a closed interval of the form D = [a, b] = {x ∈ R ∣ a ≤ x ≤ b}, where a and b are real numbers
D is an open interval of the form D = (a, b) = {x ∈ R ∣ a < x < b}, where a and b are real numbers
In the case of an open interval, a and b do not belong to D, and the values f(a) and f(b) are not defined; if they are defined, they do not matter for continuity on D.
==== Definition in terms of limits of functions ====
The function f is continuous at some point c of its domain if the limit of f(x), as x approaches c through the domain of f, exists and is equal to f(c). In mathematical notation, this is written as
{\displaystyle \lim _{x\to c}{f(x)}=f(c).}
In detail this means three conditions: first, f has to be defined at c (guaranteed by the requirement that c is in the domain of f). Second, the limit of that equation has to exist. Third, the value of this limit must equal f(c). (Here, we have assumed that the domain of f does not have any isolated points.)
==== Definition in terms of neighborhoods ====
A neighborhood of a point c is a set that contains, at least, all points within some fixed distance of c. Intuitively, a function is continuous at a point c if the range of f over the neighborhood of c shrinks to a single point f(c) as the width of the neighborhood around c shrinks to zero. More precisely, a function f is continuous at a point c of its domain if, for any neighborhood N1(f(c)), there is a neighborhood N2(c) in its domain such that f(x) ∈ N1(f(c)) whenever x ∈ N2(c).
As neighborhoods are defined in any topological space, this definition of a continuous function applies not only for real functions but also when the domain and the codomain are topological spaces and is thus the most general definition. It follows that a function is automatically continuous at every isolated point of its domain. For example, every real-valued function on the integers is continuous.
==== Definition in terms of limits of sequences ====
One can instead require that for any sequence (xn)n∈N of points in the domain which converges to c, the corresponding sequence (f(xn))n∈N converges to f(c). In mathematical notation,
{\displaystyle \forall (x_{n})_{n\in \mathbb {N} }\subset D:\lim _{n\to \infty }x_{n}=c\Rightarrow \lim _{n\to \infty }f(x_{n})=f(c)\,.}
==== Weierstrass and Jordan definitions (epsilon–delta) of continuous functions ====
Explicitly including the definition of the limit of a function, we obtain a self-contained definition: Given a function f : D → R as above and an element x0 of the domain D, f is said to be continuous at the point x0 when the following holds: For any positive real number ε > 0, however small, there exists some positive real number δ > 0 such that for all x in the domain of f with x0 − δ < x < x0 + δ, the value of f(x) satisfies
{\displaystyle f\left(x_{0}\right)-\varepsilon <f(x)<f(x_{0})+\varepsilon .}
Alternatively written, continuity of f : D → R at x0 ∈ D means that for every ε > 0, there exists a δ > 0 such that for all x ∈ D:
{\displaystyle \left|x-x_{0}\right|<\delta ~~{\text{ implies }}~~|f(x)-f(x_{0})|<\varepsilon .}
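The ε–δ condition can be explored numerically. The following Python sketch (an illustration, not a proof: it only samples finitely many points, and the function f(x) = x², the point x0 = 1, and the halving search are assumptions of the sketch) looks for a δ that works for a given ε:

```python
# Given eps, search for a delta that works for f at x0 by halving a
# candidate delta until all sampled points of (x0 - delta, x0 + delta)
# land within eps of f(x0). Sampling cannot prove continuity, but it
# illustrates how delta shrinks as eps does.

def find_delta(f, x0, eps, samples=1000):
    delta = 1.0
    while True:
        xs = [x0 - delta + 2 * delta * i / samples for i in range(samples + 1)]
        if all(abs(f(x) - f(x0)) < eps for x in xs):
            return delta
        delta /= 2  # shrink the candidate neighborhood and try again

delta = find_delta(lambda x: x * x, 1.0, 0.1)
```

For f(x) = x² at x0 = 1 any δ with 2δ + δ² < ε works, and the halving search stops at the first power of 1/2 below that threshold.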
More intuitively, we can say that if we want to get all the f(x) values to stay in some small neighborhood around f(x0), we need to choose a small enough neighborhood for the x values around x0. If we can do that no matter how small the f(x0) neighborhood is, then f is continuous at x0. In modern terms, this is generalized by the definition of continuity of a function with respect to a basis for the topology, here the metric topology.
Weierstrass had required that the interval x0 − δ < x < x0 + δ be entirely within the domain D, but Jordan removed that restriction.
==== Definition in terms of control of the remainder ====
In proofs and numerical analysis, we often need to know how fast limits are converging, or in other words, control of the remainder. We can formalize this to a definition of continuity.
A function C : [0, ∞) → [0, ∞] is called a control function if
C is non-decreasing
{\displaystyle \inf _{\delta >0}C(\delta )=0}
A function f : D → R is C-continuous at x0 if there exists a neighbourhood N(x0) such that
{\displaystyle |f(x)-f(x_{0})|\leq C\left(\left|x-x_{0}\right|\right){\text{ for all }}x\in D\cap N(x_{0})}
A function is continuous in x0 if it is C-continuous for some control function C.
This approach leads naturally to refining the notion of continuity by restricting the set of admissible control functions. For a given set of control functions 𝒞, a function is 𝒞-continuous if it is C-continuous for some C ∈ 𝒞. For example, the Lipschitz continuous functions, the Hölder continuous functions of exponent α, and the uniformly continuous functions below are defined by the sets of control functions
{\displaystyle {\mathcal {C}}_{\mathrm {Lipschitz} }=\{C:C(\delta )=K|\delta |,\ K>0\}}
{\displaystyle {\mathcal {C}}_{{\text{Hölder}}-\alpha }=\{C:C(\delta )=K|\delta |^{\alpha },\ K>0\}}
{\displaystyle {\mathcal {C}}_{\text{uniform cont.}}=\{C:C(0)=0\}}
respectively.
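As a concrete illustration of a control function (the choice of the square root function and of K = 1, α = 1/2 is an assumption of this sketch), the following Python code checks on sampled pairs that C(δ) = δ^(1/2) controls √x:

```python
import math

# Sanity check, illustrative only: sqrt is Hoelder continuous of exponent
# 1/2 with K = 1 on [0, 1], i.e. C(d) = d ** 0.5 is a control function
# for it: |sqrt(x) - sqrt(y)| <= |x - y| ** 0.5 for all sampled pairs.

def holder_control(delta, K=1.0, alpha=0.5):
    return K * delta ** alpha

pairs = [(i / 100, j / 100) for i in range(101) for j in range(101)]
ok = all(abs(math.sqrt(x) - math.sqrt(y)) <= holder_control(abs(x - y))
         for x, y in pairs)
```

The bound follows from (√x − √y)² ≤ |x − y|; sampling merely confirms it on a grid rather than proving it.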
==== Definition using oscillation ====
Continuity can also be defined in terms of oscillation: a function f is continuous at a point x0 if and only if its oscillation at that point is zero; in symbols, {\displaystyle \omega _{f}(x_{0})=0.}
A benefit of this definition is that it quantifies discontinuity: the oscillation gives how much the function is discontinuous at a point.
This definition is helpful in descriptive set theory to study the set of discontinuities and continuous points – the continuous points are the intersection of the sets where the oscillation is less than ε (hence a Gδ set) – and gives a rapid proof of one direction of the Lebesgue integrability condition.
The oscillation is equivalent to the ε–δ definition by a simple re-arrangement and by using a limit (lim sup, lim inf) to define oscillation: if (at a given point) for a given ε0 there is no δ that satisfies the ε–δ definition, then the oscillation is at least ε0, and conversely if for every ε there is a desired δ, the oscillation is 0. The oscillation definition can be naturally generalized to maps from a topological space to a metric space.
==== Definition using the hyperreals ====
Cauchy defined the continuity of a function in the following intuitive terms: an infinitesimal change in the independent variable corresponds to an infinitesimal change of the dependent variable (see Cours d'analyse, page 34). Non-standard analysis is a way of making this mathematically rigorous. The real line is augmented by adding infinite and infinitesimal numbers to form the hyperreal numbers. In nonstandard analysis, continuity can be defined as follows: a real-valued function f is continuous at x if its natural extension to the hyperreals has the property that, for every infinitesimal dx, the difference f(x + dx) − f(x) is infinitesimal (see microcontinuity). In other words, an infinitesimal increment of the independent variable always produces an infinitesimal change of the dependent variable, giving a modern expression to Augustin-Louis Cauchy's definition of continuity.
=== Rules for continuity ===
Proving the continuity of a function by a direct application of the definition is generally not an easy task. Fortunately, in practice, most functions are built from simpler functions, and their continuity can be deduced immediately from the way they are defined, by applying the following rules:
Every constant function is continuous
The identity function f(x) = x is continuous
Addition and multiplication: If the functions f and g are continuous on their respective domains Df and Dg, then their sum f + g and their product f ⋅ g are continuous on the intersection Df ∩ Dg, where f + g and fg are defined by (f + g)(x) = f(x) + g(x) and (f ⋅ g)(x) = f(x) ⋅ g(x).
Reciprocal: If the function f is continuous on the domain Df, then its reciprocal 1/f, defined by (1/f)(x) = 1/f(x), is continuous on the domain Df ∖ f⁻¹(0), that is, the domain Df from which the points x such that f(x) = 0 are removed.
Function composition: If the functions f and g are continuous on their respective domains Df and Dg, then the composition g ∘ f, defined by (g ∘ f)(x) = g(f(x)), is continuous on Df ∩ f⁻¹(Dg), that is, the part of Df that is mapped by f inside Dg.
The sine and cosine functions (sin x and cos x) are continuous everywhere.
The exponential function e^x is continuous everywhere.
The natural logarithm ln x is continuous on the domain formed by all positive real numbers {x ∣ x > 0}.
These rules imply that every polynomial function is continuous everywhere and that a rational function is continuous at every point where it is defined, provided that the numerator and the denominator have no common zeros. More generally, the quotient of two continuous functions is continuous outside the zeros of the denominator.
An example of a function for which the above rules are not sufficient is the sinc function, which is defined by sinc(0) = 1 and sinc(x) = (sin x)/x for x ≠ 0. The above rules show immediately that the function is continuous for x ≠ 0, but, for proving the continuity at 0, one has to prove
{\displaystyle \lim _{x\to 0}{\frac {\sin x}{x}}=1.}
As this is true, one gets that the sinc function is a continuous function on all real numbers.
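The limit can be checked numerically; the following Python sketch (illustrative, not a proof) evaluates sinc at points approaching 0:

```python
import math

# Numerical illustration that sin(x)/x tends to 1 as x tends to 0,
# which is exactly what continuity of sinc at 0 requires.

def sinc(x):
    return math.sin(x) / x if x != 0 else 1.0

values = [sinc(10.0 ** -k) for k in range(1, 8)]  # sample points approaching 0
```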
=== Examples of discontinuous functions ===
An example of a discontinuous function is the Heaviside step function H, defined by
{\displaystyle H(x)={\begin{cases}1&{\text{ if }}x\geq 0\\0&{\text{ if }}x<0\end{cases}}}
Pick for instance ε = 1/2. Then there is no δ-neighborhood around x = 0, i.e. no open interval (−δ, δ) with δ > 0, that will force all the H(x) values to be within the ε-neighborhood of H(0), i.e. within (1/2, 3/2). Intuitively, we can think of this type of discontinuity as a sudden jump in function values.
Similarly, the signum or sign function
{\displaystyle \operatorname {sgn}(x)={\begin{cases}\;\;\ 1&{\text{ if }}x>0\\\;\;\ 0&{\text{ if }}x=0\\-1&{\text{ if }}x<0\end{cases}}}
is discontinuous at x = 0 but continuous everywhere else. Yet another example: the function
{\displaystyle f(x)={\begin{cases}\sin \left(x^{-2}\right)&{\text{ if }}x\neq 0\\0&{\text{ if }}x=0\end{cases}}}
is continuous everywhere apart from x = 0.
Besides plausible continuities and discontinuities like the above, there are also functions with behavior often described as pathological; for example, Thomae's function,
{\displaystyle f(x)={\begin{cases}1&{\text{ if }}x=0\\{\frac {1}{q}}&{\text{ if }}x={\frac {p}{q}}{\text{ (in lowest terms) is a rational number}}\\0&{\text{ if }}x{\text{ is irrational}},\end{cases}}}
is continuous at all irrational numbers and discontinuous at all rational numbers. In a similar vein, Dirichlet's function, the indicator function for the set of rational numbers,
{\displaystyle D(x)={\begin{cases}0&{\text{ if }}x{\text{ is irrational }}(\in \mathbb {R} \setminus \mathbb {Q} )\\1&{\text{ if }}x{\text{ is rational }}(\in \mathbb {Q} )\end{cases}}}
is nowhere continuous.
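Thomae's function can be evaluated exactly on rationals with Python's Fraction type (a sketch restricted to the rational branch, since floating point cannot represent irrational inputs):

```python
from fractions import Fraction

# Thomae's function on exact rationals. Fraction automatically reduces
# p/q to lowest terms, so x.denominator is the q of the definition.

def thomae(x: Fraction) -> Fraction:
    if x == 0:
        return Fraction(1)
    return Fraction(1, x.denominator)  # x = p/q in lowest terms -> 1/q

# Rationals close to a given irrational point necessarily have large
# denominators, so their values 1/q are small, which is why the function
# is continuous precisely at the irrationals.
```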
=== Properties ===
==== A useful lemma ====
Let
f
(
x
)
{\displaystyle f(x)}
be a function that is continuous at a point
x
0
,
{\displaystyle x_{0},}
and
y
0
{\displaystyle y_{0}}
be a value such
f
(
x
0
)
≠
y
0
.
{\displaystyle f\left(x_{0}\right)\neq y_{0}.}
Then
f
(
x
)
≠
y
0
{\displaystyle f(x)\neq y_{0}}
throughout some neighbourhood of
x
0
.
{\displaystyle x_{0}.}
Proof: By the definition of continuity, take
ε
=
|
y
0
−
f
(
x
0
)
|
2
>
0
{\displaystyle \varepsilon ={\frac {|y_{0}-f(x_{0})|}{2}}>0}
, then there exists
δ
>
0
{\displaystyle \delta >0}
such that
|
f
(
x
)
−
f
(
x
0
)
|
<
|
y
0
−
f
(
x
0
)
|
2
whenever
|
x
−
x
0
|
<
δ
{\displaystyle \left|f(x)-f(x_{0})\right|<{\frac {\left|y_{0}-f(x_{0})\right|}{2}}\quad {\text{ whenever }}\quad |x-x_{0}|<\delta }
Suppose there is a point in the neighbourhood {\displaystyle |x-x_{0}|<\delta } for which {\displaystyle f(x)=y_{0};} then we have the contradiction {\displaystyle \left|f(x_{0})-y_{0}\right|<{\frac {\left|f(x_{0})-y_{0}\right|}{2}}.}
==== Intermediate value theorem ====
The intermediate value theorem is an existence theorem, based on the real number property of completeness, and states:
If the real-valued function f is continuous on the closed interval {\displaystyle [a,b],} and k is some number between {\displaystyle f(a)} and {\displaystyle f(b),} then there is some number {\displaystyle c\in [a,b],} such that {\displaystyle f(c)=k.}
For example, if a child grows from 1 m to 1.5 m between the ages of two and six years, then, at some time between two and six years of age, the child's height must have been 1.25 m.
As a consequence, if f is continuous on {\displaystyle [a,b]} and {\displaystyle f(a)} and {\displaystyle f(b)} differ in sign, then, at some point {\displaystyle c\in [a,b],} {\displaystyle f(c)} must equal zero.
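This sign-change consequence is exactly what the bisection method exploits: halving the bracket preserves the sign change, so the midpoints converge to a zero whose existence the theorem guarantees. A minimal Python sketch (the function name `bisect` is ours, chosen for illustration):

```python
def bisect(f, a, b, tol=1e-12):
    """Locate a zero of a continuous f with f(a), f(b) of opposite sign.

    The intermediate value theorem guarantees a zero in [a, b];
    each halving step keeps the sign change, so the bracket shrinks
    around such a zero.
    """
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "f(a) and f(b) must differ in sign"
    while b - a > tol:
        m = (a + b) / 2
        fm = f(m)
        if fm == 0:
            return m
        if fa * fm < 0:
            b, fb = m, fm
        else:
            a, fa = m, fm
    return (a + b) / 2

# sqrt(2) as the zero of x^2 - 2 on [1, 2]
root = bisect(lambda x: x * x - 2, 1.0, 2.0)
print(root)  # approximately 1.41421356...
```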
==== Extreme value theorem ====
The extreme value theorem states that if a function f is defined on a closed interval {\displaystyle [a,b]} (or any closed and bounded set) and is continuous there, then the function attains its maximum, i.e. there exists {\displaystyle c\in [a,b]} with {\displaystyle f(c)\geq f(x)} for all {\displaystyle x\in [a,b].}
The same is true of the minimum of f. These statements are not, in general, true if the function is defined on an open interval {\displaystyle (a,b)} (or any set that is not both closed and bounded), as, for example, the continuous function {\displaystyle f(x)={\frac {1}{x}},} defined on the open interval (0,1), does not attain a maximum, being unbounded above.
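On a closed interval the guaranteed maximum can be approximated by sampling. The sketch below (illustrative Python; `approx_max` is our own name) only estimates the maximizing point c whose existence the theorem asserts:

```python
def approx_max(f, a, b, samples=10_000):
    """Approximate the maximum a continuous f attains on [a, b].

    The extreme value theorem guarantees the maximum exists;
    a fine grid merely approximates the maximizing point c.
    """
    best_x, best_y = a, f(a)
    for k in range(1, samples + 1):
        x = a + (b - a) * k / samples
        y = f(x)
        if y > best_y:
            best_x, best_y = x, y
    return best_x, best_y

# f(x) = x(1 - x) on [0, 1] attains its maximum 1/4 at c = 1/2
c, m = approx_max(lambda x: x * (1 - x), 0.0, 1.0)
print(c, m)  # 0.5 0.25
```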
==== Relation to differentiability and integrability ====
Every differentiable function {\displaystyle f:(a,b)\to \mathbb {R} } is continuous, as can be shown. The converse does not hold: for example, the absolute value function {\displaystyle f(x)=|x|={\begin{cases}\;\;\ x&{\text{ if }}x\geq 0\\-x&{\text{ if }}x<0\end{cases}}} is everywhere continuous. However, it is not differentiable at {\displaystyle x=0} (but is so everywhere else). Weierstrass's function is also everywhere continuous but nowhere differentiable.
The derivative f′(x) of a differentiable function f(x) need not be continuous. If f′(x) is continuous, f(x) is said to be continuously differentiable. The set of such functions is denoted
{\displaystyle C^{1}((a,b)).} More generally, the set of functions {\displaystyle f:\Omega \to \mathbb {R} } (from an open interval (or open subset of {\displaystyle \mathbb {R} }) {\displaystyle \Omega } to the reals) such that f is {\displaystyle n} times differentiable and such that the {\displaystyle n}-th derivative of f is continuous is denoted {\displaystyle C^{n}(\Omega ).} See differentiability class. In the field of computer graphics, properties related (but not identical) to {\displaystyle C^{0},C^{1},C^{2}} are sometimes called {\displaystyle G^{0}} (continuity of position), {\displaystyle G^{1}} (continuity of tangency), and {\displaystyle G^{2}} (continuity of curvature); see Smoothness of curves and surfaces.
Every continuous function {\displaystyle f:[a,b]\to \mathbb {R} } is integrable (for example in the sense of the Riemann integral). The converse does not hold, as the (integrable but discontinuous) sign function shows.
==== Pointwise and uniform limits ====
Given a sequence {\displaystyle f_{1},f_{2},\dotsc :I\to \mathbb {R} } of functions such that the limit {\displaystyle f(x):=\lim _{n\to \infty }f_{n}(x)} exists for all {\displaystyle x\in I,} the resulting function {\displaystyle f(x)} is referred to as the pointwise limit of the sequence of functions {\displaystyle \left(f_{n}\right)_{n\in \mathbb {N} }.} The pointwise limit function need not be continuous, even if all functions {\displaystyle f_{n}} are continuous, as the animation at the right shows. However, f is continuous if all functions {\displaystyle f_{n}} are continuous and the sequence converges uniformly, by the uniform convergence theorem. This theorem can be used to show that the exponential functions, logarithms, square root function, and trigonometric functions are continuous.
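The classic example f_n(x) = x^n on [0, 1] illustrates both halves of this: each f_n is continuous, the pointwise limit jumps at x = 1, and the convergence fails to be uniform. An illustrative Python check (function names are ours):

```python
def f_n(n, x):
    """f_n(x) = x ** n on [0, 1]; every f_n is continuous."""
    return x ** n

def pointwise_limit(x):
    """The pointwise limit: 0 on [0, 1) and 1 at x = 1, hence discontinuous."""
    return 0.0 if x < 1 else 1.0

def sup_error(n, samples=10_000):
    """Grid estimate of sup |f_n - limit| on [0, 1].

    It stays near 1 for every n (witnessed by points close to 1),
    so the convergence is not uniform.
    """
    return max(abs(f_n(n, k / samples) - pointwise_limit(k / samples))
               for k in range(samples + 1))
```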
=== Directional Continuity ===
Discontinuous functions may be discontinuous in a restricted way, giving rise to the concept of directional continuity (or right and left continuous functions) and semi-continuity. Roughly speaking, a function is right-continuous if no jump occurs when the limit point is approached from the right. Formally, f is said to be right-continuous at the point c if the following holds: For any number {\displaystyle \varepsilon >0} however small, there exists some number {\displaystyle \delta >0} such that for all x in the domain with {\displaystyle c<x<c+\delta ,} the value of {\displaystyle f(x)} will satisfy {\displaystyle |f(x)-f(c)|<\varepsilon .}
This is the same condition as for continuous functions, except it is required to hold for x strictly larger than c only. Requiring it instead for all x with {\displaystyle c-\delta <x<c} yields the notion of left-continuous functions. A function is continuous if and only if it is both right-continuous and left-continuous.
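A step function makes the asymmetry concrete: with H(0) = 1, the Heaviside step is right-continuous but not left-continuous at 0. A small numeric probe (illustrative Python, names ours):

```python
def heaviside(x):
    """H(x) = 0 for x < 0 and 1 for x >= 0: right-continuous at 0
    (no jump from the right, since H(0) = 1) but not left-continuous."""
    return 0.0 if x < 0 else 1.0

def one_sided_values(f, c, side, steps=(1e-3, 1e-6, 1e-9)):
    """Crude numeric probe of f(c + h) or f(c - h) as h -> 0+."""
    sign = 1 if side == "right" else -1
    return [f(c + sign * h) for h in steps]

print(one_sided_values(heaviside, 0.0, "right"))  # [1.0, 1.0, 1.0]
print(one_sided_values(heaviside, 0.0, "left"))   # [0.0, 0.0, 0.0]
```

The right-hand probe agrees with H(0) = 1, while the left-hand probe does not, matching the definitions above.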
=== Semicontinuity ===
A function f is lower semi-continuous if, roughly, any jumps that might occur only go down, but not up. That is, for any {\displaystyle \varepsilon >0,} there exists some number {\displaystyle \delta >0} such that for all x in the domain with {\displaystyle |x-c|<\delta ,} the value of {\displaystyle f(x)} satisfies {\displaystyle f(x)\geq f(c)-\varepsilon .}
The reverse condition is upper semi-continuity.
== Continuous functions between metric spaces ==
The concept of continuous real-valued functions can be generalized to functions between metric spaces. A metric space is a set {\displaystyle X} equipped with a function (called metric) {\displaystyle d_{X},} that can be thought of as a measurement of the distance of any two elements in X. Formally, the metric is a function {\displaystyle d_{X}:X\times X\to \mathbb {R} }
that satisfies a number of requirements, notably the triangle inequality. Given two metric spaces {\displaystyle \left(X,d_{X}\right)} and {\displaystyle \left(Y,d_{Y}\right)} and a function {\displaystyle f:X\to Y,} the function {\displaystyle f} is continuous at the point {\displaystyle c\in X} (with respect to the given metrics) if for any positive real number {\displaystyle \varepsilon >0,} there exists a positive real number {\displaystyle \delta >0} such that all {\displaystyle x\in X} satisfying {\displaystyle d_{X}(x,c)<\delta } will also satisfy {\displaystyle d_{Y}(f(x),f(c))<\varepsilon .}
As in the case of real functions above, this is equivalent to the condition that for every sequence {\displaystyle \left(x_{n}\right)} in {\displaystyle X} with limit {\displaystyle \lim x_{n}=c,} we have {\displaystyle \lim f\left(x_{n}\right)=f(c).}
The latter condition can be weakened as follows: {\displaystyle f} is continuous at the point {\displaystyle c} if and only if for every convergent sequence {\displaystyle \left(x_{n}\right)} in {\displaystyle X} with limit {\displaystyle c}, the sequence {\displaystyle \left(f\left(x_{n}\right)\right)} is a Cauchy sequence, and {\displaystyle c} is in the domain of {\displaystyle f}.
The set of points at which a function between metric spaces is continuous is a {\displaystyle G_{\delta }} set – this follows from the {\displaystyle \varepsilon -\delta } definition of continuity.
This notion of continuity is applied, for example, in functional analysis. A key statement in this area says that a linear operator {\displaystyle T:V\to W} between normed vector spaces {\displaystyle V} and {\displaystyle W} (which are vector spaces equipped with a compatible norm, denoted {\displaystyle \|x\|}) is continuous if and only if it is bounded, that is, there is a constant {\displaystyle K} such that {\displaystyle \|T(x)\|\leq K\|x\|} for all {\displaystyle x\in V.}
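For a concrete finite-dimensional instance: any matrix map T(x) = Ax is bounded, and the Frobenius norm of A is one admissible constant K (by the Cauchy–Schwarz inequality applied row by row). A small sketch in plain Python (helper names are ours):

```python
import math

def matvec(A, x):
    """Apply the linear operator T(x) = A x."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def euclid(v):
    """Euclidean norm on R^n."""
    return math.sqrt(sum(vi * vi for vi in v))

def frobenius(A):
    """The Frobenius norm of A is one valid constant K with
    ||A x|| <= K ||x|| for all x, so every matrix operator is
    bounded and therefore continuous."""
    return math.sqrt(sum(a * a for row in A for a in row))

A = [[1.0, 2.0], [3.0, 4.0]]
K = frobenius(A)          # sqrt(30)
x = [0.6, -0.8]           # a unit vector
print(euclid(matvec(A, x)) <= K * euclid(x))  # True
```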
=== Uniform, Hölder and Lipschitz continuity ===
The concept of continuity for functions between metric spaces can be strengthened in various ways by limiting the way {\displaystyle \delta } depends on {\displaystyle \varepsilon } and c in the definition above. Intuitively, a function f as above is uniformly continuous if the {\displaystyle \delta } does not depend on the point c. More precisely, it is required that for every real number {\displaystyle \varepsilon >0} there exists {\displaystyle \delta >0} such that for every {\displaystyle c,b\in X} with {\displaystyle d_{X}(b,c)<\delta ,} we have that {\displaystyle d_{Y}(f(b),f(c))<\varepsilon .}
Thus, any uniformly continuous function is continuous. The converse does not generally hold but holds when the domain space X is compact. Uniformly continuous maps can be defined in the more general situation of uniform spaces.
A function is Hölder continuous with exponent α (a real number) if there is a constant K such that for all {\displaystyle b,c\in X,} the inequality {\displaystyle d_{Y}(f(b),f(c))\leq K\cdot (d_{X}(b,c))^{\alpha }} holds. Any Hölder continuous function is uniformly continuous. The particular case {\displaystyle \alpha =1} is referred to as Lipschitz continuity. That is, a function is Lipschitz continuous if there is a constant K such that the inequality {\displaystyle d_{Y}(f(b),f(c))\leq K\cdot d_{X}(b,c)} holds for any {\displaystyle b,c\in X.}
The Lipschitz condition occurs, for example, in the Picard–Lindelöf theorem concerning the solutions of ordinary differential equations.
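A Lipschitz constant can be probed numerically from difference quotients. The sketch below (illustrative Python; `lipschitz_estimate` is our own name) gives a lower bound on the best constant; for a differentiable f the true constant is sup |f′|, which sampling only approaches from below:

```python
import math

def lipschitz_estimate(f, a, b, samples=2000):
    """Lower bound on the best Lipschitz constant of f on [a, b],
    from difference quotients over adjacent sample pairs."""
    xs = [a + (b - a) * k / samples for k in range(samples + 1)]
    best = 0.0
    for x, y in zip(xs, xs[1:]):
        best = max(best, abs(f(y) - f(x)) / (y - x))
    return best

# sin is 1-Lipschitz on the whole line, since |cos| <= 1
print(lipschitz_estimate(math.sin, -math.pi, math.pi))  # close to 1
```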
== Continuous functions between topological spaces ==
Another, more abstract, notion of continuity is the continuity of functions between topological spaces in which there generally is no formal notion of distance, as there is in the case of metric spaces. A topological space is a set X together with a topology on X, which is a set of subsets of X satisfying a few requirements with respect to their unions and intersections that generalize the properties of the open balls in metric spaces while still allowing one to talk about the neighborhoods of a given point. The elements of a topology are called open subsets of X (with respect to the topology).
A function {\displaystyle f:X\to Y} between two topological spaces X and Y is continuous if for every open set {\displaystyle V\subseteq Y,} the inverse image {\displaystyle f^{-1}(V)=\{x\in X\;|\;f(x)\in V\}} is an open subset of X. That is, f is a function between the sets X and Y (not on the elements of the topology {\displaystyle T_{X}}), but the continuity of f depends on the topologies used on X and Y.
This is equivalent to the condition that the preimages of the closed sets (which are the complements of the open subsets) in Y are closed in X.
An extreme example: if a set X is given the discrete topology (in which every subset is open), all functions {\displaystyle f:X\to T} to any topological space T are continuous. On the other hand, if X is equipped with the indiscrete topology (in which the only open subsets are the empty set and X) and the space T is at least T0, then the only continuous functions are the constant functions. Conversely, any function whose codomain is indiscrete is continuous.
=== Continuity at a point ===
The translation into the language of neighborhoods of the {\displaystyle (\varepsilon ,\delta )}-definition of continuity leads to the following definition of continuity at a point:
This definition is equivalent to the same statement with neighborhoods restricted to open neighborhoods and can be restated in several ways by using preimages rather than images.
Also, as every set that contains a neighborhood is also a neighborhood, and {\displaystyle f^{-1}(V)} is the largest subset U of X such that {\displaystyle f(U)\subseteq V,} this definition may be simplified into:
As an open set is a set that is a neighborhood of all its points, a function {\displaystyle f:X\to Y} is continuous at every point of X if and only if it is a continuous function.
If X and Y are metric spaces, it is equivalent to consider the neighborhood system of open balls centered at x and f(x) instead of all neighborhoods. This gives back the above {\displaystyle \varepsilon -\delta } definition of continuity in the context of metric spaces. In general topological spaces, there is no notion of nearness or distance. If, however, the target space is a Hausdorff space, it is still true that f is continuous at a if and only if the limit of f as x approaches a is f(a). At an isolated point, every function is continuous.
Given {\displaystyle x\in X,} a map {\displaystyle f:X\to Y} is continuous at {\displaystyle x} if and only if whenever {\displaystyle {\mathcal {B}}} is a filter on {\displaystyle X} that converges to {\displaystyle x} in {\displaystyle X,} which is expressed by writing {\displaystyle {\mathcal {B}}\to x,} then necessarily {\displaystyle f({\mathcal {B}})\to f(x)} in {\displaystyle Y.}
If {\displaystyle {\mathcal {N}}(x)} denotes the neighborhood filter at {\displaystyle x} then {\displaystyle f:X\to Y} is continuous at {\displaystyle x} if and only if {\displaystyle f({\mathcal {N}}(x))\to f(x)} in {\displaystyle Y.} Moreover, this happens if and only if the prefilter {\displaystyle f({\mathcal {N}}(x))} is a filter base for the neighborhood filter of {\displaystyle f(x)} in {\displaystyle Y.}
=== Alternative definitions ===
Several equivalent definitions for a topological structure exist; thus, several equivalent ways exist to define a continuous function.
==== Sequences and nets ====
In several contexts, the topology of a space is conveniently specified in terms of limit points. This is often accomplished by specifying when a point is the limit of a sequence. Still, for some spaces that are too large in some sense, one specifies also when a point is the limit of more general sets of points indexed by a directed set, known as nets. A function is continuous only if it takes limits of sequences to limits of sequences. In the former case (spaces whose topology is determined by sequences), preservation of limits is also sufficient; in the latter, a function may preserve all limits of sequences yet still fail to be continuous, and preservation of nets is a necessary and sufficient condition.
In detail, a function {\displaystyle f:X\to Y} is sequentially continuous if whenever a sequence {\displaystyle \left(x_{n}\right)} in {\displaystyle X} converges to a limit {\displaystyle x,} the sequence {\displaystyle \left(f\left(x_{n}\right)\right)} converges to {\displaystyle f(x).}
Thus, sequentially continuous functions "preserve sequential limits." Every continuous function is sequentially continuous. If {\displaystyle X} is a first-countable space and countable choice holds, then the converse also holds: any function preserving sequential limits is continuous. In particular, if {\displaystyle X} is a metric space, sequential continuity and continuity are equivalent. For non-first-countable spaces, sequential continuity might be strictly weaker than continuity. (The spaces for which the two properties are equivalent are called sequential spaces.) This motivates the consideration of nets instead of sequences in general topological spaces. Continuous functions preserve the limits of nets, and this property characterizes continuous functions.
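On the metric space R this equivalence makes discontinuity easy to witness with a single sequence. A tiny illustration (Python, names ours): the sign function fails sequential continuity at 0.

```python
def sign(x):
    """The sign function, discontinuous at 0."""
    return (x > 0) - (x < 0)

# x_n = 1/n converges to 0, but sign(x_n) = 1 for every n, which
# does not converge to sign(0) = 0: sign is not sequentially
# continuous (hence not continuous) at 0.
xs = [1 / n for n in range(1, 6)]
print([sign(x) for x in xs], sign(0))  # [1, 1, 1, 1, 1] 0
```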
For instance, consider the case of real-valued functions of one real variable:
==== Closure operator and interior operator definitions ====
In terms of the interior and closure operators, we have the following equivalences,
If we declare that a point {\displaystyle x} is close to a subset {\displaystyle A\subseteq X} if {\displaystyle x\in \operatorname {cl} _{X}A,} then this terminology allows for a plain English description of continuity: {\displaystyle f} is continuous if and only if for every subset {\displaystyle A\subseteq X,} {\displaystyle f} maps points that are close to {\displaystyle A} to points that are close to {\displaystyle f(A).}
Similarly, {\displaystyle f} is continuous at a fixed given point {\displaystyle x\in X} if and only if whenever {\displaystyle x} is close to a subset {\displaystyle A\subseteq X,} then {\displaystyle f(x)} is close to {\displaystyle f(A).}
Instead of specifying topological spaces by their open subsets, any topology on {\displaystyle X} can alternatively be determined by a closure operator or by an interior operator. Specifically, the map that sends a subset {\displaystyle A} of a topological space {\displaystyle X} to its topological closure {\displaystyle \operatorname {cl} _{X}A} satisfies the Kuratowski closure axioms. Conversely, for any closure operator {\displaystyle A\mapsto \operatorname {cl} A} there exists a unique topology {\displaystyle \tau } on {\displaystyle X} (specifically, {\displaystyle \tau :=\{X\setminus \operatorname {cl} A:A\subseteq X\}}) such that for every subset {\displaystyle A\subseteq X,} {\displaystyle \operatorname {cl} A} is equal to the topological closure {\displaystyle \operatorname {cl} _{(X,\tau )}A} of {\displaystyle A} in {\displaystyle (X,\tau ).}
If the sets {\displaystyle X} and {\displaystyle Y} are each associated with closure operators (both denoted by {\displaystyle \operatorname {cl} }) then a map {\displaystyle f:X\to Y} is continuous if and only if {\displaystyle f(\operatorname {cl} A)\subseteq \operatorname {cl} (f(A))} for every subset {\displaystyle A\subseteq X.}
Similarly, the map that sends a subset {\displaystyle A} of {\displaystyle X} to its topological interior {\displaystyle \operatorname {int} _{X}A} defines an interior operator. Conversely, any interior operator {\displaystyle A\mapsto \operatorname {int} A} induces a unique topology {\displaystyle \tau } on {\displaystyle X} (specifically, {\displaystyle \tau :=\{\operatorname {int} A:A\subseteq X\}}) such that for every {\displaystyle A\subseteq X,} {\displaystyle \operatorname {int} A} is equal to the topological interior {\displaystyle \operatorname {int} _{(X,\tau )}A} of {\displaystyle A} in {\displaystyle (X,\tau ).}
If the sets {\displaystyle X} and {\displaystyle Y} are each associated with interior operators (both denoted by {\displaystyle \operatorname {int} }) then a map {\displaystyle f:X\to Y} is continuous if and only if {\displaystyle f^{-1}(\operatorname {int} B)\subseteq \operatorname {int} \left(f^{-1}(B)\right)} for every subset {\displaystyle B\subseteq Y.}
==== Filters and prefilters ====
Continuity can also be characterized in terms of filters. A function {\displaystyle f:X\to Y} is continuous if and only if whenever a filter {\displaystyle {\mathcal {B}}} on {\displaystyle X} converges in {\displaystyle X} to a point {\displaystyle x\in X,} then the prefilter {\displaystyle f({\mathcal {B}})} converges in {\displaystyle Y} to {\displaystyle f(x).} This characterization remains true if the word "filter" is replaced by "prefilter."
=== Properties ===
If {\displaystyle f:X\to Y} and {\displaystyle g:Y\to Z} are continuous, then so is the composition {\displaystyle g\circ f:X\to Z.} If {\displaystyle f:X\to Y} is continuous and
X is compact, then f(X) is compact.
X is connected, then f(X) is connected.
X is path-connected, then f(X) is path-connected.
X is Lindelöf, then f(X) is Lindelöf.
X is separable, then f(X) is separable.
The possible topologies on a fixed set X are partially ordered: a topology {\displaystyle \tau _{1}} is said to be coarser than another topology {\displaystyle \tau _{2}} (notation: {\displaystyle \tau _{1}\subseteq \tau _{2}}) if every open subset with respect to {\displaystyle \tau _{1}} is also open with respect to {\displaystyle \tau _{2}.}
Then, the identity map {\displaystyle \operatorname {id} _{X}:\left(X,\tau _{2}\right)\to \left(X,\tau _{1}\right)} is continuous if and only if {\displaystyle \tau _{1}\subseteq \tau _{2}} (see also comparison of topologies). More generally, a continuous function {\displaystyle \left(X,\tau _{X}\right)\to \left(Y,\tau _{Y}\right)} stays continuous if the topology {\displaystyle \tau _{Y}} is replaced by a coarser topology and/or {\displaystyle \tau _{X}} is replaced by a finer topology.
=== Homeomorphisms ===
Symmetric to the concept of a continuous map is an open map, for which images of open sets are open. If an open map f has an inverse function, that inverse is continuous, and if a continuous map g has an inverse, that inverse is open. Given a bijective function f between two topological spaces, the inverse function {\displaystyle f^{-1}} need not be continuous. A bijective continuous function with a continuous inverse function is called a homeomorphism.
If a continuous bijection has as its domain a compact space and its codomain is Hausdorff, then it is a homeomorphism.
=== Defining topologies via continuous functions ===
Given a function {\displaystyle f:X\to S,} where X is a topological space and S is a set (without a specified topology), the final topology on S is defined by letting the open sets of S be those subsets A of S for which {\displaystyle f^{-1}(A)} is open in X. If S has an existing topology, f is continuous with respect to this topology if and only if the existing topology is coarser than the final topology on S. Thus, the final topology is the finest topology on S that makes f continuous. If f is surjective, this topology is canonically identified with the quotient topology under the equivalence relation defined by f.
Dually, for a function f from a set S to a topological space X, the initial topology on S is defined by designating as an open set every subset A of S such that {\displaystyle A=f^{-1}(U)} for some open subset U of X. If S has an existing topology, f is continuous with respect to this topology if and only if the existing topology is finer than the initial topology on S. Thus, the initial topology is the coarsest topology on S that makes f continuous. If f is injective, this topology is canonically identified with the subspace topology of S, viewed as a subset of X.
A topology on a set S is uniquely determined by the class of all continuous functions {\displaystyle S\to X} into all topological spaces X. Dually, a similar idea can be applied to maps {\displaystyle X\to S.}
== Related notions ==
If {\displaystyle f:S\to Y} is a continuous function from some subset {\displaystyle S} of a topological space {\displaystyle X} then a continuous extension of {\displaystyle f} to {\displaystyle X} is any continuous function {\displaystyle F:X\to Y} such that {\displaystyle F(s)=f(s)} for every {\displaystyle s\in S,} a condition that is often written as {\displaystyle f=F{\big \vert }_{S}.} In words, it is any continuous function {\displaystyle F:X\to Y} that restricts to {\displaystyle f} on {\displaystyle S.}
This notion is used, for example, in the Tietze extension theorem and the Hahn–Banach theorem. If {\displaystyle f:S\to Y} is not continuous, then it cannot have a continuous extension. If {\displaystyle Y} is a Hausdorff space and {\displaystyle S} is a dense subset of {\displaystyle X} then a continuous extension of {\displaystyle f:S\to Y} to {\displaystyle X,} if one exists, will be unique. The Blumberg theorem states that if {\displaystyle f:\mathbb {R} \to \mathbb {R} } is an arbitrary function then there exists a dense subset {\displaystyle D} of {\displaystyle \mathbb {R} } such that the restriction {\displaystyle f{\big \vert }_{D}:D\to \mathbb {R} } is continuous; in other words, every function {\displaystyle \mathbb {R} \to \mathbb {R} } can be restricted to some dense subset on which it is continuous.
Various other mathematical domains use the concept of continuity in different but related meanings. For example, in order theory, an order-preserving function {\displaystyle f:X\to Y} between particular types of partially ordered sets {\displaystyle X} and {\displaystyle Y} is continuous if for each directed subset {\displaystyle A} of {\displaystyle X,} we have {\displaystyle \sup f(A)=f(\sup A).} Here {\displaystyle \,\sup \,} is the supremum with respect to the orderings in {\displaystyle X} and {\displaystyle Y,} respectively. This notion of continuity is the same as topological continuity when the partially ordered sets are given the Scott topology.
In category theory, a functor {\displaystyle F:{\mathcal {C}}\to {\mathcal {D}}} between two categories is called continuous if it commutes with small limits. That is to say, {\displaystyle \varprojlim _{i\in I}F(C_{i})\cong F\left(\varprojlim _{i\in I}C_{i}\right)} for any small (that is, indexed by a set {\displaystyle I,} as opposed to a class) diagram of objects in {\displaystyle {\mathcal {C}}}.
A continuity space is a generalization of metric spaces and posets, which uses the concept of quantales, and that can be used to unify the notions of metric spaces and domains.
In measure theory, a function {\displaystyle f:E\to \mathbb {R} ^{k}} defined on a Lebesgue measurable set {\displaystyle E\subseteq \mathbb {R} ^{n}} is called approximately continuous at a point {\displaystyle x_{0}\in E} if the approximate limit of {\displaystyle f} at {\displaystyle x_{0}} exists and equals {\displaystyle f(x_{0})}. This generalizes the notion of continuity by replacing the ordinary limit with the approximate limit. A fundamental result known as the Stepanov–Denjoy theorem states that a function is measurable if and only if it is approximately continuous almost everywhere.
== See also ==
Direction-preserving function - an analog of a continuous function in discrete spaces.
== References ==
== Bibliography ==
Dugundji, James (1966). Topology. Boston: Allyn and Bacon. ISBN 978-0-697-06889-7. OCLC 395340485.
"Continuous function", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
In the mathematical subject of group theory, small cancellation theory studies groups given by group presentations satisfying small cancellation conditions, that is where defining relations have "small overlaps" with each other. Small cancellation conditions imply algebraic, geometric and algorithmic properties of the group. Finitely presented groups satisfying sufficiently strong small cancellation conditions are word hyperbolic and have word problem solvable by Dehn's algorithm. Small cancellation methods are also used for constructing Tarski monsters, and for solutions of Burnside's problem.
== History ==
Some ideas underlying the small cancellation theory go back to the work of Max Dehn in the 1910s. Dehn proved that fundamental groups of closed orientable surfaces of genus at least two have word problem solvable by what is now called Dehn's algorithm. His proof involved drawing the Cayley graph of such a group in the hyperbolic plane and performing curvature estimates via the Gauss–Bonnet theorem for a closed loop in the Cayley graph to conclude that such a loop must contain a large portion (more than a half) of a defining relation.
A 1949 paper of Tartakovskii was an immediate precursor for small cancellation theory: this paper provided a solution of the word problem for a class of groups satisfying a complicated set of combinatorial conditions, where small cancellation type assumptions played a key role. The standard version of small cancellation theory, as it is used today, was developed by Martin Greendlinger in a series of papers in the early 1960s, who primarily dealt with the "metric" small cancellation conditions. In particular, Greendlinger proved that finitely presented groups satisfying the C′(1/6) small cancellation condition have word problem solvable by Dehn's algorithm. The theory was further refined and formalized in the subsequent work of Lyndon, Schupp and Lyndon-Schupp, who also treated the case of non-metric small cancellation conditions and developed a version of small cancellation theory for amalgamated free products and HNN-extensions.
Small cancellation theory was further generalized by Alexander Ol'shanskii, who developed a "graded" version of the theory where the set of defining relations comes equipped with a filtration and where a defining relator of a particular grade is allowed to have a large overlap with a defining relator of a higher grade. Ol'shanskii used graded small cancellation theory to construct various "monster" groups, including the Tarski monster, and also to give a new proof that free Burnside groups of large odd exponent are infinite (this result was originally proved by Adian and Novikov in 1968 using more combinatorial methods).
Small cancellation theory supplied a basic set of examples and ideas for the theory of word-hyperbolic groups that was put forward by Gromov in a seminal 1987 monograph "Hyperbolic groups".
== Main definitions ==
The exposition below largely follows Ch. V of the book of Lyndon and Schupp.
=== Pieces ===
Let {\displaystyle G=\langle X\mid R\rangle \qquad (*)}
be a group presentation where R ⊆ F(X) is a set of freely reduced and cyclically reduced words in the free group F(X) such that R is symmetrized, that is, closed under taking cyclic permutations and inverses.
A nontrivial freely reduced word u in F(X) is called a piece with respect to (∗) if there exist two distinct elements r1, r2 in R that have u as maximal common initial segment.
Note that if
G
=
⟨
X
∣
S
⟩
{\displaystyle G=\langle X\mid S\rangle }
is a group presentation where the set of defining relators S is not symmetrized, we can always take the symmetrized closure R of S, where R consists of all cyclic permutations of elements of S and S−1. Then R is symmetrized and
G = ⟨X ∣ R⟩
is also a presentation of G.
=== Metric small cancellation conditions ===
Let 0 < λ < 1. Presentation (∗) as above is said to satisfy the C′(λ) small cancellation condition if whenever u is a piece with respect to (∗) and u is a subword of some r ∈ R, then |u| < λ|r|. Here |v| is the length of a word v.
The condition C′(λ) is sometimes called a metric small cancellation condition.
=== Non-metric small cancellation conditions ===
Let p ≥ 3 be an integer. A group presentation (∗) as above is said to satisfy the C(p) small cancellation condition if whenever r ∈ R and
r = u1 ⋯ um
where ui are pieces and where the above product is freely reduced as written, then m ≥ p. That is, no defining relator can be written as a reduced product of fewer than p pieces.
Let q ≥ 3 be an integer. A group presentation (∗) as above is said to satisfy the T(q) small cancellation condition if whenever 3 ≤ t < q and r1,...,rt in R are such that r1 ≠ r2−1,..., rt ≠ r1−1, then at least one of the products r1r2,...,rt−1rt, rtr1 is freely reduced as written.
Geometrically, condition T(q) essentially means that if D is a reduced van Kampen diagram over (∗) then every interior vertex of D of degree at least three actually has degree at least q.
=== Examples ===
Let
G = ⟨a, b ∣ aba⁻¹b⁻¹⟩
be the standard presentation of the free abelian group of rank two. Then for the symmetrized closure of this presentation the only pieces are words of length 1. This symmetrized form satisfies the C(4)–T(4) small cancellation conditions and the C′(λ) condition for any 1 > λ > 1/4.
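The claim that all pieces in this example have length 1 can be checked mechanically. Below is a minimal Python sketch, assuming the illustrative convention that a generator's inverse is encoded as its uppercase letter (so aba⁻¹b⁻¹ becomes "abAB"); the helper names are hypothetical:

```python
def inverse(w):
    # inverse of a word: reverse it and invert each letter
    return ''.join(c.swapcase() for c in reversed(w))

def symmetrize(relators):
    # closure under taking inverses and all cyclic permutations
    closure = set()
    for r in relators:
        for s in (r, inverse(r)):
            for i in range(len(s)):
                closure.add(s[i:] + s[:i])
    return closure

def max_piece_length(relators):
    # a piece is a common initial segment of two distinct symmetrized
    # relators; return the largest length attained
    R = sorted(symmetrize(relators))
    longest = 0
    for r1 in R:
        for r2 in R:
            if r1 == r2:
                continue
            k = 0
            while k < min(len(r1), len(r2)) and r1[k] == r2[k]:
                k += 1
            longest = max(longest, k)
    return longest

print(max_piece_length(["abAB"]))  # 1: every piece has length 1
```

Since the longest piece has length 1 and every relator has length 4, C′(λ) holds exactly when 1 < 4λ, i.e. for any λ > 1/4, matching the statement above.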
Let
G = ⟨a1, b1, …, ak, bk ∣ [a1, b1] ⋯ [ak, bk]⟩
, where k ≥ 2, be the standard presentation of the fundamental group of a closed orientable surface of genus k. Then for the symmetrization of this presentation the only pieces are words of length 1 and this symmetrization satisfies the C′(1/7) and C(8) small cancellation conditions.
Let
G = ⟨a, b ∣ abab²ab³ ⋯ ab¹⁰⁰⟩
. Then, up to inversion, every piece for the symmetrized version of this presentation has the form bⁱabʲ or bⁱ, where 0 ≤ i,j ≤ 100. This symmetrization satisfies the C′(1/20) small cancellation condition.
If a symmetrized presentation satisfies the C′(1/m) condition then it also satisfies the C(m) condition.
Let r ∈ F(X) be a nontrivial cyclically reduced word which is not a proper power in F(X) and let n ≥ 2. Then the symmetrized closure of the presentation
G = ⟨X ∣ rⁿ⟩
satisfies the C(2n) and C′(1/n) small cancellation conditions.
== Basic results of small cancellation theory ==
=== Greendlinger's lemma ===
The main result regarding the metric small cancellation condition is the following statement (see Theorem 4.4 in Ch. V of ) which is usually called
Greendlinger's lemma:
Let (∗) be a group presentation as above satisfying the C′(λ) small cancellation condition where 0 ≤ λ ≤ 1/6. Let w ∈ F(X) be a nontrivial freely reduced word such that w = 1 in G. Then there is a subword v of w and a defining relator r ∈ R such that v is also a subword of r and such that
|v| > (1 − 3λ)|r|
Note that the assumption λ ≤ 1/6 implies that (1 − 3λ) ≥ 1/2, so that w contains more than half of some defining relator as a subword.
Greendlinger's lemma is obtained as a corollary of the following geometric statement:
Under the assumptions of Greendlinger's lemma, let D be a reduced van Kampen diagram over (∗) with a cyclically reduced boundary label such that D contains at least two regions. Then there exist two distinct regions D1 and D2 in D such that for j = 1,2 the region Dj intersects the boundary cycle ∂D of D in a simple arc whose length is bigger than (1 − 3λ)|∂Dj|.
This result in turn is proved by considering a dual diagram for D. There one defines a combinatorial notion of curvature (which, by the small cancellation assumptions, is negative at every interior vertex), and one then obtains a combinatorial version of the Gauss–Bonnet theorem. Greendlinger's lemma is proved as a consequence of this analysis and in this way the proof evokes the ideas of the original proof of Dehn for the case of surface groups.
=== Dehn's algorithm ===
For any symmetrized group presentation (∗), the following abstract procedure is called Dehn's algorithm:
Given a freely reduced word w on X±1, construct a sequence of freely reduced words w = w0, w1, w2,..., as follows.
Suppose wj is already constructed. If it is the empty word, terminate the algorithm. Otherwise check if wj contains a subword v such that v is also a subword of some defining relator r = vu ∈ R such that |v| > |r|/2. If no, terminate the algorithm with output wj. If yes, replace v by u−1 in wj, then freely reduce, denote the resulting freely reduced word by wj+1 and go to the next step of the algorithm.
Note that we always have
|w0| > |w1| > |w2| >...
which implies that the process must terminate in at most |w| steps. Moreover, all the words wj represent the same element of G as does w and hence if the process terminates with the empty word, then w represents the identity element of G.
One says that for a symmetrized presentation (∗) Dehn's algorithm solves the word problem in G if the converse is also true, that is if for any freely reduced word w in F(X) this word represents the identity element of G if and only if Dehn's algorithm, starting from w, terminates in the empty word.
Greendlinger's lemma implies that for a C′(1/6) presentation Dehn's algorithm solves the word problem.
If a C′(1/6) presentation (∗) is finite (that is both X and R are finite), then Dehn's algorithm is an actual non-deterministic algorithm in the sense of recursion theory. However, even if (∗) is an infinite C′(1/6) presentation, Dehn's algorithm, understood as an abstract procedure, still correctly decides whether or not a word in the generators X±1 represents the identity element of G.
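For a finite presentation the abstract procedure above can be coded directly. Below is a hedged Python sketch (with the illustrative uppercase-for-inverse encoding, and hypothetical helper names); the demonstration presentation of the free abelian group is only C′(1/4), so Dehn's algorithm is not guaranteed to detect every trivial word there, although it always terminates:

```python
def free_reduce(w):
    # cancel adjacent inverse pairs, e.g. 'aA' or 'Bb' (uppercase = inverse)
    out = []
    for c in w:
        if out and out[-1] == c.swapcase():
            out.pop()
        else:
            out.append(c)
    return ''.join(out)

def inverse(w):
    return ''.join(c.swapcase() for c in reversed(w))

def symmetrize(relators):
    # close the relator set under inverses and cyclic permutations
    closure = set()
    for r in relators:
        for s in (r, inverse(r)):
            for i in range(len(s)):
                closure.add(s[i:] + s[:i])
    return closure

def dehn_step(w, R):
    # find a subword v of w with r = vu in R and |v| > |r|/2;
    # since vu = 1 in G, v may be replaced by u^-1
    for r in sorted(R):
        for L in range(len(r), len(r) // 2, -1):
            v, u = r[:L], r[L:]
            if v in w:
                return free_reduce(w.replace(v, inverse(u), 1))
    return None

def dehn_algorithm(w, relators):
    R = symmetrize(relators)
    w = free_reduce(w)
    while w:
        step = dehn_step(w, R)
        if step is None:
            return w          # no long relator subword: stop
        w = step              # each step strictly shortens w
    return w                  # empty word: the input represented 1 in G

print(repr(dehn_algorithm("aabABA", ["abAB"])))  # '' (the word is trivial)
print(repr(dehn_algorithm("ab", ["abAB"])))      # 'ab' (no replacement applies)
```

Because each replacement substitutes a word u⁻¹ that is strictly shorter than the removed subword v, the word length strictly decreases and the loop terminates in at most |w| steps, as noted above.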
=== Asphericity ===
Let (∗) be a C′(1/6) or, more generally, C(6) presentation where every r ∈ R is not a proper power in F(X). Then G is aspherical in the following sense. Consider a minimal subset S of R such that the symmetrized closure of S is equal to R. Thus if r and s are distinct elements of S then r is not a cyclic permutation of s±1 and
G = ⟨X ∣ S⟩
is another presentation for G. Let Y be the presentation complex for this presentation. Then (see and Theorem 13.3 in ), under the above assumptions on (∗), Y is a classifying space for G, that is G = π1(Y) and the universal cover of Y is contractible. In particular, this implies that G is torsion-free and has cohomological dimension two.
=== More general curvature ===
More generally, it is possible to define various sorts of local "curvature" on any van Kampen diagram to be (very roughly) the average excess of vertices + faces − edges (which, by Euler's formula, must total 2) and, by showing, in a particular group, that this is always non-positive (or, even better, negative) internally, show that the curvature must all be on or near the boundary and thereby try to obtain a solution of the word problem. Furthermore, one can restrict attention to diagrams that do not contain any of a set of "regions" such that there is a "smaller" region with the same boundary.
=== Other basic properties of small cancellation groups ===
Let (∗) be a C′(1/6) presentation. Then an element g in G has order n > 1 if and only if there is a relator r in R of the form r = sn in F(X) such that g is conjugate to s in G. In particular, if all elements of R are not proper powers in F(X) then G is torsion-free.
If (∗) is a finite C′(1/6) presentation, the group G is word-hyperbolic.
If R and S are finite symmetrized subsets of F(X) with equal normal closures in F(X) such that both presentations
⟨X ∣ R⟩ and ⟨X ∣ S⟩
satisfy the C′(1/6) condition then R = S.
If a finite presentation (∗) satisfies one of C′(1/6), C′(1/4)–T(4), C(6), C(4)–T(4), C(3)–T(6), then the group G has solvable word problem and solvable conjugacy problem.
== Applications ==
Examples of applications of small cancellation theory include:
Solution of the conjugacy problem for groups of alternating knots (see and Chapter V, Theorem 8.5 in ), via showing that for such knots augmented knot groups admit C(4)–T(4) presentations.
Finitely presented C′(1/6) small cancellation groups are basic examples of word-hyperbolic groups. One of the equivalent characterizations of word-hyperbolic groups is as those admitting finite presentations where Dehn's algorithm solves the word problem.
Finitely presented groups given by finite C(4)–T(4) presentations where every piece has length one are basic examples of CAT(0) groups: for such a presentation the universal cover of the presentation complex is a CAT(0) square complex.
Early applications of small cancellation theory involve obtaining various embeddability results. Examples include a 1974 paper of Sacerdote and Schupp with a proof that every one-relator group with at least three generators is SQ-universal and a 1976 paper of Schupp with a proof that every countable group can be embedded into a simple group generated by an element of order two and an element of order three.
The so-called Rips construction, due to Eliyahu Rips, provides a rich source of counter-examples regarding various subgroup properties of word-hyperbolic groups: Given an arbitrary finitely presented group Q, the construction produces a short exact sequence
1 → K → G → Q → 1
where K is two-generated and where G is torsion-free and given by a finite C′(1/6)–presentation (and thus G is word-hyperbolic). The construction yields proofs of unsolvability of several algorithmic problems for word-hyperbolic groups, including the subgroup membership problem, the generation problem and the rank problem. Also, with a few exceptions, the group K in the Rips construction is not finitely presentable. This implies that there exist word-hyperbolic groups that are not coherent, that is, which contain subgroups that are finitely generated but not finitely presentable.
Small cancellation methods (for infinite presentations) were used by Ol'shanskii to construct various "monster" groups, including the Tarski monster, and also to give a proof that free Burnside groups of large odd exponent are infinite (a similar result was originally proved by Adian and Novikov in 1968 using more combinatorial methods). Some other "monster" groups constructed by Ol'shanskii using these methods include: an infinite simple Noetherian group; an infinite group in which every proper subgroup has prime order and any two subgroups of the same order are conjugate; a nonamenable group where every proper subgroup is cyclic; and others.
Bowditch used infinite small cancellation presentations to prove that there exist continuum many quasi-isometry types of two-generator groups.
Thomas and Velickovic used small cancellation theory to construct a finitely generated group with two non-homeomorphic asymptotic cones, thus answering a question of Gromov.
McCammond and Wise showed how to overcome difficulties posed by the Rips construction and produce large classes of small cancellation groups that are coherent (that is where all finitely generated subgroups are finitely presented) and, moreover, locally quasiconvex (that is where all finitely generated subgroups are quasiconvex).
Small cancellation methods play a key role in the study of various models of "generic" or "random" finitely presented groups (see ). In particular, for a fixed number m ≥ 2 of generators and a fixed number t ≥ 1 of defining relations and for any λ > 0, a random m-generator t-relator group satisfies the C′(λ) small cancellation condition. Even if the number of defining relations t is not fixed but grows as (2m − 1)^{εn} (where 0 < ε < 1 is the fixed density parameter in Gromov's density model of "random" groups, and where
n → ∞
is the length of the defining relations), then an ε-random group satisfies the C′(1/6) condition provided ε < 1/12.
Gromov used a version of small cancellation theory with respect to a graph to prove the existence of a finitely presented group that "contains" (in the appropriate sense) an infinite sequence of expanders and therefore does not admit a uniform embedding into a Hilbert space. This result provides a direction (the only one available so far) for looking for counter-examples to the Novikov conjecture.
Osin used a generalization of small cancellation theory to obtain an analog of Thurston's hyperbolic Dehn surgery theorem for relatively hyperbolic groups.
== Generalizations ==
A version of small cancellation theory for quotient groups of amalgamated free products and HNN extensions was developed in the paper of Sacerdote and Schupp and then in the book of Lyndon and Schupp.
Rips and Ol'shanskii developed a "stratified" version of small cancellation theory where the set of relators is filtered as an ascending union of strata (each stratum satisfying a small cancellation condition) and for a relator r from some stratum and a relator s from a higher stratum their overlap is required to be small with respect to |s| but is allowed to be large with respect to |r|. This theory allowed Ol'shanskii to construct various "monster" groups including the Tarski monster and to give a new proof that free Burnside groups of large odd exponent are infinite.
Ol'shanskii and Delzant later on developed versions of small cancellation theory for quotients of word-hyperbolic groups.
McCammond provided a higher-dimensional version of small cancellation theory.
McCammond and Wise pushed substantially further the basic results of the standard small cancellation theory (such as Greendlinger's lemma) regarding the geometry of van Kampen diagrams over small cancellation presentations.
Gromov used a version of small cancellation theory with respect to a graph to prove the existence of a finitely presented group that "contains" (in the appropriate sense) an infinite sequence of expanders and therefore does not admit a uniform embedding into a Hilbert space.
Osin gave a version of small cancellation theory for quotients of relatively hyperbolic groups and used it to obtain a relatively hyperbolic generalization of Thurston's hyperbolic Dehn surgery theorem.
== Basic references ==
Roger Lyndon and Paul Schupp, Combinatorial group theory. Reprint of the 1977 edition. Classics in Mathematics. Springer-Verlag, Berlin, 2001. ISBN 3-540-41158-5.
Alexander Yu. Olʹshanskii, Geometry of defining relations in groups. Translated from the 1989 Russian original by Yu. A. Bakhturin. Mathematics and its Applications (Soviet Series), 70. Kluwer Academic Publishers Group, Dordrecht, 1991. ISBN 0-7923-1394-1.
Ralph Strebel, Appendix. Small cancellation groups. Sur les groupes hyperboliques d'après Mikhael Gromov (Bern, 1988), pp. 227–273, Progress in Mathematics, 83, Birkhäuser Boston, Boston, Massachusetts, 1990. ISBN 0-8176-3508-4.
Milé Krajčevski, Tilings of the plane, hyperbolic groups and small cancellation conditions. Memoirs of the American Mathematical Society, vol. 154 (2001), no. 733.
== See also ==
Geometric group theory
Word-hyperbolic group
Tarski monster group
Burnside problem
Finitely presented group
Word problem for groups
Van Kampen diagram
== Notes ==
In mathematics, hyperbolic Dehn surgery is an operation by which one can obtain further hyperbolic 3-manifolds from a given cusped hyperbolic 3-manifold. Hyperbolic Dehn surgery exists only in dimension three and is one feature that distinguishes hyperbolic geometry in three dimensions from other dimensions.
Such an operation is often also called hyperbolic Dehn filling, as Dehn surgery proper refers to a "drill and fill" operation on a link which consists of drilling out a neighborhood of the link and then filling back in with solid tori. Hyperbolic Dehn surgery actually only involves "filling".
We will generally assume that a hyperbolic 3-manifold is complete. Suppose M is a cusped hyperbolic 3-manifold with n cusps. M can be thought of, topologically, as the interior of a compact manifold with toral boundary. Suppose we have chosen a meridian and longitude for each boundary torus, i.e. simple closed curves that are generators for the fundamental group of the torus. Let
M(u1, u2, …, un)
denote the manifold obtained from M by filling in the i-th boundary torus with a solid torus using the slope
ui = pi/qi
where each pair pi and qi are coprime integers. We allow ui to be ∞, which means we do not fill in that cusp, i.e. do the "empty" Dehn filling. So M = M(∞, …, ∞).
We equip the space H of finite volume hyperbolic 3-manifolds with the geometric topology.
== Related theorems ==
Thurston's hyperbolic Dehn surgery theorem states that M(u1, u2, …, un) is hyperbolic as long as a finite set of exceptional slopes Ei is avoided for the i-th cusp, for each i.
In addition, M(u1, u2, …, un) converges to M in H as pi² + qi² → ∞ for all pi/qi corresponding to non-empty Dehn fillings ui. This theorem is due to William Thurston and is fundamental to the theory of hyperbolic 3-manifolds. It shows that nontrivial limits exist in H.
Troels Jørgensen's study of the geometric topology further shows that all nontrivial limits arise by Dehn filling as in the theorem. Another important result by Thurston is that volume decreases under hyperbolic Dehn filling. The theorem states that volume decreases under topological Dehn filling, assuming of course that the Dehn-filled manifold is hyperbolic. The proof relies on basic properties of the Gromov norm. Jørgensen also showed that the volume function on this space is a continuous, proper function. Thus by the previous results, nontrivial limits in H are taken to nontrivial limits in the set of volumes. In fact, one can further conclude, as did Thurston, that the set of volumes of finite volume hyperbolic 3-manifolds has ordinal type
ω^ω. This result is known as the Thurston–Jørgensen theorem. Further work characterizing this set was done by Gromov.
The figure-eight knot and the (-2, 3, 7) pretzel knot are the only two knots whose complements are known to have more than 6 exceptional surgeries; they have 10 and 7, respectively. Cameron Gordon conjectured that 10 is the largest possible number of exceptional surgeries of any hyperbolic knot complement. This was proved by Marc Lackenby and Rob Meyerhoff, who show that the number of exceptional slopes is at most 10 for any compact orientable 3-manifold with boundary a torus and interior finite-volume hyperbolic. Their proof relies on the proof of the geometrization conjecture originated by Grigori Perelman and on computer assistance. It is currently unknown whether the figure-eight knot is the only one that achieves the bound of 10. One conjecture is that the bound (except for the two knots mentioned) is 6. Agol has shown that there are only finitely many cases in which the number of exceptional slopes is 9 or 10.
== References ==
Ian Agol, Bounds on exceptional Dehn filling II, Geom. Topol. 14 (2010) 1921–1940. arXiv:0803.3088
Robion Kirby, Problems in low-dimensional topology, (see problem 1.77, due to Cameron Gordon, for exceptional slopes)
Marc Lackenby and Robert Meyerhoff, The maximal number of exceptional Dehn surgeries, arXiv:0808.1176
William Thurston, The geometry and topology of 3-manifolds, Princeton lecture notes (1978–1981).
In group theory, the normal closure of a subset S of a group G is the smallest normal subgroup of G containing S.
== Properties and description ==
Formally, if G is a group and S is a subset of G, the normal closure ncl_G(S) of S is the intersection of all normal subgroups of G containing S:
ncl_G(S) = ⋂ {N : S ⊆ N ◃ G}.
The normal closure ncl_G(S) is the smallest normal subgroup of G containing S, in the sense that ncl_G(S) is a subset of every normal subgroup of G that contains S.
The subgroup ncl_G(S) is the subgroup generated by the set S^G = {s^g : s ∈ S, g ∈ G} = {g⁻¹sg : s ∈ S, g ∈ G} of all conjugates of elements of S in G.
Therefore one can also write the subgroup as the set of all products of conjugates of elements of S or their inverses:
ncl_G(S) = {g1⁻¹ s1^ε1 g1 ⋯ gn⁻¹ sn^εn gn : n ≥ 0, εi = ±1, si ∈ S, gi ∈ G}.
Any normal subgroup is equal to its normal closure. The normal closure of the empty set ∅ is the trivial subgroup.
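For a finite group, the conjugate-generating description of the normal closure translates directly into a computation. Below is a small Python sketch over S3, with permutations encoded as tuples (an illustrative, unoptimized orbit-style closure; the function names are hypothetical):

```python
from itertools import permutations

def compose(p, q):
    # composition of permutations given as tuples: (p ∘ q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

def inv(p):
    out = [0] * len(p)
    for i, pi in enumerate(p):
        out[pi] = i
    return tuple(out)

def normal_closure(S, G):
    # generators: all conjugates g^-1 s g, together with their inverses
    gens = {compose(inv(g), compose(s, g)) for s in S for g in G}
    gens |= {inv(c) for c in gens}
    identity = tuple(range(len(next(iter(G)))))
    H, frontier = {identity}, {identity}
    while frontier:                  # orbit-style closure under products
        new = {compose(h, c) for h in frontier for c in gens} - H
        H |= new
        frontier = new
    return H

G = set(permutations(range(3)))              # the symmetric group S3
print(len(normal_closure({(1, 2, 0)}, G)))   # 3: a 3-cycle generates A3
print(len(normal_closure({(1, 0, 2)}, G)))   # 6: a transposition generates S3
```

The two outputs illustrate that the normal closure of a 3-cycle in S3 is the alternating group A3, while the normal closure of a transposition is all of S3.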
A variety of other notations are used for the normal closure in the literature, including ⟨S^G⟩, ⟨S⟩^G, ⟨⟨S⟩⟩_G, and ⟨⟨S⟩⟩^G.
Dual to the concept of normal closure is that of normal interior or normal core, defined as the join of all normal subgroups contained in S.
== Group presentations ==
For a group G given by a presentation G = ⟨S ∣ R⟩ with generators S and defining relators R, the presentation notation means that G is the quotient group G = F(S)/ncl_{F(S)}(R), where F(S) is a free group on S.
== References ==
In ring theory, a branch of mathematics, a semisimple algebra is an associative Artinian algebra over a field which has trivial Jacobson radical (only the zero element of the algebra is in the Jacobson radical). If the algebra is finite-dimensional this is equivalent to saying that it can be expressed as a Cartesian product of simple subalgebras.
== Definition ==
The Jacobson radical of an algebra over a field is the ideal consisting of all elements that annihilate every simple left-module. The radical contains all nilpotent ideals, and if the algebra is finite-dimensional, the radical itself is a nilpotent ideal. A finite-dimensional algebra is then said to be semisimple if its radical contains only the zero element.
An algebra A is called simple if it has no ideals other than {0} and A itself, and A2 = {ab | a, b ∈ A} ≠ {0}. As the terminology suggests, simple algebras are semisimple. To see this, note that if A is simple, then A is not nilpotent: because A2 is a nonzero ideal of A and A is simple, A2 = A. By induction, An = A for every positive integer n, i.e. A is not nilpotent.
Any self-adjoint subalgebra A of n × n matrices with complex entries is semisimple. Let Rad(A) be the radical of A. Suppose a matrix M is in Rad(A). Then M*M lies in a nilpotent ideal of A, therefore (M*M)k = 0 for some positive integer k. By positive-semidefiniteness of M*M, this implies M*M = 0. So Mx is the zero vector for all x, i.e. M = 0.
If {Ai} is a finite collection of simple algebras, then their Cartesian product A = Π Ai is semisimple. If (ai) is an element of Rad(A) and e1 is the multiplicative identity in A1 (all simple algebras possess a multiplicative identity), then (a1, a2, ...) · (e1, 0, ...) = (a1, 0, ..., 0) lies in some nilpotent ideal of Π Ai. This implies, for all b in A1, a1b is nilpotent in A1, i.e. a1 ∈ Rad(A1). So a1 = 0. Similarly, ai = 0 for all other i.
It is less apparent from the definition that the converse of the above is also true, that is, any finite-dimensional semisimple algebra is isomorphic to a Cartesian product of a finite number of simple algebras.
== Characterization ==
Let A be a finite-dimensional semisimple algebra, and
{0} = J0 ⊂ ⋯ ⊂ Jn ⊂ A
be a composition series of A, then A is isomorphic to the following Cartesian product:
A ≃ J1 × J2/J1 × J3/J2 × ⋯ × Jn/Jn−1 × A/Jn
where each
Ji+1/Ji
is a simple algebra.
The proof can be sketched as follows. First, invoking the assumption that A is semisimple, one can show that J1 is a simple algebra (therefore unital). So J1 is a unital subalgebra and an ideal of J2. Therefore, one can decompose
J2 ≃ J1 × J2/J1.
By maximality of J1 as an ideal in J2 and also the semisimplicity of A, the algebra
J2/J1
is simple. Proceeding by induction in a similar fashion proves the claim. For example, J3 is the Cartesian product of simple algebras
J3 ≃ J2 × J3/J2 ≃ J1 × J2/J1 × J3/J2.
The above result can be restated in a different way. For a semisimple algebra A = A1 ×...× An expressed in terms of its simple factors, consider the units ei ∈ Ai. The elements Ei = (0,...,ei,...,0) are idempotent elements in A and they lie in the center of A. Furthermore, Ei A = Ai, EiEj = 0 for i ≠ j, and Σ Ei = 1, the multiplicative identity in A.
Therefore, for every semisimple algebra A, there exist idempotents {Ei} in the center of A, such that
EiEj = 0 for i ≠ j (such a set of idempotents is called central orthogonal),
Σ Ei = 1,
A is isomorphic to the Cartesian product of simple algebras E1 A ×...× En A.
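These identities can be verified concretely. Below is a Python sketch using plain nested lists; the particular algebra, block-diagonal 3×3 matrices diag(a, M) with a a scalar and M a 2×2 block (isomorphic to k × M2(k)), is an assumption chosen purely for illustration:

```python
def matmul(A, B):
    # naive matrix product for square matrices given as nested lists
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(len(A))] for i in range(len(A))]

# central orthogonal idempotents of A = {diag(a, M)} ≅ k × M2(k)
E1 = [[1, 0, 0], [0, 0, 0], [0, 0, 0]]
E2 = [[0, 0, 0], [0, 1, 0], [0, 0, 1]]

ZERO = [[0] * 3 for _ in range(3)]
IDENT = [[1 if i == j else 0 for j in range(3)] for i in range(3)]

assert matmul(E1, E1) == E1 and matmul(E2, E2) == E2   # idempotent
assert matmul(E1, E2) == ZERO                          # orthogonal: E1E2 = 0
assert add(E1, E2) == IDENT                            # E1 + E2 = 1

# centrality in A: E1, E2 commute with every block-diagonal element
X = [[5, 0, 0], [0, 1, 2], [0, 3, 4]]   # a sample element of A
assert matmul(E1, X) == matmul(X, E1)
assert matmul(E2, X) == matmul(X, E2)
print("all identities hold")
```

Here E1·A recovers the scalar factor and E2·A the 2×2 matrix block, mirroring the decomposition A ≅ E1 A × E2 A described above.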
== Classification ==
A theorem due to Joseph Wedderburn completely classifies finite-dimensional semisimple algebras over a field k. Any such algebra is isomorphic to a finite product ∏ Mni(Di), where the ni are natural numbers, the Di are division algebras over k, and Mni(Di) is the algebra of ni × ni matrices over Di. This product is unique up to permutation of the factors.
This theorem was later generalized by Emil Artin to semisimple rings. This more general result is called the Wedderburn–Artin theorem.
== References ==
Springer Encyclopedia of Mathematics
Orthographic projection (also orthogonal projection and analemma) is a means of representing three-dimensional objects in two dimensions. Orthographic projection is a form of parallel projection in which all the projection lines are orthogonal to the projection plane, resulting in every plane of the scene appearing in affine transformation on the viewing surface. The obverse of an orthographic projection is an oblique projection, which is a parallel projection in which the projection lines are not orthogonal to the projection plane.
The term orthographic sometimes means a technique in multiview projection in which principal axes or the planes of the subject are also parallel with the projection plane to create the primary views. If the principal planes or axes of an object in an orthographic projection are not parallel with the projection plane, the depiction is called axonometric or an auxiliary view. (Axonometric projection is synonymous with parallel projection.) Sub-types of primary views include plans, elevations, and sections; sub-types of auxiliary views include isometric, dimetric, and trimetric projections.
A lens that provides an orthographic projection is an object-space telecentric lens.
== Geometry ==
A simple orthographic projection onto the plane z = 0 can be defined by the following matrix:
{\displaystyle P={\begin{bmatrix}1&0&0\\0&1&0\\0&0&0\\\end{bmatrix}}}
For each point v = (vx, vy, vz), the transformed point Pv would be
{\displaystyle Pv={\begin{bmatrix}1&0&0\\0&1&0\\0&0&0\\\end{bmatrix}}{\begin{bmatrix}v_{x}\\v_{y}\\v_{z}\end{bmatrix}}={\begin{bmatrix}v_{x}\\v_{y}\\0\end{bmatrix}}}
Often, it is more useful to use homogeneous coordinates. The transformation above can be represented for homogeneous coordinates as
{\displaystyle P={\begin{bmatrix}1&0&0&0\\0&1&0&0\\0&0&0&0\\0&0&0&1\end{bmatrix}}}
For each homogeneous vector v = (vx, vy, vz, 1), the transformed vector Pv would be
{\displaystyle Pv={\begin{bmatrix}1&0&0&0\\0&1&0&0\\0&0&0&0\\0&0&0&1\end{bmatrix}}{\begin{bmatrix}v_{x}\\v_{y}\\v_{z}\\1\end{bmatrix}}={\begin{bmatrix}v_{x}\\v_{y}\\0\\1\end{bmatrix}}}
In computer graphics, one of the most common matrices used for orthographic projection can be defined by a 6-tuple, (left, right, bottom, top, near, far), which defines the clipping planes. These planes form a box with the minimum corner at (left, bottom, -near) and the maximum corner at (right, top, -far).
The box is translated so that its center is at the origin, then it is scaled to the unit cube which is defined by having a minimum corner at (−1,−1,−1) and a maximum corner at (1,1,1).
The orthographic transform can be given by the following matrix:
{\displaystyle P={\begin{bmatrix}{\frac {2}{{\text{right}}-{\text{left}}}}&0&0&-{\frac {{\text{right}}+{\text{left}}}{{\text{right}}-{\text{left}}}}\\0&{\frac {2}{{\text{top}}-{\text{bottom}}}}&0&-{\frac {{\text{top}}+{\text{bottom}}}{{\text{top}}-{\text{bottom}}}}\\0&0&{\frac {-2}{{\text{far}}-{\text{near}}}}&-{\frac {{\text{far}}+{\text{near}}}{{\text{far}}-{\text{near}}}}\\0&0&0&1\end{bmatrix}}}
which can be given as a scaling S followed by a translation T of the form
{\displaystyle P=ST={\begin{bmatrix}{\frac {2}{{\text{right}}-{\text{left}}}}&0&0&0\\0&{\frac {2}{{\text{top}}-{\text{bottom}}}}&0&0\\0&0&{\frac {2}{{\text{far}}-{\text{near}}}}&0\\0&0&0&1\end{bmatrix}}{\begin{bmatrix}1&0&0&-{\frac {{\text{left}}+{\text{right}}}{2}}\\0&1&0&-{\frac {{\text{top}}+{\text{bottom}}}{2}}\\0&0&-1&-{\frac {{\text{far}}+{\text{near}}}{2}}\\0&0&0&1\end{bmatrix}}}
The inverse of the projection matrix, P−1, which can be used as the unprojection matrix, is defined as:
{\displaystyle P^{-1}={\begin{bmatrix}{\frac {{\text{right}}-{\text{left}}}{2}}&0&0&{\frac {{\text{left}}+{\text{right}}}{2}}\\0&{\frac {{\text{top}}-{\text{bottom}}}{2}}&0&{\frac {{\text{top}}+{\text{bottom}}}{2}}\\0&0&{\frac {{\text{far}}-{\text{near}}}{-2}}&-{\frac {{\text{far}}+{\text{near}}}{2}}\\0&0&0&1\end{bmatrix}}}
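As a quick numerical sanity check, one can build both matrices for an arbitrary 6-tuple and verify that their product is the identity (the values chosen here are arbitrary):

```python
import numpy as np

# Arbitrary clipping box (left, right, bottom, top, near, far).
l, r, b, t, n, f = -2.0, 4.0, -1.0, 3.0, 1.0, 11.0

P = np.array([
    [2 / (r - l), 0, 0, -(r + l) / (r - l)],
    [0, 2 / (t - b), 0, -(t + b) / (t - b)],
    [0, 0, -2 / (f - n), -(f + n) / (f - n)],
    [0, 0, 0, 1],
])

P_inv = np.array([
    [(r - l) / 2, 0, 0, (l + r) / 2],
    [0, (t - b) / 2, 0, (t + b) / 2],
    [0, 0, (f - n) / -2, -(f + n) / 2],
    [0, 0, 0, 1],
])

# P_inv undoes P (and vice versa).
print(np.allclose(P_inv @ P, np.eye(4)))  # → True
```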
== Types ==
Three sub-types of orthographic projection are isometric projection, dimetric projection, and trimetric projection, depending on the exact angle at which the view deviates from the orthogonal. Typically in axonometric drawing, as in other types of pictorials, one axis of space is shown to be vertical.
In isometric projection, the most commonly used form of axonometric projection in engineering drawing, the direction of viewing is such that the three axes of space appear equally foreshortened, and there is a common angle of 120° between them. As the distortion caused by foreshortening is uniform, the proportionality between lengths is preserved, and the axes share a common scale; this eases one's ability to take measurements directly from the drawing. Another advantage is that 120° angles are easily constructed using only a compass and straightedge.
In dimetric projection, the direction of viewing is such that two of the three axes of space appear equally foreshortened, of which the attendant scale and angles of presentation are determined according to the angle of viewing; the scale of the third direction is determined separately.
In trimetric projection, the direction of viewing is such that all of the three axes of space appear unequally foreshortened. The scale along each of the three axes and the angles among them are determined separately as dictated by the angle of viewing. Trimetric perspective is seldom used in technical drawings.
== Multiview projection ==
In multiview projection, up to six pictures of an object are produced, called primary views, with each projection plane parallel to one of the coordinate axes of the object. The views are positioned relative to each other according to either of two schemes: first-angle or third-angle projection. In each, the appearances of views may be thought of as being projected onto planes that form a six-sided box around the object. Although six different sides can be drawn, usually three views of a drawing give enough information to make a three-dimensional object. These views are known as front view (also elevation), top view (also plan) and end view (also section). When the plane or axis of the object depicted is not parallel to the projection plane, and where multiple sides of an object are visible in the same image, it is called an auxiliary view. Thus isometric projection, dimetric projection and trimetric projection would be considered auxiliary views in multiview projection. A typical characteristic of multiview projection is that one axis of space is usually displayed as vertical.
== Cartography ==
An orthographic projection map is a map projection of cartography. Like the stereographic projection and gnomonic projection, orthographic projection is a perspective (or azimuthal) projection, in which the sphere is projected onto a tangent plane or secant plane. The point of perspective for the orthographic projection is at infinite distance. It depicts a hemisphere of the globe as it appears from outer space, where the horizon is a great circle. The shapes and areas are distorted, particularly near the edges.
The orthographic projection has been known since antiquity, with its cartographic uses being well documented. Hipparchus used the projection in the 2nd century BC to determine the places of star-rise and star-set. In about 14 BC, Roman engineer Marcus Vitruvius Pollio used the projection to construct sundials and to compute sun positions.
Vitruvius also seems to have devised the term orthographic – from the Greek orthos ("straight") and graphē ("drawing") – for the projection. However, the name analemma, which also meant a sundial showing latitude and longitude, was the common name until François d'Aguilon of Antwerp promoted its present name in 1613.
The earliest surviving maps on the projection appear as woodcut drawings of terrestrial globes of 1509 (anonymous), 1533 and 1551 (Johannes Schöner), and 1524 and 1551 (Apian).
== Notes ==
== References ==
== External links ==
Normale (orthogonale) Axonometrie (in German)
Orthographic Projection Video and mathematics
Dykstra's algorithm is a method that computes a point in the intersection of convex sets, and is a variant of the alternating projection method (also called the projections onto convex sets method). In its simplest form, the method finds a point in the intersection of two convex sets by iteratively projecting onto each of the convex sets; it differs from the alternating projection method in that there are intermediate steps. A parallel version of the algorithm was developed by Gaffke and Mathar.
The method is named after Richard L. Dykstra who proposed it in the 1980s.
A key difference between Dykstra's algorithm and the standard alternating projection method occurs when there is more than one point in the intersection of the two sets. In this case, the alternating projection method gives some arbitrary point in this intersection, whereas Dykstra's algorithm gives a specific point: the projection of r onto the intersection, where r is the initial point used in the algorithm.
== Algorithm ==
Dykstra's algorithm finds, for each r, the unique x̄ ∈ C ∩ D such that
{\displaystyle \|{\bar {x}}-r\|^{2}\leq \|x-r\|^{2},{\text{for all }}x\in C\cap D,}
where C, D are convex sets. This problem is equivalent to finding the projection of r onto the set C ∩ D, which we denote by P_{C∩D}.
To use Dykstra's algorithm, one must know how to project onto the sets C and D separately.
First, consider the basic alternating projection (aka POCS) method (first studied, in the case when the sets C, D were linear subspaces, by John von Neumann), which initializes x0 = r and then generates the sequence
{\displaystyle x_{k+1}={\mathcal {P}}_{C}\left({\mathcal {P}}_{D}(x_{k})\right)}.
Dykstra's algorithm is of a similar form, but uses additional auxiliary variables. Start with x0 = r, p0 = q0 = 0 and update by
{\displaystyle y_{k}={\mathcal {P}}_{D}(x_{k}+p_{k})}
{\displaystyle p_{k+1}=x_{k}+p_{k}-y_{k}}
{\displaystyle x_{k+1}={\mathcal {P}}_{C}(y_{k}+q_{k})}
{\displaystyle q_{k+1}=y_{k}+q_{k}-x_{k+1}.}
Then the sequence (xk) converges to the solution of the original problem. For convergence results and a modern perspective on the literature, see the references below.
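The update rules above translate almost line-for-line into code. The following sketch is ours: it takes C to be the closed unit disc and D a half-plane, two sets with easy closed-form projections, and recovers the projection of r onto C ∩ D:

```python
import numpy as np

def project_disc(v):
    """Euclidean projection onto the closed unit disc C."""
    norm = np.linalg.norm(v)
    return v / norm if norm > 1 else v

def project_halfplane(v):
    """Euclidean projection onto the half-plane D = {x : x[0] >= 0}."""
    return np.array([max(v[0], 0.0), v[1]])

def dykstra(r, proj_C, proj_D, iterations=100):
    """Dykstra's algorithm with auxiliary variables p, q, as in the text."""
    x = np.asarray(r, dtype=float)
    p = np.zeros_like(x)
    q = np.zeros_like(x)
    for _ in range(iterations):
        y = proj_D(x + p)
        p = x + p - y
        x = proj_C(y + q)
        q = y + q - x
    return x

# The projection of r = (-0.5, 2) onto C ∩ D (the right half of the
# unit disc) is (0, 1), which Dykstra's iteration recovers.
x = dykstra([-0.5, 2.0], project_disc, project_halfplane)
print(x)  # → [0. 1.]
```

For this particular starting point, plain alternating projections would also land on (0, 1); the auxiliary variables p, q matter in general because they make the limit the *projection* of r, not just some point of the intersection.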
== Citations ==
== References ==
Boyle, J. P.; Dykstra, R. L. (1986). "A Method for Finding Projections onto the Intersection of Convex Sets in Hilbert Spaces". Advances in Order Restricted Statistical Inference. Lecture Notes in Statistics. Vol. 37. pp. 28–47. doi:10.1007/978-1-4613-9940-7_3. ISBN 978-0-387-96419-5.
Gaffke, N.; Mathar, R. (1989). "A cyclic projection algorithm via duality". Metrika. 36: 29–54. doi:10.1007/bf02614077. S2CID 120944669.
In linear algebra, the rank of a matrix A is the dimension of the vector space generated (or spanned) by its columns. This corresponds to the maximal number of linearly independent columns of A. This, in turn, is identical to the dimension of the vector space spanned by its rows. Rank is thus a measure of the "nondegenerateness" of the system of linear equations and linear transformation encoded by A. There are multiple equivalent definitions of rank. A matrix's rank is one of its most fundamental characteristics.
The rank is commonly denoted by rank(A) or rk(A); sometimes the parentheses are not written, as in rank A.
== Main definitions ==
In this section, we give some definitions of the rank of a matrix. Many definitions are possible; see Alternative definitions for several of these.
The column rank of A is the dimension of the column space of A, while the row rank of A is the dimension of the row space of A.
A fundamental result in linear algebra is that the column rank and the row rank are always equal. (Three proofs of this result are given in § Proofs that column rank = row rank, below.) This number (i.e., the number of linearly independent rows or columns) is simply called the rank of A.
A matrix is said to have full rank if its rank equals the largest possible for a matrix of the same dimensions, which is the lesser of the number of rows and columns. A matrix is said to be rank-deficient if it does not have full rank. The rank deficiency of a matrix is the difference between the lesser of the number of rows and columns, and the rank.
The rank of a linear map or operator Φ is defined as the dimension of its image:
{\displaystyle \operatorname {rank} (\Phi ):=\dim(\operatorname {img} (\Phi ))}
where dim is the dimension of a vector space, and img is the image of a map.
== Examples ==
The matrix
{\displaystyle {\begin{bmatrix}1&0&1\\0&1&1\\0&1&1\end{bmatrix}}}
has rank 2: the first two columns are linearly independent, so the rank is at least 2, but since the third is a linear combination of the first two (the first column plus the second), the three columns are linearly dependent so the rank must be less than 3.
The matrix
{\displaystyle A={\begin{bmatrix}1&1&0&2\\-1&-1&0&-2\end{bmatrix}}}
has rank 1: there are nonzero columns, so the rank is positive, but any pair of columns is linearly dependent. Similarly, the transpose
{\displaystyle A^{\mathrm {T} }={\begin{bmatrix}1&-1\\1&-1\\0&0\\2&-2\end{bmatrix}}}
of A has rank 1. Indeed, since the column vectors of A are the row vectors of the transpose of A, the statement that the column rank of a matrix equals its row rank is equivalent to the statement that the rank of a matrix is equal to the rank of its transpose, i.e., rank(A) = rank(AT).
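Both examples are easy to confirm numerically; a quick check with NumPy's `matrix_rank`:

```python
import numpy as np

M = np.array([[1, 0, 1],
              [0, 1, 1],
              [0, 1, 1]])
A = np.array([[1, 1, 0, 2],
              [-1, -1, 0, -2]])

print(np.linalg.matrix_rank(M))    # → 2
print(np.linalg.matrix_rank(A))    # → 1
# The rank of a matrix equals the rank of its transpose.
print(np.linalg.matrix_rank(A.T))  # → 1
```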
== Computing the rank of a matrix ==
=== Rank from row echelon forms ===
A common approach to finding the rank of a matrix is to reduce it to a simpler form, generally row echelon form, by elementary row operations. Row operations do not change the row space (hence do not change the row rank), and, being invertible, map the column space to an isomorphic space (hence do not change the column rank). Once in row echelon form, the rank is clearly the same for both row rank and column rank, and equals the number of pivots (or basic columns) and also the number of non-zero rows.
For example, the matrix A given by
{\displaystyle A={\begin{bmatrix}1&2&1\\-2&-3&1\\3&5&0\end{bmatrix}}}
can be put in reduced row-echelon form by using the following elementary row operations:
{\displaystyle {\begin{aligned}{\begin{bmatrix}1&2&1\\-2&-3&1\\3&5&0\end{bmatrix}}&\xrightarrow {2R_{1}+R_{2}\to R_{2}} {\begin{bmatrix}1&2&1\\0&1&3\\3&5&0\end{bmatrix}}\xrightarrow {-3R_{1}+R_{3}\to R_{3}} {\begin{bmatrix}1&2&1\\0&1&3\\0&-1&-3\end{bmatrix}}\\&\xrightarrow {R_{2}+R_{3}\to R_{3}} \,\,{\begin{bmatrix}1&2&1\\0&1&3\\0&0&0\end{bmatrix}}\xrightarrow {-2R_{2}+R_{1}\to R_{1}} {\begin{bmatrix}1&0&-5\\0&1&3\\0&0&0\end{bmatrix}}~.\end{aligned}}}
The final matrix (in reduced row echelon form) has two non-zero rows and thus the rank of matrix A is 2.
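The reduction above can be automated. A minimal sketch (the helper name is ours) that counts pivots using exact rational arithmetic, so no floating-point issues arise:

```python
from fractions import Fraction

def rank_via_row_reduction(rows):
    """Count pivots after Gauss-Jordan elimination with exact rationals."""
    M = [[Fraction(v) for v in row] for row in rows]
    nrows, ncols = len(M), len(M[0])
    rank = col = 0
    while rank < nrows and col < ncols:
        # Find a pivot in the current column at or below the current row.
        pivot = next((i for i in range(rank, nrows) if M[i][col] != 0), None)
        if pivot is None:
            col += 1
            continue
        M[rank], M[pivot] = M[pivot], M[rank]
        inv = M[rank][col]
        M[rank] = [v / inv for v in M[rank]]    # normalize the pivot row
        for i in range(nrows):                  # clear the column elsewhere
            if i != rank and M[i][col] != 0:
                f = M[i][col]
                M[i] = [a - f * b for a, b in zip(M[i], M[rank])]
        rank += 1
        col += 1
    return rank

print(rank_via_row_reduction([[1, 2, 1], [-2, -3, 1], [3, 5, 0]]))  # → 2
```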
=== Computation ===
When applied to floating point computations on computers, basic Gaussian elimination (LU decomposition) can be unreliable, and a rank-revealing decomposition should be used instead. An effective alternative is the singular value decomposition (SVD), but there are other less computationally expensive choices, such as QR decomposition with pivoting (so-called rank-revealing QR factorization), which are still more numerically robust than Gaussian elimination. Numerical determination of rank requires a criterion for deciding when a value, such as a singular value from the SVD, should be treated as zero, a practical choice which depends on both the matrix and the application.
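A sketch of such a criterion: compute the singular values and count those above a tolerance relative to the largest one. The default tolerance below mirrors common practice but is still a judgment call, as the text notes:

```python
import numpy as np

def numerical_rank(A, rtol=None):
    """Rank of A computed from the SVD: count singular values above a
    tolerance relative to the largest singular value."""
    s = np.linalg.svd(A, compute_uv=False)
    if s.size == 0 or s[0] == 0:
        return 0
    if rtol is None:
        rtol = max(A.shape) * np.finfo(A.dtype).eps
    return int(np.count_nonzero(s > rtol * s[0]))

A = np.array([[1.0, 2.0, 1.0],
              [-2.0, -3.0, 1.0],
              [3.0, 5.0, 0.0]])
print(numerical_rank(A))         # → 2
print(np.linalg.matrix_rank(A))  # → 2 (NumPy applies the same idea)
```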
== Proofs that column rank = row rank ==
=== Proof using row reduction ===
The fact that the column and row ranks of any matrix are equal is fundamental in linear algebra. Many proofs have been given. One of the most elementary ones has been sketched in § Rank from row echelon forms. Here is a variant of this proof:
It is straightforward to show that neither the row rank nor the column rank are changed by an elementary row operation. As Gaussian elimination proceeds by elementary row operations, the reduced row echelon form of a matrix has the same row rank and the same column rank as the original matrix. Further elementary column operations allow putting the matrix in the form of an identity matrix possibly bordered by rows and columns of zeros. Again, this changes neither the row rank nor the column rank. It is immediate that both the row and column ranks of this resulting matrix is the number of its nonzero entries.
We present two other proofs of this result. The first uses only basic properties of linear combinations of vectors, and is valid over any field. The proof is based upon Wardlaw (2005). The second uses orthogonality and is valid for matrices over the real numbers; it is based upon Mackiw (1995). Both proofs can be found in the book by Banerjee and Roy (2014).
=== Proof using linear combinations ===
Let A be an m × n matrix. Let the column rank of A be r, and let c1, ..., cr be any basis for the column space of A. Place these as the columns of an m × r matrix C. Every column of A can be expressed as a linear combination of the r columns in C. This means that there is an r × n matrix R such that A = CR. R is the matrix whose ith column is formed from the coefficients giving the ith column of A as a linear combination of the r columns of C. In other words, R is the matrix which contains the multiples for the bases of the column space of A (which is C), which are then used to form A as a whole. Now, each row of A is given by a linear combination of the r rows of R. Therefore, the rows of R form a spanning set of the row space of A and, by the Steinitz exchange lemma, the row rank of A cannot exceed r. This proves that the row rank of A is less than or equal to the column rank of A. This result can be applied to any matrix, so apply the result to the transpose of A. Since the row rank of the transpose of A is the column rank of A and the column rank of the transpose of A is the row rank of A, this establishes the reverse inequality and we obtain the equality of the row rank and the column rank of A. (Also see Rank factorization.)
=== Proof using orthogonality ===
Let A be an m × n matrix with entries in the real numbers whose row rank is r. Therefore, the dimension of the row space of A is r. Let x1, x2, …, xr be a basis of the row space of A. We claim that the vectors Ax1, Ax2, …, Axr are linearly independent. To see why, consider a linear homogeneous relation involving these vectors with scalar coefficients c1, c2, …, cr:
{\displaystyle 0=c_{1}A\mathbf {x} _{1}+c_{2}A\mathbf {x} _{2}+\cdots +c_{r}A\mathbf {x} _{r}=A(c_{1}\mathbf {x} _{1}+c_{2}\mathbf {x} _{2}+\cdots +c_{r}\mathbf {x} _{r})=A\mathbf {v} ,}
where v = c1x1 + c2x2 + ⋯ + crxr. We make two observations: (a) v is a linear combination of vectors in the row space of A, which implies that v belongs to the row space of A, and (b) since Av = 0, the vector v is orthogonal to every row vector of A and, hence, is orthogonal to every vector in the row space of A. The facts (a) and (b) together imply that v is orthogonal to itself, which proves that v = 0 or, by the definition of v,
{\displaystyle c_{1}\mathbf {x} _{1}+c_{2}\mathbf {x} _{2}+\cdots +c_{r}\mathbf {x} _{r}=0.}
But recall that the xi were chosen as a basis of the row space of A and so are linearly independent. This implies that c1 = c2 = ⋯ = cr = 0. It follows that Ax1, Ax2, …, Axr are linearly independent.
Now, each Axi is obviously a vector in the column space of A. So, Ax1, Ax2, …, Axr is a set of r linearly independent vectors in the column space of A and, hence, the dimension of the column space of A (i.e., the column rank of A) must be at least as big as r. This proves that row rank of A is no larger than the column rank of A. Now apply this result to the transpose of A to get the reverse inequality and conclude as in the previous proof.
== Alternative definitions ==
In all the definitions in this section, the matrix A is taken to be an m × n matrix over an arbitrary field F.
=== Dimension of image ===
Given the matrix A, there is an associated linear mapping f : Fn → Fm defined by f(x) = Ax. The rank of A is the dimension of the image of f. This definition has the advantage that it can be applied to any linear map without need for a specific matrix.
=== Rank in terms of nullity ===
Given the same linear mapping f as above, the rank is n minus the dimension of the kernel of f. The rank–nullity theorem states that this definition is equivalent to the preceding one.
=== Column rank – dimension of column space ===
The rank of A is the maximal number of linearly independent columns c1, c2, …, ck of A; this is the dimension of the column space of A (the column space being the subspace of Fm generated by the columns of A, which is in fact just the image of the linear map f associated to A).
=== Row rank – dimension of row space ===
The rank of A is the maximal number of linearly independent rows of A; this is the dimension of the row space of A.
=== Decomposition rank ===
The rank of A is the smallest positive integer k such that A can be factored as A = CR, where C is an m × k matrix and R is a k × n matrix. In fact, for all integers k, the following are equivalent:
the column rank of A is less than or equal to k,
there exist k columns c1, …, ck of size m such that every column of A is a linear combination of c1, …, ck,
there exists an m × k matrix C and a k × n matrix R such that A = CR (when k is the rank, this is a rank factorization of A),
there exist k rows r1, …, rk of size n such that every row of A is a linear combination of r1, …, rk,
the row rank of A is less than or equal to k.
Indeed, the following equivalences are obvious: (1) ⇔ (2) ⇔ (3) ⇔ (4) ⇔ (5).
For example, to prove (3) from (2), take C to be the matrix whose columns are c1, …, ck from (2). To prove (2) from (3), take c1, …, ck to be the columns of C.
It follows from the equivalence (1) ⇔ (5) that the row rank is equal to the column rank.
As in the case of the "dimension of image" characterization, this can be generalized to a definition of the rank of any linear map: the rank of a linear map f : V → W is the minimal dimension k of an intermediate space X such that f can be written as the composition of a map V → X and a map X → W. Unfortunately, this definition does not suggest an efficient manner to compute the rank (for which it is better to use one of the alternative definitions). See rank factorization for details.
=== Rank in terms of singular values ===
The rank of A equals the number of non-zero singular values, which is the same as the number of non-zero diagonal elements in Σ in the singular value decomposition A = UΣV∗.
=== Determinantal rank – size of largest non-vanishing minor ===
The rank of A is the largest order of any non-zero minor in A. (The order of a minor is the side-length of the square sub-matrix of which it is the determinant.) Like the decomposition rank characterization, this does not give an efficient way of computing the rank, but it is useful theoretically: a single non-zero minor witnesses a lower bound (namely its order) for the rank of the matrix, which can be useful (for example) to prove that certain operations do not lower the rank of a matrix.
A non-vanishing p-minor (p × p submatrix with non-zero determinant) shows that the rows and columns of that submatrix are linearly independent, and thus those rows and columns of the full matrix are linearly independent (in the full matrix), so the row and column rank are at least as large as the determinantal rank; however, the converse is less straightforward. The equivalence of determinantal rank and column rank is a strengthening of the statement that if the span of n vectors has dimension p, then p of those vectors span the space (equivalently, that one can choose a spanning set that is a subset of the vectors): the equivalence implies that a subset of the rows and a subset of the columns simultaneously define an invertible submatrix (equivalently, if the span of n vectors has dimension p, then p of these vectors span the space and there is a set of p coordinates on which they are linearly independent).
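For small matrices the determinantal rank can be computed directly by brute force over all square submatrices; this is only practical at toy sizes, but it illustrates the definition (the helper name is ours):

```python
import numpy as np
from itertools import combinations

def determinantal_rank(A, tol=1e-9):
    """Largest order of a non-vanishing minor, checked by brute force
    from the largest possible order downward."""
    m, n = A.shape
    for k in range(min(m, n), 0, -1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                if abs(np.linalg.det(A[np.ix_(rows, cols)])) > tol:
                    return k
    return 0

M = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 1.0, 1.0]])
# Agrees with the ordinary (column) rank.
print(determinantal_rank(M), np.linalg.matrix_rank(M))  # → 2 2
```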
=== Tensor rank – minimum number of simple tensors ===
The rank of A is the smallest number k such that A can be written as a sum of k rank 1 matrices, where a matrix is defined to have rank 1 if and only if it can be written as a nonzero product c · r of a column vector c and a row vector r. This notion of rank is called tensor rank; it can be generalized in the separable models interpretation of the singular value decomposition.
== Properties ==
We assume that A is an m × n matrix, and we define the linear map f by f(x) = Ax as above.
The rank of an m × n matrix is a nonnegative integer and cannot be greater than either m or n. That is,
{\displaystyle \operatorname {rank} (A)\leq \min(m,n).}
A matrix that has rank min(m, n) is said to have full rank; otherwise, the matrix is rank deficient.
Only a zero matrix has rank zero.
f is injective (or "one-to-one") if and only if A has rank n (in this case, we say that A has full column rank).
f is surjective (or "onto") if and only if A has rank m (in this case, we say that A has full row rank).
If A is a square matrix (i.e., m = n), then A is invertible if and only if A has rank n (that is, A has full rank).
If B is any n × k matrix, then
{\displaystyle \operatorname {rank} (AB)\leq \min(\operatorname {rank} (A),\operatorname {rank} (B)).}
If B is an n × k matrix of rank n, then
{\displaystyle \operatorname {rank} (AB)=\operatorname {rank} (A).}
If C is an l × m matrix of rank m, then
{\displaystyle \operatorname {rank} (CA)=\operatorname {rank} (A).}
The rank of A is equal to r if and only if there exists an invertible m × m matrix X and an invertible n × n matrix Y such that
{\displaystyle XAY={\begin{bmatrix}I_{r}&0\\0&0\end{bmatrix}},}
where Ir denotes the r × r identity matrix and the three zero matrices have the sizes r × (n − r), (m − r) × r and (m − r) × (n − r).
Sylvester’s rank inequality: if A is an m × n matrix and B is n × k, then
{\displaystyle \operatorname {rank} (A)+\operatorname {rank} (B)-n\leq \operatorname {rank} (AB).}
This is a special case of the next inequality.
The inequality due to Frobenius: if AB, ABC and BC are defined, then
{\displaystyle \operatorname {rank} (AB)+\operatorname {rank} (BC)\leq \operatorname {rank} (B)+\operatorname {rank} (ABC).}
Subadditivity:
{\displaystyle \operatorname {rank} (A+B)\leq \operatorname {rank} (A)+\operatorname {rank} (B)}
when A and B are of the same dimension. As a consequence, a rank-k matrix can be written as the sum of k rank-1 matrices, but not fewer.
The rank of a matrix plus the nullity of the matrix equals the number of columns of the matrix. (This is the rank–nullity theorem.)
If A is a matrix over the real numbers then the rank of A and the rank of its corresponding Gram matrix are equal. Thus, for real matrices
{\displaystyle \operatorname {rank} (A^{\mathrm {T} }A)=\operatorname {rank} (AA^{\mathrm {T} })=\operatorname {rank} (A)=\operatorname {rank} (A^{\mathrm {T} }).}
This can be shown by proving equality of their null spaces. The null space of the Gram matrix is given by vectors x for which
{\displaystyle A^{\mathrm {T} }A\mathbf {x} =0.}
If this condition is fulfilled, we also have
{\displaystyle 0=\mathbf {x} ^{\mathrm {T} }A^{\mathrm {T} }A\mathbf {x} =\left|A\mathbf {x} \right|^{2}.}
If A is a matrix over the complex numbers, Ā denotes the complex conjugate of A, and A∗ the conjugate transpose of A (i.e., the adjoint of A), then
{\displaystyle \operatorname {rank} (A)=\operatorname {rank} ({\overline {A}})=\operatorname {rank} (A^{\mathrm {T} })=\operatorname {rank} (A^{*})=\operatorname {rank} (A^{*}A)=\operatorname {rank} (AA^{*}).}
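These equalities are easy to spot-check numerically on a random complex matrix (the seed and shape here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 6)) + 1j * rng.standard_normal((4, 6))

r = np.linalg.matrix_rank(A)
# Conjugate, transpose, conjugate transpose, and both Gram matrices
# all have the same rank as A.
for B in (A.conj(), A.T, A.conj().T, A.conj().T @ A, A @ A.conj().T):
    assert np.linalg.matrix_rank(B) == r
print(r)  # a generic random 4 × 6 matrix has full rank 4
```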
== Applications ==
One useful application of calculating the rank of a matrix is the computation of the number of solutions of a system of linear equations. According to the Rouché–Capelli theorem, the system is inconsistent if the rank of the augmented matrix is greater than the rank of the coefficient matrix. If on the other hand, the ranks of these two matrices are equal, then the system must have at least one solution. The solution is unique if and only if the rank equals the number of variables. Otherwise the general solution has k free parameters where k is the difference between the number of variables and the rank. In this case (and assuming the system of equations is in the real or complex numbers) the system of equations has infinitely many solutions.
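A classifier following the Rouché–Capelli theorem takes only a few lines of NumPy (the function name is ours):

```python
import numpy as np

def classify(A, b):
    """Classify the solution set of Ax = b via the Rouché–Capelli theorem."""
    r = np.linalg.matrix_rank(A)
    r_aug = np.linalg.matrix_rank(np.column_stack([A, b]))
    if r_aug > r:
        return "inconsistent"              # no solution
    if r == A.shape[1]:
        return "unique solution"           # rank equals number of variables
    return "infinitely many solutions"     # n - r free parameters

A = np.array([[1.0, 1.0], [1.0, 1.0]])
print(classify(A, np.array([1.0, 2.0])))          # → inconsistent
print(classify(A, np.array([1.0, 1.0])))          # → infinitely many solutions
print(classify(np.eye(2), np.array([3.0, 4.0])))  # → unique solution
```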
In control theory, the rank of a matrix can be used to determine whether a linear system is controllable, or observable.
In the field of communication complexity, the rank of the communication matrix of a function gives bounds on the amount of communication needed for two parties to compute the function.
== Generalization ==
There are different generalizations of the concept of rank to matrices over arbitrary rings, where column rank, row rank, dimension of column space, and dimension of row space of a matrix may be different from the others or may not exist.
Thinking of matrices as tensors, the tensor rank generalizes to arbitrary tensors; for tensors of order greater than 2 (matrices are order 2 tensors), rank is very hard to compute, unlike for matrices.
There is a notion of rank for smooth maps between smooth manifolds. It is equal to the linear rank of the derivative.
== Matrices as tensors ==
Matrix rank should not be confused with tensor order, which is called tensor rank. Tensor order is the number of indices required to write a tensor, and thus matrices all have tensor order 2. More precisely, matrices are tensors of type (1,1), having one row index and one column index, also called covariant order 1 and contravariant order 1; see Tensor (intrinsic definition) for details.
The tensor rank of a matrix can also mean the minimum number of simple tensors necessary to express the matrix as a linear combination; this definition agrees with matrix rank as discussed here.
== See also ==
Matroid rank
Nonnegative rank (linear algebra)
Rank (differential topology)
Multicollinearity
Linear dependence
== Notes ==
== References ==
== Sources ==
Axler, Sheldon (2015). Linear Algebra Done Right. Undergraduate Texts in Mathematics (3rd ed.). Springer. ISBN 978-3-319-11079-0.
Halmos, Paul Richard (1974) [1958]. Finite-Dimensional Vector Spaces. Undergraduate Texts in Mathematics (2nd ed.). Springer. ISBN 0-387-90093-4.
Hefferon, Jim (2020). Linear Algebra (4th ed.). Orthogonal Publishing L3C. ISBN 978-1-944325-11-4.
Katznelson, Yitzhak; Katznelson, Yonatan R. (2008). A (Terse) Introduction to Linear Algebra. American Mathematical Society. ISBN 978-0-8218-4419-9.
Roman, Steven (2005). Advanced Linear Algebra. Undergraduate Texts in Mathematics (2nd ed.). Springer. ISBN 0-387-24766-1.
Valenza, Robert J. (1993) [1951]. Linear Algebra: An Introduction to Abstract Mathematics. Undergraduate Texts in Mathematics (3rd ed.). Springer. ISBN 3-540-94099-5.
== Further reading ==
Roger A. Horn and Charles R. Johnson (1985). Matrix Analysis. Cambridge University Press. ISBN 978-0-521-38632-6.
Kaw, Autar K. Two chapters from the book Introduction to Matrix Algebra: Vectors and System of Equations
Mike Brookes: Matrix Reference Manual.
In linear algebra, the minimal polynomial μA of an n × n matrix A over a field F is the monic polynomial P over F of least degree such that P(A) = 0. Any other polynomial Q with Q(A) = 0 is a (polynomial) multiple of μA.
The following three statements are equivalent:
λ is a root of μA,
λ is a root of the characteristic polynomial χA of A,
λ is an eigenvalue of matrix A.
The multiplicity of a root λ of μA is the largest power m such that ker((A − λIn)m) strictly contains ker((A − λIn)m−1). In other words, increasing the exponent up to m will give ever larger kernels, but further increasing the exponent beyond m will just give the same kernel.
If the field F is not algebraically closed, then the minimal and characteristic polynomials need not factor according to their roots (in F) alone, in other words they may have irreducible polynomial factors of degree greater than 1. For irreducible polynomials P one has similar equivalences:
P divides μA,
P divides χA,
the kernel of P(A) has dimension at least 1,
the kernel of P(A) has dimension at least deg(P).
Like the characteristic polynomial, the minimal polynomial does not depend on the base field. In other words, considering the matrix as one with coefficients in a larger field does not change the minimal polynomial. The reason for this differs from the case with the characteristic polynomial (where it is immediate from the definition of determinants), namely by the fact that the minimal polynomial is determined by the relations of linear dependence between the powers of A: extending the base field will not introduce any new such relations (nor of course will it remove existing ones).
The minimal polynomial is often the same as the characteristic polynomial, but not always. For example, if A is a multiple aIn of the identity matrix, then its minimal polynomial is X − a since the kernel of aIn − A = 0 is already the entire space; on the other hand its characteristic polynomial is (X − a)n (the only eigenvalue is a, and the degree of the characteristic polynomial is always equal to the dimension of the space). The minimal polynomial always divides the characteristic polynomial, which is one way of formulating the Cayley–Hamilton theorem (for the case of matrices over a field).
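Because the minimal polynomial is determined by the linear dependence relations between the powers of A, it can be computed numerically by searching for the first power of A that lies in the span of the lower powers. A minimal sketch with NumPy (the function name and the floating-point tolerance are illustrative assumptions, not part of the article):

```python
import numpy as np

def minimal_polynomial(A, tol=1e-9):
    """Monic minimal-polynomial coefficients [c0, c1, ..., 1] of A, found
    from the first linear dependence among I, A, A^2, ... (low degree first)."""
    n = A.shape[0]
    powers = [np.eye(n).ravel()]
    for k in range(1, n + 1):
        new = np.linalg.matrix_power(A, k).ravel()
        M = np.column_stack(powers)
        # least squares: is A^k a linear combination of the lower powers?
        coeffs, residual, *_ = np.linalg.lstsq(M, new, rcond=None)
        if np.linalg.norm(M @ coeffs - new) < tol:
            return np.append(-coeffs, 1.0)  # monic polynomial, degree k
        powers.append(new)
    raise RuntimeError("no dependence found below degree n+1")

# A = 2*I_3: the minimal polynomial is X - 2 (coefficients [-2, 1]),
# while the characteristic polynomial is (X - 2)^3.
print(minimal_polynomial(2.0 * np.eye(3)))   # ≈ [-2.  1.]
```

Cayley–Hamilton guarantees a dependence is found at degree at most n, so the loop terminates.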
== Formal definition ==
Given an endomorphism T on a finite-dimensional vector space V over a field F, let IT be the set defined as
{\displaystyle {\mathit {I}}_{T}=\{p\in \mathbf {F} [t]\mid p(T)=0\},}
where F[t] is the space of all polynomials over the field F. IT is a proper ideal of F[t]. Since F is a field, F[t] is a principal ideal domain, thus any ideal is generated by a single polynomial, which is unique up to a unit in F. A particular choice among the generators can be made, since precisely one of the generators is monic. The minimal polynomial is thus defined to be the monic polynomial that generates IT. It is the monic polynomial of least degree in IT.
== Applications ==
An endomorphism φ of a finite-dimensional vector space over a field F is diagonalizable if and only if its minimal polynomial factors completely over F into distinct linear factors. The fact that there is only one factor X − λ for every eigenvalue λ means that the generalized eigenspace for λ is the same as the eigenspace for λ: every Jordan block has size 1. More generally, if φ satisfies a polynomial equation P(φ) = 0 where P factors into distinct linear factors over F, then it will be diagonalizable: its minimal polynomial is a divisor of P and therefore also factors into distinct linear factors. In particular one has:
P = X k − 1: finite order endomorphisms of complex vector spaces are diagonalizable. For the special case k = 2 of involutions, this is even true for endomorphisms of vector spaces over any field of characteristic other than 2, since X 2 − 1 = (X − 1)(X + 1) is a factorization into distinct factors over such a field. This is a part of representation theory of cyclic groups.
P = X 2 − X = X(X − 1): endomorphisms satisfying φ2 = φ are called projections, and are always diagonalizable (moreover their only eigenvalues are 0 and 1).
By contrast if μφ = X k with k ≥ 2 then φ (a nilpotent endomorphism) is not necessarily diagonalizable, since X k has a repeated root 0.
These cases can also be proved directly, but the minimal polynomial gives a unified perspective and proof.
== Computation ==
For a nonzero vector v in V define:
{\displaystyle {\mathit {I}}_{T,v}=\{p\in \mathbf {F} [t]\;|\;p(T)(v)=0\}.}
The set IT,v so defined is a proper ideal of F[t]. Let μT,v be the monic polynomial which generates it.
=== Example ===
Define T to be the endomorphism of R3 with matrix, on the canonical basis,
{\displaystyle {\begin{pmatrix}1&-1&-1\\1&-2&1\\0&1&-3\end{pmatrix}}.}
Taking the first canonical basis vector e1 and its repeated images by T one obtains
{\displaystyle e_{1}={\begin{bmatrix}1\\0\\0\end{bmatrix}},\quad T\cdot e_{1}={\begin{bmatrix}1\\1\\0\end{bmatrix}}.\quad T^{2}\!\cdot e_{1}={\begin{bmatrix}0\\-1\\1\end{bmatrix}}{\mbox{ and}}\quad T^{3}\!\cdot e_{1}={\begin{bmatrix}0\\3\\-4\end{bmatrix}}}
of which the first three are easily seen to be linearly independent, and therefore span all of R3. The last one then necessarily is a linear combination of the first three, in fact
T 3 ⋅ e1 = −4T 2 ⋅ e1 − T ⋅ e1 + e1,
so that:
μT,e1 = X 3 + 4X 2 + X − 1.
This is in fact also the minimal polynomial μT and the characteristic polynomial χT: indeed μT,e1 divides μT which divides χT, and since the first and last are of degree 3 and all are monic, they must all be the same. Another reason is that in general if any polynomial in T annihilates a vector v, then it also annihilates T⋅v (just apply T to the equation that says that it annihilates v), and therefore by iteration it annihilates the entire space generated by the iterated images by T of v; in the current case we have seen that for v = e1 that space is all of R3, so μT,e1(T) = 0. Indeed one verifies for the full matrix that T 3 + 4T 2 + T − I3 is the zero matrix:
{\displaystyle {\begin{bmatrix}0&1&-3\\3&-13&23\\-4&19&-36\end{bmatrix}}+4{\begin{bmatrix}0&0&1\\-1&4&-6\\1&-5&10\end{bmatrix}}+{\begin{bmatrix}1&-1&-1\\1&-2&1\\0&1&-3\end{bmatrix}}+{\begin{bmatrix}-1&0&0\\0&-1&0\\0&0&-1\end{bmatrix}}={\begin{bmatrix}0&0&0\\0&0&0\\0&0&0\end{bmatrix}}}
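This verification can also be reproduced numerically; a small sketch with NumPy (variable names are illustrative):

```python
import numpy as np

# The matrix of T on the canonical basis, as given above.
T = np.array([[1, -1, -1],
              [1, -2,  1],
              [0,  1, -3]])

P = np.linalg.matrix_power
# Evaluate the minimal polynomial at T: T^3 + 4T^2 + T - I should vanish.
residue = P(T, 3) + 4 * P(T, 2) + T - np.eye(3)
print(np.allclose(residue, 0))   # True
```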
== See also ==
Annihilating polynomial
== References ==
Lang, Serge (2002), Algebra, Graduate Texts in Mathematics, vol. 211 (Revised third ed.), New York: Springer-Verlag, ISBN 978-0-387-95385-4, MR 1878556
A 3D projection (or graphical projection) is a design technique used to display a three-dimensional (3D) object on a two-dimensional (2D) surface. These projections rely on visual perspective and aspect analysis to project a complex object for viewing capability on a simpler plane.
3D projections use the primary qualities of an object's basic shape to create a map of points that are then connected to one another to create a visual element. The result is a graphic that contains conceptual properties to interpret the figure or image as not actually flat (2D), but rather, as a solid object (3D) being viewed on a 2D display.
3D objects are largely displayed on two-dimensional mediums (such as paper and computer monitors). As such, graphical projections are a commonly used design element; notably, in engineering drawing, drafting, and computer graphics. Projections can be calculated through employment of mathematical analysis and formulae, or by using various geometric and optical techniques.
== Overview ==
In order to display a three-dimensional (3D) object on a two-dimensional (2D) surface, a projection transformation is applied to the 3D object using a projection matrix. This transformation removes information in the third dimension while preserving it in the first two. See Projective Geometry for more details.
If the size and shape of the 3D object should not be distorted by its relative position to the 2D surface, a parallel projection may be used.
Examples of parallel projections:
If the 3D perspective of an object should be preserved on a 2D surface, the transformation must include scaling and translation based on the object's relative position to the 2D surface. This process is called perspective projection.
Examples of perspective projections:
== Parallel projection ==
In parallel projection, the lines of sight from the object to the projection plane are parallel to each other. Thus, lines that are parallel in three-dimensional space remain parallel in the two-dimensional projected image. Parallel projection also corresponds to a perspective projection with an infinite focal length (the distance from a camera's lens and focal point), or "zoom".
Images drawn in parallel projection rely upon the technique of axonometry ("to measure along axes"), as described in Pohlke's theorem. In general, the resulting image is oblique (the rays are not perpendicular to the image plane); but in special cases the result is orthographic (the rays are perpendicular to the image plane). Axonometry should not be confused with axonometric projection, as in English literature the latter usually refers only to a specific class of pictorials (see below).
=== Orthographic projection ===
The orthographic projection is derived from the principles of descriptive geometry and is a two-dimensional representation of a three-dimensional object. It is a parallel projection (the lines of projection are parallel both in reality and in the projection plane). It is the projection type of choice for working drawings.
If the normal of the viewing plane (the camera direction) is parallel to one of the primary axes (the x, y, or z axis), the mathematical transformation is as follows. To project the 3D point (a_x, a_y, a_z) onto the 2D point (b_x, b_y) using an orthographic projection parallel to the y axis (where positive y represents the forward direction – profile view), the following equations can be used:
{\displaystyle b_{x}=s_{x}a_{x}+c_{x}}
{\displaystyle b_{y}=s_{z}a_{z}+c_{z}}
where the vector s is an arbitrary scale factor, and c is an arbitrary offset. These constants are optional, and can be used to properly align the viewport. Using matrix multiplication, the equations become:
{\displaystyle {\begin{bmatrix}b_{x}\\b_{y}\end{bmatrix}}={\begin{bmatrix}s_{x}&0&0\\0&0&s_{z}\end{bmatrix}}{\begin{bmatrix}a_{x}\\a_{y}\\a_{z}\end{bmatrix}}+{\begin{bmatrix}c_{x}\\c_{z}\end{bmatrix}}.}
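The matrix form above can be sketched in a few lines of code (the function name and the default scale and offset values are assumptions for illustration):

```python
import numpy as np

def orthographic(a, s=(1.0, 1.0), c=(0.0, 0.0)):
    """Orthographic projection along the y axis:
    b_x = s_x*a_x + c_x,  b_y = s_z*a_z + c_z (the y component is discarded)."""
    P = np.array([[s[0], 0.0, 0.0],
                  [0.0,  0.0, s[1]]])
    return P @ np.asarray(a, dtype=float) + np.asarray(c, dtype=float)

print(orthographic([2.0, 5.0, 3.0]))   # [2. 3.]
```

Points that differ only in their y coordinate map to the same image point, which is why lengths are not foreshortened in this projection.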
While orthographically projected images represent the three dimensional nature of the object projected, they do not represent the object as it would be recorded photographically or perceived by a viewer observing it directly. In particular, parallel lengths at all points in an orthographically projected image are of the same scale regardless of whether they are far away or near to the virtual viewer. As a result, lengths are not foreshortened as they would be in a perspective projection.
==== Multiview projection ====
With multiview projections, up to six pictures (called primary views) of an object are produced, with each projection plane parallel to one of the coordinate axes of the object. The views are positioned relative to each other according to either of two schemes: first-angle or third-angle projection. In each, the appearances of views may be thought of as being projected onto planes that form a 6-sided box around the object. Although six different sides can be drawn, usually three views of a drawing give enough information to make a 3D object. These views are known as front view, top view, and end view. The terms elevation, plan and section are also used.
=== Oblique projection ===
In oblique projections the parallel projection rays are not perpendicular to the viewing plane as with orthographic projection, but strike the projection plane at an angle other than ninety degrees. In both orthographic and oblique projection, parallel lines in space appear parallel on the projected image. Because of its simplicity, oblique projection is used exclusively for pictorial purposes rather than for formal, working drawings. In an oblique pictorial drawing, the displayed angles among the axes as well as the foreshortening factors (scale) are arbitrary. The distortion created thereby is usually attenuated by aligning one plane of the imaged object to be parallel with the plane of projection thereby creating a true shape, full-size image of the chosen plane. Special types of oblique projections are:
==== Cavalier projection (45°) ====
In cavalier projection (sometimes cavalier perspective or high view point) a point of the object is represented by three coordinates, x, y and z. On the drawing, it is represented by only two coordinates, x″ and y″. On the flat drawing, two axes, x and z on the figure, are perpendicular and the length on these axes are drawn with a 1:1 scale; it is thus similar to the dimetric projections, although it is not an axonometric projection, as the third axis, here y, is drawn in diagonal, making an arbitrary angle with the x″ axis, usually 30 or 45°. The length of the third axis is not scaled.
==== Cabinet projection ====
The term cabinet projection (sometimes cabinet perspective) stems from its use in illustrations by the furniture industry. Like cavalier perspective, one face of the projected object is parallel to the viewing plane, and the third axis is projected as going off in an angle (typically 30° or 45° or arctan(2) = 63.4°). Unlike cavalier projection, where the third axis keeps its length, with cabinet projection the length of the receding lines is cut in half.
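The cavalier and cabinet constructions differ only in the foreshortening factor applied to the receding axis; a minimal sketch (the function name, argument conventions, and defaults are assumptions):

```python
import numpy as np

def oblique(a, angle_deg=45.0, foreshorten=0.5):
    """Oblique projection onto the x/z plane: the receding y axis is drawn
    at `angle_deg` to the horizontal with scale `foreshorten`
    (1.0 -> cavalier projection, 0.5 -> cabinet projection)."""
    t = np.radians(angle_deg)
    x, y, z = a
    return np.array([x + foreshorten * y * np.cos(t),
                     z + foreshorten * y * np.sin(t)])

# A unit step along the receding axis in cabinet projection moves
# half a unit at 45 degrees on the drawing:
print(oblique((0.0, 1.0, 0.0)))   # ≈ [0.354 0.354]
```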
==== Military projection ====
A variant of oblique projection is called military projection. In this case, the horizontal sections are isometrically drawn so that the floor plans are not distorted and the verticals are drawn at an angle. The military projection is given by a rotation in the xy-plane and a vertical translation by an amount z.
=== Axonometric projection ===
Axonometric projections show an image of an object as viewed from a skew direction in order to reveal all three directions (axes) of space in one picture. Axonometric projections may be either orthographic or oblique. Axonometric instrument drawings are often used to approximate graphical perspective projections, but there is attendant distortion in the approximation. Because pictorial projections innately contain this distortion, in instrument drawings of pictorials great liberties may then be taken for economy of effort and best effect.
Axonometric projection is further subdivided into three categories: isometric projection, dimetric projection, and trimetric projection, depending on the exact angle at which the view deviates from the orthogonal. A typical characteristic of orthographic pictorials is that one axis of space is usually displayed as vertical.
==== Isometric projection ====
In isometric pictorials (for methods, see Isometric projection), the direction of viewing is such that the three axes of space appear equally foreshortened, and there is a common angle of 120° between them. The distortion caused by foreshortening is uniform, therefore the proportionality of all sides and lengths are preserved, and the axes share a common scale. This enables measurements to be read or taken directly from the drawing.
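One common drawing convention consistent with this description keeps the y axis vertical and draws the x and z axes receding at 30° below the horizontal, so the three projected axes are 120° apart with equal scale. A sketch under that assumption (the function name is illustrative):

```python
import numpy as np

def isometric(a):
    """Isometric pictorial: y axis vertical, x and z axes at 30 degrees
    below the horizontal; all three axes share the same scale."""
    c, s = np.cos(np.radians(30)), np.sin(np.radians(30))
    x, y, z = a
    return np.array([(x - z) * c, y - (x + z) * s])

# The unit x and z axes land symmetrically about the vertical y axis:
print(isometric((1.0, 0.0, 0.0)))   # ≈ [ 0.866 -0.5 ]
print(isometric((0.0, 0.0, 1.0)))   # ≈ [-0.866 -0.5 ]
```

Each unit axis vector projects to a vector of length 1, which reflects the common scale that lets measurements be read directly from the drawing.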
==== Dimetric projection ====
In dimetric pictorials (for methods, see Dimetric projection), the direction of viewing is such that two of the three axes of space appear equally foreshortened, of which the attendant scale and angles of presentation are determined according to the angle of viewing; the scale of the third direction (vertical) is determined separately. Approximations are common in dimetric drawings.
==== Trimetric projection ====
In trimetric pictorials (for methods, see Trimetric projection), the direction of viewing is such that all of the three axes of space appear unequally foreshortened. The scale along each of the three axes and the angles among them are determined separately as dictated by the angle of viewing. Approximations in Trimetric drawings are common.
=== Limitations of parallel projection ===
Objects drawn with parallel projection do not appear larger or smaller as they extend closer to or away from the viewer. While advantageous for architectural drawings, where measurements must be taken directly from the image, the result is a perceived distortion, since unlike perspective projection, this is not how our eyes or photography normally work. It also can easily result in situations where depth and altitude are difficult to gauge, as is shown in the illustration to the right.
In this isometric drawing, the blue sphere is two units higher than the red one. However, this difference in elevation is not apparent if one covers the right half of the picture, as the boxes (which serve as clues suggesting height) are then obscured.
This visual ambiguity has been exploited in op art, as well as "impossible object" drawings. M. C. Escher's Waterfall (1961), while not strictly utilizing parallel projection, is a well-known example, in which a channel of water seems to travel unaided along a downward path, only to then paradoxically fall once again as it returns to its source. The water thus appears to disobey the law of conservation of energy. An extreme example is depicted in the film Inception, where by a forced perspective trick an immobile stairway changes its connectivity. The video game Fez uses tricks of perspective to determine where a player can and cannot move in a puzzle-like fashion.
== Perspective projection ==
Perspective projection or perspective transformation is a projection where three-dimensional objects are projected on a picture plane. This has the effect that distant objects appear smaller than nearer objects.
It also means that lines which are parallel in nature (that is, meet at the point at infinity) appear to intersect in the projected image. For example, if railways are pictured with perspective projection, they appear to converge towards a single point, called the vanishing point. Photographic lenses and the human eye work in the same way, therefore the perspective projection looks the most realistic. Perspective projection is usually categorized into one-point, two-point and three-point perspective, depending on the orientation of the projection plane towards the axes of the depicted object.
Graphical projection methods rely on the duality between lines and points, whereby two straight lines determine a point while two points determine a straight line. The orthogonal projection of the eye point onto the picture plane is called the principal vanishing point (P.P. in the scheme on the right, from the Italian term punto principale, coined during the renaissance).
Two relevant points of a line are:
its intersection with the picture plane, and
its vanishing point, found at the intersection between the parallel line from the eye point and the picture plane.
The principal vanishing point is the vanishing point of all horizontal lines perpendicular to the picture plane. The vanishing points of all horizontal lines lie on the horizon line. If, as is often the case, the picture plane is vertical, all vertical lines are drawn vertically, and have no finite vanishing point on the picture plane. Various graphical methods can be easily envisaged for projecting geometrical scenes. For example, lines traced from the eye point at 45° to the picture plane intersect the latter along a circle whose radius is the distance of the eye point from the plane, thus tracing that circle aids the construction of all the vanishing points of 45° lines; in particular, the intersection of that circle with the horizon line consists of two distance points. They are useful for drawing chessboard floors which, in turn, serve for locating the base of objects on the scene. In the perspective of a geometric solid on the right, after choosing the principal vanishing point —which determines the horizon line— the 45° vanishing point on the left side of the drawing completes the characterization of the (equally distant) point of view. Two lines are drawn from the orthogonal projection of each vertex, one at 45° and one at 90° to the picture plane. After intersecting the ground line, those lines go toward the distance point (for 45°) or the principal point (for 90°). Their new intersection locates the projection of the map. Natural heights are measured above the ground line and then projected in the same way until they meet the vertical from the map.
While orthographic projection ignores perspective to allow accurate measurements, perspective projection shows distant objects as smaller to provide additional realism.
=== Mathematical formula ===
The perspective projection requires a more involved definition as compared to orthographic projections. A conceptual aid to understanding the mechanics of this projection is to imagine the 2D projection as though the object(s) are being viewed through a camera viewfinder. The camera's position, orientation, and field of view control the behavior of the projection transformation. The following variables are defined to describe this transformation:
{\displaystyle \mathbf {a} _{x,y,z}} – the 3D position of a point A that is to be projected
{\displaystyle \mathbf {c} _{x,y,z}} – the 3D position of a point C representing the camera
{\displaystyle \mathbf {\theta } _{x,y,z}} – the orientation of the camera (represented by Tait–Bryan angles)
{\displaystyle \mathbf {e} _{x,y,z}} – the display surface's position relative to the camera pinhole {\displaystyle \mathbf {c} }
Most conventions use positive z values (the plane being in front of the pinhole {\displaystyle \mathbf {c} }); negative z values are physically more correct, but then the image is inverted both horizontally and vertically.
This results in:
{\displaystyle \mathbf {b} _{x,y}} – the 2D projection of {\displaystyle \mathbf {a} }.
When {\displaystyle \mathbf {c} _{x,y,z}=\langle 0,0,0\rangle } and {\displaystyle \mathbf {\theta } _{x,y,z}=\langle 0,0,0\rangle }, the 3D vector {\displaystyle \langle 1,2,0\rangle } is projected to the 2D vector {\displaystyle \langle 1,2\rangle }.
Otherwise, to compute {\displaystyle \mathbf {b} _{x,y}} we first define a vector {\displaystyle \mathbf {d} _{x,y,z}} as the position of point A with respect to a coordinate system defined by the camera, with origin in C and rotated by {\displaystyle \mathbf {\theta } } with respect to the initial coordinate system. This is achieved by subtracting {\displaystyle \mathbf {c} } from {\displaystyle \mathbf {a} } and then applying a rotation by {\displaystyle -\mathbf {\theta } } to the result. This transformation is often called a camera transform, and can be expressed as follows, expressing the rotation in terms of rotations about the x, y, and z axes (these calculations assume that the axes are ordered as a left-handed system of axes):
{\displaystyle {\begin{bmatrix}\mathbf {d} _{x}\\\mathbf {d} _{y}\\\mathbf {d} _{z}\end{bmatrix}}={\begin{bmatrix}1&0&0\\0&\cos(\mathbf {\theta } _{x})&\sin(\mathbf {\theta } _{x})\\0&-\sin(\mathbf {\theta } _{x})&\cos(\mathbf {\theta } _{x})\end{bmatrix}}{\begin{bmatrix}\cos(\mathbf {\theta } _{y})&0&-\sin(\mathbf {\theta } _{y})\\0&1&0\\\sin(\mathbf {\theta } _{y})&0&\cos(\mathbf {\theta } _{y})\end{bmatrix}}{\begin{bmatrix}\cos(\mathbf {\theta } _{z})&\sin(\mathbf {\theta } _{z})&0\\-\sin(\mathbf {\theta } _{z})&\cos(\mathbf {\theta } _{z})&0\\0&0&1\end{bmatrix}}\left({{\begin{bmatrix}\mathbf {a} _{x}\\\mathbf {a} _{y}\\\mathbf {a} _{z}\\\end{bmatrix}}-{\begin{bmatrix}\mathbf {c} _{x}\\\mathbf {c} _{y}\\\mathbf {c} _{z}\\\end{bmatrix}}}\right)}
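The camera transform can be sketched directly from the three rotation matrices above (the function name and argument order are illustrative assumptions):

```python
import numpy as np

def camera_transform(a, c, theta):
    """Express world point a in camera coordinates: translate by -c,
    then rotate about x, y, z with the matrices given above."""
    tx, ty, tz = theta
    cx, sx = np.cos(tx), np.sin(tx)
    cy, sy = np.cos(ty), np.sin(ty)
    cz, sz = np.cos(tz), np.sin(tz)
    Rx = np.array([[1, 0, 0], [0, cx, sx], [0, -sx, cx]])
    Ry = np.array([[cy, 0, -sy], [0, 1, 0], [sy, 0, cy]])
    Rz = np.array([[cz, sz, 0], [-sz, cz, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz @ (np.asarray(a, float) - np.asarray(c, float))

# With no rotation the transform reduces to the shift d = a - c:
print(camera_transform([1.0, 2.0, 3.0], [1.0, 0.0, 1.0], [0.0, 0.0, 0.0]))  # [0. 2. 2.]
```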
This representation corresponds to rotating by three Euler angles (more properly, Tait–Bryan angles), using the xyz convention, which can be interpreted either as "rotate about the extrinsic axes (axes of the scene) in the order z, y, x (reading right-to-left)" or "rotate about the intrinsic axes (axes of the camera) in the order x, y, z (reading left-to-right)". If the camera is not rotated ({\displaystyle \mathbf {\theta } _{x,y,z}=\langle 0,0,0\rangle }), then the matrices drop out (as identities), and this reduces to simply a shift:
{\displaystyle \mathbf {d} =\mathbf {a} -\mathbf {c} .}
Alternatively, without using matrices (let us replace {\displaystyle a_{x}-c_{x}} with {\displaystyle \mathbf {x} } and so on, and abbreviate {\displaystyle \cos \left(\theta _{\alpha }\right)} to {\displaystyle cos_{\alpha }} and {\displaystyle \sin \left(\theta _{\alpha }\right)} to {\displaystyle sin_{\alpha }}):
{\displaystyle {\begin{aligned}\mathbf {d} _{x}&=cos_{y}(sin_{z}\mathbf {y} +cos_{z}\mathbf {x} )-sin_{y}\mathbf {z} \\\mathbf {d} _{y}&=sin_{x}(cos_{y}\mathbf {z} +sin_{y}(sin_{z}\mathbf {y} +cos_{z}\mathbf {x} ))+cos_{x}(cos_{z}\mathbf {y} -sin_{z}\mathbf {x} )\\\mathbf {d} _{z}&=cos_{x}(cos_{y}\mathbf {z} +sin_{y}(sin_{z}\mathbf {y} +cos_{z}\mathbf {x} ))-sin_{x}(cos_{z}\mathbf {y} -sin_{z}\mathbf {x} )\end{aligned}}}
This transformed point can then be projected onto the 2D plane using the formula (here, x/y is used as the projection plane; literature also may use x/z):
{\displaystyle {\begin{aligned}\mathbf {b} _{x}&={\frac {\mathbf {e} _{z}}{\mathbf {d} _{z}}}\mathbf {d} _{x}+\mathbf {e} _{x},\\[5pt]\mathbf {b} _{y}&={\frac {\mathbf {e} _{z}}{\mathbf {d} _{z}}}\mathbf {d} _{y}+\mathbf {e} _{y}.\end{aligned}}}
Or, in matrix form using homogeneous coordinates, the system
{\displaystyle {\begin{bmatrix}\mathbf {f} _{x}\\\mathbf {f} _{y}\\\mathbf {f} _{w}\end{bmatrix}}={\begin{bmatrix}1&0&{\frac {\mathbf {e} _{x}}{\mathbf {e} _{z}}}\\0&1&{\frac {\mathbf {e} _{y}}{\mathbf {e} _{z}}}\\0&0&{\frac {1}{\mathbf {e} _{z}}}\end{bmatrix}}{\begin{bmatrix}\mathbf {d} _{x}\\\mathbf {d} _{y}\\\mathbf {d} _{z}\end{bmatrix}}}
in conjunction with an argument using similar triangles, leads to division by the homogeneous coordinate, giving
{\displaystyle {\begin{aligned}\mathbf {b} _{x}&=\mathbf {f} _{x}/\mathbf {f} _{w}\\\mathbf {b} _{y}&=\mathbf {f} _{y}/\mathbf {f} _{w}\end{aligned}}}
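The homogeneous form and the division by the homogeneous coordinate can be sketched as follows (the function name is an assumption):

```python
import numpy as np

def perspective(d, e):
    """Project camera-space point d onto the image plane using the
    homogeneous-coordinate matrix above, then divide by f_w."""
    ex, ey, ez = e
    M = np.array([[1.0, 0.0, ex / ez],
                  [0.0, 1.0, ey / ez],
                  [0.0, 0.0, 1.0 / ez]])
    f = M @ np.asarray(d, dtype=float)
    return f[:2] / f[2]                    # (b_x, b_y)

# A point twice as far from the pinhole projects half as large:
print(perspective([1.0, 1.0, 2.0], [0.0, 0.0, 1.0]))   # [0.5 0.5]
print(perspective([1.0, 1.0, 4.0], [0.0, 0.0, 1.0]))   # [0.25 0.25]
```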
The distance of the viewer from the display surface, {\displaystyle \mathbf {e} _{z}}, directly relates to the field of view, where {\displaystyle \alpha =2\cdot \arctan(1/\mathbf {e} _{z})} is the viewed angle. (Note: this assumes that the points (−1,−1) and (1,1) are mapped to the corners of the viewing surface.)
The above equations can also be rewritten as:
{\displaystyle {\begin{aligned}\mathbf {b} _{x}&=(\mathbf {d} _{x}\mathbf {s} _{x})/(\mathbf {d} _{z}\mathbf {r} _{x})\mathbf {r} _{z},\\\mathbf {b} _{y}&=(\mathbf {d} _{y}\mathbf {s} _{y})/(\mathbf {d} _{z}\mathbf {r} _{y})\mathbf {r} _{z}.\end{aligned}}}
Here {\displaystyle \mathbf {s} _{x,y}} is the display size, {\displaystyle \mathbf {r} _{x,y}} is the recording surface size (CCD or photographic film), {\displaystyle \mathbf {r} _{z}} is the distance from the recording surface to the entrance pupil (camera center), and {\displaystyle \mathbf {d} _{z}} is the distance from the 3D point being projected to the entrance pupil.
Subsequent clipping and scaling operations may be necessary to map the 2D plane onto any particular display media.
=== Weak perspective projection ===
A "weak" perspective projection uses the same principles as an orthographic projection, but requires the scaling factor to be specified, thus ensuring that closer objects appear bigger in the projection, and vice versa. It can be seen as a hybrid between an orthographic and a perspective projection, and described either as a perspective projection with individual point depths {\displaystyle Z_{i}} replaced by an average constant depth {\displaystyle Z_{\text{ave}}}, or simply as an orthographic projection plus a scaling.
The weak-perspective model thus approximates perspective projection while using a simpler model, similar to the pure (unscaled) orthographic perspective.
It is a reasonable approximation when the depth of the object along the line of sight is small compared to the distance from the camera, and the field of view is small. With these conditions, it can be assumed that all points on a 3D object are at the same distance {\displaystyle Z_{\text{ave}}} from the camera without significant errors in the projection (compared to the full perspective model).
Equation, assuming focal length {\textstyle f=1}:
{\displaystyle {\begin{aligned}&P_{x}={\frac {X}{Z_{\text{ave}}}}\\[5pt]&P_{y}={\frac {Y}{Z_{\text{ave}}}}\end{aligned}}}
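A minimal sketch of the weak-perspective model, assuming the average depth is taken as the mean z of the supplied points (the function name and this choice of average are assumptions):

```python
import numpy as np

def weak_perspective(points, f=1.0):
    """Weak perspective: an orthographic projection of all points,
    scaled by the single factor f / Z_ave (the mean depth)."""
    pts = np.asarray(points, dtype=float)
    z_ave = pts[:, 2].mean()
    return f * pts[:, :2] / z_ave

# A shallow object far from the camera: one scale factor for all points.
pts = [[1.0, 0.0, 10.0], [0.0, 2.0, 10.2], [1.0, 2.0, 9.8]]
print(weak_perspective(pts))
```

Replacing the per-point depth by a constant is exactly what makes the model an orthographic projection plus a scaling.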
== Diagram ==
To determine which screen x-coordinate corresponds to a point at {\displaystyle A_{x},A_{z}}, multiply the point coordinates by:
{\displaystyle B_{x}=A_{x}{\frac {B_{z}}{A_{z}}}}
where
{\displaystyle B_{x}} is the screen x coordinate,
{\displaystyle A_{x}} is the model x coordinate,
{\displaystyle B_{z}} is the focal length—the axial distance from the camera center to the image plane, and
{\displaystyle A_{z}} is the subject distance.
Since the camera operates in 3D, the same principle applies to the screen’s y coordinate— one can substitute y for x in the diagram and equation above.
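The pinhole relation above reduces to a single line of code; a sketch (the function name is an assumption):

```python
def screen_x(a_x, a_z, b_z):
    """Pinhole relation B_x = A_x * (B_z / A_z): the focal length over
    the subject distance scales the model coordinate onto the screen."""
    return a_x * b_z / a_z

# Doubling the subject distance halves the on-screen size:
print(screen_x(2.0, 4.0, 1.0))   # 0.5
print(screen_x(2.0, 8.0, 1.0))   # 0.25
```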
Alternatively, clipping techniques can be used. These involve substituting values of a point outside the field of view (FOV) with interpolated values from a corresponding point inside the camera's view matrix.
This approach, often referred to as the inverse camera method, involves performing a perspective projection calculation using known values. It determines the last visible point along the viewing frustum by projecting from an out-of-view (invisible) point after all necessary transformations have been applied.
== See also ==
== References ==
== Further reading ==
Kenneth C. Finney (2004). 3D Game Programming All in One. Thomson Course. p. 93. ISBN 978-1-59200-136-1. 3D projection.
Koehler; Ralph (December 2000). 2D/3D Graphics and Splines with Source Code. Author Solutions Incorporated. ISBN 978-0759611870.
== External links ==
Creating 3D Environments from Digital Photographs
In physics, circulation is the line integral of a vector field around a closed curve embedded in the field. In fluid dynamics, the field is the fluid velocity field. In electrodynamics, it can be the electric or the magnetic field.
In aerodynamics, it finds applications in the calculation of lift, for which circulation was first used independently by Frederick Lanchester, Ludwig Prandtl, Martin Kutta and Nikolay Zhukovsky. It is usually denoted Γ (uppercase gamma).
== Definition and properties ==
If V is a vector field and dl is a vector representing the differential length of a small element of a defined curve, the contribution of that differential length to circulation is dΓ:
{\displaystyle \mathrm {d} \Gamma =\mathbf {V} \cdot \mathrm {d} \mathbf {l} =\left|\mathbf {V} \right|\left|\mathrm {d} \mathbf {l} \right|\cos \theta .}
Here, θ is the angle between the vectors V and dl.
The circulation Γ of a vector field V around a closed curve C is the line integral:
{\displaystyle \Gamma =\oint _{C}\mathbf {V} \cdot \mathrm {d} \mathbf {l} .}
In a conservative vector field this integral evaluates to zero for every closed curve. That means that a line integral between any two points in the field is independent of the path taken. It also implies that the vector field can be expressed as the gradient of a scalar function, which is called a potential.
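The line integral defining Γ can be approximated numerically by summing V · dl over small segments of the closed curve. A sketch (the function names, the midpoint rule, and the point-vortex test field are illustrative assumptions):

```python
import numpy as np

def circulation(V, radius=1.0, n=100000):
    """Approximate the circulation of V around a circle of given radius
    by summing V . dl over small chord segments of the curve."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    pts = radius * np.stack([np.cos(t), np.sin(t)], axis=1)
    dl = np.roll(pts, -1, axis=0) - pts        # segment vectors
    mid = pts + 0.5 * dl                       # midpoint of each segment
    return np.einsum('ij,ij->i', V(mid), dl).sum()

# Point-vortex field V = k * (-y, x) / (x^2 + y^2); analytically Gamma = 2*pi*k.
k = 3.0
V = lambda p: k * np.stack([-p[:, 1], p[:, 0]], axis=1) / (p[:, 0]**2 + p[:, 1]**2)[:, None]
print(circulation(V))   # ≈ 18.85, i.e. 2*pi*3
```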
== Relation to vorticity and curl ==
Circulation can be related to curl of a vector field V and, more specifically, to vorticity if the field is a fluid velocity field,
{\displaystyle {\boldsymbol {\omega }}=\nabla \times \mathbf {V} .}
By Stokes' theorem, the flux of curl or vorticity vectors through a surface S is equal to the circulation around its perimeter,
{\displaystyle \Gamma =\oint _{\partial S}\mathbf {V} \cdot \mathrm {d} \mathbf {l} =\iint _{S}\nabla \times \mathbf {V} \cdot \mathrm {d} \mathbf {S} =\iint _{S}{\boldsymbol {\omega }}\cdot \mathrm {d} \mathbf {S} }
Here, the closed integration path ∂S is the boundary or perimeter of an open surface S, whose infinitesimal element normal dS = ndS is oriented according to the right-hand rule. Thus curl and vorticity are the circulation per unit area, taken around a local infinitesimal loop.
In potential flow of a fluid with a region of vorticity, all closed curves that enclose the vorticity have the same value for circulation.
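Stokes' theorem can be checked directly for a field with constant vorticity. In the sketch below (an illustrative example, not from the source), solid-body rotation V = (−y, x) has curl 2 everywhere, so the boundary line integral around a disc must equal 2 times the disc area.

```python
import math

def circulation(velocity, radius, n=100000):
    """Midpoint-rule line integral around a circle centred at the origin."""
    gamma, dtheta = 0.0, 2.0 * math.pi / n
    for i in range(n):
        theta = (i + 0.5) * dtheta
        x, y = radius * math.cos(theta), radius * math.sin(theta)
        vx, vy = velocity(x, y)
        gamma += (-vx * math.sin(theta) + vy * math.cos(theta)) * radius * dtheta
    return gamma

# Solid-body rotation: curl = dVy/dx - dVx/dy = 1 - (-1) = 2 everywhere
rigid = lambda x, y: (-y, x)

R = 1.5
lhs = circulation(rigid, R)    # circulation around the perimeter
rhs = 2.0 * math.pi * R**2     # constant vorticity (2) times disc area
print(lhs, rhs)                # both ≈ 14.14
```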
== Uses ==
=== Kutta–Joukowski theorem in fluid dynamics ===
In fluid dynamics, the lift per unit span (L') acting on a body in a two-dimensional flow field is directly proportional to the circulation. Lift per unit span can be expressed as the product of the circulation Γ about the body, the fluid density
ρ, and the speed of the body relative to the free-stream, v∞:
{\displaystyle L'=\rho v_{\infty }\Gamma }
This is known as the Kutta–Joukowski theorem.
This equation applies around airfoils, where the circulation is generated by airfoil action; and around spinning objects experiencing the Magnus effect where the circulation is induced mechanically. In airfoil action, the magnitude of the circulation is determined by the Kutta condition.
The circulation on every closed curve around the airfoil has the same value, and is related to the lift generated by each unit length of span. Provided the closed curve encloses the airfoil, the choice of curve is arbitrary.
Circulation is often used in computational fluid dynamics as an intermediate variable to calculate forces on an airfoil or other body.
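As a minimal numerical sketch of the Kutta–Joukowski theorem (the circulation and speed values below are assumed for illustration, not taken from the source):

```python
rho = 1.225     # sea-level air density, kg/m^3
v_inf = 70.0    # speed of the body relative to the free stream, m/s
gamma = 25.0    # circulation about the body, m^2/s (assumed value)

# Kutta-Joukowski: lift per unit span L' = rho * v_inf * Gamma
L_prime = rho * v_inf * gamma
print(L_prime)  # 2143.75 N per metre of span
```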
=== Fundamental equations of electromagnetism ===
In electrodynamics, the Maxwell-Faraday law of induction can be stated in two equivalent forms: that the curl of the electric field is equal to the negative rate of change of the magnetic field,
{\displaystyle \nabla \times \mathbf {E} =-{\frac {\partial \mathbf {B} }{\partial t}}}
or that the circulation of the electric field around a loop is equal to the negative rate of change of the magnetic field flux through any surface spanned by the loop, by Stokes' theorem
{\displaystyle \oint _{\partial S}\mathbf {E} \cdot \mathrm {d} \mathbf {l} =\iint _{S}\nabla \times \mathbf {E} \cdot \mathrm {d} \mathbf {S} =-{\frac {\mathrm {d} }{\mathrm {d} t}}\int _{S}\mathbf {B} \cdot \mathrm {d} \mathbf {S} .}
Circulation of a static magnetic field is, by Ampère's law, proportional to the total current enclosed by the loop
{\displaystyle \oint _{\partial S}\mathbf {B} \cdot \mathrm {d} \mathbf {l} =\mu _{0}\iint _{S}\mathbf {J} \cdot \mathrm {d} \mathbf {S} =\mu _{0}I_{\text{enc}}.}
For systems with electric fields that change over time, the law must be modified to include a term known as Maxwell's correction.
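Ampère's law is simple to illustrate for a long straight wire, whose static field is tangential with magnitude B = μ0 I / (2πr). A quick sketch (values assumed for illustration):

```python
import math

MU_0 = 4e-7 * math.pi   # vacuum permeability, T*m/A
I = 10.0                # enclosed current, A

def B_wire(r):
    """Field magnitude of a long straight wire at radius r (metres)."""
    return MU_0 * I / (2.0 * math.pi * r)

# The field is tangential and constant on a circle of radius r, so the
# circulation is just B(r) * 2*pi*r -- which equals mu_0 * I for every r.
for r in (0.01, 1.0, 50.0):
    print(B_wire(r) * 2.0 * math.pi * r, MU_0 * I)
```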
== See also ==
Maxwell's equations
Biot–Savart law in aerodynamics
Kelvin's circulation theorem
== References == | Wikipedia/Circulation_(fluid_dynamics) |
In thermodynamics, a reversible process is a process, involving a system and its surroundings, whose direction can be reversed by infinitesimal changes in some properties of the surroundings, such as pressure or temperature.
Throughout an entire reversible process, the system is in thermodynamic equilibrium, both physical and chemical, and nearly in pressure and temperature equilibrium with its surroundings. This prevents unbalanced forces and acceleration of moving system boundaries, which in turn avoids friction and other dissipation.
To maintain equilibrium, reversible processes are extremely slow (quasistatic). The process must occur slowly enough that after some small change in a thermodynamic parameter, the physical processes in the system have enough time for the other parameters to self-adjust to match the new, changed parameter value. For example, if a container of water has sat in a room long enough to match the steady temperature of the surrounding air, for a small change in the air temperature to be reversible, the whole system of air, water, and container must wait long enough for the container and air to settle into a new, matching temperature before the next small change can occur.
While processes in isolated systems are never reversible, cyclical processes can be reversible or irreversible. Reversible processes are hypothetical or idealized but central to the second law of thermodynamics. Melting or freezing of ice in water is an example of a realistic process that is nearly reversible.
Additionally, the system must be in (quasistatic) equilibrium with the surroundings at all times, and there must be no dissipative effects, such as friction, for a process to be considered reversible.
Reversible processes are useful in thermodynamics because they are so idealized that the equations for heat and expansion/compression work are simple. This enables the analysis of model processes, which usually define the maximum efficiency attainable in corresponding real processes. Other applications exploit that entropy and internal energy are state functions whose change depends only on the initial and final states of the system, not on how the process occurred. Therefore, the entropy and internal-energy change in a real process can be calculated quite easily by analyzing a reversible process connecting the real initial and final system states. In addition, reversibility defines the thermodynamic condition for chemical equilibrium.
== Overview ==
Thermodynamic processes can be carried out in one of two ways: reversibly or irreversibly. An ideal thermodynamically reversible process is free of dissipative losses and therefore the magnitude of work performed by or on the system would be maximized. The incomplete conversion of heat to work in a cyclic process, however, applies to both reversible and irreversible cycles. The dependence of work on the path of the thermodynamic process is also unrelated to reversibility, since expansion work, which can be visualized on a pressure–volume diagram as the area beneath the equilibrium curve, is different for different reversible expansion processes (e.g. adiabatic, then isothermal; vs. isothermal, then adiabatic) connecting the same initial and final states.
== Irreversibility ==
In an irreversible process, finite changes are made; therefore the system is not at equilibrium throughout the process. In a cyclic process, the difference between the reversible work (Wrev) and the actual work (Wact) defines the irreversibility (I) of the process, as shown in the following equation:
{\displaystyle \;I=W_{\mathsf {rev}}-W_{\mathsf {act}}~.}
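As an illustrative sketch (the gas amounts and volumes below are assumed, not from the source), compare the reversible isothermal expansion work of an ideal gas with a one-step irreversible expansion against constant external pressure; the shortfall is the irreversibility I.

```python
import math

R = 8.314                 # gas constant, J/(mol*K)
n, T = 1.0, 300.0         # 1 mol of ideal gas at 300 K
V1, V2 = 1.0e-3, 2.0e-3   # volume doubles, m^3

# Reversible isothermal expansion: W_rev = n*R*T*ln(V2/V1)
W_rev = n * R * T * math.log(V2 / V1)

# Irreversible one-step expansion against constant p_ext = final pressure
p_ext = n * R * T / V2
W_act = p_ext * (V2 - V1)

I = W_rev - W_act   # lost work; always non-negative
print(W_rev, W_act, I)
```

The reversible path extracts the maximum work, so I > 0 for any real (irreversible) path between the same states.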
== Boundaries and states ==
Simple reversible processes change the state of a system in such a way that the net change in the combined entropy of the system and its surroundings is zero. (The entropy of the system alone is conserved only in reversible adiabatic processes.) Nevertheless, the Carnot cycle demonstrates that the state of the surroundings may change in a reversible process as the system returns to its initial state. Reversible processes define the boundaries of how efficient heat engines can be in thermodynamics and engineering: a reversible process is one where the machine has maximum efficiency (see Carnot cycle).
In some cases, it may be important to distinguish between reversible and quasistatic processes. Reversible processes are always quasistatic, but the converse is not always true. For example, an infinitesimal compression of a gas in a cylinder where there is friction between the piston and the cylinder is a quasistatic, but not reversible, process. Although the system has been driven from its equilibrium state by only an infinitesimal amount, energy has been irreversibly lost to waste heat, due to friction, and cannot be recovered by simply moving the piston in the opposite direction by the same infinitesimal amount.
== Engineering archaisms ==
Historically, the term Tesla principle was used to describe (among other things) certain reversible processes invented by Nikola Tesla. However, this phrase is no longer in conventional use. The principle stated that some systems could be reversed and operated in a complementary manner. It was developed during Tesla's research in alternating currents where the current's magnitude and direction varied cyclically. During a demonstration of the Tesla turbine, the disks revolved and machinery fastened to the shaft was operated by the engine. If the turbine's operation was reversed, the disks acted as a pump.
== Footnotes ==
== See also ==
== References == | Wikipedia/Reversible_process_(thermodynamics) |
Downforce is a downwards lift force created by the aerodynamic features of a vehicle. If the vehicle is a car, the purpose of downforce is to allow the car to travel faster by increasing the vertical force on the tires, thus creating more grip. If the vehicle is a fixed-wing aircraft, the purpose of the downforce on the horizontal stabilizer is to maintain longitudinal stability and allow the pilot to control the aircraft in pitch.
== Fundamental principles ==
The same principle that allows an airplane to rise off the ground by creating lift from its wings is used in reverse to apply force that presses the race car against the surface of the track. This effect is referred to as "aerodynamic grip" and is distinguished from "mechanical grip", which is a function of the car's mass, tires, and suspension. The creation of downforce by passive devices can be achieved only at the cost of increased aerodynamic drag (or friction), and the optimum setup is almost always a compromise between the two.
The aerodynamic setup for a car can vary considerably between race tracks, depending on the length of the straights and the types of corners. Because it is a function of the flow of air over and under the car, downforce increases with the square of the car's speed and requires a certain minimum speed in order to produce a significant effect. Some cars have had rather unstable aerodynamics, such that a minor change in angle of attack or height of the vehicle can cause large changes in downforce. In the very worst cases this can cause the car to experience lift, not downforce; for example, by passing over a bump on a track or slipstreaming over a crest: this could have some disastrous consequences, such as Mark Webber's and Peter Dumbreck's Mercedes-Benz CLR in the 1999 24 Hours of Le Mans, which flipped spectacularly after closely following a competitor car over a hump.
Two primary components of a racing car can be used to create downforce when the car is travelling at racing speed:
the shape of the body, and
the use of airfoils.
Most racing formulae have a ban on aerodynamic devices that can be adjusted during a race, except during pit stops.
The downforce exerted by a wing is usually expressed as a function of its lift coefficient:
{\displaystyle F=-C_{L}{\frac {1}{2}}\rho v^{2}A}
where:
F is downforce (SI unit: newtons)
CL is the lift coefficient
ρ is air density (SI unit: kg/m3)
v is velocity (SI unit: m/s)
A is the area of the wing (SI unit: meters squared), which depends on its wingspan and chord if using top wing area basis for CL, or the wingspan and thickness of the wing if using frontal area basis
In certain ranges of operating conditions and when the wing is not stalled, the lift coefficient has a constant value: the downforce is then proportional to the square of airspeed.
In aerodynamics, it is usual to use the top-view projected area of the wing as a reference surface to define the lift coefficient.
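A quick numerical sketch of the downforce formula (the coefficient, area, and speed below are assumed values for illustration):

```python
rho = 1.225   # air density, kg/m^3
C_L = 1.8     # lift coefficient of an inverted racing wing (assumed)
A = 0.9       # top-view projected wing area, m^2
v = 55.0      # car speed, m/s (about 200 km/h)

# Magnitude of the downforce F = C_L * (1/2) * rho * v^2 * A
downforce = 0.5 * rho * v**2 * A * C_L
print(downforce)   # ≈ 3000 N pressing the car onto the track

# Because F scales with v^2, doubling the speed quadruples the force
print(0.5 * rho * (2 * v)**2 * A * C_L / downforce)   # 4.0
```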
== Body ==
The rounded and tapered shape of the top of a car is designed to slice through the air and minimize wind resistance. Detailed pieces of bodywork on top of the car can be added to allow a smooth flow of air to reach the downforce-creating elements (e.g., wings or spoilers, and underbody tunnels).
The overall shape of a car resembles an airplane wing. Almost all road cars produce aerodynamic lift as a result of this shape. There are many techniques that are used to counterbalance this lift. Looking at the profile of most road cars, the front bumper has the lowest ground clearance followed by the section between the front and rear tires, and followed by a rear bumper, usually with the highest clearance. Using this layout, the air flowing under the front bumper will be constricted to a lower cross-sectional area, and thus achieve a lower pressure. Additional downforce comes from the rake (or angle) of the vehicle's body, which directs the underside air up and creates a downward force, increasing the pressure on top of the car because the airflow direction comes closer to perpendicular to the surface.
Volume does not affect the air pressure because it is not an enclosed volume, despite the common misconception. Race cars amplify this effect by adding a rear diffuser to accelerate air under the car in front of the diffuser, and raise the air pressure behind it, lessening the car's wake. Other aerodynamic components that can be found on the underside to improve downforce and/or reduce drag, include splitters and vortex generators.
Some cars, such as the DeltaWing, do not have wings, and generate all of their downforce through their body.
== Airfoils ==
The magnitude of the downforce created by the wings or spoilers on a car is dependent primarily on three things:
The shape, including surface area, aspect ratio and cross-section of the device,
The device's orientation (or angle of attack), and
The speed of the vehicle.
A larger surface area creates greater downforce and greater drag. The aspect ratio is the span of the airfoil divided by its chord. If the wing is not rectangular, the aspect ratio is written AR = b²/s, where b is the span and s is the wing area. Also, a greater angle of attack (or tilt) of the wing or spoiler creates more downforce, which puts more pressure on the rear wheels and creates more drag.
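For a rectangular wing the general formula AR = b²/s reduces to span divided by chord, since the area is then s = b × c. A small check (dimensions assumed for illustration):

```python
# Rectangular wing: aspect ratio = span / chord
span, chord = 1.6, 0.4
print(span / chord)        # 4.0

# General planform formula AR = b^2 / s gives the same answer
# when the area happens to be s = span * chord
s = span * chord
print(span**2 / s)         # 4.0
```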
=== Front ===
The function of the airfoils at the front of the car is twofold. They create downforce that enhances the grip of the front tires, while also optimizing (or minimizing disturbance to) the flow of air to the rest of the car. The front wings on an open-wheeled car undergo constant modification as data is gathered from race to race, and are customized for every characteristic of a particular circuit (see top photos). In most series, the wings are even designed for adjustment during the race itself when the car is serviced.
=== Rear ===
The flow of air at the rear of the car is affected by the front wings, front wheels, mirrors, driver's helmet, side pods and exhaust. This causes the rear wing to be less aerodynamically efficient than the front wing. Yet, because it must generate more than twice as much downforce as the front wings in order to maintain the car's handling balance, the rear wing typically has a much larger aspect ratio, and often uses two or more elements to compound the amount of downforce created (see photo at left). Like the front wings, each of these elements can often be adjusted when the car is serviced, before or even during a race, and they are the object of constant attention and modification.
=== Wings in unusual places ===
Partly as a consequence of rules aimed at reducing downforce from the front and rear wings of F1 cars, several teams have sought to find other places to position wings. Small wings mounted on the rear of the cars' sidepods began to appear in mid-1994, and were virtually standard on all F1 cars in one form or another, until all such devices were outlawed in 2009. Other wings have sprung up in various other places about the car, but these modifications are usually only used at circuits where downforce is most sought, particularly the twisty Hungary and Monaco racetracks.
The 1995 McLaren Mercedes MP4/10 was one of the first cars to feature a "midwing", using a loophole in the regulations to mount a wing on top of the engine cover. This arrangement has since been used by every team on the grid at one time or another, and in the 2007 Monaco Grand Prix all but two teams used them. These midwings are not to be confused either with the roll-hoop mounted cameras which each car carries as standard in all races, or with the bull-horn shaped flow controllers first used by McLaren and since by BMW Sauber, whose primary function is to smooth and redirect the airflow in order to make the rear wing more effective rather than to generate downforce themselves.
A variation on this theme was "X-wings", high wings mounted on the front of the sidepods which used a similar loophole to midwings. These were first used by Tyrrell in 1997, and were last used in the 1998 San Marino Grand Prix, by which time Ferrari, Sauber, Jordan and others had used such an arrangement. However it was decided they would have to be banned in view of the obstruction they caused during refueling and the risk they posed to the driver should a car roll over.
Various other extra wings have been tried from time to time, but nowadays it is more common for teams to seek to improve the performance of the front and rear wings by the use of various flow controllers such as the aforementioned "bull-horns" used by McLaren.
== See also ==
Bernoulli's principle
Body kit
Formula One car
Grip (auto racing)
Ground effect in cars
Lift (force)
Wind tunnel
== Further reading ==
Simon McBeath, Competition Car Downforce: A Practical Handbook, SAE International, 2000, ISBN 1-85960-662-8
Simon McBeath, Competition Car Aerodynamics, Sparkford, Haynes, 2006
Enrico Benzing, Ali / Wings. Progettazione e applicazione su auto da corsa. Their design and application to racing car, Milano, Nada, 2012. Bilingual (Italian-English)
== References ==
== External links ==
Aerodynamics In Car Racing Archived 2009-12-06 at the Wayback Machine | Wikipedia/Downforce |
When a fluid flows around an object, the fluid exerts a force on the object. Lift is the component of this force that is perpendicular to the oncoming flow direction. It contrasts with the drag force, which is the component of the force parallel to the flow direction. Lift conventionally acts in an upward direction in order to counter the force of gravity, but it is defined to act perpendicular to the flow and therefore can act in any direction.
If the surrounding fluid is air, the force is called an aerodynamic force. In water or any other liquid, it is called a hydrodynamic force.
Dynamic lift is distinguished from other kinds of lift in fluids. Aerostatic lift or buoyancy, in which an internal fluid is lighter than the surrounding fluid, does not require movement and is used by balloons, blimps, dirigibles, boats, and submarines. Planing lift, in which only the lower portion of the body is immersed in a liquid flow, is used by motorboats, surfboards, windsurfers, sailboats, and water-skis.
== Overview ==
A fluid flowing around the surface of a solid object applies a force on it. It does not matter whether the object is moving through a stationary fluid (e.g. an aircraft flying through the air) or whether the object is stationary and the fluid is moving (e.g. a wing in a wind tunnel) or whether both are moving (e.g. a sailboat using the wind to move forward). Lift is the component of this force that is perpendicular to the oncoming flow direction. Lift is always accompanied by a drag force, which is the component of the surface force parallel to the flow direction.
Lift is mostly associated with the wings of fixed-wing aircraft, although it is more widely generated by many other streamlined bodies such as propellers, kites, helicopter rotors, racing car wings, maritime sails, wind turbines, and by sailboat keels, ship's rudders, and hydrofoils in water. Lift is also used by flying and gliding animals, especially by birds, bats, and insects, and even in the plant world by the seeds of certain trees.
While the common meaning of the word "lift" assumes that lift opposes weight, lift can be in any direction with respect to gravity, since it is defined with respect to the direction of flow rather than to the direction of gravity. When an aircraft is cruising in straight and level flight, the lift opposes gravity. However, when an aircraft is climbing, descending, or banking in a turn the lift is tilted with respect to the vertical. Lift may also act as downforce on the wing of a fixed-wing aircraft at the top of an aerobatic loop, and on the horizontal stabiliser of an aircraft. Lift may also be largely horizontal, for instance on a sailing ship.
The lift discussed in this article is mainly in relation to airfoils; marine hydrofoils and propellers share the same physical principles and work in the same way, despite differences between air and water such as density, compressibility, and viscosity.
The flow around a lifting airfoil is a fluid mechanics phenomenon that can be understood on essentially two levels: There are mathematical theories, which are based on established laws of physics and represent the flow accurately, but which require solving equations. And there are physical explanations without math, which are less rigorous. Correctly explaining lift in these qualitative terms is difficult because the cause-and-effect relationships involved are subtle. A comprehensive explanation that captures all of the essential aspects is necessarily complex. There are also many simplified explanations, but all leave significant parts of the phenomenon unexplained, while some also have elements that are simply incorrect.
== Simplified physical explanations of lift on an airfoil ==
An airfoil is a streamlined shape that is capable of generating significantly more lift than drag. A flat plate can generate lift, but not as much as a streamlined airfoil, and with somewhat higher drag.
Most simplified explanations follow one of two basic approaches, based either on Newton's laws of motion or on Bernoulli's principle.
=== Explanation based on flow deflection and Newton's laws ===
An airfoil generates lift by exerting a downward force on the air as it flows past. According to Newton's third law, the air must exert an equal and opposite (upward) force on the airfoil, which is lift.
As the airflow approaches the airfoil it is curving upward, but as it passes the airfoil it changes direction and follows a path that is curved downward. According to Newton's second law, this change in flow direction requires a downward force applied to the air by the airfoil. Then Newton's third law requires the air to exert an upward force on the airfoil; thus a reaction force, lift, is generated opposite to the directional change. In the case of an airplane wing, the wing exerts a downward force on the air and the air exerts an upward force on the wing.
The downward turning of the flow is not produced solely by the lower surface of the airfoil, and the air flow above the airfoil accounts for much of the downward-turning action.
This explanation is correct but it is incomplete. It does not explain how the airfoil can impart downward turning to a much deeper swath of the flow than it actually touches. Furthermore, it does not mention that the lift force is exerted by pressure differences, and does not explain how those pressure differences are sustained.
==== Controversy regarding the Coandă effect ====
Some versions of the flow-deflection explanation of lift cite the Coandă effect as the reason the flow is able to follow the convex upper surface of the airfoil. The conventional definition in the aerodynamics field is that the Coandă effect refers to the tendency of a fluid jet to stay attached to an adjacent surface that curves away from the flow, and the resultant entrainment of ambient air into the flow.
More broadly, some consider the effect to include the tendency of any fluid boundary layer to adhere to a curved surface, not just the boundary layer accompanying a fluid jet. It is in this broader sense that the Coandă effect is used by some popular references to explain why airflow remains attached to the top side of an airfoil. This is a controversial use of the term "Coandă effect"; the flow following the upper surface simply reflects an absence of boundary-layer separation, thus it is not an example of the Coandă effect. Regardless of whether this broader definition of the "Coandă effect" is applicable, calling it the "Coandă effect" does not provide an explanation, it just gives the phenomenon a name.
The ability of a fluid flow to follow a curved path is not dependent on shear forces, viscosity of the fluid, or the presence of a boundary layer. Air flowing around an airfoil, adhering to both upper and lower surfaces, and generating lift, is accepted as a phenomenon in inviscid flow.
=== Explanations based on an increase in flow speed and Bernoulli's principle ===
There are two common versions of this explanation, one based on "equal transit time", and one based on "obstruction" of the airflow.
==== False explanation based on equal transit-time ====
The "equal transit time" explanation starts by arguing that the flow over the upper surface is faster than the flow over the lower surface because the path length over the upper surface is longer and must be traversed in equal transit time. Bernoulli's principle states that under certain conditions increased flow speed is associated with reduced pressure. It is concluded that the reduced pressure over the upper surface results in upward lift.
While it is true that the flow speeds up, a serious flaw in this explanation is that it does not correctly explain what causes the flow to speed up. The longer-path-length explanation is incorrect. No difference in path length is needed, and even when there is a difference, it is typically much too small to explain the observed speed difference. This is because the assumption of equal transit time is wrong when applied to a body generating lift. There is no physical principle that requires equal transit time in all situations and experimental results confirm that for a body generating lift the transit times are not equal. In fact, the air moving past the top of an airfoil generating lift moves much faster than equal transit time predicts. The much higher flow speed over the upper surface can be clearly seen in this animated flow visualization.
==== Obstruction of the airflow ====
Like the equal transit time explanation, the "obstruction" or "streamtube pinching" explanation argues that the flow over the upper surface is faster than the flow over the lower surface, but gives a different reason for the difference in speed. It argues that the curved upper surface acts as more of an obstacle to the flow, forcing the streamlines to pinch closer together, making the streamtubes narrower. When streamtubes become narrower, conservation of mass requires that flow speed must increase. Reduced upper-surface pressure and upward lift follow from the higher speed by Bernoulli's principle, just as in the equal transit time explanation. Sometimes an analogy is made to a venturi nozzle, claiming the upper surface of the wing acts like a venturi nozzle to constrict the flow.
One serious flaw in the obstruction explanation is that it does not explain how streamtube pinching comes about, or why it is greater over the upper surface than the lower surface. For conventional wings that are flat on the bottom and curved on top this makes some intuitive sense, but it does not explain how flat plates, symmetric airfoils, sailboat sails, or conventional airfoils flying upside down can generate lift, and attempts to calculate lift based on the amount of constriction or obstruction do not predict experimental results. Another flaw is that conservation of mass is not a satisfying physical reason why the flow would speed up. Effectively explaining the acceleration of an object requires identifying the force that accelerates it.
==== Issues common to both versions of the Bernoulli-based explanation ====
A serious flaw common to all the Bernoulli-based explanations is that they imply that a speed difference can arise from causes other than a pressure difference, and that the speed difference then leads to a pressure difference, by Bernoulli's principle. This implied one-way causation is a misconception. The real relationship between pressure and flow speed is a mutual interaction. As explained below under a more comprehensive physical explanation, producing a lift force requires maintaining pressure differences in both the vertical and horizontal directions. The Bernoulli-only explanations do not explain how the pressure differences in the vertical direction are sustained. That is, they leave out the flow-deflection part of the interaction.
Although the two simple Bernoulli-based explanations above are incorrect, there is nothing incorrect about Bernoulli's principle or the fact that the air goes faster on the top of the wing, and Bernoulli's principle can be used correctly as part of a more complicated explanation of lift.
== Basic attributes of lift ==
Lift is a result of pressure differences and depends on angle of attack, airfoil shape, air density, and airspeed.
=== Pressure differences ===
Pressure is the normal force per unit area exerted by the air on itself and on surfaces that it touches. The lift force is transmitted through the pressure, which acts perpendicular to the surface of the airfoil. Thus, the net force manifests itself as pressure differences. The direction of the net force implies that the average pressure on the upper surface of the airfoil is lower than the average pressure on the underside.
These pressure differences arise in conjunction with the curved airflow. When a fluid follows a curved path, there is a pressure gradient perpendicular to the flow direction with higher pressure on the outside of the curve and lower pressure on the inside. This direct relationship between curved streamlines and pressure differences, sometimes called the streamline curvature theorem, was derived from Newton's second law by Leonhard Euler in 1754:
{\displaystyle {\frac {\operatorname {d} p}{\operatorname {d} R}}=\rho {\frac {v^{2}}{R}}}
The left side of this equation represents the pressure difference perpendicular to the fluid flow. On the right side of the equation, ρ is the density, v is the velocity, and R is the radius of curvature. This formula shows that higher velocities and tighter curvatures create larger pressure differentials and that for straight flow (R → ∞), the pressure difference is zero.
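A quick numerical sketch of the streamline curvature theorem (the speed and radius below are assumed values for illustration): the tighter the curve or the faster the flow, the steeper the pressure gradient across the streamlines.

```python
rho = 1.225   # air density, kg/m^3
v = 60.0      # local flow speed, m/s
R = 12.0      # radius of curvature of the streamline, m

# dp/dR = rho * v^2 / R, with higher pressure on the outside of the curve
dp_dR = rho * v**2 / R
print(dp_dR)   # 367.5 Pa per metre, measured across the streamlines

# Halving the radius doubles the gradient; straight flow (R -> inf) gives 0
print(rho * v**2 / (R / 2))   # 735.0
```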
=== Angle of attack ===
The angle of attack is the angle between the chord line of an airfoil and the oncoming airflow. A symmetrical airfoil generates zero lift at zero angle of attack. But as the angle of attack increases, the air is deflected through a larger angle and the vertical component of the airstream velocity increases, resulting in more lift. For small angles, a symmetrical airfoil generates a lift force roughly proportional to the angle of attack.
As the angle of attack increases, the lift reaches a maximum at some angle; increasing the angle of attack beyond this critical angle of attack causes the upper-surface flow to separate from the wing; there is less deflection downward so the airfoil generates less lift. The airfoil is said to be stalled.
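The small-angle proportionality can be made concrete with the classical thin-airfoil result for a symmetric section, lift coefficient ≈ 2πα (α in radians). This 2π slope is a standard textbook estimate, not stated in the text above, and it holds only well below the critical (stall) angle.

```python
import math

def cl_thin_airfoil(alpha_deg):
    """Thin-airfoil estimate for a symmetric section: C_l = 2*pi*alpha,
    alpha in radians. Valid only for small angles, below stall."""
    return 2.0 * math.pi * math.radians(alpha_deg)

for a in (0, 2, 4, 8):
    print(a, round(cl_thin_airfoil(a), 3))
# zero lift at zero angle of attack for a symmetric airfoil, and
# doubling the angle doubles the lift coefficient -- until stall
```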
=== Airfoil shape ===
The maximum lift force that can be generated by an airfoil at a given airspeed depends on the shape of the airfoil, especially the amount of camber (curvature such that the upper surface is more convex than the lower surface, as illustrated at right). Increasing the camber generally increases the maximum lift at a given airspeed.
Cambered airfoils generate lift at zero angle of attack. When the chord line is horizontal, the trailing edge points downward, and since the air follows the trailing edge it is deflected downward. When a cambered airfoil is flown upside down, the angle of attack can be adjusted so that the lift force is still upward; this explains how a plane can fly upside down.
=== Flow conditions ===
The ambient flow conditions which affect lift include the fluid density, viscosity and speed of flow. Density is affected by temperature, and by the medium's acoustic velocity – i.e. by compressibility effects.
=== Air speed and density ===
Lift is proportional to the density of the air and approximately proportional to the square of the flow speed. Lift also depends on the size of the wing, being generally proportional to the wing's area projected in the lift direction. In calculations it is convenient to quantify lift in terms of a lift coefficient based on these factors.
=== Boundary layer and profile drag ===
No matter how smooth the surface of an airfoil seems, any surface is rough on the scale of air molecules. Air molecules flying into the surface bounce off the rough surface in random directions relative to their original velocities. The result is that when the air is viewed as a continuous material, it is seen to be unable to slide along the surface, and the air's velocity relative to the airfoil decreases to nearly zero at the surface (i.e., the air molecules "stick" to the surface instead of sliding along it), something known as the no-slip condition. Because the air at the surface has near-zero velocity but the air away from the surface is moving, there is a thin boundary layer in which air close to the surface is subjected to a shearing motion. The air's viscosity resists the shearing, giving rise to a shear stress at the airfoil's surface called skin friction drag. Over most of the surface of most airfoils, the boundary layer is naturally turbulent, which increases skin friction drag.
Under usual flight conditions, the boundary layer remains attached to both the upper and lower surfaces all the way to the trailing edge, and its effect on the rest of the flow is modest. Compared to the predictions of inviscid flow theory, in which there is no boundary layer, the attached boundary layer reduces the lift by a modest amount and modifies the pressure distribution somewhat, which results in a viscosity-related pressure drag over and above the skin friction drag. The total of the skin friction drag and the viscosity-related pressure drag is usually called the profile drag.
=== Stalling ===
An airfoil's maximum lift at a given airspeed is limited by boundary-layer separation. As the angle of attack is increased, a point is reached where the boundary layer can no longer remain attached to the upper surface. When the boundary layer separates, it leaves a region of recirculating flow above the upper surface, as illustrated in the flow-visualization photo at right. This is known as the stall, or stalling. At angles of attack above the stall, lift is significantly reduced, though it does not drop to zero. The maximum lift that can be achieved before stall, in terms of the lift coefficient, is generally less than 1.5 for single-element airfoils and can be more than 3.0 for airfoils with high-lift slotted flaps and leading-edge devices deployed.
=== Bluff bodies ===
The flow around bluff bodies – i.e. bodies without a streamlined shape, or stalled airfoils – may also generate lift, in addition to a strong drag force. This lift may be steady, or it may oscillate due to vortex shedding. Interaction of the object's flexibility with the vortex shedding may enhance the effects of fluctuating lift and cause vortex-induced vibrations. For instance, the flow around a circular cylinder generates a Kármán vortex street: vortices being shed in an alternating fashion from the cylinder's sides. The oscillatory nature of the flow produces a fluctuating lift force on the cylinder, even though the net (mean) force is negligible. The lift force frequency is characterised by the dimensionless Strouhal number, which depends on the Reynolds number of the flow.
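The Strouhal relation f = St · U / D gives the frequency of the fluctuating lift directly. A minimal sketch, using a typical Strouhal number of about 0.2 for a circular cylinder; the flow speed and diameter are assumed for illustration:

```python
# Vortex-shedding (lift-fluctuation) frequency from the Strouhal
# relation f = St * U / D. St ~ 0.2 is a typical value for a circular
# cylinder over a wide Reynolds-number range; U and D are illustrative.
St = 0.2    # Strouhal number (dimensionless)
U = 10.0    # freestream speed, m/s
D = 0.1     # cylinder diameter, m

f = St * U / D   # shedding frequency, Hz
print(f"shedding frequency ~ {f:.0f} Hz")

# Reynolds number of this flow, on which St itself depends:
nu = 1.5e-5              # kinematic viscosity of air, m^2/s (approx.)
Re = U * D / nu
print(f"Re ~ {Re:.0f}")
```

A structure whose natural frequency is near this shedding frequency risks the resonant vortex-induced vibrations described below.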
For a flexible structure, this oscillatory lift force may induce vortex-induced vibrations. Under certain conditions – for instance resonance or strong spanwise correlation of the lift force – the resulting motion of the structure due to the lift fluctuations may be strongly enhanced. Such vibrations may pose problems and threaten collapse in tall man-made structures like industrial chimneys.
In the Magnus effect, a lift force is generated by a spinning cylinder in a freestream. Here the mechanical rotation acts on the boundary layer, causing it to separate at different locations on the two sides of the cylinder. The asymmetric separation changes the effective shape of the cylinder as far as the flow is concerned such that the cylinder acts like a lifting airfoil with circulation in the outer flow.
== A more comprehensive physical explanation ==
As described above under "Simplified physical explanations of lift on an airfoil", there are two main popular explanations: one based on downward deflection of the flow (Newton's laws), and one based on pressure differences accompanied by changes in flow speed (Bernoulli's principle). Either of these, by itself, correctly identifies some aspects of the lifting flow but leaves other important aspects of the phenomenon unexplained. A more comprehensive explanation involves both downward deflection and pressure differences (including changes in flow speed associated with the pressure differences), and requires looking at the flow in more detail.
=== Lift at the airfoil surface ===
The airfoil shape and angle of attack work together so that the airfoil exerts a downward force on the air as it flows past. According to Newton's third law, the air must then exert an equal and opposite (upward) force on the airfoil, which is the lift.
The net force exerted by the air occurs as a pressure difference over the airfoil's surfaces. Pressure in a fluid is always positive in an absolute sense, so that pressure must always be thought of as pushing, and never as pulling. The pressure thus pushes inward on the airfoil everywhere on both the upper and lower surfaces. The flowing air reacts to the presence of the wing by reducing the pressure on the wing's upper surface and increasing the pressure on the lower surface. The pressure on the lower surface pushes up harder than the reduced pressure on the upper surface pushes down, and the net result is upward lift.
The pressure difference which results in lift acts directly on the airfoil surfaces; however, understanding how the pressure difference is produced requires understanding what the flow does over a wider area.
=== The wider flow around the airfoil ===
An airfoil affects the speed and direction of the flow over a wide area, producing a pattern called a velocity field. When an airfoil produces lift, the flow ahead of the airfoil is deflected upward, and the flow above and below the airfoil is deflected downward, leaving the air far behind the airfoil in the same state as the oncoming flow far ahead. The flow above the upper surface is sped up, while the flow below the airfoil is slowed down. Together with the upward deflection of air in front and the downward deflection of the air immediately behind, this establishes a net circulatory component of the flow. The downward deflection and the changes in flow speed are pronounced and extend over a wide area, as can be seen in the flow animation on the right. These differences in the direction and speed of the flow are greatest close to the airfoil and decrease gradually far above and below. All of these features of the velocity field also appear in theoretical models for lifting flows.
The pressure is also affected over a wide area, in a pattern of non-uniform pressure called a pressure field. When an airfoil produces lift, there is a diffuse region of low pressure above the airfoil, and usually a diffuse region of high pressure below, as illustrated by the isobars (curves of constant pressure) in the drawing. The pressure difference that acts on the surface is just part of this pressure field.
=== Mutual interaction of pressure differences and changes in flow velocity ===
The non-uniform pressure exerts forces on the air in the direction from higher pressure to lower pressure. The direction of the force is different at different locations around the airfoil, as indicated by the block arrows in the pressure field around an airfoil figure. Air above the airfoil is pushed toward the center of the low-pressure region, and air below the airfoil is pushed outward from the center of the high-pressure region.
According to Newton's second law, a force causes air to accelerate in the direction of the force. Thus the vertical arrows in the accompanying pressure field diagram indicate that air above and below the airfoil is accelerated, or turned downward, and that the non-uniform pressure is thus the cause of the downward deflection of the flow visible in the flow animation. To produce this downward turning, the airfoil must have a positive angle of attack or have sufficient positive camber. Note that the downward turning of the flow over the upper surface is the result of the air being pushed downward by higher pressure above it than below it. Some explanations that refer to the "Coandă effect" suggest that viscosity plays a key role in the downward turning, but this is false (see above under "Controversy regarding the Coandă effect").
The arrows ahead of the airfoil indicate that the flow ahead of the airfoil is deflected upward, and the arrows behind the airfoil indicate that the flow behind is deflected upward again, after being deflected downward over the airfoil. These deflections are also visible in the flow animation.
The arrows ahead of the airfoil and behind also indicate that air passing through the low-pressure region above the airfoil is sped up as it enters, and slowed back down as it leaves. Air passing through the high-pressure region below the airfoil is slowed down as it enters and then sped back up as it leaves. Thus the non-uniform pressure is also the cause of the changes in flow speed visible in the flow animation. The changes in flow speed are consistent with Bernoulli's principle, which states that in a steady flow without viscosity, lower pressure means higher speed, and higher pressure means lower speed.
Thus changes in flow direction and speed are directly caused by the non-uniform pressure. But this cause-and-effect relationship is not just one-way; it works in both directions simultaneously. The air's motion is affected by the pressure differences, but the existence of the pressure differences depends on the air's motion. The relationship is thus a mutual, or reciprocal, interaction: Air flow changes speed or direction in response to pressure differences, and the pressure differences are sustained by the air's resistance to changing speed or direction. A pressure difference can exist only if something is there for it to push against. In aerodynamic flow, the pressure difference pushes against the air's inertia, as the air is accelerated by the pressure difference. This is why the air's mass is part of the calculation, and why lift depends on air density.
Sustaining the pressure difference that exerts the lift force on the airfoil surfaces requires sustaining a pattern of non-uniform pressure in a wide area around the airfoil. This requires maintaining pressure differences in both the vertical and horizontal directions, and thus requires both downward turning of the flow and changes in flow speed according to Bernoulli's principle. The pressure differences and the changes in flow direction and speed sustain each other in a mutual interaction. The pressure differences follow naturally from Newton's second law and from the fact that flow along the surface follows the predominantly downward-sloping contours of the airfoil. And the fact that the air has mass is crucial to the interaction.
=== How simpler explanations fall short ===
Producing a lift force requires both downward turning of the flow and changes in flow speed consistent with Bernoulli's principle. Each of the simplified explanations given above in Simplified physical explanations of lift on an airfoil falls short by trying to explain lift in terms of only one or the other, thus explaining only part of the phenomenon and leaving other parts unexplained.
== Quantifying lift ==
=== Pressure integration ===
When the pressure distribution on the airfoil surface is known, determining the total lift requires adding up the contributions to the pressure force from local elements of the surface, each with its own local value of pressure. The total lift is thus the integral of the pressure, in the direction perpendicular to the farfield flow, over the airfoil surface.
{\displaystyle L=\oint p\mathbf {n} \cdot \mathbf {k} \;\mathrm {d} S,}
where:
S is the projected (planform) area of the airfoil, measured normal to the mean airflow;
n is the normal unit vector pointing into the wing;
k is the vertical unit vector, normal to the freestream direction.
The above lift equation neglects the skin friction forces, which are small compared to the pressure forces.
By using the streamwise vector i parallel to the freestream in place of k in the integral, we obtain an expression for the pressure drag Dp (which includes the pressure portion of the profile drag and, if the wing is three-dimensional, the induced drag). If we use the spanwise vector j, we obtain the side force Y.
{\displaystyle {\begin{aligned}D_{p}&=\oint p\mathbf {n} \cdot \mathbf {i} \;\mathrm {d} S,\\Y&=\oint p\mathbf {n} \cdot \mathbf {j} \;\mathrm {d} S.\end{aligned}}}
The validity of this integration generally requires the airfoil shape to be a closed curve that is piecewise smooth.
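The surface integral can be sketched numerically for a case with a known closed-form answer: the classical potential-flow solution for a circular cylinder with circulation, whose integrated surface pressure gives a lift per unit span of ρUΓ. The flow speed, radius, and circulation below are assumed for illustration, and one common sign convention is used:

```python
# Numerical pressure integration over a closed surface: lift per unit
# span on a circular cylinder with circulation (ideal potential flow).
# rho, U, a, Gamma are illustrative; the midpoint-rule integral of the
# surface pressure should recover the closed form L' = rho * U * Gamma.
import math

rho, U, a, Gamma = 1.225, 10.0, 0.5, 6.0  # density, speed, radius, circulation
p_inf = 101325.0                           # ambient pressure, Pa

N = 100000
dtheta = 2.0 * math.pi / N
L = 0.0
for i in range(N):
    theta = (i + 0.5) * dtheta
    # surface tangential speed of the potential-flow solution
    v_t = 2.0 * U * math.sin(theta) + Gamma / (2.0 * math.pi * a)
    # surface pressure from Bernoulli's equation
    p = p_inf + 0.5 * rho * (U**2 - v_t**2)
    # vertical component of -p*n over the element ds = a*dtheta,
    # with outward normal n = (cos(theta), sin(theta))
    L += -p * math.sin(theta) * a * dtheta

print(f"integrated lift per unit span: {L:.3f} N/m")
print(f"rho * U * Gamma:               {rho * U * Gamma:.3f} N/m")
```

Note that the constant ambient pressure p_inf contributes nothing to the integral around a closed surface; only the pressure differences matter.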
=== Lift coefficient ===
Lift depends on the size of the wing, being approximately proportional to the wing area. It is often convenient to quantify the lift of a given airfoil by its lift coefficient {\displaystyle C_{L}}, which defines its overall lift in terms of a unit area of the wing.
If the value of {\displaystyle C_{L}} for a wing at a specified angle of attack is given, then the lift produced for specific flow conditions can be determined:
{\displaystyle L={\tfrac {1}{2}}\rho v^{2}SC_{L}}
where
{\displaystyle L} is the lift force,
{\displaystyle \rho } is the air density,
{\displaystyle v} is the velocity or true airspeed,
{\displaystyle S} is the planform (projected) wing area, and
{\displaystyle C_{L}} is the lift coefficient at the desired angle of attack, Mach number, and Reynolds number.
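The lift-coefficient relation is straightforward to evaluate. A minimal sketch, with numbers assumed for illustration (roughly a small airplane in cruise, not values from the text):

```python
# Lift from L = (1/2) * rho * v**2 * S * C_L.
# All inputs are illustrative assumptions.
rho = 1.225   # air density at sea level, kg/m^3
v = 60.0      # true airspeed, m/s
S = 16.0      # planform wing area, m^2
C_L = 0.4     # lift coefficient at the given angle of attack

L = 0.5 * rho * v**2 * S * C_L
print(f"L = {L:.0f} N")

# In steady level flight this lift supports the airplane's weight:
mass = L / 9.81   # supported mass, kg
print(f"supported mass ~ {mass:.0f} kg")
```

The v² dependence is visible directly: flying 10% faster at the same C_L gives about 21% more lift, which is why the pilot reduces the angle of attack (and hence C_L) as speed increases.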
== Mathematical theories of lift ==
Mathematical theories of lift are based on continuum fluid mechanics, assuming that air flows as a continuous fluid. Lift is generated in accordance with the fundamental principles of physics, the most relevant being the following three principles:
Conservation of momentum, which is a consequence of Newton's laws of motion, especially Newton's second law which relates the net force on an element of air to its rate of momentum change,
Conservation of mass, including the assumption that the airfoil's surface is impermeable for the air flowing around, and
Conservation of energy, which says that energy is neither created nor destroyed.
Because an airfoil affects the flow in a wide area around it, the conservation laws of mechanics are embodied in the form of partial differential equations combined with a set of boundary condition requirements which the flow has to satisfy at the airfoil surface and far away from the airfoil.
To predict lift requires solving the equations for a particular airfoil shape and flow condition, which generally requires calculations that are so voluminous that they are practical only on a computer, through the methods of computational fluid dynamics (CFD). Determining the net aerodynamic force from a CFD solution requires "adding up" (integrating) the forces due to pressure and shear determined by the CFD over every surface element of the airfoil as described under "pressure integration".
The Navier–Stokes equations (NS) provide the potentially most accurate theory of lift, but in practice, capturing the effects of turbulence in the boundary layer on the airfoil surface requires sacrificing some accuracy, and requires use of the Reynolds-averaged Navier–Stokes equations (RANS). Simpler but less accurate theories have also been developed.
=== Navier–Stokes (NS) equations ===
These equations represent conservation of mass, Newton's second law (conservation of momentum), conservation of energy, the Newtonian law for the action of viscosity, the Fourier heat conduction law, an equation of state relating density, temperature, and pressure, and formulas for the viscosity and thermal conductivity of the fluid.
In principle, the NS equations, combined with boundary conditions of no through-flow and no slip at the airfoil surface, could be used to predict lift with high accuracy in any situation in ordinary atmospheric flight. However, airflows in practical situations always involve turbulence in the boundary layer next to the airfoil surface, at least over the aft portion of the airfoil. Predicting lift by solving the NS equations in their raw form would require the calculations to resolve the details of the turbulence, down to the smallest eddy. This is not yet possible, even on the most powerful computer. So in principle the NS equations provide a complete and very accurate theory of lift, but practical prediction of lift requires that the effects of turbulence be modeled in the RANS equations rather than computed directly.
=== Reynolds-averaged Navier–Stokes (RANS) equations ===
These are the NS equations with the turbulence motions averaged over time, and the effects of the turbulence on the time-averaged flow represented by turbulence modeling (an additional set of equations based on a combination of dimensional analysis and empirical information on how turbulence affects a boundary layer in a time-averaged sense). A RANS solution consists of the time-averaged velocity vector, pressure, density, and temperature defined at a dense grid of points surrounding the airfoil.
The amount of computation required is a minuscule fraction (billionths) of what would be required to resolve all of the turbulence motions in a raw NS calculation, and with large computers available it is now practical to carry out RANS calculations for complete airplanes in three dimensions. Because turbulence models are not perfect, the accuracy of RANS calculations is imperfect, but it is adequate for practical aircraft design. Lift predicted by RANS is usually within a few percent of the actual lift.
=== Inviscid-flow equations (Euler or potential) ===
The Euler equations are the NS equations without the viscosity, heat conduction, and turbulence effects. As with a RANS solution, an Euler solution consists of the velocity vector, pressure, density, and temperature defined at a dense grid of points surrounding the airfoil. While the Euler equations are simpler than the NS equations, they do not lend themselves to exact analytic solutions.
Further simplification is available through potential flow theory, which reduces the number of unknowns to be determined, and makes analytic solutions possible in some cases, as described below.
Either Euler or potential-flow calculations predict the pressure distribution on the airfoil surfaces roughly correctly for angles of attack below stall, where they might miss the total lift by as much as 10–20%. At angles of attack above stall, inviscid calculations do not predict that stall has happened, and as a result they grossly overestimate the lift.
In potential-flow theory, the flow is assumed to be irrotational, i.e. that small fluid parcels have no net rate of rotation. Mathematically, this is expressed by the statement that the curl of the velocity vector field is everywhere equal to zero. Irrotational flows have the convenient property that the velocity can be expressed as the gradient of a scalar function called a potential. A flow represented in this way is called potential flow.
In potential-flow theory, the flow is assumed to be incompressible. Incompressible potential-flow theory has the advantage that the equation (Laplace's equation) to be solved for the potential is linear, which allows solutions to be constructed by superposition of other known solutions. The incompressible-potential-flow equation can also be solved by conformal mapping, a method based on the theory of functions of a complex variable. In the early 20th century, before computers were available, conformal mapping was used to generate solutions to the incompressible potential-flow equation for a class of idealized airfoil shapes, providing some of the first practical theoretical predictions of the pressure distribution on a lifting airfoil.
A solution of the potential equation directly determines only the velocity field. The pressure field is deduced from the velocity field through Bernoulli's equation.
Applying potential-flow theory to a lifting flow requires special treatment and an additional assumption. The problem arises because lift on an airfoil in inviscid flow requires circulation in the flow around the airfoil (See "Circulation and the Kutta–Joukowski theorem" below), but a single potential function that is continuous throughout the domain around the airfoil cannot represent a flow with nonzero circulation. The solution to this problem is to introduce a branch cut, a curve or line from some point on the airfoil surface out to infinite distance, and to allow a jump in the value of the potential across the cut. The jump in the potential imposes circulation in the flow equal to the potential jump and thus allows nonzero circulation to be represented. However, the potential jump is a free parameter that is not determined by the potential equation or the other boundary conditions, and the solution is thus indeterminate. A potential-flow solution exists for any value of the circulation and any value of the lift. One way to resolve this indeterminacy is to impose the Kutta condition, which is that, of all the possible solutions, the physically reasonable solution is the one in which the flow leaves the trailing edge smoothly. The streamline sketches illustrate one flow pattern with zero lift, in which the flow goes around the trailing edge and leaves the upper surface ahead of the trailing edge, and another flow pattern with positive lift, in which the flow leaves smoothly at the trailing edge in accordance with the Kutta condition.
=== Linearized potential flow ===
This is potential-flow theory with the further assumptions that the airfoil is very thin and the angle of attack is small. The linearized theory predicts the general character of the airfoil pressure distribution and how it is influenced by airfoil shape and angle of attack, but is not accurate enough for design work. For a 2D airfoil, such calculations can be done in a fraction of a second in a spreadsheet on a PC.
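For a thin symmetric airfoil, linearized theory predicts a lift-curve slope of 2π per radian, consistent with the statement above that lift is roughly proportional to angle of attack at small angles. A minimal sketch of this classical thin-airfoil result, with illustrative angles:

```python
# Thin-airfoil (linearized potential-flow) theory for a thin symmetric
# airfoil predicts C_L = 2 * pi * alpha, with alpha in radians.
# The sample angles are illustrative.
import math

def thin_airfoil_cl(alpha_deg):
    """Lift coefficient of a thin symmetric airfoil at small angle of attack."""
    return 2.0 * math.pi * math.radians(alpha_deg)

for alpha in (0.0, 2.0, 4.0, 8.0):
    print(f"alpha = {alpha:4.1f} deg  ->  C_L ~ {thin_airfoil_cl(alpha):.3f}")
```

The prediction is only trustworthy at small angles; it knows nothing of boundary-layer separation, so it keeps increasing linearly past the real airfoil's stall.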
=== Circulation and the Kutta–Joukowski theorem ===
When an airfoil generates lift, several components of the overall velocity field contribute to a net circulation of air around it: the upward flow ahead of the airfoil, the accelerated flow above, the decelerated flow below, and the downward flow behind.
The circulation can be understood as the total amount of "spinning" (or vorticity) of an inviscid fluid around the airfoil.
The Kutta–Joukowski theorem relates the lift per unit width of span of a two-dimensional airfoil to this circulation component of the flow. It is a key element in an explanation of lift that follows the development of the flow around an airfoil as the airfoil starts its motion from rest and a starting vortex is formed and left behind, leading to the formation of circulation around the airfoil. Lift is then inferred from the Kutta–Joukowski theorem. This explanation is largely mathematical, and its general progression is based on logical inference, not physical cause-and-effect.
The Kutta–Joukowski model does not predict how much circulation or lift a two-dimensional airfoil produces. Calculating the lift per unit span using Kutta–Joukowski requires a known value for the circulation. In particular, if the Kutta condition is met, in which the rear stagnation point moves to the airfoil trailing edge and attaches there for the duration of flight, the lift can be calculated theoretically through the conformal mapping method.
The lift generated by a conventional airfoil is dictated by both its design and the flight conditions, such as forward velocity, angle of attack and air density. Lift can be increased by artificially increasing the circulation, for example by boundary-layer blowing or the use of blown flaps. In the Flettner rotor the entire airfoil is circular and spins about a spanwise axis to create the circulation.
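The Flettner rotor gives a simple setting for the Kutta–Joukowski relation L′ = ρUΓ. A rough sketch under a crude assumption (the air at the surface is taken to rotate with the cylinder, so Γ ≈ 2πa·(aω); real rotors develop less circulation than this, and all numbers are illustrative):

```python
# Idealized Flettner-rotor estimate via the Kutta-Joukowski theorem,
# L' = rho * U * Gamma. Assumes (crudely) that the air at the cylinder
# surface moves with it, giving Gamma ~ 2*pi*a*(a*omega).
# All input values are illustrative.
import math

rho = 1.225    # air density, kg/m^3
U = 8.0        # wind speed past the rotor, m/s
a = 0.4        # rotor radius, m
omega = 30.0   # spin rate, rad/s

Gamma = 2.0 * math.pi * a * (a * omega)   # idealized circulation, m^2/s
L_per_span = rho * U * Gamma              # lift (side force) per unit span, N/m
print(f"Gamma ~ {Gamma:.1f} m^2/s, L' ~ {L_per_span:.0f} N/m")
```

The theorem itself only converts a known circulation into lift; the hard part, as the text notes, is determining how much circulation the body actually produces.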
== Three-dimensional flow ==
The flow around a three-dimensional wing involves significant additional issues, especially relating to the wing tips. For a wing of low aspect ratio, such as a typical delta wing, two-dimensional theories may provide a poor model and three-dimensional flow effects can dominate. Even for wings of high aspect ratio, the three-dimensional effects associated with finite span can affect the whole span, not just close to the tips.
=== Wing tips and spanwise distribution ===
The vertical pressure gradient at the wing tips causes air to flow sideways, out from under the wing then up and back over the upper surface. This reduces the pressure gradient at the wing tip, therefore also reducing lift. The lift tends to decrease in the spanwise direction from root to tip, and the pressure distributions around the airfoil sections change accordingly in the spanwise direction. Pressure distributions in planes perpendicular to the flight direction tend to look like the illustration at right. This spanwise-varying pressure distribution is sustained by a mutual interaction with the velocity field. Flow below the wing is accelerated outboard, flow outboard of the tips is accelerated upward, and flow above the wing is accelerated inboard, which results in the flow pattern illustrated at right.
There is more downward turning of the flow than there would be in a two-dimensional flow with the same airfoil shape and sectional lift, and a higher sectional angle of attack is required to achieve the same lift compared to a two-dimensional flow. The wing is effectively flying in a downdraft of its own making, as if the freestream flow were tilted downward, with the result that the total aerodynamic force vector is tilted backward slightly compared to what it would be in two dimensions. The additional backward component of the force vector is called lift-induced drag.
The difference in the spanwise component of velocity above and below the wing (between being in the inboard direction above and in the outboard direction below) persists at the trailing edge and into the wake downstream. After the flow leaves the trailing edge, this difference in velocity takes place across a relatively thin shear layer called a vortex sheet.
=== Horseshoe vortex system ===
The wingtip flow leaving the wing creates a tip vortex. As the main vortex sheet passes downstream from the trailing edge, it rolls up at its outer edges, merging with the tip vortices. The combination of the wingtip vortices and the vortex sheets feeding them is called the vortex wake.
In addition to the vorticity in the trailing vortex wake there is vorticity in the wing's boundary layer, called 'bound vorticity', which connects the trailing sheets from the two sides of the wing into a vortex system in the general form of a horseshoe. The horseshoe form of the vortex system was recognized by the British aeronautical pioneer Lanchester in 1907.
Given the distribution of bound vorticity and the vorticity in the wake, the Biot–Savart law (a vector-calculus relation) can be used to calculate the velocity perturbation anywhere in the field, caused by the lift on the wing. Approximate theories for the lift distribution and lift-induced drag of three-dimensional wings are based on such analysis applied to the wing's horseshoe vortex system. In these theories, the bound vorticity is usually idealized and assumed to reside at the camber surface inside the wing.
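The Biot–Savart law can be checked in the simplest case: an infinite straight vortex filament, for which the induced speed at distance r has the closed form Γ/(2πr). A minimal sketch comparing that formula with a direct numerical integration along the filament (the circulation and distance are illustrative):

```python
# Biot-Savart law for a vortex filament: for an infinite straight
# filament of circulation Gamma, the induced speed at distance r is
# v = Gamma / (2*pi*r). The code verifies this by integrating the
# Biot-Savart element contribution along the filament (the z axis),
# evaluated at the point (r, 0, 0). Gamma and r are illustrative.
import math

Gamma = 10.0   # filament circulation, m^2/s
r = 2.0        # distance from the filament, m

v_exact = Gamma / (2.0 * math.pi * r)   # closed form

# Midpoint-rule integration of dv = Gamma/(4*pi) * r / |s|^3 dz over
# a long but finite stretch of the filament.
v_num = 0.0
N = 200000
dz = 2000.0 / N
for i in range(N):
    z = -1000.0 + (i + 0.5) * dz          # midpoint of each element
    v_num += Gamma / (4.0 * math.pi) * r / (r * r + z * z) ** 1.5 * dz

print(f"closed form: {v_exact:.4f} m/s, integrated: {v_num:.4f} m/s")
```

In lifting-line-style theories this is the operation applied, segment by segment, to the bound and trailing vorticity of the horseshoe system to obtain the induced velocity at the wing.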
Because the velocity is deduced from the vorticity in such theories, some authors describe the situation to imply that the vorticity is the cause of the velocity perturbations, using terms such as "the velocity induced by the vortex", for example. But attributing mechanical cause-and-effect between the vorticity and the velocity in this way is not consistent with the physics. The velocity perturbations in the flow around a wing are in fact produced by the pressure field.
== Manifestations of lift in the farfield ==
=== Integrated force/momentum balance in lifting flows ===
The flow around a lifting airfoil must satisfy Newton's second law regarding conservation of momentum, both locally at every point in the flow field, and in an integrated sense over any extended region of the flow. For an extended region, Newton's second law takes the form of the momentum theorem for a control volume, where a control volume can be any region of the flow chosen for analysis. The momentum theorem states that the integrated force exerted at the boundaries of the control volume (a surface integral), is equal to the integrated time rate of change (material derivative) of the momentum of fluid parcels passing through the interior of the control volume. For a steady flow, this can be expressed in the form of the net surface integral of the flux of momentum through the boundary.
The lifting flow around a 2D airfoil is usually analyzed in a control volume that completely surrounds the airfoil, so that the inner boundary of the control volume is the airfoil surface, where the downward force per unit span {\displaystyle -L'} is exerted on the fluid by the airfoil. The outer boundary is usually either a large circle or a large rectangle. At this outer boundary distant from the airfoil, the velocity and pressure are well represented by the velocity and pressure associated with a uniform flow plus a vortex, and viscous stress is negligible, so that the only force that must be integrated over the outer boundary is the pressure. The free-stream velocity is usually assumed to be horizontal, with lift vertically upward, so that the vertical momentum is the component of interest.
For the free-air case (no ground plane), the force {\displaystyle -L'} exerted by the airfoil on the fluid is manifested partly as momentum fluxes and partly as pressure differences at the outer boundary, in proportions that depend on the shape of the outer boundary, as shown in the diagram at right. For a flat horizontal rectangle that is much longer than it is tall, the fluxes of vertical momentum through the front and back are negligible, and the lift is accounted for entirely by the integrated pressure differences on the top and bottom. For a square or circle, the momentum fluxes and pressure differences account for half the lift each. For a vertical rectangle that is much taller than it is wide, the unbalanced pressure forces on the top and bottom are negligible, and lift is accounted for entirely by momentum fluxes, with a flux of upward momentum that enters the control volume through the front accounting for half the lift, and a flux of downward momentum that exits the control volume through the back accounting for the other half.
The results of all of the control-volume analyses described above are consistent with the Kutta–Joukowski theorem described above. Both the tall rectangle and circle control volumes have been used in derivations of the theorem.
=== Lift reacted by overpressure on the ground under an airplane ===
An airfoil produces a pressure field in the surrounding air, as explained under "The wider flow around the airfoil" above. The pressure differences associated with this field die off gradually, becoming very small at large distances, but never disappearing altogether. Below the airplane, the pressure field persists as a positive pressure disturbance that reaches the ground, forming a pattern of slightly-higher-than-ambient pressure on the ground, as shown on the right. Although the pressure differences are very small far below the airplane, they are spread over a wide area and add up to a substantial force. For steady, level flight, the integrated force due to the pressure differences is equal to the total aerodynamic lift of the airplane and to the airplane's weight. According to Newton's third law, this pressure force exerted on the ground by the air is matched by an equal-and-opposite upward force exerted on the air by the ground, which offsets all of the downward force exerted on the air by the airplane. The net force due to the lift, acting on the atmosphere as a whole, is therefore zero, and thus there is no integrated accumulation of vertical momentum in the atmosphere, as was noted by Lanchester early in the development of modern aerodynamics.
== See also ==
Drag coefficient
Flow separation
Fluid dynamics
Foil (fluid mechanics)
Küssner effect
Lift-to-drag ratio
Lifting-line theory
Spoiler (automotive)
== Footnotes ==
== References ==
== Further reading ==
== External links ==
Discussion of the apparent "conflict" between the various explanations of lift Archived July 25, 2021, at the Wayback Machine
NASA tutorial, with animation, describing lift Archived March 9, 2009, at the Wayback Machine
NASA FoilSim II 1.5 beta. Lift simulator
Explanation of Lift with animation of fluid flow around an airfoil Archived June 13, 2021, at the Wayback Machine
A treatment of why and how wings generate lift that focuses on pressure Archived December 19, 2006, at the Wayback Machine
Physics of Flight – reviewed Archived March 9, 2021, at the Wayback Machine. Online paper by Prof. Dr. Klaus Weltner
How do Wings Work? Holger Babinsky
Bernoulli Or Newton: Who's Right About Lift? Archived September 24, 2015, at the Wayback Machine Plane and Pilot magazine
One Minute Physics How Does a Wing actually work? Archived May 20, 2021, at the Wayback Machine (YouTube video)
How wings really work, University of Cambridge Archived June 14, 2021, at the Wayback Machine Holger Babinsky (referred by "One Minute Physics How Does a Wing actually work?" YouTube video)
From Summit to Seafloor – Lifted Weight as a Function of Altitude and Depth by Rolf Steinegger
Joukowski Transform Interactive WebApp Archived October 19, 2019, at the Wayback Machine
How Planes Fly Archived June 11, 2021, at the Wayback Machine YouTube video presentation by Krzysztof Fidkowski, associate professor of Aerospace Engineering at the University of Michigan | Wikipedia/Lift_(force) |
Surface force, denoted f_s, is the force that acts across an internal or external surface element in a material body.
Normal forces and shear forces between objects are types of surface force. All cohesive forces and contact forces between objects are considered surface forces.
Surface force can be decomposed into two perpendicular components: normal forces and shear forces. A normal force acts normally over an area and a shear force acts tangentially over an area.
== Equations for surface force ==
=== Surface force due to pressure ===
{\displaystyle f_{s}=p\cdot A}, where f_s = force, p = pressure, and A = area on which a uniform pressure acts
== Examples ==
=== Pressure related surface force ===
Since pressure is {\displaystyle {\frac {\mathit {force}}{\mathit {area}}}=\mathrm {\frac {N}{m^{2}}} }, and area is {\displaystyle (length)\cdot (width)=\mathrm {m\cdot m} =\mathrm {m^{2}} },
a pressure of {\displaystyle 5\ \mathrm {\frac {N}{m^{2}}} =5\ \mathrm {Pa} } over an area of {\displaystyle 20\ \mathrm {m^{2}} } will produce a surface force of {\displaystyle (5\ \mathrm {Pa} )\cdot (20\ \mathrm {m^{2}} )=100\ \mathrm {N} }.
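The worked example above can be reproduced with a one-line calculation. This sketch (not part of the original article) simply encodes f_s = p · A:

```python
def surface_force(pressure_pa: float, area_m2: float) -> float:
    """Surface force due to a uniform pressure acting on an area: f_s = p * A (newtons)."""
    return pressure_pa * area_m2

# 5 Pa acting over 20 m^2 gives 100 N, matching the worked example above.
force_n = surface_force(5.0, 20.0)
```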
== See also ==
Body force
Contact force
== References == | Wikipedia/Surface_force |
A subfield of fluid statics, aerostatics is the study of gases that are not in motion with respect to the coordinate system in which they are considered. The corresponding study of gases in motion is called aerodynamics.
Aerostatics studies density distribution, especially in air. One of the applications of this is the barometric formula.
An aerostat is a lighter than air craft, such as an airship or balloon, which uses the principles of aerostatics to float.
== Basic laws ==
Treatment of the equations of gaseous behaviour at rest is generally taken, as in hydrostatics, to begin with a consideration of the general equations of momentum for fluid flow, which can be expressed as:
{\displaystyle \rho \left[{\partial U_{j} \over \partial t}+U_{i}{\partial U_{j} \over \partial x_{i}}\right]=-{\partial P \over \partial x_{j}}-{\partial \tau _{ij} \over \partial x_{i}}+\rho g_{j}},
where {\displaystyle \rho } is the mass density of the fluid, {\displaystyle U_{j}} is the instantaneous velocity, {\displaystyle P} is fluid pressure, {\displaystyle g_{j}} are the external body forces acting on the fluid, and {\displaystyle \tau _{ij}} is the momentum transport coefficient. As the fluid's static nature mandates that {\displaystyle U_{j}=0}, and that {\displaystyle \tau _{ij}=0}, the following set of partial differential equations representing the basic equations of aerostatics is found.: 154
{\displaystyle {\partial P \over \partial x_{j}}=\rho g_{j}}
However, the presence of a non-constant density, as is found in gaseous fluid systems (due to the compressibility of gases), requires the inclusion of the ideal gas law: {\displaystyle {P \over \rho }=RT},
where {\displaystyle R} denotes the specific gas constant and {\displaystyle T} the temperature of the gas, in order to render the valid aerostatic partial differential equations:
{\displaystyle {\partial P \over \partial x_{j}}=\rho {\hat {g_{j}}}={P \over RT}{\hat {g_{j}}}},
which can be employed to compute the pressure distribution in gases whose thermodynamic states are given by the equation of state for ideal gases.: 183
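As an illustration of such a computation, the aerostatic equation combined with the ideal gas law can be integrated numerically for an isothermal column of air, recovering the barometric formula. This is a sketch under assumed conditions (dry air, constant temperature 288.15 K, specific gas constant 287.05 J/(kg·K)), not part of the article:

```python
import math

def isothermal_pressure(p0, z_top, g=9.81, R=287.05, T=288.15, dz=10.0):
    """Integrate dP/dz = -P * g / (R * T) upward with forward Euler steps."""
    p, z = p0, 0.0
    while z < z_top:
        p += -p * g / (R * T) * dz
        z += dz
    return p

# Analytic barometric formula for comparison: P(z) = P0 * exp(-g z / (R T))
p_numeric = isothermal_pressure(101325.0, 5000.0)
p_analytic = 101325.0 * math.exp(-9.81 * 5000.0 / (287.05 * 288.15))
```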
== Fields of study ==
Atmospheric pressure fluctuation
Composition of mountain air
Cross-section of the atmosphere
Gas density
Gas diffusion in soil
Gas pressure
Kinetic theory of gases
Partial pressures in gas mixtures
Pressure measurement
== See also ==
Aeronautics
== References == | Wikipedia/Aerostatics |
Wind turbine design is the process of defining the form and configuration of a wind turbine to extract energy from the wind. An installation consists of the systems needed to capture the wind's energy, point the turbine into the wind, convert mechanical rotation into electrical power, and other systems to start, stop, and control the turbine.
In 1919, German physicist Albert Betz showed that for a hypothetical ideal wind-energy extraction machine, the fundamental laws of conservation of mass and energy allowed no more than 16/27 (59.3%) of the wind's kinetic energy to be captured. This Betz' law limit can be approached by modern turbine designs which reach 70 to 80% of this theoretical limit.
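As a numerical sketch of the relation (not from the article): available wind power is P = ½ρAv³ times a power coefficient C_p, which cannot exceed the Betz limit of 16/27; C_p values of roughly 0.42–0.47 correspond to the quoted 70–80% of the limit. The rotor radius and wind speed below are illustrative:

```python
import math

BETZ_LIMIT = 16 / 27  # ≈ 0.593, the maximum possible power coefficient

def wind_power(rho, v, radius, cp):
    """Extracted power P = 0.5 * rho * A * v^3 * Cp, with Cp capped at the Betz limit."""
    cp = min(cp, BETZ_LIMIT)
    area = math.pi * radius ** 2
    return 0.5 * rho * area * v ** 3 * cp
```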
In addition to the blades, design of a complete wind power system must also address the hub, controls, generator, supporting structure and foundation. Turbines must also be integrated into power grids.
== Aerodynamics ==
Blade shape and dimension are determined by the aerodynamic performance required to efficiently extract energy, and by the strength required to resist forces on the blade.
The aerodynamics of a horizontal-axis wind turbine are not straightforward. The air flow at the blades is not the same as that away from the turbine. The way that energy is extracted from the air also causes air to be deflected by the turbine. Wind turbine aerodynamics at the rotor surface exhibit phenomena that are rarely seen in other aerodynamic fields.
== Power control ==
Rotation speed must be controlled for efficient power generation and to keep the turbine components within speed and torque limits. The centrifugal force on the blades increases as the square of the rotation speed, which makes this structure sensitive to overspeed. Because power increases as the cube of the wind speed, turbines must survive much higher wind loads (such as gusts of wind) than those loads from which they generate power.
A wind turbine must produce power over a range of wind speeds. The cut-in speed is around 3–4 m/s for most turbines, and cut-out at 25 m/s. If the rated wind speed is exceeded the power has to be limited.
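The cut-in, rated, and cut-out behaviour described above is often modelled as a piecewise power curve. A minimal sketch with illustrative parameters (the particular speeds and the 2 MW rating are assumptions, not from the article):

```python
def turbine_power(v, cut_in=3.5, rated_v=12.0, cut_out=25.0, rated_p=2.0e6):
    """Idealized power curve: zero outside [cut_in, cut_out], cubic ramp up to rated power."""
    if v < cut_in or v > cut_out:
        return 0.0
    if v >= rated_v:
        return rated_p  # power is limited at rated output above rated wind speed
    # cubic growth between cut-in and rated speed (power scales with v^3)
    return rated_p * (v ** 3 - cut_in ** 3) / (rated_v ** 3 - cut_in ** 3)
```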
A control system involves three basic elements: sensors to measure process variables, actuators to manipulate energy capture and component loading, and control algorithms that apply information gathered by the sensors to coordinate the actuators.
Any wind blowing above the survival speed damages the turbine. The survival speed of commercial wind turbines ranges from 40 m/s (144 km/h, 89 MPH) to 72 m/s (259 km/h, 161 MPH), typically around 60 m/s (216 km/h, 134 MPH). Some turbines can survive 80 metres per second (290 km/h; 180 mph).
=== Stall ===
A stall on an airfoil occurs when air passes over it in such a way that the generation of lift rapidly decreases. Usually this is due to a high angle of attack (AOA), but can also result from dynamic effects. The blades of a fixed pitch turbine can be designed to stall in high wind speeds, slowing rotation. This is a simple fail-safe mechanism to help prevent damage. However, other than systems with dynamically controlled pitch, it cannot produce a constant power output over a large range of wind speeds, which makes it less suitable for large scale, power grid applications.
A fixed-speed HAWT (Horizontal Axis Wind Turbine) inherently increases its angle of attack at higher wind speed as the blades speed up. A natural strategy, then, is to allow the blade to stall when the wind speed increases. This technique was successfully used on many early HAWTs. However, the degree of blade pitch tended to increase noise levels.
Vortex generators may be used to control blade lift characteristics. VGs are placed on the airfoil to enhance the lift if they are placed on the lower (flatter) surface or limit the maximum lift if placed on the upper (higher camber) surface.
=== Furling ===
Furling works by decreasing the angle of attack, which reduces drag and blade cross-section. One major problem is getting the blades to stall or furl quickly enough in a wind gust. A fully furled turbine blade, when stopped, faces the edge of the blade into the wind.
Loads can be reduced by making a structural system softer or more flexible. This can be accomplished with downwind rotors or with curved blades that twist naturally to reduce angle of attack at higher wind speeds. These systems are nonlinear and couple the structure to the flow field - requiring design tools to evolve to model these nonlinearities.
Standard turbines all furl in high winds. Since furling requires acting against the torque on the blade, it requires some form of pitch angle control, which is achieved with a slewing drive. This drive precisely angles the blade while withstanding high torque loads. In addition, many turbines use hydraulic systems. These systems are usually spring-loaded, so that if hydraulic power fails, the blades automatically furl. Other turbines use an electric servomotor for every blade. They have a battery-reserve in case of grid failure. Small wind turbines (under 50 kW) with variable-pitching generally use systems operated by centrifugal force, either by flyweights or geometric design, and avoid electric or hydraulic controls.
Fundamental gaps exist in pitch control, limiting the reduction of energy costs, according to a report funded by the Atkinson Center for a Sustainable Future. Load reduction is currently focused on full-span blade pitch control, since individual pitch motors are the actuators on commercial turbines. Significant load mitigation has been demonstrated in simulations for blades, tower, and drive train. However, further research is needed to increase energy capture and mitigate fatigue loads.
A control technique applied to the pitch angle is done by comparing the power output with the power value at the rated engine speed (power reference, Ps reference). Pitch control is done with PI controller. In order to adjust pitch rapidly enough, the actuator uses the time constant Tservo, an integrator and limiters. The pitch angle remains from 0° to 30° with a change rate of 10°/second.
As in the figure at the right, the reference pitch angle is compared with the actual pitch angle b and then the difference is corrected by the actuator. The reference pitch angle, which comes from the PI controller, goes through a limiter. Restrictions are important to maintain the pitch angle in real terms. Limiting the change rate is especially important during network faults. The importance is due to the fact that the controller decides how quickly it can reduce the aerodynamic energy to avoid acceleration during errors.
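The PI loop with a limiter and rate limiter described above can be sketched as follows. The gains kp and ki are hypothetical; the 0°–30° range and the 10°/s rate limit come from the text:

```python
def pi_pitch_step(power_error, integ, dt, beta_prev,
                  kp=0.5, ki=0.1, rate_limit=10.0, beta_min=0.0, beta_max=30.0):
    """One step of a PI pitch controller with slew-rate and range limits.

    power_error is (measured power - power reference); pitch angles beta in degrees.
    """
    integ += power_error * dt                      # integrator state
    beta_ref = kp * power_error + ki * integ       # PI controller output (reference pitch)
    max_step = rate_limit * dt                     # actuator can slew at most 10 deg/s
    beta = beta_prev + max(-max_step, min(max_step, beta_ref - beta_prev))
    beta = max(beta_min, min(beta_max, beta))      # pitch stays within 0-30 deg
    return beta, integ
```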
== Other controls ==
=== Generator torque ===
Modern large wind turbines operate at variable speeds. When wind speed falls below the turbine's rated speed, generator torque is used to control the rotor speed to capture as much power as possible. The most power is captured when the tip speed ratio is held constant at its optimum value (typically between 6 and 7). This means that rotor speed increases proportional to wind speed. The difference between the aerodynamic torque captured by the blades and the applied generator torque controls the rotor speed. If the generator torque is lower, the rotor accelerates, and if the generator torque is higher, the rotor slows. Below rated wind speed, the generator torque control is active while the blade pitch is typically held at the constant angle that captures the most power, fairly flat to the wind. Above rated wind speed, the generator torque is typically held constant while the blade pitch is adjusted accordingly.
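Below rated wind speed, holding the tip speed ratio at its optimum is commonly achieved with the classic "k-omega-squared" torque law, T_gen = k·ω². This is a generic control sketch rather than the scheme of any particular machine; the rotor radius, C_p and λ values are illustrative:

```python
import math

def optimal_torque_gain(rho=1.225, radius=40.0, cp_max=0.47, lam_opt=6.5):
    """Gain k in T_gen = k * omega^2 that holds the tip speed ratio at lam_opt."""
    return 0.5 * rho * math.pi * radius ** 5 * cp_max / lam_opt ** 3

def generator_torque(omega, k):
    """Commanded generator torque for rotor speed omega (rad/s)."""
    return k * omega ** 2
```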
One technique to control a permanent magnet synchronous motor is field-oriented control. Field-oriented control is a closed loop strategy composed of two current controllers (an inner loop and cascading outer loop) necessary for controlling the torque, and one speed controller.
=== Constant torque angle control ===
In this control strategy the d-axis current is kept at zero, while the vector current aligns with the q axis in order to maintain the torque angle at 90°. This is a common control strategy because only the I_qs current must be controlled. The torque equation of the generator then reduces to a linear equation dependent only on the I_qs current.
So, the electromagnetic torque for I_ds = 0 (which the d-axis controller enforces) is now:
{\displaystyle T_{e}={\tfrac {3}{2}}p\left(\lambda _{pm}I_{qs}+(L_{ds}-L_{qs})I_{ds}I_{qs}\right)={\tfrac {3}{2}}p\lambda _{pm}I_{qs}}
Thus, the complete system of the machine-side converter and the cascaded PI controller loops is given by the figure. The control inputs are the duty ratios m_ds and m_qs of the PWM-regulated converter. The figure displays the control scheme for the wind turbine on the machine side and, simultaneously, how the I_ds current is driven to zero (which keeps the torque equation linear).
=== Yawing ===
Large turbines are typically actively controlled to face the wind direction measured by a wind vane situated on the back of the nacelle. By minimizing the yaw angle (the misalignment between wind and turbine pointing direction), power output is maximized and non-symmetrical loads minimized. However, since wind direction varies, the turbine does not strictly follow the wind and experiences a small yaw angle on average. The power output losses can be approximated to fall as cos³(yaw angle). Particularly at low-to-medium wind speeds, yawing can significantly reduce output, with common wind-direction variations reaching 30°. At high wind speeds, wind direction is less variable.
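The quoted cos³ approximation for yaw losses is a one-line calculation; for example, a 30° misalignment leaves only about 65% of the aligned power:

```python
import math

def yaw_loss_factor(yaw_deg):
    """Approximate power ratio relative to perfectly aligned operation: cos^3(yaw)."""
    return math.cos(math.radians(yaw_deg)) ** 3
```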
=== Electrical braking ===
Braking a small turbine can be done by dumping energy from the generator into a resistor bank, converting kinetic energy into heat. This method is useful if the kinetic load on the generator is suddenly reduced or is too small to keep the turbine speed within its allowed limit.
Cyclic braking slows the blades, which increases the stalling effect and reduces efficiency. Rotation can be kept at a safe speed in faster winds while maintaining (nominal) power output. This method is usually not applied on large, grid-connected wind turbines.
=== Mechanical braking ===
A mechanical drum brake or disc brake stops rotation in emergency situations such as extreme gust events. The brake is a secondary means to hold the turbine at rest for maintenance, with a rotor lock system as primary means. Such brakes are usually applied only after blade furling and electromagnetic braking have reduced the turbine speed because mechanical brakes can ignite a fire inside the nacelle if used at full speed. Turbine load increases if the brake is applied at rated RPM.
== Turbine size ==
Turbines come in size classes. The smallest, with power less than 10 kW, are used in homes, farms and remote applications, whereas intermediate wind turbines (10–250 kW) are useful for village power, hybrid systems and distributed power. The world's largest wind turbine as of 2021 was Vestas' V236-15.0 MW turbine. The new design's blades offer the largest swept area in the world, with three 115.5-metre (379 ft) blades giving a rotor diameter of 236 metres (774 ft). Ming Yang in China has announced a larger 16 MW design.
For a given wind speed, turbine mass is approximately proportional to the cube of its blade-length. Wind power intercepted is proportional to the square of blade-length. The maximum blade-length of a turbine is limited by strength, stiffness, and transport considerations.
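The square-cube scaling stated above (intercepted power ∝ L², mass ∝ L³) can be sketched directly; it implies that mass, and hence material cost, grows faster than power as blades lengthen:

```python
def scale_turbine(blade_length_ratio, mass0, power0):
    """Square-cube law: power scales as L^2, mass scales as L^3."""
    mass = mass0 * blade_length_ratio ** 3
    power = power0 * blade_length_ratio ** 2
    return mass, power

# Doubling blade length multiplies mass by 8 but intercepted power by only 4.
```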
Labor and maintenance costs increase more slowly than turbine size, so to minimize costs, wind farm turbines are basically limited by the strength of materials, and siting requirements.
=== Low temperature ===
Utility-scale wind turbine generators have minimum temperature operating limits that apply in areas with temperatures below −20 °C (−4 °F). Turbines must be protected from ice accumulation that can make anemometer readings inaccurate and which, in certain turbine control designs, can cause high structure loads and damage. Some turbine manufacturers offer low-temperature packages at extra cost, which include internal heaters, different lubricants, and different alloys for structural elements. If low-temperatures are combined with a low-wind condition, the turbine requires an external supply of power, equivalent to a few percent of its rated output, for internal heating. For example, the St. Leon Wind Farm in Manitoba, Canada, has a total rating of 99 MW and is estimated to need up to 3 MW (around 3% of capacity) of station service power a few days a year for temperatures down to −30 °C (−22 °F).
== Nacelle ==
The nacelle houses the gearbox and generator connecting the tower and rotor. Sensors detect the wind speed and direction, and motors turn the nacelle into the wind to maximize output.
=== Gearbox ===
In conventional wind turbines, the blades spin a shaft that is connected through a gearbox to the generator. The gearbox converts the turning speed of the blades (15 to 20 RPM for a one-megawatt turbine) into the 1,800 (750-3600) RPM that the generator needs to generate electricity. Gearboxes are one of the more expensive components for installing and maintaining wind turbines. Analysts from GlobalData estimate that the gearbox market grew from $3.2bn in 2006 to $6.9bn in 2011. The market leader for Gearbox production was Winergy in 2011. The use of magnetic gearboxes has been explored as a way of reducing maintenance costs.
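Stepping the quoted 15–20 RPM rotor speed up to the generator's 1,800 RPM implies overall gearbox ratios of roughly 90–120, as this trivial sketch shows:

```python
def gear_ratio(rotor_rpm, generator_rpm=1800.0):
    """Required overall gearbox step-up ratio from rotor to generator shaft."""
    return generator_rpm / rotor_rpm
```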
=== Generator ===
For large horizontal-axis wind turbines (HAWT), the generator is mounted in a nacelle at the top of a tower, behind the rotor hub. Older wind turbines generate electricity through asynchronous machines directly connected to the grid. The gearbox reduces generator cost and weight. Commercial generators have a rotor carrying a winding so that a rotating magnetic field is produced inside a set of windings called the stator. While the rotating winding consumes a fraction of a percent of the generator output, adjustment of the field current allows good control over the output voltage.
The rotor's varying output frequency and voltage can be matched to the fixed values of the grid using multiple technologies such as doubly fed induction generators or full-effect converters, which convert the variable-frequency current to DC and then back to AC using inverters. Although such alternatives require costly equipment and incur power losses, the turbine can capture a significantly larger fraction of the wind energy. Most are low voltage (660 V), but some offshore turbines (several MW) use 3.3 kV medium voltage.
In some cases, especially when offshore, a large collector transformer converts the wind farm's medium-voltage AC grid to DC and transmits the energy through a power cable to an onshore HVDC converter station.
=== Hydraulic ===
Hydraulic wind turbines perform the frequency and torque adjustments of gearboxes via a pressurized hydraulic fluid. Typically, the action of the turbine pressurizes the fluid with a hydraulic pump at the nacelle. Meanwhile, components on the ground can transform this pressure into energy, and recirculate the working fluid. Typically, the working fluid used in this kind of hydrostatic transmission is oil, which serves as a lubricant, reducing losses due to friction in the hydraulic units and allowing for a broad range of operating temperatures. However, other concepts are currently under study, which involve using water as the working fluid because it is abundant and eco-friendly.
Hydraulic turbines provide benefits to both operation and capital costs. They can use hydraulic units with variable displacement to have a continuously variable transmission that adapts in real time. This decouples generator speed from rotor speed, avoiding stalling and allowing the turbine to operate at an optimum speed and torque. This built-in transmission is how these hydraulic systems avoid the need for a conventional gearbox. Furthermore, hydraulic instead of mechanical power conversion introduces a damping effect on rotation fluctuations, reducing fatigue of the drivetrain and improving turbine structural integrity. Additionally, using a pressurized fluid instead of mechanical components allows the electrical conversion to occur on the ground instead of in the nacelle: this reduces maintenance difficulty, and reduces the weight and center of gravity of the turbine. Studies estimate that these benefits may yield a 3.9–18.9% reduction in the levelized cost of power for offshore wind turbines.
Some years ago, Mitsubishi, through its branch Artemis, deployed the Sea Angel, a unique hydraulic wind turbine at the utility scale. The Digital Displacement technology underwent trials on the Sea Angel, a wind turbine rated at 7 MW. This design is capable of adjusting the displacement of the central unit in response to erratic wind velocities, thereby maintaining the optimal efficiency of the system. Still, these systems are newer and in earlier stages of commercialization compared to conventional gearboxes.
=== Gearless ===
Gearless wind turbines (also called direct drive) eliminate the gearbox. Instead, the rotor shaft is attached directly to the generator, which spins at the same speed as the blades.
Advantages of permanent magnet direct drive generators (PMDD) over geared generators include increased efficiency, reduced noise, longer lifetime, high torque at low RPM, faster and precise positioning, and drive stiffness. PMDD generators "eliminate the gear-speed increaser, which is susceptible to significant accumulated fatigue torque loading, related reliability issues, and maintenance costs".
To make up for a direct-drive generator's slower rotation rate, the diameter of the generator's rotor is increased so that it can contain more magnets to create the required frequency and power. Gearless wind turbines are often heavier than geared wind turbines. An EU study showed that gearbox reliability is not the main problem in wind turbines. The reliability of direct drive turbines offshore is still not known, given the small sample size.
Experts from Technical University of Denmark estimate that a geared generator with permanent magnets may require 25 kg/MW of the rare-earth element neodymium, while a gearless may use 250 kg/MW.
In December 2011, the US Department of Energy announced a critical shortage of rare-earth elements such as neodymium. China produces more than 95%: 9 of rare-earth elements, while Hitachi holds more than 600 patents covering neodymium magnets.: 56 Direct-drive turbines require 600 kg of permanent magnet material per megawatt, which translates to several hundred kilograms of rare-earth content per megawatt,: 20 as neodymium content is estimated to be 31% of magnet weight. Hybrid drivetrains (intermediate between direct drive and traditional geared) use significantly less rare-earth materials. While permanent magnet wind turbines only account for about 5% of the market outside of China, their market share inside of China is estimated at 25% or higher.: 20 In 2011, demand for neodymium in wind turbines was estimated to be 1/5 of that in electric vehicles.: 91
== Blades ==
=== Blade design ===
The ratio between the blade speed and the wind speed is called tip-speed ratio. High efficiency 3-blade-turbines have tip speed/wind speed ratios of 6 to 7. Wind turbines spin at varying speeds (a consequence of their generator design). Use of aluminum and composite materials has contributed to low rotational inertia, which means that newer wind turbines can accelerate quickly if the winds pick up, keeping the tip speed ratio more nearly constant. Operating closer to their optimal tip speed ratio during energetic gusts of wind allows wind turbines to improve energy capture from sudden gusts.
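Tip speed ratio is simply blade tip speed divided by wind speed; a quick sketch with illustrative rotor values:

```python
import math

def tip_speed_ratio(rotor_rpm, radius_m, wind_speed):
    """lambda = (omega * R) / V, with omega converted from RPM to rad/s."""
    omega = rotor_rpm * 2 * math.pi / 60.0
    return omega * radius_m / wind_speed
```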
Noise increases with tip speed. Being able to increase tip speed without increasing noise would allow reduced torque into the gearbox and generator, reducing structural loads and thereby cost. The noise reduction is linked to the detailed blade aerodynamics, especially factors that reduce abrupt stalling. The inability to predict stall restricts the use of aggressive aerodynamics. Some blades (mostly on Enercon) have a winglet to increase performance and reduce noise.
A blade can have a lift-to-drag ratio of 120, compared to 70 for a sailplane and 15 for an airliner. In order to optimize the lift-to-drag ratio of a blade, they are typically designed with varying airfoil cross-sections along their length, customized to the varying wind speeds and angles encountered from root to tip.
An additional design improvement is the incorporation of vortex generators, small fins mounted to the surface of the blade that help to smooth the airflow, preventing flow separation and reducing turbulence, both of which contribute to reducing energy losses. All of these innovations have the end goal of increasing the efficiency of converting wind energy to electricity.
=== Applications of IMU in Wind Power Generation ===
==== Blade Dynamic Deformation and Load Monitoring ====
The role of the inertial measurement unit (IMU) in wind power generation is to measure the three-axis acceleration and angular velocity of wind turbine blades, hubs, and tower tops in real time. Using inertial navigation algorithms, it calculates the motion states (position, velocity, and attitude) of these components. The IMU captures the global dynamic information of the turbine and, through fusion of Kalman-filter and GNSS data, reduces cumulative errors. This enables high-precision estimation of blade deflection and loads, providing critical support for monitoring the operational loads of wind turbines.
IMUs measure angular velocity and acceleration, which, combined with navigation algorithms, capture the flexural attitude and positional changes of blades during operation in real time. Through the use of Kalman filters (KF) to fuse data from multiple IMUs, and based on rigid body geometric models and rotor angles, the position of each IMU is determined. Precision is further enhanced by compensating for differences between actual positions and the model.
GNSS Integration:
GNSS plays two key roles:
Time Synchronization: It provides a unified time reference for all IMUs, ensuring sensor data alignment.
Absolute Position Reference: By integrating IMU data, it limits drift errors, ensuring convergence and accuracy in IMU navigation solutions (position and attitude).
==== Structural Health Monitoring and Fault Prediction ====
IMU can be combined with other sensors (e.g., vibration and stress sensors) to improve fault detection sensitivity through multi-source data fusion. For example, IMUs installed on the turbine main shaft can extract tower acceleration signals through signal processing and use azimuth information to identify specific faults. Multi-sensor fusion technology can detect blade stress changes and crack risks, reducing downtime losses.
In 2021, Chinese researchers proposed an innovative multi-IMU data fusion algorithm for wind turbine blade dynamic deformation sensing. This algorithm uses a relative motion sensing fusion method that employs an improved Kalman filter and a feedback-based distributed structure to achieve multi-node data fusion.
High-Precision and Low-Precision IMU Collaboration:
High-precision IMUs (main nodes) are placed at the blade root base, serving as a global reference point to provide information on the overall torsional attitude and positional changes of the blade.
Low-precision IMUs (sub-nodes) are distributed at different positions along the blade, sensing local dynamic deformations.
Data from the high-precision IMU is filtered and fused to correct the measurement errors of low-precision IMUs, significantly improving the system's overall measurement accuracy and fault tolerance. Each sub-node independently processes local data, and redundant information is integrated through a global fusion layer to enhance fault tolerance. Even if a single IMU fails, the system can maintain high accuracy.
Application in Blade Dynamic Testing:
During wind turbine blade dynamic testing, blades undergo continuous motion under external forces. By combining global reference data from high-precision IMUs with local measurements from low-precision IMUs, multi-node data fusion is achieved through a federated Kalman filter. This enables precise perception of the blade's flexural attitude and position in three-dimensional space.
Simulation results show that the fusion algorithm effectively reduces the measurement errors of low-precision IMUs, significantly decreasing the relative position and attitude errors of local blade nodes while maintaining the accuracy of high-precision IMU nodes. Particularly for complex motions at the blade's middle and tip, the fusion algorithm demonstrates strong robustness and accuracy.
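The multi-IMU scheme described above uses a full federated Kalman filter; as a much-reduced illustration of the underlying idea, two independent estimates can be combined by inverse-variance weighting, which is the scalar Kalman update. This sketch is not the cited algorithm, only the core fusion step:

```python
def fuse_estimates(est_hi, var_hi, est_lo, var_lo):
    """Inverse-variance (scalar Kalman) fusion of two independent estimates.

    est_hi/var_hi: high-precision IMU estimate and its error variance.
    est_lo/var_lo: low-precision IMU estimate and its error variance.
    """
    k = var_lo / (var_hi + var_lo)        # weight on the high-precision estimate
    fused = k * est_hi + (1.0 - k) * est_lo
    fused_var = var_hi * var_lo / (var_hi + var_lo)  # always below either input variance
    return fused, fused_var
```

The fused variance is always smaller than either input's, which is why adding even low-precision sub-node IMUs improves the overall estimate.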
=== Hub design ===
In simple designs, the blades are directly bolted to the hub and are unable to pitch, which leads to aerodynamic stall above certain windspeeds. In more sophisticated designs, they are bolted to the pitch bearing, which adjusts their angle of attack with the help of a pitch system according to the wind speed. Pitch control is performed by hydraulic or electric systems (battery or ultracapacitor). The pitch bearing is bolted to the hub. The hub is fixed to the rotor shaft, which drives the generator directly or through a gearbox.
=== Blade count ===
The number of blades is selected for aerodynamic efficiency, component costs, and system reliability. Noise emissions are affected by the location of the blades upwind or downwind of the tower and the rotor speed. Given that the noise emissions from the blades' trailing edges and tips vary by the 5th power of blade speed, a small increase in tip speed dramatically increases noise.
Wind turbines almost universally use either two or three blades. However, patents present designs with additional blades, such as Chan Shin's multi-unit rotor blade system. Aerodynamic efficiency increases with number of blades but with diminishing return. Increasing from one to two yields a six percent increase, while going from two to three yields an additional three percent. Further increasing the blade count yields minimal improvements and sacrifices too much in blade stiffness as the blades become thinner.
Theoretically, an infinite number of blades of zero width is the most efficient, operating at a high value of the tip speed ratio, but this is not practical.
Component costs affected by blade count are primarily for materials and manufacturing of the turbine rotor and drive train. Generally, the lower the number of blades, the lower the material and manufacturing costs. In addition, fewer blades allow higher rotational speed. Blade stiffness requirements to avoid tower interference limit blade thickness, but only when the blades are upwind of the tower; deflection in a downwind machine increases tower clearance. Fewer blades with higher rotational speeds reduce peak torque in the drive train, resulting in lower gearbox and generator costs.
System reliability is affected by blade count primarily through the dynamic loading of the rotor into the drive train and tower systems. While aligning the wind turbine to changes in wind direction (yawing), each blade experiences a cyclic load at its root end depending on blade position. However, these cyclic loads when combined at the drive train shaft are symmetrically balanced for three blades, yielding smoother operation during yaw. One or two blade turbines can use a pivoting teetered hub to nearly eliminate the cyclic loads into the drive shaft and system during yawing. In 2012, a Chinese 3.6 MW two-blade turbine was tested in Denmark.
=== Blade size ===
Increasing blade length pushed power generation from the single megawatt range to upwards of 10 megawatts. A larger area effectively increases tip-speed ratio at a given wind speed, thus increasing its energy extraction. Software such as HyperSizer (originally developed for spacecraft design) can be used to improve blade design.
As of 2015 the rotor diameters of onshore wind turbine blades reached 130 meters, while the diameter of offshore turbines reached 170 meters. In 2001, an estimated 50 million kilograms of fiberglass laminate were used in wind turbine blades.
=== Blade weight ===
An important goal is to control blade weight. Since blade mass scales as the cube of the turbine radius, gravity loading constrains systems with larger blades. Gravitational loads include axial tensile/compressive loads (top/bottom of rotation) as well as bending (lateral positions). The magnitude of these loads fluctuates cyclically, and the edgewise moments (see below) are reversed every 180° of rotation. Typical rotor speeds are on the order of 10 rpm and design lives around 20 years, putting the number of lifetime revolutions on the order of 10^8. Including wind loading, turbine blades are expected to go through ~10^9 loading cycles.
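Taking the typical figures above (a rotor speed of roughly 10 rpm and a 20-year design life), the order-of-magnitude revolution count can be checked directly:

```python
rpm = 10                                  # typical rotor speed, rev/min
design_life_years = 20
minutes_per_year = 365 * 24 * 60          # 525,600
revolutions = rpm * design_life_years * minutes_per_year
print(f"lifetime revolutions = {revolutions:.2e}")  # ~1.05e8, i.e. order 10^8
```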
Wind is another source of rotor blade loading. Lift causes bending in the flapwise direction (out of the rotor plane), while airflow around the blade causes edgewise bending (in the rotor plane). Flapwise bending involves tension on the pressure (upwind) side and compression on the suction (downwind) side. Edgewise bending involves tension on the leading edge and compression on the trailing edge.
Wind loads are cyclical because of natural variability in wind speed and wind shear (higher speeds at top of rotation).
Ultimate-load failure of rotor blades exposed to wind and gravity loading is a failure mode that must be considered when the blades are designed. The wind speed that causes bending of the rotor blades exhibits a natural variability, and so does the stress response in the blades. The resistance of the blades, in terms of their tensile strength, also varies naturally. Given the increasing size of production wind turbines, blade failures are increasingly relevant when assessing public safety risks from wind turbines. The most common failure is the loss of a blade or part of one. This has to be considered in the design.
In light of these failure modes and increasingly larger blade systems, researchers seek cost-effective materials with higher strength-to-mass ratios.
=== Blade materials ===
In general, materials should meet the following criteria:
wide availability and easy processing to reduce cost and maintenance
low weight or density to reduce gravitational forces
high strength to withstand wind and gravitational loading
high fatigue resistance to withstand cyclic loading
high stiffness to ensure stability of the optimal shape and orientation of the blade and clearance with the tower
high fracture toughness
the ability to withstand environmental impacts such as lightning strikes, humidity, and temperature
==== History ====
Wood and canvas sails were used on early windmills due to their low price, availability, and ease of manufacture. These materials, however, require frequent maintenance. Wood and canvas construction limits the airfoil shape to a flat plate, which has a relatively high ratio of drag to force captured (low aerodynamic efficiency) compared to solid airfoils. Construction of solid airfoil designs requires inflexible materials such as metals or composites.
Advances in turbine blade materials mirrored the progression of materials science as a broader subject. The first large turbine blades were predominantly made from metals like steel and aluminum due to their availability and robustness. However, their heavy weight and low flexibility restricted turbine size and decreased efficiency, requiring more energy to maintain blade rotation.
The wind energy sector eventually moved onto lighter materials, namely fiberglass, a marked improvement over the excessive weight of metals. However, fiberglass possessed its own set of disadvantages, notably durability and sustainability issues. They were susceptible to environmental damages including UV radiation and moisture, leading to delamination and loss of structural integrity. Additionally, fiberglass is difficult to recycle, making the end-of-life impact of fiberglass blades quite high.
As a response to these challenges, the wind energy industry turned to carbon fiber as a blade material, whose specific stiffness and durability are greater than those of both metal and fiberglass. The superior stiffness-to-weight ratio allows for larger blades, increasing efficiency (see size section). In recent research, bio-based composites and nanostructure enhancements have been used to further reduce weight and increase strength and stiffness.
==== Polymer ====
The majority of commercialized wind turbine blades are made from fiber-reinforced polymers (FRPs), which are composites consisting of a polymer matrix and fibers. The long fibers provide longitudinal stiffness and strength, and the matrix provides fracture toughness, delamination strength, out-of-plane strength, and stiffness. Material indices based on maximizing power efficiency, high fracture toughness, fatigue resistance, and thermal stability are highest for glass and carbon fiber reinforced plastics (GFRPs and CFRPs).
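As a rough illustration of why GFRPs and CFRPs score highly on mass-based material indices, the specific stiffness E/ρ can be compared across materials. The handbook-style values below are assumptions for illustration, not figures from this article:

```python
# Representative unidirectional-laminate values (assumed):
# Young's modulus E in GPa, density rho in kg/m^3
materials = {
    "steel": {"E": 200.0, "rho": 7850.0},
    "GFRP":  {"E": 40.0,  "rho": 1900.0},
    "CFRP":  {"E": 135.0, "rho": 1600.0},
}

specific_stiffness = {
    name: m["E"] * 1e9 / m["rho"]  # Pa / (kg/m^3) = J/kg
    for name, m in materials.items()
}

for name, s in specific_stiffness.items():
    print(f"{name:5s} E/rho = {s / 1e6:5.1f} MJ/kg")
```

Note that the indices mentioned in the text also weigh fracture toughness, fatigue resistance, and thermal stability, and for bending-dominated beams the stiffness index is closer to E^(1/2)/ρ; E/ρ is simply the most direct mass-based comparison.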
In turbine blades, matrices such as thermosets or thermoplastics are used; as of 2017, thermosets are more common. These allow for the fibers to be bound together and add toughness. Thermosets make up 80% of the market, as they have lower viscosity, and also allow for low-temperature cure, both features contributing to ease of processing during manufacture. Thermoplastics offer recyclability that the thermosets do not, however their processing temperature and viscosity are much higher, limiting the product size and consistency, which are both important for large blades. Fracture toughness is higher for thermoplastics, but the fatigue behavior is worse.
Manufacturing blades in the 40 to 50-metre range involves proven fiberglass composite fabrication techniques. Manufacturers such as Nordex SE and GE Wind use an infusion process. Other manufacturers vary this technique, some including carbon and wood with fiberglass in an epoxy matrix. Other options include pre-impregnated ("prepreg") fiberglass and vacuum-assisted resin transfer moulding. Each of these options uses a glass-fiber reinforced polymer composite constructed with differing complexity. Perhaps the largest issue with open-mould, wet systems is the emission of volatile organic compounds (VOCs). Pre-impregnated materials and resin infusion techniques contain all VOCs; however, these contained processes have their own challenges, because producing the thick laminates necessary for structural components becomes more difficult. In particular, the preform resin permeability dictates the maximum laminate thickness, and bleeding is required to eliminate voids and ensure proper resin distribution. One solution to resin distribution is to use partially impregnated fiberglass. During evacuation, the dry fabric provides a path for airflow and, once heat and pressure are applied, the resin flows into the dry region, resulting in an evenly impregnated laminate structure.
==== Epoxy ====
Epoxy-based composites have environmental, production, and cost advantages over other resin systems. Epoxies also allow shorter cure cycles, increased durability, and improved surface finish. Prepreg operations further reduce processing time over wet lay-up systems. As turbine blades passed 60 metres, infusion techniques became more prevalent, because traditional resin transfer moulding injection times are too long compared to resin set-up time, limiting laminate thickness. Injection forces resin through a thicker ply stack, thus depositing the resin in the laminate structure before gelation occurs. Specialized epoxy resins have been developed to customize lifetimes and viscosity.
Carbon fiber-reinforced load-bearing spars can reduce weight and increase stiffness. Using carbon fibers in 60-metre turbine blades is estimated to reduce total blade mass by 38% and decrease cost by 14% compared to 100% fiberglass. Carbon fibers have the added benefit of reducing the thickness of fiberglass laminate sections, further addressing the problems associated with resin wetting of thick lay-up sections. Wind turbines benefit from the trend of decreasing carbon fiber costs.
Although glass and carbon fibers have many optimal qualities, their downsides include the fact that high filler fraction (10-70 wt%) causes increased density as well as microscopic defects and voids that can lead to premature failure.
==== Carbon nanotubes ====
Carbon nanotubes (CNTs) can reinforce polymer-based nanocomposites. CNTs can be grown or deposited on the fibers or added into polymer resins as a matrix for FRP structures. Using nanoscale CNTs as filler instead of traditional microscale filler (such as glass or carbon fibers) results in CNT/polymer nanocomposites, for which the properties can be changed significantly at low filler contents (typically < 5 wt%). They have low density and improve the elastic modulus, strength, and fracture toughness of the polymer matrix. The addition of CNTs to the matrix also reduces the propagation of interlaminar cracks.
Research on a low-cost carbon fiber (LCCF) at Oak Ridge National Laboratory gained attention in 2020, because it can mitigate the structural damage from lightning strikes. On glass fiber wind turbines, lightning strike protection (LSP) is usually added on top, but this is effectively deadweight in terms of structural contribution. Using conductive carbon fiber can avoid adding this extra weight.
==== Bio-composites ====
A significant concern in materials selection for a turbine blade is its manufacturing and end-of-life environmental impact, as well as its recyclability. While methods for manufacturing fiberglass and carbon fiber composites into turbine blades have a lower carbon footprint than, for example, aluminum, they still have a noticeable impact (30–100 kg CO2 eq per kg). Additionally, fiberglass is very difficult to recycle, and carbon fiber composites, while recyclable, require additional research to yield fibers suitable for reuse as turbine materials (as opposed to fibers so degraded that they are only suitable for downcycling). The development of bio-composite materials with sufficient mechanical properties aims to address these issues.
Bio-composite materials use natural fibers and fillers as reinforcement instead of synthetic glass or carbon fibers. Approaches vary from partial to complete replacement of synthetics, with varying levels of success. Plant-based natural fibers, while having an extremely low environmental impact, have structural shortcomings: their high cellulosic content and large number of oxygen reaction sites both cause problems in mechanical and thermal performance. As such, other natural fibers, such as moisture-resistant basalt, have become the focus of bio-composite research.
==== Research ====
Some polymer composites feature self-healing properties. Since the blades of the turbine form cracks from fatigue due to repetitive cyclic stresses, self-healing polymers are attractive for this application, because they can improve reliability and buffer various defects such as delamination. Embedding paraffin wax-coated copper wires in a fiber reinforced polymer creates a network of tubes. Using a catalyst, these tubes and dicyclopentadiene (DCPD) then react to form a thermosetting polymer, which repairs the cracks as they form in the material. As of 2019, this approach is not yet commercial.
Further improvement is possible through the use of carbon nanofibers (CNFs) in the blade coatings. A major problem in desert environments is erosion of the leading edges of blades by sand-laden wind, which increases roughness and decreases aerodynamic performance. The particle erosion resistance of fiber-reinforced polymers is poor when compared to metallic materials and elastomers. Replacing glass fiber with CNF on the composite surface greatly improves erosion resistance. CNFs provide good electrical conductivity (important for lightning strikes), high damping ratio, and good impact-friction resistance.
For wind turbines, especially those offshore, or in wet environments, base surface erosion also occurs. For example, in cold climates, ice can build up on the blades and increase roughness. At high speeds, this same erosion impact can occur from rainwater. A useful coating must have good adhesion, temperature tolerance, weather tolerance (to resist erosion from salt, rain, sand, etc.), mechanical strength, ultraviolet light tolerance, and have anti-icing and flame retardant properties. Along with this, the coating should be cheap and environmentally friendly.
Super-hydrophobic surfaces (SHS) cause water droplets to bead and roll off the blades. SHS prevents ice formation down to −25 °C, as it changes the ice formation process; specifically, small ice islands form on an SHS, as opposed to a large ice front. Further, due to the lowered contact area on the hydrophobic surface, aerodynamic forces on the blade allow these islands to glide off the blade, maintaining proper aerodynamics. SHS can be combined with heating elements to further prevent ice formation.
==== Lightning ====
Lightning damage over the course of a 25-year lifetime ranges from surface-level scorching and cracking of the laminate material to ruptures in the blade or full separation of the adhesives that hold the blade together. Lightning strikes are most commonly observed on the tips of the blades, especially in rainy weather, due to the embedded copper wiring. The most common countermeasure, especially in non-conducting blade materials like GFRPs and CFRPs, is to add lightning "arresters": metallic wires that ground the blade, bypassing the blades and gearbox entirely.
=== Blade repair ===
Wind turbine blades typically require repair after 2–5 years. Notable causes of blade damage include manufacturing defects, transportation, assembly, installation, lightning strikes, environmental wear, thermal cycling, leading-edge erosion, and fatigue. Because of the blades' composite materials and function, repair techniques found in aerospace applications often apply or provide a basis for basic repairs.
Depending on the nature of the damage, the approach of blade repairs can vary. Erosion repair and protection includes coatings, tapes, or shields. Structural repairs require bonding or fastening new material to the damaged area. Nonstructural matrix cracks and delaminations require fills and seals or resin injections. If ignored, minor cracks or delaminations can propagate and create structural damage.
Four zones have been identified with their respective repair needs:
Zone 1: the blade's leading edge. Requires erosion or crack repair.
Zone 2: close to the tip but behind the leading edge. Requires aeroelastic semi-structural repair.
Zone 3: the middle area behind the leading edge. Requires erosion repair.
Zone 4: the root and near-root of the blade. Requires semi-structural or structural repairs.
After the past few decades of rapid wind expansion across the globe, wind turbine fleets are aging, and operation and maintenance (O&M) costs increase as turbines approach their end of life. If damage to blades is not caught in time, power production and blade lifespan decrease. Estimates project that 20–25% of the total levelized cost per kWh produced stems from blade O&M alone.
=== Blade recycling ===
The Global Wind Energy Council (GWEC) predicted that wind energy will supply 28.5% of global energy by 2030. This requires a newer and larger fleet of more efficient turbines and the corresponding decommissioning of older ones. Based on a European Wind Energy Association study, in 2010 between 110 and 140 kilotonnes of composites were consumed to manufacture blades. The majority of the blade material ends up as waste and requires recycling or downcycling. As of 2020, most end-of-use blades are stored or sent to landfills rather than recycled. Recent studies predict that nearly 52,000 tons of turbine blades will be decommissioned every year until 2030. Typically, glass-fiber-reinforced polymers (GFRPs) comprise around 70% of the laminate material in the blade. GFRPs are not combustible and so hinder the incineration of combustible materials. The following methods are the major end-of-life (EOL) paths for turbine blades, varying in whether individual fibers are recovered and in the requisite temperature/catalysts.
Mechanical recycling: This method does not recover individual fibers. Initial processes involve shredding, crushing, or milling. The crushed pieces are then separated into fiber-rich and resin-rich fractions, which are ultimately incorporated into new composites either as fillers or reinforcements.
Pyrolysis: Thermal decomposition of the composites recovers individual fibers. For pyrolysis, the material is heated up to 500 °C in an environment without oxygen, causing it to break down into lower-weight organic substances and gaseous products. The glass fibers generally lose 50% of their strength and can be downcycled for fiber reinforcement applications in paints or concrete. This can recover up to approximately 19 MJ/kg at relatively high cost. It requires mechanical pre-processing, similar to that involved in purely mechanical recycling.
Solvolysis: The polymer matrix undergoes chemical decomposition via solvents including, but not limited to, acetone, nitric acid, ammonia, and alcohols. Advantages of solvolysis include a lower operating temperature compared to pyrolysis and its ability to yield fibers with favorable surface and mechanical properties. Solvolysis has a significant number of operational considerations, including solvent flow, solvent diffusion, and phase transitions, that depend heavily on the polymer structure of the blades, which is notably heterogeneous and contains relatively high numbers of defects and voids. As such, current research focuses on computational modeling of solvolysis to allow for more complete and efficient recycling.
Direct structural recycling of composites: The general idea is to reuse the composite as is, without altering its chemical properties, which can be achieved especially for larger composite material parts by partitioning them into pieces that can be used directly in other applications.
Start-up company Global Fiberglass Solutions claimed in 2020 that it had a method to process blades into pellets and fiber boards for use in flooring and walls. The company started producing samples at a plant in Sweetwater, Texas.
== Tower ==
=== Height ===
Wind velocities increase at higher altitudes due to surface aerodynamic drag (by land or water surfaces) and air viscosity. The variation in velocity with altitude, called wind shear, is most dramatic near the surface. Typically, the variation follows the wind profile power law, which predicts that wind speed rises proportionally to the seventh root of altitude. Doubling the altitude of a turbine, then, increases the expected wind speeds by 10% and the expected power by 34%. To avoid buckling, doubling the tower height generally requires doubling the tower diameter, increasing the amount of material by a factor of at least four.
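The quoted 10% and 34% figures follow directly from the 1/7-power profile; a short check (the 8 m/s reference speed is an arbitrary assumption):

```python
def wind_speed_at(h, v_ref, h_ref, alpha=1.0 / 7.0):
    """Wind profile power law: v(h) = v_ref * (h / h_ref)^alpha."""
    return v_ref * (h / h_ref) ** alpha

v_50 = 8.0                               # assumed reference speed at 50 m
v_100 = wind_speed_at(100.0, v_50, 50.0)  # doubled altitude
speed_gain = v_100 / v_50 - 1.0           # 2^(1/7) - 1 ≈ 0.104, the ~10% figure
power_gain = (v_100 / v_50) ** 3 - 1.0    # power ∝ v^3 → ≈ 0.346, the ~34% figure
```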
During the night, or when the atmosphere becomes stable, wind speed close to the ground usually subsides, whereas at turbine hub altitude it does not decrease as much or may even increase. As a result, the wind speed is higher and a turbine will produce more power than expected from the 1/7 power law: doubling the altitude may increase wind speed by 20% to 60%. A stable atmosphere is caused by radiative cooling of the surface and is common in a temperate climate: it usually occurs when there is a (partly) clear sky at night. When the high-altitude wind is strong (a 10-meter wind speed higher than approximately 6 to 7 m/s), the stable atmosphere is disrupted by friction turbulence and the atmosphere turns neutral. A daytime atmosphere is either neutral (no net radiation; usually with strong winds and heavy clouding) or unstable (rising air because of ground heating by the sun). The 1/7 power law is a good approximation of the wind profile. Indiana was rated as having a wind capacity of 30,000 MW, but raising the expected turbine height from 50 m to 70 m increased the estimated capacity to 40,000 MW, and it could double again at 100 m.
For HAWTs, tower heights approximately two to three times the blade length balance material costs of the tower against better utilisation of the more expensive active components.
Road restrictions make transporting towers with a diameter of more than 4.3 m difficult. Swedish analyses showed that the bottom wing tip must be at least 30 m above the tree tops. A 3 MW turbine may increase output from 5,000 MWh to 7,700 MWh per year by raising hub height from 80 to 125 meters. A tower profile made of connected shells rather than cylinders can have a larger diameter and still be transportable. A 100 m prototype tower with TC-bolted 18 mm 'plank' shells at the wind turbine test center Høvsøre in Denmark was certified by Det Norske Veritas, with a Siemens nacelle. Shell elements can be shipped in standard 12 m shipping containers.
As of 2003, typical modern wind turbine installations used 65 metres (213 ft) towers. Height is typically limited by the availability of cranes. This led to proposals for "partially self-erecting wind turbines" that, for a given available crane, allow taller towers that locate a turbine in stronger and steadier winds, and "self-erecting wind turbines" that could be installed without cranes.
=== Materials ===
Currently, the majority of wind turbines are supported by conical tubular steel towers. These towers represent 30–65% of the turbine weight and therefore account for a large percentage of transport costs. Lighter tower materials could reduce the overall transport and construction cost, as long as stability is maintained. Higher-grade S500 steel costs 20–25% more than S355 steel (standard structural steel), but it requires 30% less material because of its improved strength. Therefore, building wind turbine towers from S500 steel offers savings in both weight and cost.
Another disadvantage of conical steel towers is meeting the requirements of wind turbines taller than 90 meters. High performance concrete may increase tower height and increase lifetime. A hybrid of prestressed concrete and steel improves performance over standard tubular steel at tower heights of 120 meters. Concrete also allows small precast sections to be assembled on site. One downside of concrete towers is the higher CO2 emissions during concrete production. However, the overall environmental impact should be positive if concrete towers can double the wind turbine lifetime.
Wood is another alternative: a 100-metre tower supporting a 1.5 MW turbine operates in Germany. The wood tower shares the same transportation benefits of the segmented steel shell tower, but without the steel. A 2 MW turbine on a wooden tower started operating in Sweden in 2023.
Another approach is to form the tower on site via spiral welding rolled sheet steel. Towers of any height and diameter can be formed this way, eliminating restrictions driven by transport requirements. A factory can be built in one month. The developer claims 80% labor savings over conventional approaches.
== Grid connection ==
Grid-connected wind turbines, until the 1970s, were fixed-speed. As recently as 2003, nearly all grid-connected wind turbines operated at constant speed (synchronous generators) or within a few percent of constant speed (induction generators). As of 2011, many turbines used fixed-speed induction generators (FSIG). By then, most newly connected turbines were variable speed.
Early control systems were designed for peak power extraction, also called maximum power point tracking—they attempted to pull the maximum power from a given wind turbine under the current wind conditions. More recent systems deliberately pull less than maximum power in most circumstances, in order to provide other benefits, which include:
Spinning reserves to produce more power when needed—such as when some other generator drops from the grid
Variable-speed turbines can transiently produce slightly more power than wind conditions support by storing some energy as kinetic energy (accelerating during brief gusts) and later converting that kinetic energy to electric energy (decelerating), either when more power is needed or to compensate for variable wind speeds.
Damping electrical subsynchronous resonances in the grid
Damping mechanical tower resonances
The generator produces alternating current (AC). The most common method in large modern turbines is to use a doubly fed induction generator directly connected to the grid. Some turbines drive an AC/AC converter—which converts the AC to direct current (DC) with a rectifier and then back to AC with an inverter—in order to match grid frequency and phase.
A useful technique to connect a PMSG (Permanent Magnet Synchronous Generator) to the grid is via a back-to-back converter. Control schemes can achieve unity power factor in the connection to the grid. In that way the wind turbine does not consume reactive power, which is the most common problem with turbines that use induction machines. This leads to a more stable power system. Moreover, with different control schemes a PMSG turbine can provide or consume reactive power. So, it can work as a dynamic capacitor/inductor bank to help with grid stability.
The diagram shows the control scheme for a unity power factor:
Reactive power regulation consists of one PI controller that achieves operation with unity power factor (i.e., Qgrid = 0). IdN has to be regulated to reach zero at steady state (IdNref = 0).
The complete system of the grid side converter and the cascaded PI controller loops is displayed in the figure.
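A minimal discrete-time sketch of the reactive-current PI loop described above (the gains, time step, and first-order plant response are hypothetical stand-ins, not values from a real converter):

```python
class PI:
    """Discrete PI controller driving an error toward zero."""
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def step(self, error):
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral

# Regulate the d-axis grid-side current IdN to its reference IdNref = 0,
# so the converter exchanges no reactive power (unity power factor).
ctrl = PI(kp=2.0, ki=50.0, dt=1e-3)
idn = 1.0                      # initial reactive-current component (per unit)
for _ in range(2000):
    u = ctrl.step(0.0 - idn)   # error = IdNref - IdN
    idn += 0.05 * (u - idn)    # toy first-order plant response
# idn has now settled near zero: unity power factor at steady state
```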
== Construction ==
As wind turbine usage has increased, so have companies that assist in the planning and construction of wind turbines. Most often, turbine parts are shipped via sea or rail, and then via truck to the installation site. Due to the massive size of the components involved, companies usually need to obtain transportation permits and ensure that the chosen trucking route is free of potential obstacles such as overpasses, bridges, and narrow roads. Groups known as "reconnaissance teams" will scout the way up to a year in advance as they identify problematic roads, cut down trees, and relocate utility poles. Turbine blades continue to increase in size, sometimes necessitating brand new logistical plans, as previously used routes may not allow a larger blade. Specialized vehicles known as Schnabel trailers are custom-designed to load and transport turbine sections: tower sections can be loaded without a crane and the rear end of the trailer is steerable, allowing for easier maneuvering. Drivers must be specially trained.
=== Foundations ===
Wind turbines, by their nature, are very tall, slender structures, and this can cause a number of issues when the structural design of the foundations are considered. The foundations for a conventional engineering structure are designed mainly to transfer the vertical load (dead weight) to the ground, generally allowing comparatively unsophisticated arrangement to be used. However, in the case of wind turbines, the force of the wind's interaction with the rotor at the top of the tower creates a strong tendency to tip the wind turbine over. This loading regime causes large moment loads to be applied to the foundations of a wind turbine. As a result, considerable attention needs to be given when designing the footings to ensure that the foundation will resist this tipping tendency.
One of the most common foundations for offshore wind turbines is the monopile, a single large-diameter (4 to 6 metres) tubular steel pile driven to a depth of 5-6 times the diameter of the pile into the seabed. The cohesion of the soil, and friction between the pile and the soil provide the necessary structural support for the wind turbine.
In onshore turbines the most common type of foundation is a gravity foundation, where a large mass of concrete spread out over a large area is used to resist the turbine loads. Wind turbine size and type, wind conditions, and soil conditions at the site are all determining factors in the design of the foundation. Prestressed piles and rock anchors are alternative foundation designs that use much less concrete and steel.
== Costs ==
A wind turbine is a complex and integrated system. Structural elements comprise the majority of the weight and cost. All parts of the structure must be inexpensive, lightweight, durable, and manufacturable, surviving variable loading and environmental conditions. Turbine systems with fewer failures require less maintenance, are lighter and last longer, reducing costs.
The major parts of a turbine divide as: tower 22%, blades 18%, gearbox 14%, generator 8%.
== Specification ==
Turbine design specifications contain a power curve and availability guarantee. The wind resource assessment makes it possible to calculate commercial viability. Typical operating temperature range is −20 to 40 °C (−4 to 104 °F). In areas with extreme climate (like Inner Mongolia or Rajasthan) climate-specific versions are required.
Wind turbines can be designed and validated according to IEC 61400 standards.
RDS-PP (Reference Designation System for Power Plants) is a standardized system used worldwide to create a structured hierarchy of wind turbine components. This facilitates turbine maintenance and operation-cost control, and is used during all stages of a turbine's creation.
== See also ==
Brushless wound-rotor doubly fed electric machine
Floating wind turbine
Vertical-axis wind turbine
Wind-turbine aerodynamics
Copper in renewable energy, section Wind
Unconventional wind turbines
== References ==
== Further reading ==
Robert Gasch, Jochen Twele (ed.), Wind power plants. Fundamentals, design, construction and operation, Springer 2012 ISBN 978-3-642-22937-4.
Paul Gipe, ed. (2004). Wind Power: Renewable Energy for Home, Farm, and Business (second ed.). Chelsea Green Publishing Company. ISBN 978-1-931498-14-2.
Erich Hau, Wind turbines: fundamentals, technologies, application, economics. Springer, 2013, ISBN 978-3-642-27150-2 (preview on Google Books)
Siegfried Heier, Grid integration of wind energy conversion systems Wiley 2006, ISBN 978-0-470-86899-7.
Peter Jamieson, Innovation in Wind Turbine Design. Wiley & Sons 2011, ISBN 978-0-470-69981-2
David Spera (ed.), Wind Turbine Technology: Fundamental Concepts in Wind Turbine Engineering, Second Edition (2009), ASME Press, ISBN 9780791802601
Alois Schaffarczyk (ed.), Understanding wind power technology, Wiley & Sons 2014, ISBN 978-1-118-64751-6.
Wei Tong, ed. (2010). Wind Power Generation and Wind Turbine Design. WIT Press. ISBN 978-1-84564-205-1.
Hermann-Josef Wagner, Jyotirmay Mathur, Introduction to wind energy systems. Basics, technology and operation. Springer 2013, ISBN 978-3-642-32975-3.
== External links ==
Offshore Wind Turbines - Installation and Operation of Turbines
Department of Energy- Energy Efficiency and Renewable Energy
RenewableUK - Wind Energy Reference and FAQs
How a wind turbine is made
Compressible flow (or gas dynamics) is the branch of fluid mechanics that deals with flows having significant changes in fluid density. While all flows are compressible, flows are usually treated as being incompressible when the Mach number (the ratio of the speed of the flow to the speed of sound) is smaller than 0.3 (since the density change due to velocity is about 5% in that case). The study of compressible flow is relevant to high-speed aircraft, jet engines, rocket motors, high-speed entry into a planetary atmosphere, gas pipelines, commercial applications such as abrasive blasting, and many other fields.
== History ==
The study of gas dynamics is often associated with the flight of modern high-speed aircraft and atmospheric reentry of space-exploration vehicles; however, its origins lie with simpler machines. At the beginning of the 19th century, investigation into the behaviour of fired bullets led to improvement in the accuracy and capabilities of guns and artillery. As the century progressed, inventors such as Gustaf de Laval advanced the field, while researchers such as Ernst Mach sought to understand the physical phenomena involved through experimentation.
At the beginning of the 20th century, the focus of gas dynamics research shifted to what would eventually become the aerospace industry. Ludwig Prandtl and his students proposed important concepts ranging from the boundary layer to supersonic shock waves, supersonic wind tunnels, and supersonic nozzle design. Theodore von Kármán, a student of Prandtl, continued to improve the understanding of supersonic flow. Other notable figures (Meyer, Luigi Crocco, and Ascher Shapiro) also contributed significantly to the principles considered fundamental to the study of modern gas dynamics. Many others also contributed to this field.
Accompanying the improved conceptual understanding of gas dynamics in the early 20th century was a public misconception that there existed a barrier to the attainable speed of aircraft, commonly referred to as the "sound barrier." In truth, the barrier to supersonic flight was merely a technological one, although it was a stubborn barrier to overcome. Amongst other factors, conventional aerofoils saw a dramatic increase in drag coefficient when the flow approached the speed of sound. Overcoming the larger drag proved difficult with contemporary designs, thus the perception of a sound barrier. However, aircraft design progressed sufficiently to produce the Bell X-1. Piloted by Chuck Yeager, the X-1 officially achieved supersonic speed in October 1947.
Historically, two parallel paths of research have been followed in order to further gas dynamics knowledge. Experimental gas dynamics undertakes wind tunnel model experiments and experiments in shock tubes and ballistic ranges, with the use of optical techniques to document the findings. Theoretical gas dynamics considers the equations of motion applied to a variable-density gas, and their solutions. Much of basic gas dynamics is analytical, but in the modern era computational fluid dynamics (CFD) applies computing power to solve the otherwise-intractable nonlinear partial differential equations of compressible flow for specific geometries and flow characteristics.
== Introductory concepts ==
There are several important assumptions involved in the underlying theory of compressible flow. All fluids are composed of molecules, but tracking a huge number of individual molecules in a flow (for example at atmospheric pressure) is unnecessary. Instead, the continuum assumption allows us to consider a flowing gas as a continuous substance except at low densities. This assumption provides a huge simplification which is accurate for most gas-dynamic problems. Only in the low-density realm of rarefied gas dynamics does the motion of individual molecules become important.
A related assumption is the no-slip condition where the flow velocity at a solid surface is presumed equal to the velocity of the surface itself, which is a direct consequence of assuming continuum flow. The no-slip condition implies that the flow is viscous, and as a result a boundary layer forms on bodies traveling through the air at high speeds, much as it does in low-speed flow.
Most problems in incompressible flow involve only two unknowns: pressure and velocity, which are typically found by solving the two equations that describe conservation of mass and of linear momentum, with the fluid density presumed constant. In compressible flow, however, the gas density and temperature also become variables. This requires two more equations in order to solve compressible-flow problems: an equation of state for the gas and a conservation of energy equation. For the majority of gas-dynamic problems, the simple ideal gas law is the appropriate state equation. Otherwise, more complex equations of state must be considered, and the field of so-called non-ideal compressible fluid dynamics (NICFD) is established.
Fluid dynamics problems have two overall types of reference frames, called Lagrangian and Eulerian (see Joseph-Louis Lagrange and Leonhard Euler). The Lagrangian approach follows a fluid mass of fixed identity as it moves through a flowfield. The Eulerian reference frame, in contrast, does not move with the fluid. Rather it is a fixed frame or control volume that fluid flows through. The Eulerian frame is most useful in a majority of compressible flow problems, but requires that the equations of motion be written in a compatible format.
Finally, although space is known to have 3 dimensions, an important simplification can be had in describing gas dynamics mathematically if only one spatial dimension is of primary importance, hence 1-dimensional flow is assumed. This works well in duct, nozzle, and diffuser flows where the flow properties change mainly in the flow direction rather than perpendicular to the flow. However, an important class of compressible flows, including the external flow over bodies traveling at high speed, requires at least a 2-dimensional treatment. When all 3 spatial dimensions and perhaps the time dimension as well are important, we often resort to computerized solutions of the governing equations.
== Mach number, wave motion, and sonic speed ==
The Mach number (M) is defined as the ratio of the speed of an object (or of a flow) to the speed of sound. For instance, in air at room temperature, the speed of sound is about 343 m/s (1,130 ft/s). M can range from 0 to ∞, but this broad range falls naturally into several flow regimes. These regimes are subsonic, transonic, supersonic, hypersonic, and hypervelocity flow. The figure below illustrates the Mach number "spectrum" of these flow regimes.
These flow regimes are not chosen arbitrarily, but rather arise naturally from the strong mathematical background that underlies compressible flow (see the cited reference textbooks). At very slow flow speeds the speed of sound is so much faster that it is mathematically ignored, and the Mach number is irrelevant. Once the speed of the flow approaches the speed of sound, however, the Mach number becomes all-important, and shock waves begin to appear. Thus the transonic regime is described by a different (and much more complex) mathematical treatment. In the supersonic regime the flow is dominated by wave motion at oblique angles similar to the Mach angle. Above about Mach 5, these wave angles grow so small that a different mathematical approach is required, defining the hypersonic speed regime. Finally, at speeds comparable to that of planetary atmospheric entry from orbit, in the range of several km/s, the speed of sound is now comparatively so slow that it is once again mathematically ignored in the hypervelocity regime.
As an object accelerates from subsonic toward supersonic speed in a gas, different types of wave phenomena occur. To illustrate these changes, the next figure shows a stationary point (M = 0) that emits symmetric sound waves. The speed of sound is the same in all directions in a uniform fluid, so these waves are simply concentric spheres. As the sound-generating point begins to accelerate, the sound waves "bunch up" in the direction of motion and "stretch out" in the opposite direction. When the point reaches sonic speed (M = 1), it travels at the same speed as the sound waves it creates. Therefore, an infinite number of these sound waves "pile up" ahead of the point, forming a shock wave. Upon achieving supersonic flow, the point is moving so fast that it continuously leaves its sound waves behind. When this occurs, the locus of these waves trailing behind the point creates an angle known as the Mach wave angle or Mach angle, μ:
{\displaystyle \mu =\arcsin \left({\frac {a}{V}}\right)=\arcsin \left({\frac {1}{M}}\right)}
where {\displaystyle a} represents the speed of sound in the gas and {\displaystyle V} represents the velocity of the object. Although named for Austrian physicist Ernst Mach, these oblique waves were first discovered by Christian Doppler.
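The Mach-angle relation is easy to evaluate numerically; a minimal Python sketch (the function name is illustrative):

```python
import math

def mach_angle_deg(mach):
    """Mach wave angle mu = arcsin(1/M), defined only for sonic or supersonic motion."""
    if mach < 1.0:
        raise ValueError("Mach waves only form for M >= 1")
    return math.degrees(math.asin(1.0 / mach))

# At M = 2 the waves trail at 30 degrees; at exactly M = 1 they pile up at 90 degrees.
print(mach_angle_deg(2.0), mach_angle_deg(1.0))
```

Note how the angle shrinks as M grows, which is exactly the behaviour that motivates the hypersonic regime above Mach 5.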
== One-dimensional flow ==
One-dimensional (1-D) flow refers to flow of gas through a duct or channel in which the flow parameters are assumed to change significantly along only one spatial dimension, namely, the duct length. In analysing the 1-D channel flow, a number of assumptions are made:
Ratio of duct length to width (L/D) is ≤ about 5 (in order to neglect friction and heat transfer),
Flow is steady (no change with time),
Flow is isentropic (i.e. a reversible adiabatic process),
Ideal gas law applies (i.e. P = ρRT).
=== Converging-diverging Laval nozzles ===
As the speed of a flow accelerates from the subsonic to the supersonic regime, the physics of nozzle and diffuser flows is altered. Using the conservation laws of fluid dynamics and thermodynamics, the following relationship for channel flow is developed (combined mass and momentum conservation):
{\displaystyle dP\left(1-M^{2}\right)=\rho V^{2}\left({\frac {dA}{A}}\right)},
where dP is the differential change in pressure, M is the Mach number, ρ is the density of the gas, V is the velocity of the flow, A is the area of the duct, and dA is the change in area of the duct. This equation states that, for subsonic flow, a converging duct (dA < 0) increases the velocity of the flow and a diverging duct (dA > 0) decreases velocity of the flow. For supersonic flow, the opposite occurs due to the change of sign of (1 − M2). A converging duct (dA < 0) now decreases the velocity of the flow and a diverging duct (dA > 0) increases the velocity of the flow. At Mach = 1, a special case occurs in which the duct area must be either a maximum or minimum. For practical purposes, only a minimum area can accelerate flows to Mach 1 and beyond. See table of sub-supersonic diffusers and nozzles.
Therefore, to accelerate a flow to Mach 1, a nozzle must be designed to converge to a minimum cross-sectional area and then expand. This type of nozzle – the converging-diverging nozzle – is called a de Laval nozzle after Gustaf de Laval, who invented it. As subsonic flow enters the converging duct and the area decreases, the flow accelerates. Upon reaching the minimum area of the duct, also known as the throat of the nozzle, the flow can reach Mach 1. If the speed of the flow is to continue to increase, its density must decrease in order to obey conservation of mass. To achieve this decrease in density, the flow must expand, and to do so, the flow must pass through a diverging duct. See image of de Laval Nozzle.
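The area-velocity argument above leads to the standard isentropic area ratio A/A* (not printed in this section), which quantifies how much a duct must diverge past the throat to reach a given supersonic Mach number. A minimal sketch, assuming a calorically perfect gas:

```python
def area_ratio(M, gamma=1.4):
    """Isentropic area ratio A/A* as a function of Mach number (perfect gas)."""
    term = (2.0 / (gamma + 1.0)) * (1.0 + 0.5 * (gamma - 1.0) * M * M)
    return (1.0 / M) * term ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))

# The ratio is exactly 1 at the throat (M = 1) and grows on both sides of it,
# so the same area ratio corresponds to one subsonic and one supersonic solution.
print(area_ratio(1.0), area_ratio(0.5), area_ratio(2.0))
```

For air at M = 2 the ratio is 1.6875: the exit plane must have nearly 69% more area than the throat.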
=== Maximum achievable velocity of a gas ===
Ultimately, because of the energy conservation law, a gas is limited to a certain maximum velocity based on its energy content. The maximum velocity, Vmax, that a gas can attain is:
{\displaystyle V_{\text{max}}={\sqrt {2c_{p}T_{t}}}}
where cp is the specific heat of the gas and Tt is the stagnation temperature of the flow.
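For example, the limit velocity follows directly from the stagnation conditions (illustrative Python; the cp and Tt values below are assumed, roughly typical for air):

```python
import math

def v_max(cp, Tt):
    """Maximum attainable gas velocity V_max = sqrt(2 * cp * Tt)."""
    return math.sqrt(2.0 * cp * Tt)

# Air with cp ~ 1005 J/(kg K) at a stagnation temperature of 300 K:
print(v_max(1005.0, 300.0))  # roughly 777 m/s
```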
=== Isentropic flow Mach number relationships ===
Using conservation laws and thermodynamics, a number of relationships of the form
{\displaystyle {\frac {{\text{property}}_{1}}{{\text{property}}_{2}}}=f(M,\gamma )}
can be obtained, where M is the Mach number and γ is the ratio of specific heats (1.4 for air). See table of isentropic flow Mach number relationships.
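Two of the most used relationships of this form are the stagnation-to-static temperature and pressure ratios; a minimal sketch for a perfect gas:

```python
def stagnation_ratios(M, gamma=1.4):
    """Return (Tt/T, pt/p) for isentropic flow of a perfect gas at Mach M."""
    t_ratio = 1.0 + 0.5 * (gamma - 1.0) * M * M   # Tt/T
    p_ratio = t_ratio ** (gamma / (gamma - 1.0))  # pt/p, isentropic
    return t_ratio, p_ratio

# At M = 1 in air, Tt/T = 1.2 and pt/p ~ 1.893 (the choking pressure ratio).
print(stagnation_ratios(1.0))
```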
=== Achieving supersonic flow ===
As previously mentioned, in order for a flow to become supersonic, it must pass through a duct with a minimum area, or sonic throat. Additionally, an overall pressure ratio, Pt/Pb, of approximately 2 is needed to attain Mach 1. Once it has reached Mach 1, the flow at the throat is said to be choked. Because changes downstream can only move upstream at sonic speed, the mass flow through the nozzle cannot be affected by changes in downstream conditions after the flow is choked.
=== Non-isentropic 1D channel flow of a gas - normal shock waves ===
Normal shock waves are shock waves that are perpendicular to the local flow direction. These shock waves occur when pressure waves build up and coalesce into an extremely thin shockwave that converts kinetic energy into thermal energy. The waves thus overtake and reinforce one another, forming a finite shock wave from an infinite series of infinitesimal sound waves. Because the change of state across the shock is highly irreversible, entropy increases across the shock. When analysing a normal shock wave, one-dimensional, steady, and adiabatic flow of a perfect gas is assumed. Stagnation temperature and stagnation enthalpy are the same upstream and downstream of the shock.
Normal shock waves can be easily analysed in either of two reference frames: the standing normal shock and the moving shock. The flow before a normal shock wave must be supersonic, and the flow after a normal shock must be subsonic. The Rankine-Hugoniot equations are used to solve for the flow conditions.
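For a perfect gas, the Rankine-Hugoniot relations reduce to closed-form expressions for the downstream Mach number and the static pressure jump. These are the standard textbook forms, sketched here as an illustration:

```python
import math

def normal_shock(M1, gamma=1.4):
    """Downstream Mach number and static pressure ratio across a standing normal shock."""
    if M1 <= 1.0:
        raise ValueError("flow ahead of a normal shock must be supersonic")
    M2 = math.sqrt((1.0 + 0.5 * (gamma - 1.0) * M1 * M1)
                   / (gamma * M1 * M1 - 0.5 * (gamma - 1.0)))
    p2_p1 = 1.0 + 2.0 * gamma / (gamma + 1.0) * (M1 * M1 - 1.0)
    return M2, p2_p1

# A Mach 2 normal shock in air leaves M2 ~ 0.577 and a 4.5x static pressure rise.
print(normal_shock(2.0))
```

Consistent with the text, M2 is always subsonic and the pressure ratio always exceeds 1, reflecting the irreversible compression across the shock.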
== Two-dimensional flow ==
Although one-dimensional flow can be directly analysed, it is merely a specialized case of two-dimensional flow. It follows that one of the defining phenomena of one-dimensional flow, a normal shock, is likewise only a special case of a larger class of oblique shocks. Further, the name "normal" is with respect to geometry rather than frequency of occurrence. Oblique shocks are much more common in applications such as: aircraft inlet design, objects in supersonic flight, and (at a more fundamental level) supersonic nozzles and diffusers. Depending on the flow conditions, an oblique shock can either be attached to the flow or detached from the flow in the form of a bow shock.
=== Oblique shock waves ===
Oblique shock waves are similar to normal shock waves, but they occur at angles less than 90° with the direction of flow. When a disturbance is introduced to the flow at a nonzero angle (δ), the flow must respond to the changing boundary conditions. Thus an oblique shock is formed, resulting in a change in the direction of the flow.
==== Shock polar diagram ====
Based on the level of flow deflection (δ), oblique shocks are characterized as either strong or weak. Strong shocks are characterized by larger deflection and more entropy loss across the shock, with weak shocks as the opposite. To gain cursory insight into the differences between these shocks, a shock polar diagram can be used. With the static temperature after the shock, T*, known, the speed of sound after the shock is defined as
{\displaystyle a^{*}={\sqrt {\gamma RT^{*}}}}
with R as the gas constant and γ as the specific heat ratio. The Mach number can be broken into Cartesian coordinates
{\displaystyle {\begin{aligned}M_{2x}^{*}&={\frac {V_{x}}{a^{*}}}\\M_{2y}^{*}&={\frac {V_{y}}{a^{*}}}\end{aligned}}}
with Vx and Vy as the x and y-components of the fluid velocity V. With the Mach number before the shock given, a locus of conditions can be specified. At some δmax, the flow transitions from a strong to weak oblique shock. With δ = 0°, a normal shock is produced at the limit of the strong oblique shock and the Mach wave is produced at the limit of the weak shock wave.
==== Oblique shock reflection ====
Due to the inclination of the shock, after an oblique shock is created, it can interact with a boundary in three different manners, two of which are explained below.
===== Solid boundary =====
Incoming flow is first turned by angle δ with respect to the flow. This shockwave is reflected off the solid boundary, and the flow is turned by – δ to again be parallel with the boundary. Each progressive shock wave is weaker and the wave angle is increased.
===== Irregular reflection =====
An irregular reflection is much like the case described above, with the caveat that δ is larger than the maximum allowable turning angle. Thus a detached shock is formed and a more complicated reflection known as Mach reflection occurs.
=== Prandtl–Meyer fans ===
Prandtl–Meyer fans occur as both compression and expansion fans. A Prandtl–Meyer fan can also meet a boundary, which may be either free (flowing) or solid, and the fan reflects differently in each case. When a wave hits a solid surface the resulting fan returns as one from the opposite family, while when one hits a free boundary the fan returns as a fan of the opposite type.
==== Prandtl–Meyer expansion fans ====
To this point, the only flow phenomena that have been discussed are shock waves, which slow the flow and increase its entropy. It is possible to accelerate supersonic flow in what has been termed a Prandtl–Meyer expansion fan, after Ludwig Prandtl and Theodore Meyer. The mechanism for the expansion is shown in the figure below.
As opposed to the flow encountering an inclined obstruction and forming an oblique shock, the flow expands around a convex corner and forms an expansion fan through a series of isentropic Mach waves. The expansion "fan" is composed of Mach waves that span from the initial Mach angle to the final Mach angle. Flow can expand around either a sharp or rounded corner equally, as the increase in Mach number is proportional to only the convex angle of the passage (δ). The expansion corner that produces the Prandtl–Meyer fan can be sharp (as illustrated in the figure) or rounded. If the total turning angle is the same, then the P-M flow solution is also the same.
The Prandtl–Meyer expansion can be seen as the physical explanation of the operation of the Laval nozzle. The contour of the nozzle creates a smooth and continuous series of Prandtl–Meyer expansion waves.
==== Prandtl–Meyer compression fans ====
A Prandtl–Meyer compression is the opposite phenomenon to a Prandtl–Meyer expansion. If the flow is gradually turned through an angle of δ, a compression fan can be formed. This fan is a series of Mach waves that eventually coalesce into an oblique shock. Because the flow is defined by an isentropic region (flow that travels through the fan) and a non-isentropic region (flow that travels through the oblique shock), a slip line results between the two flow regions.
== Applications ==
=== Supersonic wind tunnels ===
Supersonic wind tunnels are used for testing and research in supersonic flows, approximately over the Mach number range of 1.2 to 5. The operating principle behind the wind tunnel is that a large pressure difference is maintained upstream to downstream, driving the flow.
Wind tunnels can be divided into two categories: continuous-operating and intermittent-operating wind tunnels. Continuous-operating supersonic wind tunnels require an independent electrical power source whose capacity increases drastically with the size of the test section. Intermittent supersonic wind tunnels are less expensive in that they store electrical energy over an extended period of time, then discharge that energy over a series of brief tests. The difference between these two is analogous to the comparison between a battery and a capacitor.
Blowdown type supersonic wind tunnels offer high Reynolds number, a small storage tank, and readily available dry air. However, they cause a high pressure hazard, result in difficulty holding a constant stagnation pressure, and are noisy during operation.
Indraft supersonic wind tunnels are not associated with a pressure hazard, allow a constant stagnation pressure, and are relatively quiet. Unfortunately, they have a limited range for the Reynolds number of the flow and require a large vacuum tank.
There is no dispute that knowledge is gained through research and testing in supersonic wind tunnels; however, the facilities often require vast amounts of power to maintain the large pressure ratios needed for testing conditions. For example, the Arnold Engineering Development Complex has the largest supersonic wind tunnel in the world, and operating it requires the power needed to light a small city. For this reason, large wind tunnels are becoming less common at universities.
=== Supersonic aircraft inlets ===
Perhaps the most common requirement for oblique shocks is in supersonic aircraft inlets for speeds greater than about Mach 2 (the F-16 has a maximum speed of Mach 2 but doesn't need an oblique shock intake). One purpose of the inlet is to minimize losses across the shocks as the incoming supersonic air slows down to subsonic before it enters the turbojet engine. This is accomplished with one or more oblique shocks followed by a very weak normal shock, with an upstream Mach number usually less than 1.4. The airflow through the intake has to be managed correctly over a wide speed range from zero to its maximum supersonic speed. This is done by varying the position of the intake surfaces.
Although variable geometry is required to achieve acceptable performance from take-off to speeds exceeding Mach 2 there is no one method to achieve it. For example, for a maximum speed of about Mach 3, the XB-70 used rectangular inlets with adjustable ramps and the SR-71 used circular inlets with adjustable inlet cone.
== See also ==
Incompressible flow
Conservation laws
Entropy
Equation of state
Gas kinetics
Heat capacity ratio
Isentropic nozzle flow
Lagrangian and Eulerian specification of the flow field
Prandtl–Meyer function
Thermodynamics especially "Commonly Considered Thermodynamic Processes" and "Laws of Thermodynamics"
Non-ideal compressible fluid dynamics
== References ==
Liepmann, Hans W.; Roshko, A. (1957). Elements of Gasdynamics. Dover Publications. ISBN 0-486-41963-0.
Anderson, John D. Jr. (2003) [1982]. Modern Compressible Flow (3rd ed.). McGraw-Hill Science/Engineering/Math. ISBN 0-07-242443-5.
John, James E.; Keith, T. G. (2006) [1969]. Gas Dynamics (3rd ed.). Prentice Hall. ISBN 0-13-120668-0.
Oosthuizen, Patrick H.; Carscallen, W. E. (2013) [1997]. Introduction to Compressible Flow (2nd ed.). CRC Press. ISBN 978-1439877913.
Zucker, Robert D.; Biblarz, O. (2002) [1977]. Fundamentals of Gas Dynamics (2nd ed.). Wiley. ISBN 0471059676.
Shapiro, Ascher H. (1953). The Dynamics and Thermodynamics of Compressible Fluid Flow, Volume 1. Ronald Press Company. ISBN 978-0-471-06691-0.
Anderson, John D. Jr. (2000) [1989]. Hypersonic and High Temperature Gas Dynamics. AIAA. ISBN 1-56347-459-X.
== External links ==
NASA Beginner's Guide to Compressible Aerodynamics
Virginia Tech Compressible Flow Calculators
Given the problem of the aerodynamic design of the nose cone section of any vehicle or body meant to travel through a compressible fluid medium (such as a rocket or aircraft, missile, shell or bullet), an important problem is the determination of the nose cone geometrical shape for optimum performance. For many applications, such a task requires the definition of a solid of revolution shape that experiences minimal resistance to rapid motion through such a fluid medium.
== Nose cone shapes and equations ==
=== General dimensions ===
In all of the following nose cone shape equations, L is the overall length of the nose cone and R is the radius of the base of the nose cone. y is the radius at any point x, as x varies from 0, at the tip of the nose cone, to L. The equations define the two-dimensional profile of the nose shape. The full body of revolution of the nose cone is formed by rotating the profile around the centerline C⁄L. While the equations describe the "perfect" shape, practical nose cones are often blunted or truncated for manufacturing, aerodynamic, or thermodynamic reasons.
=== Conic ===
A very common nose-cone shape is a simple cone. This shape is often chosen for its ease of manufacture. More optimal, streamlined shapes (described below) are often much more difficult to create. The sides of a conic profile are straight lines, so the diameter equation is simply:
{\displaystyle y={xR \over L}}
Cones are sometimes defined by their half angle, φ:
{\displaystyle \phi =\arctan \left({R \over L}\right)}
and {\displaystyle y=x\tan(\phi )\;}
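The straight-sided conic profile is trivial to evaluate; a Python sketch of the two equations above:

```python
import math

def conic_radius(x, L, R):
    """Radius of a conic nose profile at station x (x = 0 at the tip, x = L at the base)."""
    return x * R / L

def cone_half_angle(L, R):
    """Half angle phi = arctan(R / L), in radians."""
    return math.atan(R / L)

# A cone 10 units long with base radius 2 has half its base radius at mid-length.
print(conic_radius(5.0, 10.0, 2.0))  # 1.0
```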
==== Spherically blunted conic ====
In practical applications such as re-entry vehicles, a conical nose is often blunted by capping it with a segment of a sphere. The tangency point where the sphere meets the cone can be found, using similar triangles, from:
{\displaystyle x_{t}={\frac {L^{2}}{R}}{\sqrt {\frac {r_{n}^{2}}{R^{2}+L^{2}}}}}
{\displaystyle y_{t}={\frac {x_{t}R}{L}}}
where rn is the radius of the spherical nose cap.
The center of the spherical nose cap, xo, can be found from:
{\displaystyle x_{o}=x_{t}+{\sqrt {r_{n}^{2}-y_{t}^{2}}}}
And the apex point, xa, can be found from:
{\displaystyle x_{a}=x_{o}-r_{n}}
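The four equations above combine into one small routine (an illustrative Python sketch; argument names follow the symbols in the text):

```python
import math

def blunted_conic(L, R, rn):
    """Tangency point (xt, yt), sphere center xo, and apex xa of a spherically blunted cone."""
    xt = (L * L / R) * math.sqrt(rn * rn / (R * R + L * L))  # tangency station
    yt = xt * R / L                                          # tangency radius
    xo = xt + math.sqrt(rn * rn - yt * yt)                   # center of the nose cap
    xa = xo - rn                                             # blunted apex
    return xt, yt, xo, xa

xt, yt, xo, xa = blunted_conic(L=10.0, R=2.0, rn=1.0)
print(xt, yt, xo, xa)
```

A useful sanity check: the tangency radius yt always lies on the cone's straight side (yt = xt R / L) and below the cap radius rn.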
=== Bi-conic ===
A bi-conic nose cone shape is simply a cone with length L1 stacked on top of a frustum of a cone (commonly known as a conical transition section shape) with length L2, where the base of the upper cone is equal in radius R1 to the top radius of the smaller frustum with base radius R2.
{\displaystyle L=L_{1}+L_{2}}
For {\displaystyle 0\leq x\leq L_{1}}:
{\displaystyle y={xR_{1} \over L_{1}}}
For
L
1
≤
x
≤
L
{\displaystyle L_{1}\leq x\leq L}
:
{\displaystyle y=R_{1}+{(x-L_{1})(R_{2}-R_{1}) \over L_{2}}}
Half angles:
{\displaystyle \phi _{1}=\arctan \left({R_{1} \over L_{1}}\right)}
and {\displaystyle y=x\tan(\phi _{1})\;}
{\displaystyle \phi _{2}=\arctan \left({R_{2}-R_{1} \over L_{2}}\right)}
and {\displaystyle y=R_{1}+(x-L_{1})\tan(\phi _{2})\;}
=== Tangent ogive ===
Next to a simple cone, the tangent ogive shape is the most familiar in hobby rocketry. The profile of this shape is formed by a segment of a circle such that the rocket body is tangent to the curve of the nose cone at its base, and the base is on the radius of the circle. The popularity of this shape is largely due to the ease of constructing its profile, as it is simply a circular section.
The radius of the circle that forms the ogive is called the ogive radius, ρ, and it is related to the length and base radius of the nose cone as expressed by the formula:
{\displaystyle \rho ={R^{2}+L^{2} \over 2R}}
The radius y at any point x, as x varies from 0 to L is:
{\displaystyle y={\sqrt {\rho ^{2}-(L-x)^{2}}}+R-\rho }
The nose cone length, L, must be less than or equal to ρ. If they are equal, then the shape is a hemisphere.
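Both formulas are easy to check numerically; a Python sketch, which also makes the tangency conditions visible (y = 0 at the tip, y = R at the base):

```python
import math

def ogive_rho(L, R):
    """Ogive radius of a tangent ogive with length L and base radius R."""
    return (R * R + L * L) / (2.0 * R)

def tangent_ogive_y(x, L, R):
    """Radius of a tangent ogive profile at station x in [0, L]."""
    rho = ogive_rho(L, R)
    return math.sqrt(rho * rho - (L - x) ** 2) + R - rho

# Sharp tip at x = 0 and tangent base at x = L:
print(tangent_ogive_y(0.0, 3.0, 1.0), tangent_ogive_y(3.0, 3.0, 1.0))
```

In the hemisphere limit L = R, `ogive_rho` returns exactly R, as the text states.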
==== Spherically blunted tangent ogive ====
A tangent ogive nose is often blunted by capping it with a segment of a sphere. The tangency point where the sphere meets the tangent ogive can be found from:
{\displaystyle {\begin{aligned}x_{o}&=L-{\sqrt {\left(\rho -r_{n}\right)^{2}-(\rho -R)^{2}}}\\y_{t}&={\frac {r_{n}(\rho -R)}{\rho -r_{n}}}\\x_{t}&=x_{o}-{\sqrt {r_{n}^{2}-y_{t}^{2}}}\end{aligned}}}
where rn is the radius and xo is the center of the spherical nose cap.
=== Secant ogive ===
The profile of this shape is also formed by a segment of a circle, but the base of the shape is not on the radius of the circle defined by the ogive radius. The rocket body will not be tangent to the curve of the nose at its base. The ogive radius ρ is not determined by R and L (as it is for a tangent ogive), but rather is one of the factors to be chosen to define the nose shape. If the chosen ogive radius of a secant ogive is greater than the ogive radius of a tangent ogive with the same R and L, then the resulting secant ogive appears as a tangent ogive with a portion of the base truncated.
{\displaystyle \rho >{R^{2}+L^{2} \over 2R}}
and {\displaystyle \alpha =\arccos \left({{\sqrt {L^{2}+R^{2}}} \over 2\rho }\right)-\arctan \left({R \over L}\right)}
Then the radius y at any point x as x varies from 0 to L is:
{\displaystyle y={\sqrt {\rho ^{2}-(\rho \cos(\alpha )-x)^{2}}}-\rho \sin(\alpha )}
If the chosen ρ is less than the tangent ogive ρ and greater than half the length of the nose cone, then the result will be a secant ogive that bulges out to a maximum diameter that is greater than the base diameter. A classic example of this shape is the nose cone of the Honest John.
{\displaystyle {\frac {L}{2}}<\rho <{R^{2}+L^{2} \over 2R}}
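The secant ogive profile can be evaluated directly from the two equations above (Python sketch; unlike the tangent ogive, ρ is supplied by the designer rather than computed from R and L):

```python
import math

def secant_ogive_y(x, L, R, rho):
    """Radius of a secant ogive at station x, for a designer-chosen ogive radius rho."""
    alpha = math.acos(math.sqrt(L * L + R * R) / (2.0 * rho)) - math.atan(R / L)
    return math.sqrt(rho * rho - (rho * math.cos(alpha) - x) ** 2) - rho * math.sin(alpha)

# Choosing rho equal to the tangent-ogive value recovers the tangent ogive:
# with L = 3, R = 1, the tangent rho is (R^2 + L^2) / (2R) = 5, and the
# base radius comes out as R = 1 (up to floating-point rounding).
print(secant_ogive_y(3.0, 3.0, 1.0, 5.0))
```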
=== Elliptical ===
The profile of this shape is one-half of an ellipse, with the major axis being the centerline and the minor axis being the base of the nose cone. A rotation of a full ellipse about its major axis is called a prolate spheroid, so an elliptical nose shape would properly be known as a prolate hemispheroid. This shape is popular in subsonic flight (such as model rocketry) due to the blunt nose and tangent base. This is not a shape normally found in professional rocketry, which almost always flies at much higher velocities where other designs are more suitable. If R equals L, this is a hemisphere.
{\displaystyle y=R{\sqrt {1-{x^{2} \over L^{2}}}}}
=== Parabolic ===
This nose shape is not the blunt shape that is envisioned when people commonly refer to a "parabolic" nose cone. The parabolic series nose shape is generated by rotating a segment of a parabola around a line parallel to its latus rectum. This construction is similar to that of the tangent ogive, except that a parabola is the defining shape rather than a circle. Just as it does on an ogive, this construction produces a nose shape with a sharp tip. For the blunt shape typically associated with a parabolic nose, see power series below. (The parabolic shape is also often confused with the elliptical shape.)
For {\displaystyle 0\leq K'\leq 1}:
{\displaystyle y=R\left({2\left({x \over L}\right)-K'\left({x \over L}\right)^{2} \over 2-K'}\right)}
K′ can vary anywhere between 0 and 1, but the most common values used for nose cone shapes are 0 (a cone), 1/2 (a half parabola), 3/4 (a three-quarter parabola), and 1 (a full parabola).
For the case of the full parabola (K′ = 1) the shape is tangent to the body at its base, and the base is on the axis of the parabola. Values of K′ less than 1 result in a slimmer shape, whose appearance is similar to that of the secant ogive. The shape is no longer tangent at the base, and the base is parallel to, but offset from, the axis of the parabola.
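As a minimal sketch of the formula above (the function name is ours), with k standing for K′:

```python
def parabolic_radius(x, R, L, k):
    """Radius y at axial station x for the parabolic-series nose
    shape; k is K' in the text, with 0 <= k <= 1 (k = 1 is the full
    parabola, k = 0 degenerates to a cone)."""
    t = x / L
    return R * (2 * t - k * t * t) / (2 - k)
```

For every k the profile runs from y = 0 at the tip to y = R at the base; setting k = 0 gives the straight line y = Rx/L of a cone.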
=== Power series ===
The power series includes the shape commonly referred to as a "parabolic" nose cone, but the shape correctly known as a parabolic nose cone is a member of the parabolic series (described above). The power series shape is characterized by its (usually) blunt tip, and by the fact that its base is not tangent to the body tube. There is always a discontinuity at the joint between nose cone and body that looks distinctly non-aerodynamic. The shape can be modified at the base to smooth out this discontinuity. Both a flat-faced cylinder and a cone are members of the power series.
The power series nose shape is generated by rotating the y = R(x/L)^n curve about the x-axis for values of n less than 1. The factor n controls the bluntness of the shape. For values of n above about 0.7, the tip is fairly sharp. As n decreases towards zero, the power series nose shape becomes increasingly blunt.
For {\displaystyle 0\leq n\leq 1}:
{\displaystyle y=R\left({x \over L}\right)^{n}}
Common values of n include 1 (a cone), 3/4 (a three-quarter power), 1/2 (a half power, whose profile is a parabola), and 0 (a cylinder).
=== Haack series ===
Unlike all of the nose cone shapes above, Wolfgang Haack's series shapes are not constructed from geometric figures. The shapes are instead mathematically derived for the purpose of minimizing drag; a related shape with similar derivation being the Sears–Haack body. While the series is a continuous set of shapes determined by the value of C in the equations below, two values of C have particular significance: when C = 0, the notation LD signifies minimum drag for the given length and diameter, and when C = 1/3, LV indicates minimum drag for a given length and volume. The Haack series nose cones are not perfectly tangent to the body at their base except for the case where C = 2/3. However, the discontinuity is usually so slight as to be imperceptible. For C > 2/3, Haack nose cones bulge to a maximum diameter greater than the base diameter. Haack nose tips do not come to a sharp point, but are slightly rounded.
{\displaystyle {\begin{aligned}x(\theta )&={L \over 2}\left(1-\cos(\theta )\right)\\y(\theta ,C)&={R \over {\sqrt {\pi }}}{\sqrt {\theta -{\sin(2\theta ) \over 2}+C\sin ^{3}(\theta )}}\end{aligned}}}
For {\displaystyle 0\leq \theta \leq \pi }.
Special values of C (as described above) are 0 (LD-Haack, the Von Kármán ogive, minimum drag for given length and diameter) and 1/3 (LV-Haack, minimum drag for given length and volume).
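The parametric Haack equations can be turned into a function of x by inverting x(θ); this is a sketch under that substitution, with names of our own choosing:

```python
import math

def haack_radius(x, R, L, C):
    """Radius y at axial station x of a Haack-series nose cone.
    C = 0 gives the LD-Haack (Von Karman) shape, C = 1/3 the LV-Haack."""
    theta = math.acos(1.0 - 2.0 * x / L)  # invert x = (L/2)(1 - cos(theta))
    return (R / math.sqrt(math.pi)) * math.sqrt(
        theta - math.sin(2.0 * theta) / 2.0 + C * math.sin(theta) ** 3)
```

At x = 0 the radius is 0 and at x = L it equals R for any C, consistent with the tip and base of the nose cone.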
=== Von Kármán ===
The Haack series design giving minimum drag for the given length and diameter, the LD-Haack (where C = 0), is commonly called the Von Kármán or Von Kármán ogive.
=== Aerospike ===
An aerospike can be used to reduce the forebody pressure acting on supersonic aircraft. The aerospike creates a detached shock ahead of the body, thus reducing the drag acting on the aircraft.
== Nose cone drag characteristics ==
For aircraft and rockets below Mach 0.8, the nose pressure drag is essentially zero for all shapes. The most significant factor is friction drag, which is largely dependent upon the wetted area, the surface smoothness of that area, and the presence of any discontinuities in the shape. For example, in strictly subsonic rockets a short, blunt, smooth elliptical shape is usually best. In the transonic region and beyond, where pressure drag increases dramatically, the effect of nose shape on drag becomes highly significant. The factors influencing pressure drag are the general shape of the nose cone, its fineness ratio, and its bluffness ratio.
=== Influence of the general shape ===
Many references on nose cone design contain empirical data comparing the drag characteristics of various nose shapes in different flight regimes. The chart shown here seems to be the most comprehensive and useful compilation of data for the flight regime of greatest interest. This chart generally agrees with more detailed, but less comprehensive data found in other references (most notably the USAF Datcom).
In many nose cone designs, the greatest concern is flight performance in the transonic region from Mach 0.8 to Mach 1.2. Although data are not available for many shapes in the transonic region, the table clearly suggests that either the Von Kármán shape, or power series shape with n = 1/2, would be preferable to the popular conical or ogive shapes, for this purpose.
This observation goes against the often-repeated conventional wisdom that a conical nose is optimum for "Mach-breaking". Fighter aircraft are probably good examples of nose shapes optimized for the transonic region, although their nose shapes are often distorted by other considerations of avionics and inlets. For example, an F-16 Fighting Falcon nose appears to be a very close match to a Von Kármán shape.
=== Influence of the fineness ratio ===
The ratio of the length of a nose cone compared to its base diameter is known as the fineness ratio. This is sometimes also called the aspect ratio, though that term is usually applied to wings and tails. Fineness ratio is often applied to the entire vehicle, considering the overall length and diameter. The length/diameter relation is also often called the caliber of a nose cone.
At supersonic speeds, the fineness ratio has a significant effect on nose cone wave drag, particularly at low ratios; but there is very little additional gain for ratios increasing beyond 5:1. As the fineness ratio increases, the wetted area, and thus the skin friction component of drag, will also increase. Therefore, the minimum drag fineness ratio will ultimately be a trade-off between the decreasing wave drag and increasing friction drag.
== See also ==
Index of aviation articles
Bullet-nose curve
Nose bullet
== Further reading ==
Haack, Wolfgang (1941). "Geschoßformen kleinsten Wellenwiderstandes" (PDF). Bericht 139 der Lilienthal-Gesellschaft für Luftfahrtforschung: 14–28. Archived from the original (PDF) on 2007-09-27.
U.S. Army Missile Command (17 July 1990). Design of Aerodynamically Stabilized Free Rockets. U.S. Government Printing Office. MIL-HDBK-762(MI).
== References == | Wikipedia/Nose_cone_design |
Automotive aerodynamics is the study of the aerodynamics of road vehicles. Its main goals are reducing drag and wind noise, minimizing noise emission, and preventing undesired lift forces and other causes of aerodynamic instability at high speeds. In this context, air is treated as a fluid. For some classes of racing vehicles, it may also be important to produce downforce to improve traction and thus cornering ability.
== History ==
The frictional force of aerodynamic drag increases significantly with vehicle speed. As early as the 1920s engineers began to consider automobile shape in reducing aerodynamic drag at higher speeds. By the 1950s German and British automotive engineers were systematically analyzing the effects of automotive drag for the higher performance vehicles. By the late 1960s scientists also became aware of the significant increase in sound levels emitted by automobiles at high speed. These effects were understood to increase the intensity of sound levels for adjacent land uses at a non-linear rate. Soon highway engineers began to design roadways to account for the sound levels produced by aerodynamic drag at speed, and automobile manufacturers considered the same factors in vehicle design.
== Strategies for reducing drag ==
The deletion of parts on a vehicle is an easy way for designers and vehicle owners to reduce parasitic and frontal drag of the vehicle with little cost and effort. Deletion can be as simple as removing an aftermarket part (a part installed on the vehicle after production), or may require modifying and removing an OEM part (a part originally manufactured on the vehicle). Most production sports cars and high-efficiency vehicles come standard with many of these deletions in order to be competitive in the automotive and race market, while others keep these drag-increasing aspects of the vehicle for their visual appeal, or to fit the typical uses of their customer base.
=== Spoilers ===
A rear spoiler usually comes standard in most sports vehicles and resembles the shape of a raised wing in the rear of the vehicle. The main purpose of a rear spoiler in a vehicle's design is to counteract lift, thereby increasing stability at higher speeds. In order to achieve the lowest possible drag, air must flow around the streamlined body of the vehicle without coming into contact with any areas of possible turbulence. A rear spoiler design that stands off the rear deck lid will increase downforce, reducing lift at high speeds while incurring a drag penalty. Flat spoilers, possibly angled slightly downward may reduce turbulence and thereby reduce the coefficient of drag. Some cars now feature automatically adjustable rear spoilers, so at lower speed the effect on drag is reduced when the benefits of reduced lift are not required.
=== Mirrors ===
Side mirrors both increase the frontal area of the vehicle and increase the coefficient of drag since they protrude from the side of the vehicle. In order to decrease the impact that side mirrors have on the drag of the vehicle, the side mirrors can be replaced with smaller mirrors or mirrors with a different shape. Several concept cars of the 2010s replaced mirrors with tiny cameras, but this option is not common for production cars because most countries require side mirrors. One of the first production passenger automobiles to swap mirrors for cameras was the Honda e; Honda claims the cameras decreased aerodynamic drag by "around 90% compared to conventional door mirrors", contributing to an approximately 3.8% reduction in drag for the entire vehicle. It is estimated that two side mirrors are responsible for 2 to 7% of the total aerodynamic drag of a motor vehicle, and that removing them could improve fuel economy by 1.5–2 miles per US gallon.
=== Radio antennas ===
While they do not have the biggest impact on the drag coefficient due to their small size, radio antennas commonly found protruding from the front of the vehicle can be relocated and changed in design to rid the car of this added drag. The most common replacement for the standard car antenna is the shark fin antenna found in most high efficiency vehicles.
=== Wheels ===
When air flows around the wheel wells it gets disturbed by the rims of the vehicles and forms an area of turbulence around the wheel. In order for the air to flow more smoothly around the wheel well, smooth wheel covers are often applied. Smooth wheel covers are hub caps with no holes in them for air to pass through. This design reduces drag; however, it may cause the brakes to heat up more quickly because the covers prevent airflow around the brake system. As a result, this modification is more commonly seen in high efficiency vehicles rather than sports cars or racing vehicles.
=== Air curtains ===
Air curtains divert air flow from slots in the body and guide it towards the outside edges of the wheel wells.
=== Partial grille blocks ===
The front grille of a vehicle is used to direct air through the radiator. In a streamlined design the air flows around the vehicle rather than through; however, the grille of a vehicle redirects airflow from around the vehicle to through the vehicle, which then increases the drag. In order to reduce this impact a grille block is often used. A grille block covers up a portion of, or the entirety of, the front grille of a vehicle. In most high efficiency models or in vehicles with low drag coefficients, a very small grille will already be built into the vehicle's design, eliminating the need for a grille block. The grille in most production vehicles is generally designed to maximize air flow through the radiator where it exits into the engine compartment. This design can actually create too much airflow into the engine compartment, preventing it from warming up in a timely manner, and in such cases a grille block is used to increase engine performance and reduce vehicle drag simultaneously.
=== Under trays ===
The underside of a vehicle often traps air in various places and adds turbulence around the vehicle. In most racing vehicles this is eliminated by covering the entire underside of the vehicle in what is called an under tray. This tray prevents any air from becoming trapped under the vehicle and reduces drag.
=== Fender skirts ===
Fender skirts are often made as extensions of the body panels of the vehicles and cover the entire wheel wells. Much like smooth wheel covers this modification reduces the drag of the vehicle by preventing any air from becoming trapped in the wheel well and assists in streamlining the body of the vehicle. Fender skirts are more commonly found on the rear wheel wells of a vehicle because the rear tires do not pivot when steering. This is commonly seen in vehicles such as the first generation Honda Insight. Front fender skirts have the same effect on reducing drag as the rear wheel skirts, but must be further offset from the body in order to compensate for the tire sticking out from the body of the vehicle as turns are made.
=== Boattails and Kammbacks ===
A boattail can greatly reduce a vehicle's total drag. Boattails create a teardrop shape that will give the vehicle a more streamlined profile, reducing the occurrence of drag inducing flow separation. A kammback is a truncated boattail. It is created as an extension of the rear of the vehicle, moving the rear backward at a slight angle toward the bumper of the car. This can reduce drag as well but a boattail would reduce the vehicle's drag more. Nonetheless, for practical and style reasons, a kammback is more commonly seen in racing, high efficiency vehicles, and trucking.
== Comparison with aircraft aerodynamics ==
Automotive aerodynamics differs from aircraft aerodynamics in several ways:
The characteristic shape of a road vehicle is much less streamlined compared to an aircraft.
The vehicle operates very close to the ground, rather than in free air.
The operating speeds are lower (and aerodynamic drag varies as the square of speed).
A ground vehicle has fewer degrees of freedom than an aircraft, and its motion is less affected by aerodynamic forces.
Passenger and commercial ground vehicles have very specific design constraints such as their intended purpose, high safety standards (requiring, for example, more 'dead' structural space to act as crumple zones), and certain regulations.
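The speed-squared dependence of drag noted above follows the standard drag equation; the coefficient and frontal area below are illustrative ballpark values for a modern passenger car, not figures from this article:

```python
def drag_force(v, cd=0.30, area=2.2, rho=1.225):
    """Aerodynamic drag F = 0.5 * rho * Cd * A * v^2, with v in m/s,
    rho in kg/m^3 (sea-level air), A in m^2, and F in newtons.
    Cd = 0.30 and A = 2.2 m^2 are typical passenger-car ballpark values."""
    return 0.5 * rho * cd * area * v * v
```

Doubling the speed quadruples the drag force, which is why body shaping matters far more at highway and racing speeds than in city driving.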
== Methods of studying aerodynamics ==
Automotive aerodynamics is studied using both computer modelling and wind tunnel testing. For the most accurate results from a wind tunnel test, the tunnel is sometimes equipped with a rolling road. This is a movable floor for the working section, which moves at the same speed as the air flow. This prevents a boundary layer from forming on the floor of the working section and affecting the results.
== Downforce ==
Downforce is the downward pressure created by the aerodynamic characteristics of a car that allows it to travel faster through a corner by holding the car to the track or road surface. Adequate downforce is important because it affects the car's speed and traction, though some elements that increase downforce also increase drag.
== See also ==
== References ==
== External links ==
One of the first cars to generate downforce - The Prevost analysed in CFD | Wikipedia/Automotive_aerodynamics |
In mathematics, the surface subgroup conjecture of Friedhelm Waldhausen states that the fundamental group of every closed, irreducible 3-manifold with infinite fundamental group has a surface subgroup. By "surface subgroup" we mean the fundamental group of a closed surface other than the 2-sphere. This problem is listed as Problem 3.75 in Robion Kirby's problem list.
Assuming the geometrization conjecture, the only open case was that of closed hyperbolic 3-manifolds. A proof of this case was announced in the summer of 2009 by Jeremy Kahn and Vladimir Markovic and outlined in a talk August 4, 2009 at the FRG (Focused Research Group) Conference hosted by the University of Utah. A preprint appeared on the arXiv in October 2009. Their paper was published in the Annals of Mathematics in 2012. In June 2012, Kahn and Markovic were given the Clay Research Awards by the Clay Mathematics Institute at a ceremony in Oxford.
== See also ==
Virtually Haken conjecture
Ehrenpreis conjecture
== References == | Wikipedia/Surface_subgroup_conjecture |
In graph theory, a division of mathematics, a median graph is an undirected graph in which every three vertices a, b, and c have a unique median: a vertex m(a,b,c) that belongs to shortest paths between each pair of a, b, and c.
The concept of median graphs has long been studied, for instance by Birkhoff & Kiss (1947) or (more explicitly) by Avann (1961), but the first paper to call them "median graphs" appears to be Nebeský (1971). As Chung, Graham, and Saks write, "median graphs arise naturally in the study of ordered sets and discrete distributive lattices, and have an extensive literature". In phylogenetics, the Buneman graph representing all maximum parsimony evolutionary trees is a median graph. Median graphs also arise in social choice theory: if a set of alternatives has the structure of a median graph, it is possible to derive in an unambiguous way a majority preference among them.
Additional surveys of median graphs are given by Klavžar & Mulder (1999), Bandelt & Chepoi (2008), and Knuth (2008).
== Examples ==
Every tree is a median graph. To see this, observe that in a tree, the union of the three shortest paths between pairs of the three vertices a, b, and c is either itself a path, or a subtree formed by three paths meeting at a single central node with degree three. If the union of the three paths is itself a path, the median m(a,b,c) is equal to one of a, b, or c, whichever of these three vertices is between the other two in the path. If the subtree formed by the union of the three paths is not a path, the median of the three vertices is the central degree-three node of the subtree.
Additional examples of median graphs are provided by the grid graphs. In a grid graph, the coordinates of the median m(a,b,c) can be found as the median of the coordinates of a, b, and c. Conversely, it turns out that, in every median graph, one may label the vertices by points in an integer lattice in such a way that medians can be calculated coordinatewise in this way.
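The coordinatewise rule for grid graphs is a one-liner (illustrative code, not from any library):

```python
def grid_median(a, b, c):
    """Median vertex of three grid-graph vertices, taken coordinate
    by coordinate as the middle of the three coordinate values."""
    return tuple(sorted(t)[1] for t in zip(a, b, c))
```

For example, grid_median((0, 0), (2, 1), (1, 3)) is (1, 1): the middle x-coordinate among 0, 2, 1 and the middle y-coordinate among 0, 1, 3.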
Squaregraphs, planar graphs in which all interior faces are quadrilaterals and all interior vertices have four or more incident edges, are another subclass of the median graphs. A polyomino is a special case of a squaregraph and therefore also forms a median graph.
The simplex graph κ(G) of an arbitrary undirected graph G has a vertex for every clique (complete subgraph) of G; two vertices of κ(G) are linked by an edge if the corresponding cliques differ by one vertex of G. The simplex graph is always a median graph, in which the median of a given triple of cliques may be formed by using the majority rule to determine which vertices of the cliques to include.
No cycle graph of length other than four can be a median graph. Every such cycle has three vertices a, b, and c such that the three shortest paths wrap all the way around the cycle without having a common intersection. For such a triple of vertices, there can be no median.
== Equivalent definitions ==
In an arbitrary graph, for each two vertices a and b, the minimum number of edges on a path between them is called their distance, denoted by d(a,b). The interval of vertices that lie on shortest paths between a and b is defined as
I(a,b) = {v | d(a,b) = d(a,v) + d(v,b)}.
A median graph is defined by the property that, for every three vertices a, b, and c, these intervals intersect in a single point:
For all a, b, and c, |I(a,b) ∩ I(a,c) ∩ I(b,c)| = 1.
Equivalently, for every three vertices a, b, and c one can find a vertex m(a,b,c) such that the unweighted distances in the graph satisfy the equalities
d(a,b) = d(a,m(a,b,c)) + d(m(a,b,c),b)
d(a,c) = d(a,m(a,b,c)) + d(m(a,b,c),c)
d(b,c) = d(b,m(a,b,c)) + d(m(a,b,c),c)
and m(a,b,c) is the only vertex for which this is true.
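The defining distance equalities suggest a brute-force check; the sketch below (names ours) represents a graph as an adjacency dict and computes distances by breadth-first search:

```python
from collections import deque

def bfs_dist(adj, s):
    """Single-source distances in an unweighted graph by BFS."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return dist

def medians(adj, a, b, c):
    """All vertices m satisfying the three distance equalities above;
    in a median graph this list has exactly one element."""
    da, db, dc = bfs_dist(adj, a), bfs_dist(adj, b), bfs_dist(adj, c)
    return [m for m in adj
            if da[m] + db[m] == da[b]
            and da[m] + dc[m] == da[c]
            and db[m] + dc[m] == db[c]]
```

On the 4-cycle, every triple of vertices yields exactly one median; on a 6-cycle the triple {0, 2, 4} yields none, matching the statement below that cycles other than C4 are not median graphs.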
It is also possible to define median graphs as the solution sets of 2-satisfiability problems, as the retracts of hypercubes, as the graphs of finite median algebras, as the Buneman graphs of Helly split systems, and as the graphs of windex 2; see the sections below.
== Distributive lattices and median algebras ==
In lattice theory, the graph of a finite lattice has a vertex for each lattice element and an edge for each pair of elements in the covering relation of the lattice. Lattices are commonly presented visually via Hasse diagrams, which are drawings of graphs of lattices. These graphs, especially in the case of distributive lattices, turn out to be closely related to median graphs.
In a distributive lattice, Birkhoff's self-dual ternary median operation
m(a,b,c) = (a ∧ b) ∨ (a ∧ c) ∨ (b ∧ c) = (a ∨ b) ∧ (a ∨ c) ∧ (b ∨ c),
satisfies certain key axioms, which it shares with the usual median of numbers in the range from 0 to 1 and with median algebras more generally:
Idempotence: {\displaystyle m(a,a,b)=a} for all a and b.
Commutativity: {\displaystyle m(a,b,c)=m(a,c,b)=m(b,a,c)=m(b,c,a)=m(c,a,b)=m(c,b,a)} for all a, b, and c.
Distributivity: {\displaystyle m(a,m(b,c,d),e)=m(m(a,b,e),c,m(a,d,e))} for all a, b, c, d, and e.
Identity elements: m(0,a,1) = a for all a.
The distributive law may be replaced by an associative law:
Associativity: m(x,w,m(y,w,z)) = m(m(x,w,y),w,z)
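A finite chain of numbers is a distributive lattice with min as meet and max as join, so Birkhoff's formula reduces to the familiar "middle of three" function. The brute-force sketch below verifies the first three axioms on a small example:

```python
from itertools import product

def m(a, b, c):
    """Birkhoff's median in the distributive lattice of numbers,
    where meet is min and join is max."""
    return max(min(a, b), min(a, c), min(b, c))

vals = range(4)
# Idempotence: m(a, a, b) = a
assert all(m(a, a, b) == a for a, b in product(vals, repeat=2))
# Commutativity under cyclic permutations (the rest follow similarly)
assert all(m(a, b, c) == m(c, a, b) == m(b, c, a)
           for a, b, c in product(vals, repeat=3))
# Distributivity: m(a, m(b, c, d), e) = m(m(a, b, e), c, m(a, d, e))
assert all(m(a, m(b, c, d), e) == m(m(a, b, e), c, m(a, d, e))
           for a, b, c, d, e in product(vals, repeat=5))
```

With 0 and 1 taken as the smallest and largest values of the chain, the identity axiom m(0, a, 1) = a holds as well.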
The median operation may also be used to define a notion of intervals for distributive lattices:
I(a,b) = {x | m(a,x,b) = x} = {x | a ∧ b ≤ x ≤ a ∨ b}.
The graph of a finite distributive lattice has an edge between vertices a and b whenever I(a,b) = {a,b}. For every two vertices a and b of this graph, the interval I(a,b) defined in lattice-theoretic terms above consists of the vertices on shortest paths from a to b, and thus coincides with the graph-theoretic intervals defined earlier. For every three lattice elements a, b, and c, m(a,b,c) is the unique intersection of the three intervals I(a,b), I(a,c), and I(b,c). Therefore, the graph of an arbitrary finite distributive lattice is a median graph. Conversely, if a median graph G contains two vertices 0 and 1 such that every other vertex lies on a shortest path between the two (equivalently, m(0,a,1) = a for all a), then we may define a distributive lattice in which a ∧ b = m(a,0,b) and a ∨ b = m(a,1,b), and G will be the graph of this lattice.
Duffus & Rival (1983) characterize graphs of distributive lattices directly as diameter-preserving retracts of hypercubes. More generally, every median graph gives rise to a ternary operation m satisfying idempotence, commutativity, and distributivity, but possibly without the identity elements of a distributive lattice. Every ternary operation on a finite set that satisfies these three properties (but that does not necessarily have 0 and 1 elements) gives rise in the same way to a median graph.
== Convex sets and Helly families ==
In a median graph, a set S of vertices is said to be convex if, for every two vertices a and b belonging to S, the whole interval I(a,b) is a subset of S. Equivalently, given the two definitions of intervals above, S is convex if it contains every shortest path between two of its vertices, or if it contains the median of every set of three points at least two of which are from S. Observe that the intersection of every pair of convex sets is itself convex.
The convex sets in a median graph have the Helly property: if F is an arbitrary family of pairwise-intersecting convex sets, then all sets in F have a common intersection. For, if F has only three convex sets S, T, and U in it, with a in the intersection of the pair S and T, b in the intersection of the pair T and U, and c in the intersection of the pair S and U, then every shortest path from a to b must lie within T by convexity, and similarly every shortest path between the other two pairs of vertices must lie within the other two sets; but m(a,b,c) belongs to paths between all three pairs of vertices, so it lies within all three sets, and forms part of their common intersection. If F has more than three convex sets in it, the result follows by induction on the number of sets, for one may replace an arbitrary pair of sets in F by their intersection, using the result for triples of sets to show that the replaced family is still pairwise intersecting.
A particularly important family of convex sets in a median graph, playing a role similar to that of halfspaces in Euclidean space, are the sets
Wuv = {w | d(w,u) < d(w,v)}
defined for each edge uv of the graph. In words, Wuv consists of the vertices closer to u than to v, or equivalently the vertices w such that some shortest path from v to w goes through u.
To show that Wuv is convex, let w1w2...wk be an arbitrary shortest path that starts and ends within Wuv; then w2 must also lie within Wuv, for otherwise the two points m1 = m(u,w1,wk) and m2 = m(m1,w2...wk) could be shown (by considering the possible distances between the vertices) to be distinct medians of u, w1, and wk, contradicting the definition of a median graph which requires medians to be unique. Thus, each successive vertex on a shortest path between two vertices of Wuv also lies within Wuv, so Wuv contains all shortest paths between its nodes, one of the definitions of convexity.
The Helly property for the sets Wuv plays a key role in the characterization of median graphs as the solution of 2-satisfiability instances, below.
== 2-satisfiability ==
Median graphs have a close connection to the solution sets of 2-satisfiability problems that can be used both to characterize these graphs and to relate them to adjacency-preserving maps of hypercubes.
A 2-satisfiability instance consists of a collection of Boolean variables and a collection of clauses, constraints on certain pairs of variables requiring those two variables to avoid certain combinations of values. Usually such problems are expressed in conjunctive normal form, in which each clause is expressed as a disjunction and the whole set of constraints is expressed as a conjunction of clauses, such as
{\displaystyle (x_{11}\lor x_{12})\land (x_{21}\lor x_{22})\land \cdots \land (x_{n1}\lor x_{n2})\land \cdots .}
A solution to such an instance is an assignment of truth values to the variables that satisfies all the clauses, or equivalently that causes the conjunctive normal form expression for the instance to become true when the variable values are substituted into it. The family of all solutions has a natural structure as a median algebra, where the median of three solutions is formed by choosing each truth value to be the majority function of the values in the three solutions; it is straightforward to verify that this median solution cannot violate any of the clauses. Thus, these solutions form a median graph, in which the neighbor of each solution is formed by negating a set of variables that are all constrained to be equal or unequal to each other.
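The majority construction can be sketched directly; the clause encoding below, pairs of (variable, required value) literals, is our own illustration rather than a standard format:

```python
def majority(s1, s2, s3):
    """Median of three truth assignments (dicts from variable to bool):
    each variable takes the value held by at least two of the three."""
    return {v: s1[v] + s2[v] + s3[v] >= 2 for v in s1}

def satisfies(assignment, clauses):
    """A clause ((v1, w1), (v2, w2)) means 'v1 = w1 or v2 = w2'."""
    return all(assignment[v1] == w1 or assignment[v2] == w2
               for (v1, w1), (v2, w2) in clauses)
```

By pigeonhole, two of the three solutions satisfy the same literal of each two-literal clause, and the majority agrees with them on that variable, so the majority assignment satisfies every clause.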
Conversely, every median graph G may be represented in this way as the solution set to a 2-satisfiability instance. To find such a representation, create a 2-satisfiability instance in which each variable describes the orientation of one of the edges in the graph (an assignment of a direction to the edge causing the graph to become directed rather than undirected) and each constraint allows two edges to share a pair of orientations only when there exists a vertex v such that both orientations lie along shortest paths from other vertices to v. Each vertex v of G corresponds to a solution to this 2-satisfiability instance in which all edges are directed towards v. Each solution to the instance must come from some vertex v in this way, where v is the common intersection of the sets Wuw for edges directed from w to u; this common intersection exists due to the Helly property of the sets Wuw. Therefore, the solutions to this 2-satisfiability instance correspond one-for-one with the vertices of G.
== Retracts of hypercubes ==
A retraction of a graph G is an adjacency-preserving map from G to one of its subgraphs. More precisely, it is a graph homomorphism φ from G to itself such that φ(v) = v for each vertex v in the subgraph φ(G). The image of the retraction is called a retract of G.
Retractions are examples of metric maps: the distance between φ(v) and φ(w), for every v and w, is at most equal to the distance between v and w, and is equal whenever v and w both belong to φ(G). Therefore, a retract must be an isometric subgraph of G: distances in the retract equal those in G.
If G is a median graph, and a, b, and c are an arbitrary three vertices of a retract φ(G), then φ(m(a,b,c)) must be a median of a, b, and c, and so must equal m(a,b,c). Therefore, φ(G) contains medians of all triples of its vertices, and must also be a median graph. In other words, the family of median graphs is closed under the retraction operation.
A hypercube graph, in which the vertices correspond to all possible k-bit bitvectors and in which two vertices are adjacent when the corresponding bitvectors differ in only a single bit, is a special case of a k-dimensional grid graph and is therefore a median graph. The median of three bitvectors a, b, and c may be calculated by computing, in each bit position, the majority function of the bits of a, b, and c. Since median graphs are closed under retraction, and include the hypercubes, every retract of a hypercube is a median graph.
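With bitvectors packed into machine integers, the bitwise majority is three ANDs and two ORs (an illustrative helper, not a library function):

```python
def hypercube_median(a, b, c):
    """Median vertex in a hypercube of bitvectors stored as ints:
    each bit of the result is the majority of the corresponding
    bits of a, b, and c."""
    return (a & b) | (a & c) | (b & c)
```

For example, hypercube_median(0b0110, 0b0011, 0b1010) is 0b0010, since bit 1 is the only bit set in at least two of the inputs.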
Conversely, every median graph must be the retract of a hypercube. This may be seen from the connection, described above, between median graphs and 2-satisfiability: let G be the graph of solutions to a 2-satisfiability instance; without loss of generality this instance can be formulated in such a way that no two variables are always equal or always unequal in every solution. Then the space of all truth assignments to the variables of this instance forms a hypercube. For each clause, formed as the disjunction of two variables or their complements, in the 2-satisfiability instance, one can form a retraction of the hypercube in which truth assignments violating this clause are mapped to truth assignments in which both variables satisfy the clause, without changing the other variables in the truth assignment. The composition of the retractions formed in this way for each of the clauses gives a retraction of the hypercube onto the solution space of the instance, and therefore gives a representation of G as the retract of a hypercube. In particular, median graphs are isometric subgraphs of hypercubes, and are therefore partial cubes. However, not all partial cubes are median graphs; for instance, a six-vertex cycle graph is a partial cube but is not a median graph.
As Imrich & Klavžar (2000) describe, an isometric embedding of a median graph into a hypercube may be constructed in time O(m log n), where n and m are the numbers of vertices and edges of the graph respectively.
== Triangle-free graphs and recognition algorithms ==
The problems of testing whether a graph is a median graph, and whether a graph is triangle-free, both had been well studied when Imrich, Klavžar & Mulder (1999) observed that, in some sense, they are computationally equivalent. Therefore, the best known time bound for testing whether a graph is triangle-free, O(m1.41), applies as well to testing whether a graph is a median graph, and any improvement in median graph testing algorithms would also lead to an improvement in algorithms for detecting triangles in graphs.
In one direction, suppose one is given as input a graph G, and must test whether G is triangle-free. From G, construct a new graph H having as vertices each set of zero, one, or two adjacent vertices of G. Two such sets are adjacent in H when they differ by exactly one vertex. An equivalent description of H is that it is formed by splitting each edge of G into a path of two edges, and adding a new vertex connected to all the original vertices of G. This graph H is by construction a partial cube, but it is a median graph only when G is triangle-free: if a, b, and c form a triangle in G, then {a,b}, {a,c}, and {b,c} have no median in H, for such a median would have to correspond to the set {a,b,c}, but sets of three or more vertices of G do not form vertices in H. Therefore, G is triangle-free if and only if H is a median graph. In the case that G is triangle-free, H is its simplex graph. An algorithm to test efficiently whether H is a median graph could by this construction also be used to test whether G is triangle-free. This transformation preserves the computational complexity of the problem, for the size of H is proportional to that of G.
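The construction of H from G can be sketched directly in terms of vertex sets of size at most two (illustrative code; the helper name is invented):

```python
from itertools import combinations

def median_test_graph(vertices, edges):
    # Vertices of H: the empty set, each single vertex of G, and
    # each edge of G viewed as a set of two adjacent vertices.
    nodes = ([frozenset()]
             + [frozenset([v]) for v in vertices]
             + [frozenset(e) for e in edges])
    # Two sets are adjacent in H when they differ by exactly one vertex.
    h_edges = {frozenset({a, b}) for a, b in combinations(nodes, 2)
               if len(a ^ b) == 1}
    return nodes, h_edges

# A triangle in G: the triple {a,b}, {a,c}, {b,c} will lack a median in H.
nodes, h_edges = median_test_graph('abc', [('a', 'b'), ('a', 'c'), ('b', 'c')])
```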
The reduction in the other direction, from triangle detection to median graph testing, is more involved and depends on the previous median graph recognition algorithm of Hagauer, Imrich & Klavžar (1999), which tests several necessary conditions for median graphs in near-linear time. The key new step involves using a breadth-first search to partition the graph's vertices into levels according to their distances from some arbitrarily chosen root vertex, forming a graph from each level in which two vertices are adjacent if they share a common neighbor in the previous level, and searching for triangles in these graphs. The median of any such triangle must be a common neighbor of the three triangle vertices; if this common neighbor does not exist, the graph is not a median graph. If all triangles found in this way have medians, and the previous algorithm finds that the graph satisfies all the other conditions for being a median graph, then it must actually be a median graph. This algorithm requires not just the ability to test whether a triangle exists, but a list of all triangles in the level graph. In arbitrary graphs, listing all triangles sometimes requires Ω(m3/2) time, as some graphs have that many triangles; however, Hagauer et al. show that the number of triangles arising in the level graphs of their reduction is near-linear, allowing the Alon et al. fast matrix multiplication based technique for finding triangles to be used.
== Evolutionary trees, Buneman graphs, and Helly split systems ==
Phylogeny is the inference of evolutionary trees from observed characteristics of species; such a tree must place the species at distinct vertices, and may have additional latent vertices, but the latent vertices are required to have three or more incident edges and must also be labeled with characteristics. A characteristic is binary when it has only two possible values, and a set of species and their characteristics exhibit perfect phylogeny when there exists an evolutionary tree in which the vertices (species and latent vertices) labeled with any particular characteristic value form a contiguous subtree. If a tree with perfect phylogeny is not possible, it is often desired to find one exhibiting maximum parsimony, or equivalently, minimizing the number of times the endpoints of a tree edge have different values for one of the characteristics, summed over all edges and all characteristics.
Buneman (1971) described a method for inferring perfect phylogenies for binary characteristics, when they exist. His method generalizes naturally to the construction of a median graph for any set of species and binary characteristics, which has been called the median network or Buneman graph and is a type of phylogenetic network. Every maximum parsimony evolutionary tree embeds into the Buneman graph, in the sense that tree edges follow paths in the graph and the number of characteristic value changes on the tree edge is the same as the number in the corresponding path. The Buneman graph will be a tree if and only if a perfect phylogeny exists; this happens when there are no two incompatible characteristics for which all four combinations of characteristic values are observed.
To form the Buneman graph for a set of species and characteristics, first, eliminate redundant species that are indistinguishable from some other species and redundant characteristics that are always the same as some other characteristic. Then, form a latent vertex for every combination of characteristic values such that every two of the values exist in some known species. In the example shown, there are small brown tailless mice, small silver tailless mice, small brown tailed mice, large brown tailed mice, and large silver tailed mice; the Buneman graph method would form a latent vertex corresponding to an unknown species of small silver tailed mice, because every pairwise combination (small and silver, small and tailed, and silver and tailed) is observed in some other known species. However, the method would not infer the existence of large brown tailless mice, because no mice are known to have both the large and tailless traits. Once the latent vertices are determined, form an edge between every pair of species or latent vertices that differ in a single characteristic.
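The latent-vertex step can be sketched as a brute-force search over combinations of characteristic values, replaying the mouse example above (illustrative only; the function name is invented):

```python
from itertools import product

def buneman_vertices(species):
    # Keep every combination of characteristic values in which each
    # pair of values co-occurs in some known species.
    k = len(species[0])
    values = [sorted({s[i] for s in species}) for i in range(k)]
    pairs = {((i, s[i]), (j, s[j]))
             for s in species
             for i in range(k) for j in range(i + 1, k)}
    return [c for c in product(*values)
            if all(((i, c[i]), (j, c[j])) in pairs
                   for i in range(k) for j in range(i + 1, k))]

mice = [("small", "brown", "tailless"), ("small", "silver", "tailless"),
        ("small", "brown", "tailed"), ("large", "brown", "tailed"),
        ("large", "silver", "tailed")]
```

Running this on the mouse data yields the five known species plus the one latent vertex (small silver tailed), and excludes large tailless combinations.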
One can equivalently describe a collection of binary characteristics as a split system, a family of sets having the property that the complement set of each set in the family is also in the family. This split system has a set for each characteristic value, consisting of the species that have that value. When the latent vertices are included, the resulting split system has the Helly property: every pairwise intersecting subfamily has a common intersection. In some sense median graphs are characterized as coming from Helly split systems: the pairs (Wuv, Wvu) defined for each edge uv of a median graph form a Helly split system, so if one applies the Buneman graph construction to this system no latent vertices will be needed and the result will be the same as the starting graph.
Bandelt et al. (1995) and Bandelt, Macaulay & Richards (2000) describe techniques for simplified hand calculation of the Buneman graph, and use this construction to visualize human genetic relationships.
== Additional properties ==
The Cartesian product of every two median graphs is another median graph. Medians in the product graph may be computed by independently finding the medians in the two factors, just as medians in grid graphs may be computed by independently finding the median in each linear dimension.
The windex of a graph measures the amount of lookahead needed to optimally solve a problem in which one is given a sequence of graph vertices si, and must find as output another sequence of vertices ti minimizing the sum of the distances d(si, ti) and d(ti − 1, ti). Median graphs are exactly the graphs that have windex 2. In a median graph, the optimal choice is to set ti = m(ti − 1, si, si + 1).
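For a grid graph, where the median is the coordinate-wise median, the windex-2 strategy ti = m(ti − 1, si, si + 1) can be sketched as follows (illustrative; the final request is simply repeated since there is no lookahead past the end of the sequence):

```python
def grid_median(a, b, c):
    # In a grid graph the median is the coordinate-wise median.
    return tuple(sorted(xs)[1] for xs in zip(a, b, c))

def serve(requests, start):
    # Windex-2 strategy: t_i = m(t_{i-1}, s_i, s_{i+1}).
    t, out = start, []
    for i, s in enumerate(requests):
        nxt = requests[i + 1] if i + 1 < len(requests) else s
        t = grid_median(t, s, nxt)
        out.append(t)
    return out
```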
The property of having a unique median is also called the unique Steiner point property. An optimal Steiner tree for three vertices a, b, and c in a median graph may be found as the union of three shortest paths, from a, b, and c to m(a,b,c). Bandelt & Barthélémy (1984) study more generally the problem of finding the vertex minimizing the sum of distances to each of a given set of vertices, and show that it has a unique solution for any odd number of vertices in a median graph. They also show that this median of a set S of vertices in a median graph satisfies the Condorcet criterion for the winner of an election: compared to any other vertex, it is closer to a majority of the vertices in S.
As with partial cubes more generally, every median graph with n vertices has at most (n/2) log2 n edges. However, the number of edges cannot be too small: Klavžar, Mulder & Škrekovski (1998) prove that in every median graph the inequality 2n − m − k ≤ 2 holds, where m is the number of edges and k is the dimension of the hypercube that the graph is a retract of. This inequality is an equality if and only if the median graph contains no cubes. This is a consequence of another identity for median graphs: the Euler characteristic Σ (−1)dim(Q) is always equal to one, where the sum is taken over all hypercube subgraphs Q of the given median graph.
The only regular median graphs are the hypercubes.
Every median graph is a modular graph. The modular graphs are a class of graphs in which every triple of vertices has a median but the medians are not required to be unique.
== Notes ==
== References ==
== External links ==
Median graphs, Information System for Graph Class Inclusions.
Network, Free Phylogenetic Network Software. Network generates evolutionary trees and networks from genetic, linguistic, and other data.
PhyloMurka, open-source software for median network computations from biological data. | Wikipedia/Median_graph |
In the mathematical subfield of 3-manifolds, the virtually fibered conjecture, formulated by American mathematician William Thurston, states that every closed, irreducible, atoroidal 3-manifold with infinite fundamental group has a finite cover which is a surface bundle over the circle.
A 3-manifold which has such a finite cover is said to virtually fiber. If M is a Seifert fiber space, then M virtually fibers if and only if the rational Euler number of the Seifert fibration or the (orbifold) Euler characteristic of the base space is zero.
The hypotheses of the conjecture are satisfied by hyperbolic 3-manifolds. In fact, given that the geometrization conjecture is now settled, the only case that needs to be proven for the virtually fibered conjecture is that of hyperbolic 3-manifolds.
The original interest in the virtually fibered conjecture (as well as its weaker cousins, such as the virtually Haken conjecture) stemmed from the fact that any of these conjectures, combined with Thurston's hyperbolization theorem, would imply the geometrization conjecture. However, in practice all known attacks on the "virtual" conjecture take geometrization as a hypothesis, and rely on the geometric and group-theoretic properties of hyperbolic 3-manifolds.
The virtually fibered conjecture was not actually conjectured by Thurston. Rather, he posed it as a question, writing only that "[t]his dubious-sounding question seems to have a definite chance for a positive answer".
The conjecture was finally settled in the affirmative in a series of papers from 2009 to 2012. In a posting on the arXiv on 25 Aug 2009, Daniel Wise implicitly claimed (by referring to a then-unpublished longer manuscript) that he had proven the conjecture for the case where the 3-manifold is closed, hyperbolic, and Haken. This was followed by a survey article in Electronic Research Announcements in Mathematical Sciences. Several other articles have followed, including the aforementioned longer manuscript by Wise. In March 2012, during a conference at the Institut Henri Poincaré in Paris, Ian Agol announced that he could prove the virtually Haken conjecture for closed hyperbolic 3-manifolds. Taken together with Daniel Wise's results, this implies the virtually fibered conjecture for all closed hyperbolic 3-manifolds.
== See also ==
Virtually Haken conjecture
Surface subgroup conjecture
Ehrenpreis conjecture
== Notes ==
== References ==
Thurston, William P. (1982). "Three dimensional manifolds, Kleinian groups and hyperbolic geometry". Bulletin of the American Mathematical Society. 6 (3): 357–382. CiteSeerX 10.1.1.535.7618. doi:10.1090/S0273-0979-1982-15003-0.
D. Gabai, On 3-manifold finitely covered by surface bundles, Low Dimensional Topology and Kleinian Groups (ed: D.B.A. Epstein), London Mathematical Society Lecture Note Series vol 112 (1986), p. 145-155.
Agol, Ian (2008). "Criteria for virtual fibering". Journal of Topology. 1 (2): 269–284. arXiv:0707.4522. doi:10.1112/jtopol/jtn003. S2CID 3028314.
== External links ==
Klarreich, Erica (2012-10-02). "Getting Into Shapes: From Hyperbolic Geometry to Cube Complexes and Back". Quanta Magazine. | Wikipedia/Virtually_fibered_conjecture |
In mathematics, the Ehrenpreis conjecture of Leon Ehrenpreis states that for any K greater than 1, any two closed Riemann surfaces of genus at least 2 have finite-degree covers which are K-quasiconformal: that is, the covers are arbitrarily close in the Teichmüller metric.
A proof was announced by Jeremy Kahn and Vladimir Markovic in January 2011, using their proof of the Surface subgroup conjecture and a newly developed "good pants homology" theory. In June 2012, Kahn and Markovic were given the Clay Research Awards for their work on these two problems by the Clay Mathematics Institute at a ceremony at Oxford University.
== See also ==
Surface subgroup conjecture
Virtually Haken conjecture
Virtually fibered conjecture
== References ==
Kahn, Jeremy; Markovic, Vladimir (29 April 2011). "The good pants homology and a proof of the Ehrenpreis conjecture". arXiv:1101.1330 [math.GT]. | Wikipedia/Ehrenpreis_conjecture |
In the mathematical subject of group theory, the Hanna Neumann conjecture is a statement about the rank of the intersection of two finitely generated subgroups of a free group. The conjecture was posed by Hanna Neumann in 1957.
In 2011, a strengthened version of the conjecture (see below) was proved independently by Joel Friedman
and by Igor Mineyev.
In 2017, a third proof of the Strengthened Hanna Neumann conjecture, based on homological arguments inspired by pro-p-group considerations, was published by Andrei Jaikin-Zapirain.
== History ==
The subject of the conjecture was originally motivated by a 1954 theorem of Howson who proved that the intersection of any two finitely generated subgroups of a free group is always finitely generated, that is, has finite rank. In this paper Howson proved that if H and K are subgroups of a free group F(X) of finite ranks n ≥ 1 and m ≥ 1 then the rank s of H ∩ K satisfies:
s − 1 ≤ 2mn − m − n.
In a 1956 paper Hanna Neumann improved this bound by showing that:
s − 1 ≤ 2mn − 2m − n.
In a 1957 addendum, Hanna Neumann further improved this bound to show that under the above assumptions
s − 1 ≤ 2(m − 1)(n − 1).
She also conjectured that the factor of 2 in the above inequality is not necessary and that one always has
s − 1 ≤ (m − 1)(n − 1).
This statement became known as the Hanna Neumann conjecture.
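The successive bounds on the rank s of H ∩ K can be compared numerically. A quick illustrative check (each function returns the bound on s itself, i.e. the stated bound on s − 1 plus one):

```python
def howson(m, n):        return 2*m*n - m - n + 1      # Howson, 1954
def neumann_1956(m, n):  return 2*m*n - 2*m - n + 1    # Neumann, 1956
def neumann_1957(m, n):  return 2*(m - 1)*(n - 1) + 1  # Neumann, 1957
def conjectured(m, n):   return (m - 1)*(n - 1) + 1    # the conjecture

# For m = n = 2 the four bounds are 5, 3, 3, and 2 respectively,
# showing each successive bound is at least as sharp.
```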
== Formal statement ==
Let H, K ≤ F(X) be two nontrivial finitely generated subgroups of a free group F(X) and let L = H ∩ K be the intersection of H and K. The conjecture says that in this case
rank(L) − 1 ≤ (rank(H) − 1)(rank(K) − 1).
Here for a group G the quantity rank(G) is the rank of G, that is, the smallest size of a generating set for G.
Every subgroup of a free group is known to be free itself and the rank of a free group is equal to the size of any free basis of that free group.
== Strengthened Hanna Neumann conjecture ==
If H, K ≤ G are two subgroups of a group G and if a, b ∈ G define the same double coset HaK = HbK then the subgroups H ∩ aKa−1 and H ∩ bKb−1 are conjugate in G and thus have the same rank. It is known that if H, K ≤ F(X) are finitely generated subgroups of a finitely generated free group F(X) then there exist at most finitely many double coset classes HaK in F(X) such that H ∩ aKa−1 ≠ {1}. Suppose that at least one such double coset exists and let a1,...,an be all the distinct representatives of such double cosets. The strengthened Hanna Neumann conjecture, formulated by her son Walter Neumann (1990), states that in this situation
{\displaystyle \sum _{i=1}^{n}[{\rm {rank}}(H\cap a_{i}Ka_{i}^{-1})-1]\leq ({\rm {rank}}(H)-1)({\rm {rank}}(K)-1).}
The strengthened Hanna Neumann conjecture was proved in 2011 by Joel Friedman.
Shortly after, another proof was given by Igor Mineyev.
== Partial results and other generalizations ==
In 1971 Burns improved Hanna Neumann's 1957 bound and proved that under the same assumptions as in Hanna Neumann's paper one has
s ≤ 2mn − 3m − 2n + 4.
In a 1990 paper, Walter Neumann formulated the strengthened Hanna Neumann conjecture (see statement above).
Tardos (1992) established the strengthened Hanna Neumann conjecture for the case where at least one of the subgroups H and K of F(X) has rank two. As in most other approaches to the Hanna Neumann conjecture, Tardos used the technique of Stallings subgroup graphs for analyzing subgroups of free groups and their intersections.
Warren Dicks (1994) established the equivalence of the strengthened Hanna Neumann conjecture and a graph-theoretic statement that he called the amalgamated graph conjecture.
Arzhantseva (2000) proved that if H is a finitely generated subgroup of infinite index in F(X), then, in a certain statistical meaning, for a generic finitely generated subgroup K of F(X) we have H ∩ gKg−1 = {1} for all g in F(X). Thus, the strengthened Hanna Neumann conjecture holds for every H and a generic K.
In 2001 Dicks and Formanek established the strengthened Hanna Neumann conjecture for the case where at least one of the subgroups H and K of F(X) has rank at most three.
Khan (2002) and, independently, Meakin and Weil (2002), showed that the conclusion of the strengthened Hanna Neumann conjecture holds if one of the subgroups H, K of F(X) is positively generated, that is, generated by a finite set of words that involve only elements of X but not of X−1 as letters.
Ivanov and Dicks and Ivanov obtained analogs and generalizations of Hanna Neumann's results for the intersection of subgroups H and K of a free product of several groups.
Wise (2005) claimed that the strengthened Hanna Neumann conjecture implies another long-standing group-theoretic conjecture which says that every one-relator group with torsion is coherent (that is, every finitely generated subgroup in such a group is finitely presented).
== See also ==
Geometric group theory
== References == | Wikipedia/Hanna_Neumann_conjecture |
In group theory, a word is any written product of group elements and their inverses. For example, if x, y and z are elements of a group G, then xy, z−1xzz and y−1zxx−1yz−1 are words in the set {x, y, z}. Two different words may evaluate to the same value in G, or even in every group. Words play an important role in the theory of free groups and presentations, and are central objects of study in combinatorial group theory.
== Definitions ==
Let G be a group, and let S be a subset of G. A word in S is any expression of the form
{\displaystyle s_{1}^{\varepsilon _{1}}s_{2}^{\varepsilon _{2}}\cdots s_{n}^{\varepsilon _{n}}}
where s1,...,sn are elements of S, called generators, and each εi is ±1. The number n is known as the length of the word.
Each word in S represents an element of G, namely the product of the expression. By convention, the unique identity element can be represented by the empty word, which is the unique word of length zero.
== Notation ==
When writing words, it is common to use exponential notation as an abbreviation. For example, the word
{\displaystyle xxy^{-1}zyzzzx^{-1}x^{-1}\,}
could be written as
{\displaystyle x^{2}y^{-1}zyz^{3}x^{-2}.\,}
This latter expression is not a word itself—it is simply a shorter notation for the original.
When dealing with long words, it can be helpful to use an overline to denote inverses of elements of S. Using overline notation, the above word would be written as follows:
{\displaystyle x^{2}{\overline {y}}zyz^{3}{\overline {x}}^{2}.\,}
== Reduced words ==
Any word in which a generator appears next to its own inverse (xx−1 or x−1x) can be simplified by omitting the redundant pair:
{\displaystyle y^{-1}zxx^{-1}y\;\;\longrightarrow \;\;y^{-1}zy.}
This operation is known as reduction, and it does not change the group element represented by the word. Reductions can be thought of as relations (defined below) that follow from the group axioms.
A reduced word is a word that contains no redundant pairs. Any word can be simplified to a reduced word by performing a sequence of reductions:
{\displaystyle xzy^{-1}xx^{-1}yz^{-1}zz^{-1}yz\;\;\longrightarrow \;\;xyz.}
The result does not depend on the order in which the reductions are performed.
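The reduction procedure can be sketched with a stack, representing a word as a list of (generator, ±1) pairs (illustrative code, not from the source):

```python
def reduce_word(word):
    # Cancel adjacent pairs s^e s^-e with a stack; the result is the
    # unique reduced form, independent of the order of cancellations.
    stack = []
    for gen, exp in word:
        if stack and stack[-1] == (gen, -exp):
            stack.pop()
        else:
            stack.append((gen, exp))
    return stack

# The example above: x z y^-1 x x^-1 y z^-1 z z^-1 y z reduces to x y z.
w = [('x', 1), ('z', 1), ('y', -1), ('x', 1), ('x', -1), ('y', 1),
     ('z', -1), ('z', 1), ('z', -1), ('y', 1), ('z', 1)]
```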
A word is cyclically reduced if and only if every cyclic permutation of the word is reduced.
== Operations on words ==
The product of two words is obtained by concatenation:
{\displaystyle \left(xzyz^{-1}\right)\left(zy^{-1}x^{-1}y\right)=xzyz^{-1}zy^{-1}x^{-1}y.}
Even if the two words are reduced, the product may not be.
The inverse of a word is obtained by inverting each generator, and reversing the order of the elements:
{\displaystyle \left(zy^{-1}x^{-1}y\right)^{-1}=y^{-1}xyz^{-1}.}
The product of a word with its inverse can be reduced to the empty word:
{\displaystyle zy^{-1}x^{-1}y\;y^{-1}xyz^{-1}=1.}
A generator can be moved from the beginning to the end of a word by conjugation:
{\displaystyle x^{-1}\left(xy^{-1}z^{-1}yz\right)x=y^{-1}z^{-1}yzx.}
== Generating set of a group ==
A subset S of a group G is called a generating set if every element of G can be represented by a word in S.
When S is not a generating set for G, the set of elements represented by words in S is a subgroup of G, known as the subgroup of G generated by S and usually denoted ⟨S⟩. It is the smallest subgroup of G that contains the elements of S.
== Normal forms ==
A normal form for a group G with generating set S is a choice of one reduced word in S for each element of G. For example:
The words 1, i, j, ij are a normal form for the Klein four-group with S = {i, j} and 1 representing the empty word (the identity element for the group).
The words 1, r, r2, ..., rn-1, s, sr, ..., srn-1 are a normal form for the dihedral group Dihn with S = {s, r} and 1 as above.
The set of words of the form xmyn for m,n ∈ Z are a normal form for the direct product of the cyclic groups ⟨x⟩ and ⟨y⟩ with S = {x, y}.
The set of reduced words in S are the unique normal form for the free group over S.
== Relations and presentations ==
If S is a generating set for a group G, a relation is a pair of words in S that represent the same element of G. These are usually written as equations, e.g.
{\displaystyle x^{-1}yx=y^{2}.\,}
A set R of relations defines G if every relation in G follows logically from those in R using the axioms for a group. A presentation for G is a pair ⟨S ∣ R⟩, where S is a generating set for G and R is a defining set of relations.
For example, the Klein four-group can be defined by the presentation
{\displaystyle \langle i,j\mid i^{2}=1,\,j^{2}=1,\,ij=ji\rangle .}
Here 1 denotes the empty word, which represents the identity element.
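The relations in this presentation can be checked in a concrete model of the Klein four-group: bit pairs under componentwise XOR (an illustrative choice of model, not from the source):

```python
# Model the Klein four-group as bit pairs with componentwise XOR:
# i = (1, 0), j = (0, 1), identity = (0, 0).
def op(u, v):
    return (u[0] ^ v[0], u[1] ^ v[1])

i, j, e = (1, 0), (0, 1), (0, 0)
relations_hold = (op(i, i) == e      # i^2 = 1
                  and op(j, j) == e  # j^2 = 1
                  and op(i, j) == op(j, i))  # ij = ji
```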
== Free groups ==
If S is any set, the free group over S is the group with presentation ⟨S ∣ ⟩. That is, the free group over S is the group generated by the elements of S, with no extra relations. Every element of the free group can be written uniquely as a reduced word in S.
== See also ==
Word problem (mathematics)
Word problem for groups
== Notes ==
== References ==
Epstein, David; Cannon, J. W.; Holt, D. F.; Levy, S. V. F.; Paterson, M. S.; Thurston, W. P. (1992). Word Processing in Groups. AK Peters. ISBN 0-86720-244-0..
Novikov, P. S. (1955). "On the algorithmic unsolvability of the word problem in group theory". Trudy Mat. Inst. Steklov (in Russian). 44: 1–143.
Robinson, Derek John Scott (1996). A course in the theory of groups. Berlin: Springer-Verlag. ISBN 0-387-94461-3.
Rotman, Joseph J. (1995). An introduction to the theory of groups. Berlin: Springer-Verlag. ISBN 0-387-94285-8.
Schupp, Paul E; Lyndon, Roger C. (2001). Combinatorial group theory. Berlin: Springer. ISBN 3-540-41158-5.
Solitar, Donald; Magnus, Wilhelm; Karrass, Abraham (2004). Combinatorial group theory: presentations of groups in terms of generators and relations. New York: Dover. ISBN 0-486-43830-9.
Stillwell, John (1993). Classical topology and combinatorial group theory. Berlin: Springer-Verlag. ISBN 0-387-97970-0. | Wikipedia/Word_(group_theory) |
In group theory, a group A is algebraically closed if any finite set of equations and inequations that are applicable to A have a solution in A without needing a group extension. This notion will be made precise later in the article in § Formal definition.
== Informal discussion ==
Suppose we wished to find an element x of a group G satisfying the conditions (equations and inequations):
{\displaystyle x^{2}=1\ }
{\displaystyle x^{3}=1\ }
{\displaystyle x\neq 1\ }
Then it is easy to see that this is impossible because the first two equations imply x = 1. In this case we say the set of conditions is inconsistent with G. (In fact this set of conditions is inconsistent with any group whatsoever.)
Now suppose G is the group with the multiplication table to the right.
Then the conditions:
{\displaystyle x^{2}=1\ }
{\displaystyle x\neq 1\ }
have a solution in G, namely x = a.
However the conditions:
{\displaystyle x^{4}=1\ }
{\displaystyle x^{2}a^{-1}=1\ }
do not have a solution in G, as can easily be checked.
However, if we extend the group G to the group H with the adjacent multiplication table, then the conditions have two solutions, namely x = b and x = c.
Thus there are three possibilities regarding such conditions:

They may be inconsistent with G and have no solution in any extension of G.
They may have a solution in G.
They may have no solution in G but nevertheless have a solution in some extension H of G.

It is reasonable to ask whether there are any groups A such that whenever a set of conditions like these has a solution at all, it has a solution in A itself. The answer turns out to be "yes", and we call such groups algebraically closed groups.
== Formal definition ==
We first need some preliminary ideas.

If G is a group and F is the free group on countably many generators, then by a finite set of equations and inequations with coefficients in G we mean a pair of subsets E and I of F ⋆ G, the free product of F and G.
This formalizes the notion of a set of equations and inequations consisting of variables xi and elements gj of G. The set E represents equations like:
{\displaystyle x_{1}^{2}g_{1}^{4}x_{3}=1}
{\displaystyle x_{3}^{2}g_{2}x_{4}g_{1}=1}
…
The set I represents inequations like:
{\displaystyle g_{5}^{-1}x_{3}\neq 1}
…
By a solution in G to this finite set of equations and inequations, we mean a homomorphism f : F → G such that f̃(e) = 1 for all e in E and f̃(i) ≠ 1 for all i in I, where f̃ : F ⋆ G → G is the unique homomorphism that equals f on F and is the identity on G.
This formalizes the idea of substituting elements of G for the variables to get true equations and inequations. In the example the substitutions x1 ↦ g6, x3 ↦ g7 and x4 ↦ g8 yield:
{\displaystyle g_{6}^{2}g_{1}^{4}g_{7}=1}
{\displaystyle g_{7}^{2}g_{2}g_{8}g_{1}=1}
…
{\displaystyle g_{5}^{-1}g_{7}\neq 1}
…
We say the finite set of equations and inequations is consistent with G if we can solve them in a "bigger" group H. More formally: the equations and inequations are consistent with G if there is a group H and an embedding h : G → H such that the finite set of equations and inequations h̃(E) and h̃(I) has a solution in H, where h̃ : F ⋆ G → F ⋆ H is the unique homomorphism that equals h on G and is the identity on F.
Now we formally define the group A to be algebraically closed if every finite set of equations and inequations that has coefficients in A and is consistent with A has a solution in A.
== Known results ==
It is difficult to give concrete examples of algebraically closed groups as the following results indicate:
Every countable group can be embedded in a countable algebraically closed group.
Every algebraically closed group is simple.
No algebraically closed group is finitely generated.
An algebraically closed group cannot be recursively presented.
A finitely generated group has a solvable word problem if and only if it can be embedded in every algebraically closed group.
The proofs of these results are in general very complex. However, a sketch of the proof that a countable group C can be embedded in an algebraically closed group follows.

First we embed C in a countable group C1 with the property that every finite set of equations with coefficients in C that is consistent in C1 has a solution in C1, as follows:
There are only countably many finite sets of equations and inequations with coefficients in {\displaystyle C}. Fix an enumeration {\displaystyle S_{0},S_{1},S_{2},\dots } of them. Define groups {\displaystyle D_{0},D_{1},D_{2},\dots } inductively by:
{\displaystyle D_{0}=C}
{\displaystyle D_{i+1}=\left\{{\begin{matrix}D_{i}&{\mbox{if}}\ S_{i}\ {\mbox{is not consistent with}}\ D_{i}\\\langle D_{i},h_{1},h_{2},\dots ,h_{n}\rangle &{\mbox{if}}\ S_{i}\ {\mbox{has a solution in}}\ H\supseteq D_{i}\ {\mbox{with}}\ x_{j}\mapsto h_{j},\ 1\leq j\leq n\end{matrix}}\right.}
Now let:
{\displaystyle C_{1}=\cup _{i=0}^{\infty }D_{i}}
Now iterate this construction to get a sequence of groups {\displaystyle C=C_{0},C_{1},C_{2},\dots } and let:
{\displaystyle A=\cup _{i=0}^{\infty }C_{i}}
Then {\displaystyle A} is a countable group containing {\displaystyle C}. It is algebraically closed because any finite set of equations and inequations that is consistent with {\displaystyle A} must have coefficients in some {\displaystyle C_{i}} and so must have a solution in {\displaystyle C_{i+1}}.
== See also ==
Algebraic closure
Algebraically closed field
== References ==
A. Macintyre: On algebraically closed groups. Ann. of Math. 96, 53–97 (1972)
B.H. Neumann: A note on algebraically closed groups. J. London Math. Soc. 27, 227–242 (1952)
B.H. Neumann: The isomorphism problem for algebraically closed groups. In: Word Problems, pp. 553–562. Amsterdam: North-Holland 1973
W.R. Scott: Algebraically closed groups. Proc. Amer. Math. Soc. 2, 118–121 (1951) | Wikipedia/Algebraically_closed_group
In the mathematical subject of geometric group theory, the Švarc–Milnor lemma (sometimes also called the Milnor–Švarc lemma, with both variants also sometimes spelling Švarc as Schwarz) is a statement which says that a group {\displaystyle G}, equipped with a "nice" discrete isometric action on a metric space {\displaystyle X}, is quasi-isometric to {\displaystyle X}.
This result goes back, in different form, before the notion of quasi-isometry was formally introduced, to the work of Albert S. Schwarz (1955) and John Milnor (1968). Pierre de la Harpe called the Švarc–Milnor lemma "the fundamental observation in geometric group theory" because of its importance for the subject. Occasionally the name "fundamental observation in geometric group theory" is now used for this statement, instead of calling it the Švarc–Milnor lemma; see, for example, Theorem 8.2 in the book of Farb and Margalit.
== Precise statement ==
Several minor variations of the statement of the lemma exist in the literature. Here we follow the version given in the book of Bridson and Haefliger (see Proposition 8.19 on p. 140 there).
Let {\displaystyle G} be a group acting by isometries on a proper length space {\displaystyle X} such that the action is properly discontinuous and cocompact.
Then the group {\displaystyle G} is finitely generated, and for every finite generating set {\displaystyle S} of {\displaystyle G} and every point {\displaystyle p\in X} the orbit map
{\displaystyle f_{p}:(G,d_{S})\to X,\quad g\mapsto gp}
is a quasi-isometry.
Here {\displaystyle d_{S}} is the word metric on {\displaystyle G} corresponding to {\displaystyle S}.
Sometimes a properly discontinuous cocompact isometric action of a group {\displaystyle G} on a proper geodesic metric space {\displaystyle X} is called a geometric action.
== Explanation of the terms ==
Recall that a metric space {\displaystyle X} is proper if every closed ball in {\displaystyle X} is compact.
An action of {\displaystyle G} on {\displaystyle X} is properly discontinuous if for every compact {\displaystyle K\subseteq X} the set {\displaystyle \{g\in G\mid gK\cap K\neq \varnothing \}} is finite.
The action of {\displaystyle G} on {\displaystyle X} is cocompact if the quotient space {\displaystyle X/G}, equipped with the quotient topology, is compact.
Under the other assumptions of the Švarc–Milnor lemma, the cocompactness condition is equivalent to the existence of a closed ball {\displaystyle B} in {\displaystyle X} such that
{\displaystyle \bigcup _{g\in G}gB=X.}
== Examples of applications of the Švarc–Milnor lemma ==
For Examples 1 through 5 below see pp. 89–90 in the book of de la Harpe.
Example 6 is the starting point of the paper of Richard Schwartz.
For every {\displaystyle n\geq 1} the group {\displaystyle \mathbb {Z} ^{n}} is quasi-isometric to the Euclidean space {\displaystyle \mathbb {R} ^{n}}.
If {\displaystyle \Sigma } is a closed connected oriented surface of negative Euler characteristic then the fundamental group {\displaystyle \pi _{1}(\Sigma )} is quasi-isometric to the hyperbolic plane {\displaystyle \mathbb {H} ^{2}}.
If {\displaystyle (M,g)} is a closed connected smooth manifold with a smooth Riemannian metric {\displaystyle g} then {\displaystyle \pi _{1}(M)} is quasi-isometric to {\displaystyle ({\tilde {M}},d_{\tilde {g}})}, where {\displaystyle {\tilde {M}}} is the universal cover of {\displaystyle M}, where {\displaystyle {\tilde {g}}} is the pull-back of {\displaystyle g} to {\displaystyle {\tilde {M}}}, and where {\displaystyle d_{\tilde {g}}} is the path metric on {\displaystyle {\tilde {M}}} defined by the Riemannian metric {\displaystyle {\tilde {g}}}.
If {\displaystyle G} is a connected finite-dimensional Lie group equipped with a left-invariant Riemannian metric and the corresponding path metric, and if {\displaystyle \Gamma \leq G} is a uniform lattice, then {\displaystyle \Gamma } is quasi-isometric to {\displaystyle G}.
If {\displaystyle M} is a closed hyperbolic 3-manifold, then {\displaystyle \pi _{1}(M)} is quasi-isometric to {\displaystyle \mathbb {H} ^{3}}.
If {\displaystyle M} is a complete finite volume hyperbolic 3-manifold with cusps, then {\displaystyle \Gamma =\pi _{1}(M)} is quasi-isometric to {\displaystyle \Omega =\mathbb {H} ^{3}-{\mathcal {B}}}, where {\displaystyle {\mathcal {B}}} is a certain {\displaystyle \Gamma }-invariant collection of horoballs, and where {\displaystyle \Omega } is equipped with the induced path metric.
== References == | Wikipedia/Švarc–Milnor_lemma |
A 3D projection (or graphical projection) is a design technique used to display a three-dimensional (3D) object on a two-dimensional (2D) surface. These projections rely on visual perspective and aspect analysis to project a complex object for viewing capability on a simpler plane.
3D projections use the primary qualities of an object's basic shape to create a map of points that are then connected to one another to form a visual element. The result is a graphic that contains conceptual properties to interpret the figure or image as not actually flat (2D), but rather as a solid object (3D) being viewed on a 2D display.
3D objects are largely displayed on two-dimensional mediums (such as paper and computer monitors). As such, graphical projections are a commonly used design element; notably, in engineering drawing, drafting, and computer graphics. Projections can be calculated through employment of mathematical analysis and formulae, or by using various geometric and optical techniques.
== Overview ==
In order to display a three-dimensional (3D) object on a two-dimensional (2D) surface, a projection transformation is applied to the 3D object using a projection matrix. This transformation removes information in the third dimension while preserving it in the first two. See Projective Geometry for more details.
If the size and shape of the 3D object should not be distorted by its relative position to the 2D surface, a parallel projection may be used.
Examples of parallel projections:
If the 3D perspective of an object should be preserved on a 2D surface, the transformation must include scaling and translation based on the object's relative position to the 2D surface. This process is called perspective projection.
Examples of perspective projections:
== Parallel projection ==
In parallel projection, the lines of sight from the object to the projection plane are parallel to each other. Thus, lines that are parallel in three-dimensional space remain parallel in the two-dimensional projected image. Parallel projection also corresponds to a perspective projection with an infinite focal length (the distance between a camera's lens and focal point), or "zoom".
Images drawn in parallel projection rely upon the technique of axonometry ("to measure along axes"), as described in Pohlke's theorem. In general, the resulting image is oblique (the rays are not perpendicular to the image plane); but in special cases the result is orthographic (the rays are perpendicular to the image plane). Axonometry should not be confused with axonometric projection, as in English literature the latter usually refers only to a specific class of pictorials (see below).
=== Orthographic projection ===
The orthographic projection is derived from the principles of descriptive geometry and is a two-dimensional representation of a three-dimensional object. It is a parallel projection (the lines of projection are parallel both in reality and in the projection plane). It is the projection type of choice for working drawings.
If the normal of the viewing plane (the camera direction) is parallel to one of the primary axes (the x, y, or z axis), the mathematical transformation is as follows. To project the 3D point {\displaystyle (a_{x},a_{y},a_{z})} onto the 2D point {\displaystyle (b_{x},b_{y})} using an orthographic projection parallel to the y axis (where positive y represents the forward direction, giving a profile view), the following equations can be used:
{\displaystyle b_{x}=s_{x}a_{x}+c_{x}}
{\displaystyle b_{y}=s_{z}a_{z}+c_{z}}
where the vector s is an arbitrary scale factor, and c is an arbitrary offset. These constants are optional, and can be used to properly align the viewport. Using matrix multiplication, the equations become:
{\displaystyle {\begin{bmatrix}b_{x}\\b_{y}\end{bmatrix}}={\begin{bmatrix}s_{x}&0&0\\0&0&s_{z}\end{bmatrix}}{\begin{bmatrix}a_{x}\\a_{y}\\a_{z}\end{bmatrix}}+{\begin{bmatrix}c_{x}\\c_{z}\end{bmatrix}}.}
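The matrix form above is simple enough to sketch directly. The following Python function is an illustration (the function name and default scale/offset values are mine, not from the article): it drops the depth coordinate and applies the scale and offset.

```python
# Orthographic projection onto the x/z plane (view along the y axis),
# following b = S*a + c with S = [[sx, 0, 0], [0, 0, sz]] as in the matrix above.
def orthographic_project(a, sx=1.0, sz=1.0, cx=0.0, cz=0.0):
    """Project a 3D point (ax, ay, az) to a 2D point (bx, by).

    The y component of `a` is simply discarded; sx/sz scale the result
    and cx/cz translate it to align the viewport.
    """
    ax, ay, az = a
    bx = sx * ax + cx
    by = sz * az + cz
    return (bx, by)

print(orthographic_project((2.0, 5.0, 3.0)))          # (2.0, 3.0): y is dropped
print(orthographic_project((2.0, 5.0, 3.0), sx=2.0))  # (4.0, 3.0): scaled viewport
```

Note that, as the prose above says, the result carries no depth cue at all: points with the same x and z but different y land on the same 2D point.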
While orthographically projected images represent the three dimensional nature of the object projected, they do not represent the object as it would be recorded photographically or perceived by a viewer observing it directly. In particular, parallel lengths at all points in an orthographically projected image are of the same scale regardless of whether they are far away or near to the virtual viewer. As a result, lengths are not foreshortened as they would be in a perspective projection.
==== Multiview projection ====
With multiview projections, up to six pictures (called primary views) of an object are produced, with each projection plane parallel to one of the coordinate axes of the object. The views are positioned relative to each other according to either of two schemes: first-angle or third-angle projection. In each, the appearances of views may be thought of as being projected onto planes that form a 6-sided box around the object. Although six different sides can be drawn, usually three views of a drawing give enough information to make a 3D object. These views are known as front view, top view, and end view. The terms elevation, plan and section are also used.
=== Oblique projection ===
In oblique projections the parallel projection rays are not perpendicular to the viewing plane as with orthographic projection, but strike the projection plane at an angle other than ninety degrees. In both orthographic and oblique projection, parallel lines in space appear parallel on the projected image. Because of its simplicity, oblique projection is used exclusively for pictorial purposes rather than for formal, working drawings. In an oblique pictorial drawing, the displayed angles among the axes as well as the foreshortening factors (scale) are arbitrary. The distortion created thereby is usually attenuated by aligning one plane of the imaged object to be parallel with the plane of projection thereby creating a true shape, full-size image of the chosen plane. Special types of oblique projections are:
==== Cavalier projection (45°) ====
In cavalier projection (sometimes cavalier perspective or high view point) a point of the object is represented by three coordinates, x, y and z. On the drawing, it is represented by only two coordinates, x″ and y″. On the flat drawing, two axes, x and z on the figure, are perpendicular and the length on these axes are drawn with a 1:1 scale; it is thus similar to the dimetric projections, although it is not an axonometric projection, as the third axis, here y, is drawn in diagonal, making an arbitrary angle with the x″ axis, usually 30 or 45°. The length of the third axis is not scaled.
==== Cabinet projection ====
The term cabinet projection (sometimes cabinet perspective) stems from its use in illustrations by the furniture industry. Like cavalier perspective, one face of the projected object is parallel to the viewing plane, and the third axis is projected as going off in an angle (typically 30° or 45° or arctan(2) = 63.4°). Unlike cavalier projection, where the third axis keeps its length, with cabinet projection the length of the receding lines is cut in half.
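The cavalier and cabinet projections just described differ only in the depth scale. As a sketch (the axis convention follows the description above, with x and z in the drawing plane and y receding; the function name and defaults are my own illustration), the factor k is 1 for cavalier and 1/2 for cabinet:

```python
import math

# Oblique projection with receding axis y: the x and z axes stay in the
# drawing plane at full scale, while the depth coordinate y is drawn along
# a diagonal at angle `alpha`, scaled by `k`.
# k = 1.0 gives cavalier projection; k = 0.5 gives cabinet projection.
def oblique_project(p, alpha_deg=45.0, k=1.0):
    x, y, z = p
    a = math.radians(alpha_deg)
    return (x + k * y * math.cos(a), z + k * y * math.sin(a))

corner = (1.0, 1.0, 1.0)                # a unit-cube corner
print(oblique_project(corner, k=1.0))   # cavalier: depth kept at full length
print(oblique_project(corner, k=0.5))   # cabinet: receding length halved
```

Points with zero depth (y = 0) are reproduced at true size in both variants, which is why the front face of an object drawn this way is undistorted.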
==== Military projection ====
A variant of oblique projection is called military projection. In this case, the horizontal sections are isometrically drawn so that the floor plans are not distorted and the verticals are drawn at an angle. The military projection is given by a rotation in the xy-plane and a vertical translation by an amount z.
=== Axonometric projection ===
Axonometric projections show an image of an object as viewed from a skew direction in order to reveal all three directions (axes) of space in one picture. Axonometric projections may be either orthographic or oblique. Axonometric instrument drawings are often used to approximate graphical perspective projections, but there is attendant distortion in the approximation. Because pictorial projections innately contain this distortion, in instrument drawings of pictorials great liberties may then be taken for economy of effort and best effect.
Axonometric projection is further subdivided into three categories: isometric projection, dimetric projection, and trimetric projection, depending on the exact angle at which the view deviates from the orthogonal. A typical characteristic of orthographic pictorials is that one axis of space is usually displayed as vertical.
==== Isometric projection ====
In isometric pictorials (for methods, see Isometric projection), the direction of viewing is such that the three axes of space appear equally foreshortened, and there is a common angle of 120° between them. The distortion caused by foreshortening is uniform, therefore the proportionality of all sides and lengths are preserved, and the axes share a common scale. This enables measurements to be read or taken directly from the drawing.
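One common way to realize this mapping on screen (this is one convention of several; the formulas below are a standard game-style isometric mapping and are not taken from the article) sends the three world axes to screen directions 120° apart at a common scale:

```python
import math

# Isometric mapping of a 3D point onto the screen plane: the three axes
# project at 120 degrees to one another and share a common scale, so
# world-axis distances can be measured directly off the drawing.
def isometric_project(p):
    x, y, z = p
    c, s = math.cos(math.radians(30)), math.sin(math.radians(30))
    u = (x - y) * c       # screen x
    v = (x + y) * s - z   # screen y (screen y grows downward)
    return (u, v)

# The three unit axes land 120 degrees apart, each with projected length 1:
for axis in [(1, 0, 0), (0, 1, 0), (0, 0, 1)]:
    u, v = isometric_project(axis)
    print(axis, "->", (round(u, 3), round(v, 3)), "length", round(math.hypot(u, v), 3))
```

The equal projected lengths are exactly the "common scale" property described above: a unit step along any world axis covers the same screen distance.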
==== Dimetric projection ====
In dimetric pictorials (for methods, see Dimetric projection), the direction of viewing is such that two of the three axes of space appear equally foreshortened, of which the attendant scale and angles of presentation are determined according to the angle of viewing; the scale of the third direction (vertical) is determined separately. Approximations are common in dimetric drawings.
==== Trimetric projection ====
In trimetric pictorials (for methods, see Trimetric projection), the direction of viewing is such that all of the three axes of space appear unequally foreshortened. The scale along each of the three axes and the angles among them are determined separately as dictated by the angle of viewing. Approximations in Trimetric drawings are common.
=== Limitations of parallel projection ===
Objects drawn with parallel projection do not appear larger or smaller as they extend closer to or away from the viewer. While advantageous for architectural drawings, where measurements must be taken directly from the image, the result is a perceived distortion, since unlike perspective projection, this is not how our eyes or photography normally work. It also can easily result in situations where depth and altitude are difficult to gauge, as is shown in the illustration to the right.
In this isometric drawing, the blue sphere is two units higher than the red one. However, this difference in elevation is not apparent if one covers the right half of the picture, as the boxes (which serve as clues suggesting height) are then obscured.
This visual ambiguity has been exploited in op art, as well as "impossible object" drawings. M. C. Escher's Waterfall (1961), while not strictly utilizing parallel projection, is a well-known example, in which a channel of water seems to travel unaided along a downward path, only to then paradoxically fall once again as it returns to its source. The water thus appears to disobey the law of conservation of energy. An extreme example is depicted in the film Inception, where by a forced perspective trick an immobile stairway changes its connectivity. The video game Fez uses tricks of perspective to determine where a player can and cannot move in a puzzle-like fashion.
== Perspective projection ==
Perspective projection or perspective transformation is a projection where three-dimensional objects are projected on a picture plane. This has the effect that distant objects appear smaller than nearer objects.
It also means that lines which are parallel in nature (that is, meet at the point at infinity) appear to intersect in the projected image. For example, if railways are pictured with perspective projection, they appear to converge towards a single point, called the vanishing point. Photographic lenses and the human eye work in the same way, therefore the perspective projection looks the most realistic. Perspective projection is usually categorized into one-point, two-point and three-point perspective, depending on the orientation of the projection plane towards the axes of the depicted object.
Graphical projection methods rely on the duality between lines and points, whereby two straight lines determine a point while two points determine a straight line. The orthogonal projection of the eye point onto the picture plane is called the principal vanishing point (P.P. in the scheme on the right, from the Italian term punto principale, coined during the renaissance).
Two relevant points of a line are:
its intersection with the picture plane, and
its vanishing point, found at the intersection between the parallel line from the eye point and the picture plane.
The principal vanishing point is the vanishing point of all horizontal lines perpendicular to the picture plane. The vanishing points of all horizontal lines lie on the horizon line. If, as is often the case, the picture plane is vertical, all vertical lines are drawn vertically, and have no finite vanishing point on the picture plane. Various graphical methods can be easily envisaged for projecting geometrical scenes. For example, lines traced from the eye point at 45° to the picture plane intersect the latter along a circle whose radius is the distance of the eye point from the plane, thus tracing that circle aids the construction of all the vanishing points of 45° lines; in particular, the intersection of that circle with the horizon line consists of two distance points. They are useful for drawing chessboard floors which, in turn, serve for locating the base of objects on the scene. In the perspective of a geometric solid on the right, after choosing the principal vanishing point —which determines the horizon line— the 45° vanishing point on the left side of the drawing completes the characterization of the (equally distant) point of view. Two lines are drawn from the orthogonal projection of each vertex, one at 45° and one at 90° to the picture plane. After intersecting the ground line, those lines go toward the distance point (for 45°) or the principal point (for 90°). Their new intersection locates the projection of the map. Natural heights are measured above the ground line and then projected in the same way until they meet the vertical from the map.
While orthographic projection ignores perspective to allow accurate measurements, perspective projection shows distant objects as smaller to provide additional realism.
=== Mathematical formula ===
The perspective projection requires a more involved definition as compared to orthographic projections. A conceptual aid to understanding the mechanics of this projection is to imagine the 2D projection as though the object(s) are being viewed through a camera viewfinder. The camera's position, orientation, and field of view control the behavior of the projection transformation. The following variables are defined to describe this transformation:
{\displaystyle \mathbf {a} _{x,y,z}} – the 3D position of a point A that is to be projected
{\displaystyle \mathbf {c} _{x,y,z}} – the 3D position of a point C representing the camera
{\displaystyle \mathbf {\theta } _{x,y,z}} – the orientation of the camera (represented by Tait–Bryan angles)
{\displaystyle \mathbf {e} _{x,y,z}} – the display surface's position relative to the aforementioned {\displaystyle \mathbf {c} }
Most conventions use positive z values (the plane being in front of the pinhole {\displaystyle \mathbf {c} }); negative z values are physically more correct, but then the image is inverted both horizontally and vertically.
Which results in:
{\displaystyle \mathbf {b} _{x,y}} – the 2D projection of {\displaystyle \mathbf {a} }.
When {\displaystyle \mathbf {c} _{x,y,z}=\langle 0,0,0\rangle } and {\displaystyle \mathbf {\theta } _{x,y,z}=\langle 0,0,0\rangle ,} the 3D vector {\displaystyle \langle 1,2,0\rangle } is projected to the 2D vector {\displaystyle \langle 1,2\rangle }.
Otherwise, to compute {\displaystyle \mathbf {b} _{x,y}} we first define a vector {\displaystyle \mathbf {d} _{x,y,z}} as the position of point A with respect to a coordinate system defined by the camera, with origin in C and rotated by {\displaystyle \mathbf {\theta } } with respect to the initial coordinate system. This is achieved by subtracting {\displaystyle \mathbf {c} } from {\displaystyle \mathbf {a} } and then applying a rotation by {\displaystyle -\mathbf {\theta } } to the result. This transformation is often called a camera transform, and can be expressed as follows, expressing the rotation in terms of rotations about the x, y, and z axes (these calculations assume that the axes are ordered as a left-handed system of axes):
{\displaystyle {\begin{bmatrix}\mathbf {d} _{x}\\\mathbf {d} _{y}\\\mathbf {d} _{z}\end{bmatrix}}={\begin{bmatrix}1&0&0\\0&\cos(\mathbf {\theta } _{x})&\sin(\mathbf {\theta } _{x})\\0&-\sin(\mathbf {\theta } _{x})&\cos(\mathbf {\theta } _{x})\end{bmatrix}}{\begin{bmatrix}\cos(\mathbf {\theta } _{y})&0&-\sin(\mathbf {\theta } _{y})\\0&1&0\\\sin(\mathbf {\theta } _{y})&0&\cos(\mathbf {\theta } _{y})\end{bmatrix}}{\begin{bmatrix}\cos(\mathbf {\theta } _{z})&\sin(\mathbf {\theta } _{z})&0\\-\sin(\mathbf {\theta } _{z})&\cos(\mathbf {\theta } _{z})&0\\0&0&1\end{bmatrix}}\left({{\begin{bmatrix}\mathbf {a} _{x}\\\mathbf {a} _{y}\\\mathbf {a} _{z}\\\end{bmatrix}}-{\begin{bmatrix}\mathbf {c} _{x}\\\mathbf {c} _{y}\\\mathbf {c} _{z}\\\end{bmatrix}}}\right)}
This representation corresponds to rotating by three Euler angles (more properly, Tait–Bryan angles), using the xyz convention, which can be interpreted either as "rotate about the extrinsic axes (axes of the scene) in the order z, y, x (reading right-to-left)" or "rotate about the intrinsic axes (axes of the camera) in the order x, y, z (reading left-to-right)". If the camera is not rotated ({\displaystyle \mathbf {\theta } _{x,y,z}=\langle 0,0,0\rangle }), then the matrices drop out (as identities), and this reduces to simply a shift:
{\displaystyle \mathbf {d} =\mathbf {a} -\mathbf {c} .}
Alternatively, without using matrices (let us replace {\displaystyle a_{x}-c_{x}} with {\displaystyle \mathbf {x} } and so on, and abbreviate {\displaystyle \cos \left(\theta _{\alpha }\right)} to {\displaystyle cos_{\alpha }} and {\displaystyle \sin \left(\theta _{\alpha }\right)} to {\displaystyle sin_{\alpha }}):
{\displaystyle {\begin{aligned}\mathbf {d} _{x}&=cos_{y}(sin_{z}\mathbf {y} +cos_{z}\mathbf {x} )-sin_{y}\mathbf {z} \\\mathbf {d} _{y}&=sin_{x}(cos_{y}\mathbf {z} +sin_{y}(sin_{z}\mathbf {y} +cos_{z}\mathbf {x} ))+cos_{x}(cos_{z}\mathbf {y} -sin_{z}\mathbf {x} )\\\mathbf {d} _{z}&=cos_{x}(cos_{y}\mathbf {z} +sin_{y}(sin_{z}\mathbf {y} +cos_{z}\mathbf {x} ))-sin_{x}(cos_{z}\mathbf {y} -sin_{z}\mathbf {x} )\end{aligned}}}
This transformed point can then be projected onto the 2D plane using the formula (here, x/y is used as the projection plane; literature also may use x/z):
{\displaystyle {\begin{aligned}\mathbf {b} _{x}&={\frac {\mathbf {e} _{z}}{\mathbf {d} _{z}}}\mathbf {d} _{x}+\mathbf {e} _{x},\\[5pt]\mathbf {b} _{y}&={\frac {\mathbf {e} _{z}}{\mathbf {d} _{z}}}\mathbf {d} _{y}+\mathbf {e} _{y}.\end{aligned}}}
Or, in matrix form using homogeneous coordinates, the system
{\displaystyle {\begin{bmatrix}\mathbf {f} _{x}\\\mathbf {f} _{y}\\\mathbf {f} _{w}\end{bmatrix}}={\begin{bmatrix}1&0&{\frac {\mathbf {e} _{x}}{\mathbf {e} _{z}}}\\0&1&{\frac {\mathbf {e} _{y}}{\mathbf {e} _{z}}}\\0&0&{\frac {1}{\mathbf {e} _{z}}}\end{bmatrix}}{\begin{bmatrix}\mathbf {d} _{x}\\\mathbf {d} _{y}\\\mathbf {d} _{z}\end{bmatrix}}}
in conjunction with an argument using similar triangles, leads to division by the homogeneous coordinate, giving
{\displaystyle {\begin{aligned}\mathbf {b} _{x}&=\mathbf {f} _{x}/\mathbf {f} _{w}\\\mathbf {b} _{y}&=\mathbf {f} _{y}/\mathbf {f} _{w}\end{aligned}}}
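The whole chain (camera transform followed by the perspective divide) can be sketched in a few lines of Python. This is a minimal illustration using the rotation sign conventions of the matrix form above; the function name, defaults, and example values are mine, not the article's:

```python
import math

# Perspective projection pipeline: camera transform d = Rx*Ry*Rz*(a - c),
# followed by the perspective divide b = (ez/dz)*(dx, dy) + (ex, ey).
def perspective_project(a, c=(0, 0, 0), theta=(0, 0, 0), e=(0, 0, 1)):
    tx, ty, tz = theta
    x, y, z = (a[i] - c[i] for i in range(3))
    # Apply Rz, then Ry, then Rx (matching the matrix product Rx*Ry*Rz above).
    x, y = math.cos(tz) * x + math.sin(tz) * y, -math.sin(tz) * x + math.cos(tz) * y
    x, z = math.cos(ty) * x - math.sin(ty) * z, math.sin(ty) * x + math.cos(ty) * z
    y, z = math.cos(tx) * y + math.sin(tx) * z, -math.sin(tx) * y + math.cos(tx) * z
    dx, dy, dz = x, y, z
    ex, ey, ez = e
    # Perspective divide: scale by ez/dz, then offset by (ex, ey).
    return (ez / dz * dx + ex, ez / dz * dy + ey)

# A point 10 units in front of an unrotated camera at the origin,
# viewed through a display plane at ez = 1:
print(perspective_project((1, 2, 10)))  # -> (0.1, 0.2)
```

Moving the camera halfway toward the point (c = (0, 0, 5)) doubles the projected size, which is exactly the "distant objects appear smaller" behavior the section describes.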
The distance of the viewer from the display surface, {\displaystyle \mathbf {e} _{z}}, directly relates to the field of view: {\displaystyle \alpha =2\cdot \arctan(1/\mathbf {e} _{z})} is the viewed angle. (Note: this assumes that the points (−1,−1) and (1,1) are mapped to the corners of the viewing surface.)
The above equations can also be rewritten as:
{\displaystyle {\begin{aligned}\mathbf {b} _{x}&=(\mathbf {d} _{x}\mathbf {s} _{x})/(\mathbf {d} _{z}\mathbf {r} _{x})\mathbf {r} _{z},\\\mathbf {b} _{y}&=(\mathbf {d} _{y}\mathbf {s} _{y})/(\mathbf {d} _{z}\mathbf {r} _{y})\mathbf {r} _{z}.\end{aligned}}}
In which {\displaystyle \mathbf {s} _{x,y}} is the display size, {\displaystyle \mathbf {r} _{x,y}} is the recording surface size (CCD or photographic film), {\displaystyle \mathbf {r} _{z}} is the distance from the recording surface to the entrance pupil (camera center), and {\displaystyle \mathbf {d} _{z}} is the distance from the 3D point being projected to the entrance pupil.
Subsequent clipping and scaling operations may be necessary to map the 2D plane onto any particular display media.
=== Weak perspective projection ===
A "weak" perspective projection uses the same principles of an orthographic projection, but requires the scaling factor to be specified, thus ensuring that closer objects appear bigger in the projection, and vice versa. It can be seen as a hybrid between an orthographic and a perspective projection, and described either as a perspective projection with individual point depths
Z
i
{\displaystyle Z_{i}}
replaced by an average constant depth
Z
ave
{\displaystyle Z_{\text{ave}}}
, or simply as an orthographic projection plus a scaling.
The weak-perspective model thus approximates perspective projection while using a simpler model, similar to the pure (unscaled) orthographic perspective.
It is a reasonable approximation when the depth of the object along the line of sight is small compared to the distance from the camera, and the field of view is small. With these conditions, it can be assumed that all points on a 3D object are at the same distance
Z
ave
{\displaystyle Z_{\text{ave}}}
from the camera without significant errors in the projection (compared to the full perspective model).
Equation:
{\displaystyle {\begin{aligned}&P_{x}={\frac {X}{Z_{\text{ave}}}}\\[5pt]&P_{y}={\frac {Y}{Z_{\text{ave}}}}\end{aligned}}}
assuming focal length {\displaystyle f=1}.
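A minimal sketch of this approximation (my own illustration, assuming focal length f = 1 as above): every point is divided by the one shared average depth rather than by its own depth.

```python
# Weak perspective: replace each point's own depth Z_i by one shared average
# depth Z_ave, i.e. an orthographic projection followed by a uniform scaling.
def weak_perspective(points):
    """Project a list of 3D points (x, y, z) to 2D using their mean depth."""
    z_ave = sum(z for _, _, z in points) / len(points)
    return [(x / z_ave, y / z_ave) for x, y, _ in points]

# A small, distant object: per-point depths 99..101 are all treated as 100,
# so every point is simply scaled by 1/100.
pts = [(10, 0, 99), (0, 10, 100), (10, 10, 101)]
print(weak_perspective(pts))
```

Because the depth spread (2 units) is tiny compared to the distance (about 100 units), the error relative to a full perspective divide is on the order of one percent, matching the validity condition stated above.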
== Diagram ==
To determine which screen x-coordinate corresponds to a point at {\displaystyle (A_{x},A_{z})}, multiply the point coordinates by:
{\displaystyle B_{x}=A_{x}{\frac {B_{z}}{A_{z}}}}
where {\displaystyle B_{x}} is the screen x coordinate, {\displaystyle A_{x}} is the model x coordinate, {\displaystyle B_{z}} is the focal length (the axial distance from the camera center to the image plane), and {\displaystyle A_{z}} is the subject distance.
Since the camera operates in 3D, the same principle applies to the screen's y coordinate: one can substitute y for x in the diagram and equation above.
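This similar-triangles relation is a one-liner. The sketch below (function name and example values are illustrative, not from the article) applies it to both screen coordinates at once:

```python
# Screen coordinates from similar triangles: B_x = A_x * (B_z / A_z),
# with B_z the focal length and A_z the subject distance; the same
# relation is applied to the y coordinate.
def to_screen(ax, ay, az, focal=1.0):
    return (ax * focal / az, ay * focal / az)

print(to_screen(4.0, 2.0, 8.0, focal=2.0))   # (1.0, 0.5)
print(to_screen(4.0, 2.0, 16.0, focal=2.0))  # (0.5, 0.25): twice as far, half the size
```

Doubling the subject distance halves both screen coordinates, which is the foreshortening behavior perspective projection is meant to capture.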
Alternatively, clipping techniques can be used. These involve substituting values of a point outside the field of view (FOV) with interpolated values from a corresponding point inside the camera's view matrix.
This approach, often referred to as the inverse camera method, involves performing a perspective projection calculation using known values. It determines the last visible point along the viewing frustum by projecting from an out-of-view (invisible) point after all necessary transformations have been applied.
== See also ==
== References ==
== Further reading ==
Kenneth C. Finney (2004). 3D Game Programming All in One. Thomson Course. p. 93. ISBN 978-1-59200-136-1. 3D projection.
Koehler, Ralph (December 2000). 2D/3D Graphics and Splines with Source Code. Author Solutions Incorporated. ISBN 978-0759611870.
== External links ==
Creating 3D Environments from Digital Photographs | Wikipedia/Perspective_transform |
A variety of computer graphic techniques have been used to display video game content throughout the history of video games. The predominance of individual techniques has evolved over time, primarily due to hardware advances and restrictions such as the processing power of central or graphics processing units.
== Text-based ==
Some of the earliest video games were text games or text-based games that used text characters instead of bitmapped or vector graphics. Examples include MUDs (multi-user dungeons), where players could read or view depictions of rooms, objects, other players, and actions performed in the virtual world; and roguelikes, a subgenre of role-playing video games featuring many monsters, items, and environmental effects, as well as an emphasis on randomization, replayability and permanent death. Some of the earliest text games were developed for computer systems which had no video display at all.
Text games are typically easier to write and require less processing power than graphical games, and thus were more common from 1970 to 1990. However, terminal emulators are still in use today, and people continue to play MUDs and explore interactive fiction. Many beginning programmers still create these types of games to familiarize themselves with a programming language, and contests are still held even today on who can finish programming a roguelike within a short time period, such as seven days.
== Vector graphics ==
Vector graphics refer to the use of geometrical primitives such as points, lines, and curves (i.e., shapes based on mathematical equations) instead of resolution-dependent bitmap graphics to represent images in computer graphics. In video games this type of projection is somewhat rare, but has become more common in recent years in browser-based gaming with the advent of Flash and HTML5 Canvas, as these support vector graphics natively. An earlier example for the personal computer is Starglider (1986).
Vector game can also refer to a video game that uses a vector graphics display capable of projecting images using an electron beam to draw images instead of with pixels, much like a laser show. Many early arcade games used such displays, as they were capable of displaying more detailed images than raster displays on the hardware available at that time. Many vector-based arcade games used full-color overlays to complement the otherwise monochrome vector images. Other uses of these overlays were very detailed drawings of the static gaming environment, while the moving objects were drawn by the vector beam. Games of this type were produced mainly by Atari, Cinematronics, and Sega. Examples of vector games include Asteroids, Armor Attack, Aztarac, Eliminator, Lunar Lander, Space Fury, Space Wars, Star Trek, Tac/Scan, Tempest and Zektor. The Vectrex home console also used a vector display. After 1985, the use of vector graphics declined substantially due to improvements in sprite technology; rasterized 3D Filled Polygon Graphics returned to the arcades and were so popular that vector graphics could no longer compete.
== Full motion video ==
Full motion video (FMV) games are video games that rely upon pre-recorded television- or movie-quality recordings and animations rather than sprites, vectors or 3D models to display action in the game. FMV-based games were popular during the early 1990s as CD-ROMs and Laserdiscs made their way into the living rooms, providing an alternative to the low-capacity ROM cartridges of most consoles at the time. Although FMV-based games did manage to look better than many contemporary sprite-based games, they occupied a niche market; and a vast majority of FMV games were panned at the time of their release, with many gamers citing their dislike for the lack of interaction inherent in these games. As a result, the format became a well-known failure in video gaming, and the popularity of FMV games declined substantially after 1995 as more advanced consoles started to become widely available.
A number of different types of games utilized this format. Some resembled modern music/dance games, where the player presses buttons in time with on-screen instructions. Others included early rail shooters such as Tomcat Alley, Surgical Strike and Sewer Shark. Full motion video was also used in several interactive movie adventure games, such as The Beast Within: A Gabriel Knight Mystery and Phantasmagoria.
== 2D ==
Games utilizing parallel projection typically make use of two-dimensional bitmap graphics as opposed to 3D-rendered triangle-based geometry, allowing developers to create large, complex gameworlds efficiently and with relatively few art assets by dividing the art into sprites or tiles and reusing them repeatedly (though some games use a mix of different techniques).
=== Top-down perspective ===
Top-down perspective, also sometimes referred to as bird's-eye view, Overworld, Godview, overhead view, or helicopter view, when used in the context of video games, refers to a camera angle that shows players and the areas around them from above. While not exclusive to video games utilizing parallel projection, it was at one time common among 2D role playing video games, wargames, and construction and management simulation games, such as SimCity, Pokémon, and Railroad Tycoon; as well as among action and action-adventure games, such as the early The Legend of Zelda, Metal Gear, and Grand Theft Auto games.
=== Side-scrolling game ===
A side-scrolling game or side-scroller is a video game in which the viewpoint is taken from the side, and the onscreen characters generally can only move to the left or right. Games of this type make use of scrolling computer display technology, and sometimes parallax scrolling to suggest added depth.
In many games the screen follows the player character such that the player character is always positioned near the center of the screen. In other games the position of the screen will change according to the player character's movement, such that the player character is off-center and more space is shown in front of the character than behind. Sometimes, the screen will scroll not only forward at the speed and direction of the player character's movement, but also backwards to previously visited parts of a stage. In other games or stages, the screen will only scroll forwards, not backwards, so that once a stage has been passed it can no longer be visited. In shoot 'em ups such as R-Type, the screen scrolls forward by itself at a steady rate, and the player must keep up with it, attempting to avoid obstacles and collect items before they pass off screen.
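The scrolling behaviours described above can be sketched as simple camera rules; the function names and parameters here are illustrative assumptions, not taken from any engine:

```python
# Three camera behaviours for a side-scroller, each returning the new
# camera x-offset (the left edge of the visible window).

def centered_camera(player_x, screen_w):
    """Screen follows the player, keeping them near the centre."""
    return player_x - screen_w / 2

def forward_only_camera(camera_x, player_x, screen_w):
    """Screen scrolls forward only; it never returns to passed areas."""
    return max(camera_x, player_x - screen_w / 2)

def auto_scroll_camera(camera_x, speed, dt):
    """Screen advances at a steady rate, as in many shoot 'em ups."""
    return camera_x + speed * dt

print(centered_camera(500, 320))             # 340.0
print(forward_only_camera(340.0, 300, 320))  # 340.0 (no backward scroll)
```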
Examples of side-scrolling games include platform games such as Sonic the Hedgehog and Ori and the Blind Forest, beat 'em ups such as the popular Double Dragon and Battletoads, and shooters such as R-Type and (more recently) Jets'n'Guns. The Super Mario Bros. series has used all three types of side-scrolling at some time in its history.
=== 2.5D, 3/4 perspective, and pseudo-3D ===
2.5D ("two-and-a-half-dimensional"), 3/4 perspective and pseudo-3D are informal terms used to describe graphical projections and techniques that try to "fake" three-dimensionality, typically by using some form of parallel projection, wherein the point of view is from a fixed perspective, but also reveals multiple facets of an object. Examples of pseudo-3D techniques include isometric/axonometric projection, oblique projection, orthographic projection, billboarding, parallax scrolling, scaling, skyboxes, and skydomes. In addition, 3D graphical techniques such as bump mapping and parallax mapping are often used to extend the illusion of three-dimensionality without substantially increasing the resulting computational overhead introduced by larger numbers of polygons (also known as the "polygon count").
These terms sometimes possess a second meaning, wherein the gameplay in an otherwise 3D game is forcibly restricted to a two-dimensional plane.
Examples of games that make use of pseudo-3D techniques include Zaxxon, The Sims and Diablo (isometric/axonometric projection); Ultima VII and Paperboy (oblique projection); Sonic the Hedgehog and Street Fighter II (parallax scrolling); Fonz and Space Harrier (scaling); and Half-Life 2 (skyboxes). In addition to axonometric projection, games such as The Sims and Final Fantasy Tactics also make use of a combination of pre-drawn 2D sprites and real-time polygonal graphics instead of relying entirely on 2D sprites as is the norm.
== 3D ==
With the advent of 3D accelerated graphics, video games could expand beyond the typically sprite-based 2D graphics of older graphics technologies to render views frequently more true to life than their predecessors'. Federica Romagnoli has stated that in her opinion, high-budget 3D game graphics "display...levels of artistry once more commonly found in films" because of their capability to render complex cinematography and CG characters and the optimization of video game consoles and PCs to be able to handle such content. Perspective projection has also been used in some earlier titles to present a 3D view from a fixed (and thus somewhat less hardware-intensive) perspective with a limited ability to move.
=== Voxel engines ===
Instead of using triangle meshes, voxel engines use voxels.
=== Fixed 3D ===
Fixed 3D refers to a three-dimensional representation of the game world where foreground objects (i.e. game characters) are typically rendered in real time against a static background. The principal advantage of this technique is its ability to display a high level of detail on minimal hardware. The main disadvantage is that the player's frame of reference remains fixed at all times, preventing players from examining or moving about the environment from multiple viewpoints.
Backgrounds in fixed 3D games tend to be pre-rendered two-dimensional images, but are sometimes rendered in real time (e.g. Blade Runner). The developers of SimCity 4 took advantage of fixed perspective by not texturing the reverse sides of objects (and thereby speeding up rendering) which players could not see anyway. Fixed 3D is also sometimes used to "fake" areas which are inaccessible to players. The Legend of Zelda: Ocarina of Time, for instance, is nearly completely 3D, but uses fixed 3D to represent many of the building interiors as well as one entire town (this technique was later dropped in favor of full-3D in the game's successor, The Legend of Zelda: Majora's Mask). A similar technique, the skybox, is used in many 3D games to represent distant background objects that are not worth rendering in real time.
Used heavily in the survival horror genre, fixed 3D was first seen in Infogrames' Alone in the Dark series in the early 1990s and imitated by titles such as Ecstatica. It was later brought back by Capcom in the Resident Evil series. Gameplay-wise there is little difference between fixed 3D games and their 2D precursors. Players' ability to navigate within a scene still tends to be limited, and interaction with the gameworld remains mostly "point-and-click".
Further examples include the PlayStation-era titles in the Final Fantasy series (Square); the role-playing games Parasite Eve and Parasite Eve II (Square); the action-adventure games Ecstatica and Ecstatica 2 (Andrew Spencer/Psygnosis), as well as Little Big Adventure (Adeline Software International); the graphic adventure Grim Fandango (LucasArts); and 3D Movie Maker (Microsoft Kids).
Pre-rendered backgrounds are also found in some isometric video games, such as the role-playing game The Temple of Elemental Evil (Troika Games) and the Baldur's Gate series (BioWare); though in these cases the form of graphical projection used is not different.
=== First-person perspective ===
First person refers to a graphical perspective rendered from the viewpoint of the player character. In many cases, this may be the viewpoint from the cockpit of a vehicle. Many different genres have made use of first-person perspectives, including adventure games, flight simulators, and the highly popular first-person shooter genre.
Games with a first-person perspective are usually avatar-based, wherein the game displays what the player's avatar would see with the avatar's own eyes. In many games, players cannot see the avatar's body, though they may be able to see the avatar's weapons or hands. This viewpoint is also frequently used to represent the perspective of a driver within a vehicle, as in flight and racing simulators; and it is common to make use of positional audio, where the volume of ambient sounds varies depending on their position with respect to the player's avatar.
Games with a first-person perspective do not require sophisticated animations for the player's avatar, and do not need to implement a manual or automated camera-control scheme as in third-person perspective. A first person perspective allows for easier aiming, since there is no representation of the avatar to block the player's view. However, the absence of an avatar can make it difficult to master the timing and distances required to jump between platforms, and may cause motion sickness in some players.
Players have come to expect first-person games to accurately scale objects to appropriate sizes. However, key objects such as dropped items or levers may be exaggerated in order to improve their visibility.
=== Third-person perspective ===
Third person refers to a graphical perspective rendered from a view that is some distance away (usually behind and slightly above) from the player's character. This viewpoint allows players to see a more strongly characterized avatar, and is most common in action and action-adventure games. This viewpoint poses some difficulties, however, in that when the player turns or stands with his back to a wall, the camera may jerk or end up in awkward positions. Developers have tried to alleviate this issue by implementing intelligent camera systems, or by giving the player control over the camera. There are three primary types of third-person camera systems: "fixed camera systems" in which the camera positions are set during the game creation; "tracking camera systems" in which the camera simply follows the player's character; and "interactive camera systems" that are under the player's control.
Examples of games utilizing third-person perspective include Super Mario 64, the Tomb Raider series, the 3D installments of the Legend of Zelda series, and Crash Bandicoot.
== Other topics ==
=== Stereo graphics ===
Stereoscopic video games use stereoscopic technologies to create depth perception for the player by any form of stereo display. Such games are not to be confused with video games that use 3D computer graphics, which although they feature graphics on screen, do not give the illusion of depth beyond the screen.
=== Virtual reality headset ===
The graphics for virtual reality gaming consist of a special kind of stereo 3D graphics to fit the up-close display. The requirements for latency are also higher to reduce the potential for virtual reality sickness.
=== Multi-monitor setup ===
Many games can run multi-monitor setups to achieve very high display resolutions. Running games in this way can create a greater sense of immersion, e.g. when playing a video racing game or flight simulator or give a tactical advantage due to the higher field of view.
=== Augmented reality ===
Augmented reality games typically use 3D graphics on a single flat screen on a smartphone or tablet, or in a head-mounted display. When playing an AR game on a head-mounted device, the visuals are displayed on transparent glass that overlays the real world and has 3D depth through stereoscopic display.
In technical drawing and computer graphics, a multiview projection is a technique of illustration by which a standardized series of orthographic two-dimensional pictures are constructed to represent the form of a three-dimensional object. Up to six pictures of an object are produced (called primary views), with each projection plane parallel to one of the coordinate axes of the object. The views are positioned relative to each other according to either of two schemes: first-angle or third-angle projection. In each, the appearances of views may be thought of as being projected onto planes that form a six-sided box around the object. Although six different sides can be drawn, usually three views of a drawing give enough information to make a three-dimensional object.
These three views are known as front view (also elevation view), top view or plan view and end view (also profile view or section view).
When the plane or axis of the object depicted is not parallel to the projection plane, and where multiple sides of an object are visible in the same image, it is called an auxiliary view.
== Overview ==
To render each such picture, a ray of sight (also called a projection line, projection ray or line of sight) towards the object is chosen, which determines on the object various points of interest (for instance, the points that are visible when looking at the object along the ray of sight); those points of interest are mapped by an orthographic projection to points on some geometric plane (called a projection plane or image plane) that is perpendicular to the ray of sight, thereby creating a 2D representation of the 3D object.
Customarily, two rays of sight are chosen for each of the three axes of the object's coordinate system; that is, parallel to each axis, the object may be viewed in one of 2 opposite directions, making for a total of 6 orthographic projections (or "views") of the object:
Along a vertical axis (often the y-axis): The top and bottom views, which are known as plans (because they show the arrangement of features on a horizontal plane, such as a floor in a building).
Along a horizontal axis (often the z-axis): The front and back views, which are known as elevations (because they show the heights of features of an object such as a building).
Along an orthogonal axis (often the x-axis): The left and right views, which are also known as elevations, following the same reasoning.
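A minimal sketch of these six projections can be obtained by dropping, for each viewing direction, the coordinate along the ray of sight; the axis assignment and sign conventions below are assumptions for illustration only, since the standards fix them differently:

```python
# Six primary orthographic views of a set of (x, y, z) points,
# with y vertical, z front-back, x left-right, as in the text.
# A sign flip models viewing the object from the opposite direction.

def primary_views(points):
    return {
        "top":    [(x, z) for x, y, z in points],
        "bottom": [(x, -z) for x, y, z in points],
        "front":  [(x, y) for x, y, z in points],
        "back":   [(-x, y) for x, y, z in points],
        "right":  [(z, y) for x, y, z in points],
        "left":   [(-z, y) for x, y, z in points],
    }

views = primary_views([(1, 2, 3)])
print(views["front"], views["top"])  # [(1, 2)] [(1, 3)]
```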
These six planes of projection intersect each other, forming a box around the object, the most uniform construction of which is a cube; traditionally, these six views are presented together by first projecting the 3D object onto the 2D faces of a cube, and then "unfolding" the faces of the cube such that all of them are contained within the same plane (namely, the plane of the medium on which all of the images will be presented together, such as a piece of paper, or a computer monitor, etc.). However, even if the faces of the box are unfolded in one standardized way, there is ambiguity as to which projection is being displayed by a particular face; the cube has two faces that are perpendicular to a ray of sight, and the points of interest may be projected onto either one of them, a choice which has resulted in two predominant standards of projection:
First-angle projection: In this type of projection, the object is imagined to be in the first quadrant. Because the observer normally looks from the right side of the quadrant to obtain the front view, the objects will come in between the observer and the plane of projection. Therefore, in this case, the object is imagined to be transparent, and the projectors are imagined to be extended from various points of the object to meet the projection plane. When these meeting points are joined in order on the plane they form an image, thus in the first angle projection, any view is so placed that it represents the side of the object away from it. First angle projection is often used throughout parts of Europe so that it is often called European projection.
Third-angle projection: In this type of projection, the object is imagined to be in the third quadrant. Again, as the observer is normally supposed to look from the right side of the quadrant to obtain the front view, in this method, the projection plane comes in between the observer and the object. Therefore, the plane of projection is assumed to be transparent. The intersection of this plane with the projectors from all the points of the object would form an image on the transparent plane.
== Primary views ==
Multiview projections show the primary views of an object, each viewed in a direction parallel to one of the main coordinate axes. These primary views are called plans and elevations. Sometimes they are shown as if the object has been cut across or sectioned to expose the interior: these views are called sections.
=== Plan ===
A plan is a view of a 3-dimensional object seen from vertically above (or sometimes below). It may be drawn in the position of a horizontal plane passing through, above, or below the object. The outline of a shape in this view is sometimes called its planform, for example with aircraft wings.
The plan view from above a building is called its roof plan. A section seen in a horizontal plane through the walls and showing the floor beneath is called a floor plan.
=== Elevation ===
Elevation is the view of a 3-dimensional object from the position of a vertical plane beside an object. In other words, an elevation is a side view as viewed from the front, back, left or right (and referred to as a front elevation, [left/ right] side elevation, and a rear elevation).
An elevation is a common method of depicting the external configuration and detailing of a 3-dimensional object in two dimensions. Building façades are shown as elevations in architectural drawings and technical drawings.
Elevations are the most common orthographic projection for conveying the appearance of a building from the exterior. Perspectives are also commonly used for this purpose. A building elevation is typically labeled in relation to the compass direction it faces, i.e. the direction from which a person views it. For example, the north elevation of a building is the side that most closely faces true north on the compass.
Interior elevations are used to show details such as millwork and trim configurations.
In the building industry elevations are non-perspective views of the structure. These are drawn to scale so that measurements can be taken for any aspect necessary. Drawing sets include front, rear, and both side elevations. The elevations specify the composition of the different facades of the building, including ridge heights, the positioning of the final fall of the land, exterior finishes, roof pitches, and other architectural details.
==== Developed elevation ====
A developed elevation is a variant of a regular elevation view in which several adjacent non-parallel sides may be shown together as if they have been unfolded. For example, the north and west views may be shown side-by-side, sharing an edge, even though this does not represent a proper orthographic projection.
=== Section ===
A section, or cross-section, is a view of a 3-dimensional object from the position of a plane through the object.
A section is a common method of depicting the internal arrangement of a 3-dimensional object in two dimensions. It is often used in technical drawing and is traditionally crosshatched. The style of crosshatching often indicates the type of material the section passes through.
With computed axial tomography, computers construct cross-sections from x-ray data.
== Auxiliary views ==
An auxiliary view, or pictorial, is an orthographic view that is projected into any plane other than one of the six primary views. These views are typically used when an object has a surface in an oblique plane. By projecting into a plane parallel with the oblique surface, the true size and shape of the surface are shown. Auxiliary views are often drawn using isometric projection.
== Multiviews ==
=== Quadrants in descriptive geometry ===
Modern orthographic projection is derived from Gaspard Monge's descriptive geometry. Monge defined a reference system of two viewing planes, horizontal H ("ground") and vertical V ("backdrop"). These two planes intersect to partition 3D space into 4 quadrants, which he labeled:
I: above H, in front of V
II: above H, behind V
III: below H, behind V
IV: below H, in front of V
These quadrant labels are the same as used in 2D planar geometry, as seen from infinitely far to the "left", taking H and V to be the X-axis and Y-axis, respectively.
The 3D object of interest is then placed into either quadrant I or III (equivalently, the position of the intersection line between the two planes is shifted), obtaining first- and third-angle projections, respectively. Quadrants II and IV are also mathematically valid, but their use would result in one view "true" and the other view "flipped" by 180° through its vertical centerline, which is too confusing for technical drawings. (In cases where such a view is useful, e.g. a ceiling viewed from above, a reflected view is used, which is a mirror image of the true orthographic view.)
Monge's original formulation uses two planes only and obtains the top and front views only. The addition of a third plane to show a side view (either left or right) is a modern extension. The terminology of quadrant is a mild anachronism, as a modern orthographic projection with three views corresponds more precisely to an octant of 3D space.
=== First-angle projection ===
In first-angle projection, the object is conceptually located in quadrant I, i.e. it floats above and before the viewing planes, the planes are opaque, and each view is pushed through the object onto the plane furthest from it. (Mnemonic: an "actor on a stage".) Extending to the 6-sided box, each view of the object is projected in the direction (sense) of sight of the object, onto the (opaque) interior walls of the box; that is, each view of the object is drawn on the opposite side of the box. A two-dimensional representation of the object is then created by "unfolding" the box, to view all of the interior walls. This produces two plans and four elevations. A simpler way to visualize this is to place the object on top of an upside-down bowl. Sliding the object down the right edge of the bowl reveals the right side view.
=== Third-angle projection ===
In third-angle projection, the object is conceptually located in quadrant III, i.e. it is positioned below and behind the viewing planes, the planes are transparent, and each view is pulled onto the plane closest to it. Using the six-sided viewing box, each view of the object is projected opposite to the direction (sense) of sight, onto the (transparent) exterior walls of the box; that is, each view of the object is drawn on the corresponding side of the box. The box is then unfolded to view all of its exterior walls.
Below is the construction of third-angle projections of the same object as above. The individual views are the same, just arranged differently.
=== Additional information ===
First-angle projection is as if the object were sitting on the paper and, from the face (front) view, it is rolled to the right to show the left side or rolled up to show its bottom. It is standard throughout Europe and Asia (excluding Japan). First-angle projection was widely used in the UK, but during World War II, British drawings sent to be manufactured in the USA, such as those of the Rolls-Royce Merlin, had to be redrawn in third-angle projection before they could be produced, e.g., as the Packard V-1650 Merlin. This meant that some British companies completely adopted third-angle projection. BS 308 (Part 1) Engineering Drawing Practice gave the option of using both projections, but generally, every illustration (other than the ones explaining the difference between first- and third-angle) was done in first-angle. After the withdrawal of BS 308 in 1999, BS 8888 offered the same choice since it referred directly to ISO 5456-2, Technical drawings – Projection methods – Part 2: Orthographic representations.
Third-angle is as if the object were a box to be unfolded. If we unfold the box so that the front view is in the center of the two arms, then the top view is above it, the bottom view is below it, the left view is to the left, and the right view is to the right. It is standard in the USA (ASME Y14.3-2003 specifies it as the default projection system), Japan (JIS B 0001:2010 specifies it as the default projection system), Canada, and Australia (AS1100.101 specifies it as the preferred projection system).
Both first-angle and third-angle projections result in the same 6 views; the difference between them is the arrangement of these views around the box.
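One way to make the arrangement difference concrete is to express each convention as a placement of the same six views on a grid; the (row, column) positions below are illustrative only:

```python
# The same six views; only their positions around the front view differ.
# In third-angle the right-side view is drawn to the right of the front
# view; in first-angle it is drawn to the left (and top/bottom swap).

THIRD_ANGLE = {"top": (0, 1), "left": (1, 0), "front": (1, 1),
               "right": (1, 2), "back": (1, 3), "bottom": (2, 1)}
FIRST_ANGLE = {"bottom": (0, 1), "right": (1, 0), "front": (1, 1),
               "left": (1, 2), "back": (1, 3), "top": (2, 1)}

# Identical view sets, different layout:
print(set(THIRD_ANGLE) == set(FIRST_ANGLE))        # True
print(THIRD_ANGLE["right"], FIRST_ANGLE["right"])  # (1, 2) (1, 0)
```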
=== Symbol ===
A great deal of confusion has ensued in drafting rooms and engineering departments when drawings are transferred from one convention to another. On engineering drawings, the projection is denoted by an international symbol representing a truncated cone in either first-angle or third-angle projection, as shown by the diagram on the right.
The 3D interpretation is a solid truncated cone, with the small end pointing toward the viewer. The front view is, therefore, two concentric circles. The fact that the inner circle is drawn with a solid line instead of dashed identifies this view as the front view, not the rear view. The side view is an isosceles trapezoid.
In first-angle projection, the front view is pushed back to the rear wall, and the right side view is pushed to the left wall, so the first-angle symbol shows the trapezoid with its shortest side away from the circles.
In third-angle projection, the front view is pulled forward to the front wall, and the right side view is pulled to the right wall, so the third-angle symbol shows the trapezoid with its shortest side towards the circles.
== Multiviews without rotation ==
Orthographic multiview projection is derived from the principles of descriptive geometry and may produce an image of a specified, imaginary object as viewed from any direction of space. Orthographic projection is distinguished by parallel projectors emanating from all points of the imaged object and intersecting the plane of projection at right angles. Above, a technique is described that obtains varying views by projecting images after the object is rotated to the desired position.
Descriptive geometry customarily relies on obtaining various views by imagining an object to be stationary and changing the direction of projection (viewing) in order to obtain the desired view.
See Figure 1. Using the rotation technique above, note that no orthographic view is available looking perpendicularly at any of the inclined surfaces. Suppose a technician desired such a view to, say, look through a hole to be drilled perpendicularly to the surface. Such a view might be desired for calculating clearances or for dimensioning purposes. To obtain this view without multiple rotations requires the principles of Descriptive Geometry. The steps below describe the use of these principles in third angle projection.
Fig.1: Pictorial of the imaginary object that the technician wishes to image.
Fig.2: The object is imagined behind a vertical plane of projection. The angled corner of the plane of projection is addressed later.
Fig.3: Projectors emanate parallel from all points of the object, perpendicular to the plane of projection.
Fig.4: An image is created thereby.
Fig.5: A second, horizontal plane of projection is added, perpendicular to the first.
Fig.6: Projectors emanate parallel from all points of the object perpendicular to the second plane of projection.
Fig.7: An image is created thereby.
Fig.8: The third plane of projection is added, perpendicular to the previous two.
Fig.9: Projectors emanate parallel from all points of the object perpendicular to the third plane of projection.
Fig.10: An image is created thereby.
Fig.11: The fourth plane of projection is added parallel to the chosen inclined surface, and perforce, perpendicular to the first (frontal) plane of projection.
Fig.12: Projectors emanate parallel from all points of the object perpendicularly from the inclined surface, and perforce, perpendicular to the fourth (auxiliary) plane of projection.
Fig.13: An image is created thereby.
Fig.14-16: The various planes of projection are unfolded to be planar with the Frontal plane of projection.
Fig.17: The final appearance of an orthographic multiview projection, which includes an auxiliary view showing the true shape of an inclined surface.
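The projection procedure of Figures 2 through 13 can be sketched numerically: an auxiliary view is simply an orthographic projection of the object's points onto a plane chosen parallel to the inclined surface. A minimal sketch, with face coordinates and normal invented for illustration (not taken from Figure 1):

```python
import numpy as np

def auxiliary_view(points, normal):
    """Project 3D points onto a plane with the given normal,
    returning 2D coordinates in that plane (an auxiliary view)."""
    n = np.asarray(normal, dtype=float)
    n /= np.linalg.norm(n)
    # Build two orthonormal in-plane axes u, v perpendicular to n.
    helper = np.array([1.0, 0.0, 0.0])
    if abs(n @ helper) > 0.9:          # avoid a near-parallel helper vector
        helper = np.array([0.0, 1.0, 0.0])
    u = np.cross(n, helper)
    u /= np.linalg.norm(u)
    v = np.cross(n, u)
    pts = np.asarray(points, dtype=float)
    return np.stack([pts @ u, pts @ v], axis=1)

# Corners of a hypothetical inclined rectangular face (normal (1, 0, 1)).
face = [(0, 0, 0), (0, 2, 0), (1, 0, -1), (1, 2, -1)]
view = auxiliary_view(face, normal=(1, 0, 1))
```

Because displacements within the face are perpendicular to the projection direction, the projection preserves their lengths, which is exactly why an auxiliary view taken parallel to an inclined surface shows that surface's true shape.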
== Territorial use ==
First-angle projection is used in most of the world.
Third-angle projection is most commonly used in the United States and Japan (in JIS B 0001:2010) and is preferred in Australia, as laid down in AS 1100.101—1992 6.3.3.
In the UK, BS 8888 9.7.2.1 allows for three different conventions for arranging views: labelled views, third-angle projection, and first-angle projection.
== See also ==
Architectural drawing
Cross section (geometry)
Engineering drawing
Graphical projection
Plans (drawings)
== References ==
BS 308 (Part 1) Engineering Drawing Practice
BS 8888 Technical product documentation and specification
ISO 5456-2 Technical drawings – Projection methods – Part 2: Orthographic Representations (includes the truncated cone symbol)
== External links ==
Educational website describing the principles of first and third angle projection — University of Limerick
Educational website describing the principles of first and third angle projection
Images tagged "Elevation" on Flickr.com
Basic Projection Method first angle vs the third angle
A category of fine art, graphic art covers a broad range of visual artistic expression, typically two-dimensional graphics, i.e. produced on a flat surface, today normally paper or a screen on various electronic devices. The term usually refers to the arts that rely more on line, color or tone, especially drawing and the various forms of engraving; it is sometimes understood to refer specifically to drawing and the various printmaking processes, such as line engraving, aquatint, drypoint, etching, mezzotint, monotype, lithography, and screen printing (silk-screen, serigraphy). Graphic art also includes calligraphy, photography, painting, typography, computer graphics, and bindery. It also encompasses drawn plans and layouts for interior and architectural designs.
In museum parlance "works on paper" is a common term, covering the various types of traditional fine-art graphics. There is now a large sector of graphic designers working mostly on web design.
== History ==
Throughout history, technological inventions have shaped the development of graphic art. In 2500 BC, the Egyptians used graphic symbols to communicate their thoughts in a written form known as hieroglyphics. The Egyptians wrote and illustrated narratives on rolls of papyrus to share the stories and art with others.
During the Middle Ages, scribes manually copied each individual page of manuscripts to maintain their sacred teachings. The scribes would leave marked sections of the page available for artists to insert drawings and decorations. Using art alongside the carefully lettered text enhanced the religious reading experience.
In 1450, Johannes Gutenberg introduced the first mechanical movable-type printing press. His printing press aided the mass creation of text and visual art, eventually obviating the need for hand transcription.
During the Renaissance, graphic art in the form of printing played a major role in the spread of classical learning in Europe. Within these printed books, designers focused heavily on the typeface.
Due to the development of larger fonts during the Industrial Revolution, posters became a popular form of graphic art used to communicate the latest information as well as to advertise the latest products and services.
The invention and popularity of film and television changed graphic art through the additional aspect of motion as advertising agencies attempted to use kinetics to their advantage.
The next major change in graphic arts came with the invention of the personal computer in the twentieth century. Powerful computer software enables artists to manipulate images far faster and more simply than the board artists of the pre-1990s era could by hand. With quick calculations, computers can easily recolor, scale, rotate, and rearrange images once the operator has learned the programs.
The design of street signs has been impacted by scientific examinations into readability. New York City is in the midst of replacing all of its street signs that have all capital characters with ones that only have upper and lower case letters. They anticipate that greater readability will improve wayfinding and greatly reduce collisions and injuries.
== See also ==
Animation
Communication design
Crowdsourcing creative work
Digital art
Illustration
Caricature
Cartoon
Comics
Graphic design
Painting
Performance art
Printmaking
== References ==
Perspective control is a procedure for composing or editing photographs to better conform with the commonly accepted distortions in constructed perspective. The control would:
make all lines that are vertical in reality vertical in the image. This includes columns, vertical edges of walls, and lampposts. This is a commonly accepted distortion in constructed perspective: perspective is based on the notion that more distant objects are represented as smaller on the page, yet even though the top of a cathedral tower is in reality further from the viewer than the base of the tower (due to the vertical distance), constructed perspective considers only the horizontal distance and treats the top and bottom as the same distance away;
make all parallel lines (such as the four horizontal edges of a cubic room) converge at a single point.
Perspective distortion occurs in photographs when the film plane is not parallel to lines that are required to be parallel in the photo. A common case is when a photo is taken of a tall building from ground level by tilting the camera backwards: the building appears to fall away from the camera.
== At exposure ==
Professional cameras for which perspective control is important correct the perspective at exposure by raising the lens parallel to the film. There is more information on this in the view camera article.
Most large format (4x5 and up) cameras have this feature, as well as plane of focus control built into the camera body in the form of flexible bellows and moveable front (lens) and rear (film holder) elements. Thus any focal length lens mounted on a view camera or field camera, and many press cameras can be used with perspective control.
Some interchangeable lens medium format, 35 mm film SLR, and Digital SLR camera systems have PC, shift, or tilt/shift lens options which allow perspective control and, in the case of a tilt/shift lens, plane of focus control, but only at a specific focal length.
== In the darkroom ==
A darkroom technician can correct perspective distortion in the printing process. This is usually done by exposing the paper at an angle to the film, with the paper raised toward the part of the image that is larger, so the light from the enlarger spreads less there than on the other side of the exposure.
The process is known as rectification printing, and is done using a rectifying printer (transforming printer), which involves rotating the negative and/or easel. Restoring parallelism to verticals (for instance) is easily done by tilting one plane, but if the focal length of the enlarger is not suitably chosen, the resulting image will have vertical distortion (compression or stretching). For correct perspective correction, the proper focal length (specifically, angle of view) must be chosen so that the enlargement replicates the perspective of the camera.
== During digital post-processing ==
Digital post-processing software provides means to correct converging verticals and other distortions introduced at image capture.
Adobe Photoshop and GIMP have several "transform" options to achieve, with care, the desired control without any significant degradation in the overall image quality. Photoshop CS2 and subsequent releases includes perspective correction as part of its Lens Distortion Correction Filter; DxO Optics Pro from DxO Labs includes perspective correction; while GIMP (as of 2.6) does not include a specialized tool for correcting perspective, though a plug-in, EZ Perspective, is available. RawTherapee, a free and open-source raw converter, includes horizontal and vertical perspective correction tools too. Note that because the mathematics of projective transforms depends on the angle of view, perspective tools require that the angle of view or 35 mm equivalent focal length be entered, though this can often be determined from Exif metadata.
It is commonly suggested to correct perspective using a general projective transformation tool, correcting vertical tilt (converging verticals) by stretching out the top; this is the "Distort Transform" in Photoshop, and the "perspective tool" in GIMP. However, this introduces vertical distortion – objects appear squat (vertically compressed, horizontally extended) – unless the vertical dimension is also stretched. This effect is minor for small angles, and can be corrected by hand, manually stretching the vertical dimension until the proportions look right, but is automatically done by specialized perspective transform tools.
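The tilt correction discussed above can be sketched with a pinhole-camera model. This is a generic illustration, not the algorithm of any particular tool named above; it assumes a principal point at the image origin and a focal length f expressed in pixels (which is why the tools need the angle of view or focal length):

```python
import math
import numpy as np

def rot_x(deg):
    """Rotation about the camera's horizontal (x) axis."""
    t = math.radians(deg)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, math.cos(t), -math.sin(t)],
                     [0.0, math.sin(t),  math.cos(t)]])

def project(point, f, tilt_deg=0.0):
    """Pinhole projection of a world point through a camera tilted by
    tilt_deg, focal length f in pixels, principal point at the origin."""
    K = np.diag([f, f, 1.0])
    p = K @ rot_x(tilt_deg) @ np.asarray(point, dtype=float)
    return p[:2] / p[2]

def tilt_correction_homography(f, tilt_deg):
    """Homography mapping the tilted camera's image to the image an
    untilted camera would have produced (H = K R^T K^-1)."""
    K = np.diag([f, f, 1.0])
    return K @ rot_x(tilt_deg).T @ np.linalg.inv(K)

def apply_h(H, xy):
    q = H @ np.array([xy[0], xy[1], 1.0])
    return q[:2] / q[2]

# Two points on one vertical edge of a hypothetical building,
# photographed with the camera tilted 20 degrees upward.
f, tilt = 800.0, 20.0
base = project((2.0, 0.0, 10.0), f, tilt)
top = project((2.0, 5.0, 10.0), f, tilt)   # base[0] != top[0]: converging

H = tilt_correction_homography(f, tilt)
base_c, top_c = apply_h(H, base), apply_h(H, top)
# base_c[0] == top_c[0]: the edge is vertical again after correction.
```

Note that the corrected x coordinates depend on f, mirroring the article's point that a projective correction is only geometrically faithful when the angle of view is known.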
An alternative interface, found in Photoshop (CS and subsequent releases) is the "perspective crop", which enables the user to perform perspective control with the cropping tool, setting each side of the crop to independently determined angles, which can be more intuitive and direct.
Other software with mathematical models on how lenses and different types of optical distortions affect the image can correct this by being able to calculate the different characteristics of a lens and re-projecting the image in a number of ways (including non-rectilinear projections). An example of this kind of software is the panorama creation suite Hugin.
However, these techniques do not enable the recovery of lost spatial resolution in the more distant areas of the subject, or the recovery of lost depth of field due to the angle of the film/sensor plane to the subject. These transforms involve interpolation, as in image scaling, which degrades the image quality, in particular blurring high-frequency detail. How significant this is depends on the original image resolution, degree of manipulation, print/display size, and viewing distance, and perspective correction must be traded off against preserving high-frequency detail.
== In virtual environments ==
Architectural images are commonly "rendered" from 3D computer models, for use in promotional material. These have virtual cameras within to create the images, which normally have modifiers capable of correcting (or distorting) the perspective to the artist's taste. See 3D projection for details.
== See also ==
Anamorphosis
Keystone effect
Image distortion
== References ==
== External links ==
Illustrations
Panorama Tools wiki page on perspective control
Controlling perspective while cropping using Photoshop software
Correcting perspective using the Open Source Hugin software
The Ford Model T is an automobile that was produced by the Ford Motor Company from October 1, 1908, to May 26, 1927. It is generally regarded as the first mass-affordable automobile, which made car travel available to middle-class Americans. The relatively low price was partly the result of Ford's efficient fabrication, including assembly line production instead of individual handcrafting. The savings from mass production allowed the price to decline from $780 in 1910 (equivalent to $26,322 in 2024) to $290 in 1924 ($5,321 in 2024 dollars). It was mainly designed by three engineers, Joseph A. Galamb (the main engineer), Eugene Farkas, and Childe Harold Wills. The Model T was colloquially known as the "Tin Lizzie".
The Ford Model T was named the most influential car of the 20th century in the 1999 Car of the Century competition, ahead of the BMC Mini, Citroën DS, and Volkswagen Beetle. Ford's Model T was successful not only because it provided inexpensive transportation on a massive scale, but also because the car signified innovation for the rising middle class and became a powerful symbol of the United States' age of modernization. With over 15 million sold, it was the most sold car in history before being surpassed by the Volkswagen Beetle in 1972.
== Introduction ==
Early automobiles, which were produced from the 1880s, were mostly scarce, expensive, and often unreliable. Being the first reliable, easily maintained, mass-market motorized transportation turned the Model T into a great success: within a few days of release, 15,000 orders were placed.
The first production Model T was built on August 12, 1908, and left the factory on September 27, 1908, at the Ford Piquette Avenue Plant in Detroit, Michigan. On May 26, 1927, Henry Ford watched the 15 millionth Model T Ford roll off the assembly line at his factory in Highland Park, Michigan.
Henry Ford conceived a series of cars between the founding of the company in 1903 and the introduction of the Model T. Ford named his first car the Model A and proceeded through the alphabet up through the Model T, twenty models in all, not all of which went into production. The production model immediately before the Model T was the Model S, an upgraded version of the company's largest success to that point, the Model N. The follow-up to the Model T was another Ford Model A, rather than the "Model U". The company publicity said this was because the new car was such a departure from the old that Ford wanted to start all over again with the letter A.
The Model T was Ford's first automobile mass-produced on moving assembly lines with completely interchangeable parts, marketed to the middle class. Henry Ford said of the vehicle:
I will build a motor car for the great multitude. It will be large enough for the family, but small enough for the individual to run and care for. It will be constructed of the best materials, by the best men to be hired, after the simplest designs that modern engineering can devise. But it will be so low in price that no man making a good salary will be unable to own one – and enjoy with his family the blessing of hours of pleasure in God's great open spaces.
Although credit for the development of the assembly line belongs to Ransom E. Olds, with the first mass-produced automobile, the Oldsmobile Curved Dash, having begun in 1901, the tremendous advances in the efficiency of the system over the life of the Model T can be credited almost entirely to Ford and his engineers.
== Characteristics and design ==
The Model T was designed by Childe Harold Wills, and Hungarian immigrants Joseph A. Galamb (main engineer) and Eugene Farkas. Henry Love, C. J. Smith, Gus Degner and Peter E. Martin were also part of the team, as were Galamb's fellow Hungarian immigrants Gyula Hartenberger and Károly Balogh. Henry Ford supervised the designers himself. Production of the Model T began in the third quarter of 1908. Collectors today sometimes classify Model Ts by build years and refer to these as "model years", thus labeling the first Model Ts as 1909 models. This is a retroactive classification scheme; the concept of model years as understood today did not exist at the time. Even though design revisions occurred during the car's two decades of production, the company gave no particular name to any of the revised designs; all of them were called simply "Model T".
=== Engine ===
The Model T has a front-mounted 177-cubic-inch (2.9 L) inline four-cylinder engine, producing 20 hp (15 kW), for a top speed of 42 mph (68 km/h). According to Ford Motor Company, the Model T had fuel economy of 13–21 mpg‑US (16–25 mpg‑imp; 18–11 L/100 km). The engine was designed to run on gasoline, though it could also run on kerosene or ethanol; the decreasing cost of gasoline and the later introduction of Prohibition made ethanol an impractical fuel for most users. The engines of the first 2,447 units were cooled with water pumps; the engines of unit 2,448 and onward, with a few exceptions prior to around unit 2,500, were cooled by thermosiphon action.
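The quoted fuel-economy range is one measurement expressed in three unit systems; the conversions can be checked with a few exact constants:

```python
# Conversion constants (exact by definition).
US_GAL_L = 3.785411784    # litres per US gallon
IMP_GAL_L = 4.54609       # litres per imperial gallon
MILE_KM = 1.609344        # kilometres per statute mile

def mpg_us_to_l_per_100km(mpg_us):
    """Fuel used per 100 km for a given miles-per-US-gallon figure."""
    return 100.0 * US_GAL_L / (mpg_us * MILE_KM)

def mpg_us_to_mpg_imp(mpg_us):
    """Miles per imperial gallon for a given miles-per-US-gallon figure."""
    return mpg_us * IMP_GAL_L / US_GAL_L

# 13 mpg-US is about 16 mpg-imp and 18 L/100 km;
# 21 mpg-US is about 25 mpg-imp and 11 L/100 km,
# matching the range quoted above (note L/100 km runs in reverse,
# since it measures consumption rather than economy).
```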
The ignition system used in the Model T was an unusual one, with a low-voltage magneto incorporated in the flywheel, supplying alternating current to trembler coils to drive the spark plugs. This was closer to that used for stationary gas engines than the expensive high-voltage ignition magnetos that were used on some other cars. This ignition also made the Model T more flexible as to the quality or type of fuel it used. The system did not need a starting battery, since proper hand-cranking would generate enough current for starting. Electric lighting powered by the magneto was adopted in 1915, replacing acetylene gas flame lamps and oil lamps, but electric starting was not offered until 1919.
The Model T engine was produced for replacement needs as well as stationary and marine applications until 1941, well after production of the Model T ended.
The Fordson Model F tractor engine, that was designed about a decade later, was very similar to, but larger than, the Model T engine.
=== Transmission and drive train ===
The Model T is a rear-wheel drive vehicle. Its transmission is a planetary gear type known (at the time) as "three speed". In today's terms it is considered a two-speed, because one of the three speeds is reverse.
The Model T's transmission is controlled with three floor-mounted pedals, a revolutionary feature for its time, and a lever mounted to the road side of the driver's seat. The throttle is controlled with a lever on the steering wheel. The left-hand pedal is used to engage the transmission. With the floor lever in either the mid position or fully forward and the pedal pressed and held forward, the car enters low gear. When held in an intermediate position, the car is in neutral. If the left pedal is released, the Model T enters high gear, but only when the lever is fully forward – in any other position, the pedal only moves up as far as the central neutral position. This allows the car to be held in neutral while the driver cranks the engine by hand. The car can thus cruise without the driver having to press any of the pedals.
In the first 800 units, reverse is engaged with a lever; all units after that use the central pedal, which is used to engage reverse gear when the car is in neutral. The right-hand pedal operates the transmission brake – there are no brakes on the wheels. The floor lever also controls the parking brake, which is activated by pulling the lever all the way back. This doubles as an emergency brake.
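The interplay of the hand lever, left pedal, and central reverse pedal described above amounts to a small state table. A sketch, with control positions simplified to a few discrete values (the names below are invented for illustration):

```python
def model_t_drive_state(lever, left_pedal, reverse_pedal_down=False):
    """Drive state of a Model T from (simplified) control positions.

    lever:      'forward', 'mid', or 'back'  (hand lever)
    left_pedal: 'down' (held forward), 'mid', or 'up' (released)
    """
    if lever == 'back':
        # Lever fully back sets the parking/emergency brake.
        return 'neutral (parking brake)'
    if left_pedal == 'down':
        return 'low gear'
    # With the lever at mid, a "released" pedal rises only as far as
    # the central neutral position, so the car stays in neutral.
    if left_pedal == 'mid' or lever == 'mid':
        # The central pedal engages reverse only from neutral.
        return 'reverse' if reverse_pedal_down else 'neutral'
    return 'high gear'      # lever fully forward, pedal released
```

This reproduces the behaviour described above: high gear is reachable only with the lever fully forward, and reverse is engaged only from neutral.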
Although it was uncommon, the drive bands could fall out of adjustment, allowing the car to creep, particularly when cold, adding another hazard to attempting to start the car: a person cranking the engine could be forced backward while still holding the crank as the car crept forward, although it was nominally in neutral. As the car utilizes a wet clutch, this condition could also occur in cold weather, when the thickened oil prevents the clutch discs from slipping freely. Power reaches the differential through a single universal joint attached to a torque tube which drives the rear axle; some models (typically trucks, but available for cars, as well) could be equipped with an optional two-speed rear Ruckstell axle, shifted by a floor-mounted lever which provides an underdrive gear for easier hill climbing.
==== Chassis / frame ====
The heavy-duty Model TT truck chassis came with a special worm gear rear differential with lower gearing than the normal car and truck, giving more pulling power but a lower top speed (the frame is also stronger; the cab and engine are the same). A Model TT is easily identifiable by the cylindrical housing for the worm-drive over the axle differential. All gears are vanadium steel running in an oil bath.
==== Transmission bands and linings ====
Two main types of band lining material were used:
Cotton – Cotton woven linings were the original type fitted and specified by Ford. Generally, the cotton lining is "kinder" to the drum surface, with damage to the drum caused only by the retaining rivets scoring the drum surface. Although this in itself did not pose a problem, a dragging band resulting from improper adjustment caused overheating of the transmission and engine, diminished power, and – in the case of cotton linings – rapid destruction of the band lining.
Wood – Wooden linings were originally offered as a "longer life" accessory part during the life of the Model T. They were a single piece of steam-bent wood and metal wire, fitted to the normal Model T transmission band. These bands give a very different feel to the pedals, with much more of a "bite"; the sensation is of a definite "grip" on the drum, which noticeably improves pedal feel, particularly on the brake drum.
==== Aftermarket transmissions and drives ====
During the Model T's production run, particularly after 1916, more than 30 manufacturers offered auxiliary transmissions or drives to substitute for, or enhance, the Model T's drivetrain gears. Some offered overdrive for greater speed and efficiency, while others offered underdrives for more torque (often incorrectly described as "power") to enable hauling or pulling greater loads. Among the most noted were the Ruckstell two-speed rear axle, and transmissions by Muncie, Warford, and Jumbo.
Aftermarket transmissions generally fit one of four categories:
Replacement transmission – usually a sliding gear/selective transmission, intended as a direct replacement for Ford's planetary-gear transmission.
Front-mounted auxiliary transmission – designed to fit between the engine and Ford's transmission, to add additional gear ratios.
Rear-mounted auxiliary transmission – mounted at the rear axle housing, and attached between it and the driveshaft, to add additional gear ratios.
Multi-speed axle – designed to fit inside the differential's housing, to add additional gear ratios.
Murray Fahnestock, a Ford expert in the era of the Model T, particularly advised the use of auxiliary transmissions for the enclosed Model T's, such as the Ford Sedan and Coupelet, for three reasons: their greater weight put more strain on the drivetrain and engine, which auxiliary transmissions could smooth out; their bodies acted as sounding boards, echoing engine noise and vibration at higher engine speeds, which could be lessened with intermediate gears; and owners of the enclosed cars spent more to buy them, and thus likely had more money to enhance them.
He also noted that auxiliary transmissions were valuable for Ford Ton-Trucks in commercial use, allowing for driving speeds to vary with their widely variable loads – particularly when returning empty – possibly saving as much as 50% of returning drive time.
=== Suspension and wheels ===
Model T suspension employed a transversely mounted semi-elliptical spring for each of the front and rear beam axles, which allowed a great deal of wheel movement to cope with the dirt roads of the time.
The front axle was drop forged as a single piece of vanadium steel. Ford twisted many axles through eight full rotations (2880 degrees) and sent them to dealers to be put on display to demonstrate its superiority.
The Model T did not have a modern service brake. The right foot pedal applied a band around a drum in the transmission, thus stopping the rear wheels from turning. The previously mentioned parking brake lever operated band brakes acting on the inside of the rear brake drums, which were an integral part of the rear wheel hubs. Optional brakes that acted on the outside of the brake drums were available from aftermarket suppliers.
Wheels were wooden artillery wheels, with steel welded-spoke wheels available in 1926 and 1927.
Tires were pneumatic clincher type, 30 in (762 mm) in diameter, 3.5 in (89 mm) wide in the rear, 3 in (76 mm) in the front. Clinchers needed much higher pressure than today's tires, typically 60 psi (410 kPa), to prevent them from leaving the rim at speed. Flat tires were a common problem.
Balloon tires became available in 1925. They were 21 in × 4.5 in (530 mm × 110 mm) all around. Balloon tires were closer in design to today's tires, with steel wires reinforcing the tire bead, making lower pressure possible – typically 35 psi (240 kPa) – giving a softer ride. The steering gear ratio was changed from 4:1 to 5:1 with the introduction of balloon tires. The old nomenclature for tire size changed from measuring the outer diameter to measuring the rim diameter, so 21 in (530 mm) (rim diameter) × 4.5 in (110 mm) (tire width) wheels have about the same outer diameter as 30 in (760 mm) clincher tires. All tires in this time period used an inner tube to hold the pressurized air; tubeless tires were not generally in use until much later.
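The equivalence between the old (outer-diameter) and new (rim-diameter) nomenclature is simple arithmetic: the outer diameter is roughly the rim diameter plus one tire section height on each side. Assuming, as for these early roughly round-section tires, that the section height about equals the stated width:

```python
def outer_diameter(rim_diameter_in, tire_width_in):
    """Approximate tire outer diameter in inches: rim diameter plus
    one tire section height (taken as ~ the width) on each side."""
    return rim_diameter_in + 2 * tire_width_in

# The new-style 21 x 4.5 in balloon tire against the old 30 in clincher:
# 21 + 2 * 4.5 = 30, i.e. about the same overall wheel size.
```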
Wheelbase is 100 in (254 cm) and standard track width was 56 in (142 cm) – 60 in (152 cm) track could be obtained on special order, "for Southern roads," identical to the pre-Civil War track gauge for many railroads in the former Confederacy. The standard 56-inch track, being very near the 4 ft 8+1⁄2 in (143.5 cm) standard railroad track gauge, meant that Model Ts could be, and frequently were, fitted with flanged wheels and used as motorized railway vehicles or "speeders". The availability of a 60 in (152 cm) version meant the same could be done on the few remaining Southern 5 ft (152 cm) railways – these being the only nonstandard lines remaining, except for a few narrow-gauge lines of various sizes. Although a Model T could be adapted to run on track as narrow as 2 ft (61 cm) gauge (Wiscasset, Waterville and Farmington RR, Maine has one), this was a more complex alteration.
=== Colors ===
By 1918, half of all the cars in the U.S. were Model Ts. In his autobiography, Ford reported that in 1909 he told his management team, "Any customer can have a car painted any color that he wants so long as it is black."
However, in the first years of production from 1908 to 1913, the Model T was not available in black, but rather only in gray, green, blue, and red. Green was available for the touring cars, town cars, coupes, and Landaulets. Gray was available for the town cars only and red only for the touring cars. By 1912, all cars were being painted midnight blue with black fenders. Only in 1914 was the "any color so long as it is black" policy finally implemented.
It is often stated Ford suggested the use of black from 1914 to 1925 due to the low cost, durability, and faster drying time of black paint in that era. There is no evidence that black dried any faster than any other dark varnishes used at the time for painting, but carbon black pigment was indeed one of the cheapest (if not the cheapest) available, and dark color of gilsonite, a form of bitumen making cheap metal paints of the time durable, limited the (final) color options to dark shades of maroon, blue, green or black. At that period Ford used two similar types of the so-called Japan black paint, one as a basic coat applied directly to the metal and another as a final finish.
Paint choices in the American automotive industry, as well as in others (including locomotives, furniture, bicycles, and the rapidly expanding field of electrical appliances), were shaped by the development of the chemical industry. These included the disruption of dye sources during World War I and the advent, by the mid-1920s, of new nitrocellulose lacquers that were faster-drying and more scratch-resistant and obviated the need for multiple coats. Understanding the choice of paints for the Model T era and the years immediately following requires an understanding of the contemporaneous chemical industry.
During the lifetime production of the Model T, over 30 types of black paint were used on various parts of the car. These were formulated to satisfy the different means of applying the paint to the various parts, and had distinct drying times, depending on the part, paint, and method of drying.
=== Body ===
Although Ford classified the Model T with a single letter designation throughout its entire life and made no distinction by model years, enough significant changes to the body were made over the production life that the car may be classified into several style generations. The most immediately visible and identifiable changes were in the hood and cowl areas, although many other modifications were made to the vehicle.
1909–1914 – Characterized by a nearly straight, five-sided hood, with a flat top containing a center hinge and two side sloping sections containing the folding hinges. The firewall is flat from the windshield down with no distinct cowl. For these years, acetylene gas flame headlights were used because the flame is resistant to wind and rain. Thick concave mirrors combined with magnifying lenses projected the acetylene flame light. The fuel tank is placed under the front seat.
1915–1916 – The hood design is nearly the same five-sided design with the only obvious change being the addition of louvers to the vertical sides. A significant change to the cowl area occurred with the windshield relocated significantly behind the firewall and joined with a compound-contoured cowl panel. In these years electric headlights replaced carbide headlights.
1917–1923 – The hood design was changed to a tapered design with a curved top. The folding hinges were now located at the joint between the flat sides and the curved top. This is sometimes referred to as the "low hood" to distinguish it from the later hoods. The back edge of the hood now met the front edge of the cowl panel so that no part of the flat firewall was visible outside of the hood. This design was used the longest and during the highest production years, accounting for about half of the total number of Model Ts built.
1923–1925 – This change was made during the 1923 calendar year, so models built earlier in the year have the older design, while later vehicles have the newer design. The taper of the hood was increased and the rear section at the firewall is about an inch taller and several inches wider than the previous design. While this is a relatively minor change, the parts between the third and fourth generations are not interchangeable.
1926–1927 – This design change made the greatest difference in the appearance of the car. The hood was again enlarged, with the cowl panel no longer a compound curve and blended much more with the line of the hood. The distance between the firewall and the windshield was also increased significantly. This style is sometimes referred to as the "high hood".
The styling on the last "generation" was a preview for the following Model A, but the two models are visually quite different, as the body on the A is much wider and has curved doors as opposed to the flat doors on the T.
=== Diverse applications ===
When the Model T was designed and introduced, the infrastructure of the world was quite different from today's. Pavement was a rarity except for sidewalks and a few big-city streets. (The meaning of the term "pavement" as opposed to "sidewalk" comes from that era, when streets and roads were generally dirt and sidewalks were a paved way to walk along them.) Agriculture was the occupation of many people. Power tools were scarce outside factories, as were power sources for them; electrification, like pavement, was found usually only in larger towns. Rural electrification and motorized mechanization were embryonic in some regions and nonexistent in most. Henry Ford oversaw the requirements and design of the Model T based on contemporary realities. Consequently, the Model T was (intentionally) almost as much a tractor and portable engine as it was an automobile. It has always been well regarded for its all-terrain abilities and ruggedness. It could travel a rocky, muddy farm lane, cross a shallow stream, climb a steep hill, and be parked on the other side to have one of its wheels removed and a pulley fastened to the hub for a flat belt to drive a bucksaw, thresher, silo blower, conveyor for filling corn cribs or haylofts, baler, water pump, electrical generator, and many other applications. One unique application of the Model T was shown in the October 1922 issue of Fordson Farmer magazine. It showed a minister who had transformed his Model T into a mobile church, complete with small organ.
During this era, entire automobiles (including thousands of Model Ts) were hacked apart by their owners and reconfigured into custom machinery permanently dedicated to a purpose, such as homemade tractors and ice saws. Dozens of aftermarket companies sold prefab kits to facilitate the T's conversion from car to tractor. The Model T had been around for a decade before the Fordson tractor became available (1917–18), and many Ts were converted for field use. (For example, Harry Ferguson, later famous for his hitches and tractors, worked on Eros Model T tractor conversions before he worked with Fordsons and others.) During the next decade, Model T tractor conversion kits were harder to sell, as the Fordson and then the Farmall (1924), as well as other light and affordable tractors, served the farm market. But during the Depression (1930s), Model T tractor conversion kits had a resurgence, because by then used Model Ts and junkyard parts for them were plentiful and cheap.
Like many popular car engines of the era, the Model T engine was also used on home-built aircraft (such as the Pietenpol Sky Scout) and motorboats.
During World War I, the Model T was heavily used by the Allies in different roles and configurations, such as staff cars, light cargo trucks, light vans, light patrol cars, liaison vehicles and even as rail tractors. The ambulance version proved to be well-suited for use in the combat areas. The ambulances could carry three stretcher patients or four seated patients, and two others could sit with the driver. Besides those made in the United States, ambulance bodies were also made by Carrosserie Kellner of Boulogne, near Paris. The Romanian Army also made use of converted Model T ambulances. These ambulances, named "Regina Maria" ambulances, were capable of carrying four stretcher patients. Conversion work was done by the Leonida Workshops of Bucharest. An armored-car variant (called the "FT-B") was developed in Poland in 1920, owing to high demand during the Polish–Soviet War.
Many Model Ts were converted into vehicles that could travel across heavy snows with kits on the rear wheels (sometimes with an extra pair of rear-mounted wheels and two sets of continuous track to mount on the now-tandemed rear wheels, essentially making it a half-track) and skis replacing the front wheels. They were popular for rural mail delivery for a time. The common name for these conversions of cars and small trucks was "snowflyers". These vehicles were extremely popular in the northern reaches of Canada, where factories were set up to produce them.
A number of companies built Model T–based railcars. In The Great Railway Bazaar, Paul Theroux mentions a rail journey in India on such a railcar. The New Zealand Railways Department's RM class included a few.
The American LaFrance company modified more than 900 Model Ts for use in firefighting, adding tanks, hoses, tools and a bell. Model T fire engines were in service in North America, Europe, and Australia. A 1919 Model T equipped to fight chemical fires has been restored and is on display at the North Charleston Fire Museum in South Carolina.
== Production ==
=== Mass production ===
The knowledge and skills needed by a factory worker were reduced to 84 areas. When introduced, the T used the building methods typical at the time, with assembly by hand, and production volumes were small. The Ford Piquette Avenue Plant could not keep up with demand for the Model T, and only 11 cars were built there during the first full month of production. More and more machines were used to reduce the complexity within the 84 defined areas. In 1910, after assembling nearly 12,000 Model Ts, Henry Ford moved the company to the new Highland Park complex. During this time the Model T production system (including the supply chain) transitioned into an iconic example of assembly-line production. In subsequent decades it would also come to be viewed as the classic example of rigid, first-generation assembly-line production, as opposed to flexible mass production of higher-quality products.
As a result, Ford's cars came off the line in three-minute intervals, much faster than previous methods, reducing production time from 12½ hours to 93 minutes by 1914, while using less manpower. In 1914, Ford produced more cars than all other automakers combined. The Model T was a great commercial success, and by the time Ford made its 10 millionth car, half of all cars in the world were Fords. It was so successful that Ford did not purchase any advertising between 1917 and 1923; the Model T had become so famous that people considered it the norm. More than 15 million Model Ts were manufactured in all, reaching a rate of 9,000 to 10,000 cars a day in 1925, or 2 million annually, more than any other model of its day, at a price of just $260 ($4,662 today). Total Model T production was finally surpassed by the Volkswagen Beetle on February 17, 1972, while the Ford F-Series (itself directly descended from the Model T roadster pickup) has since surpassed the Model T as Ford's all-time best-selling model.
Henry Ford's ideological approach to Model T design was one of getting it right and then keeping it the same; he believed the Model T was all the car a person would, or could, ever need. As other companies offered comfort and styling advantages, at competitive prices, the Model T lost market share and became barely profitable. Design changes were not as few as the public perceived, but the idea of an unchanging model was kept intact. Eventually, on May 26, 1927, Ford Motor Company ceased US production and began the changeovers required to produce the Model A. Some of the other Model T factories in the world continued for a short while, with the final Model T produced at the Cork, Ireland plant in December 1927.
Model T engines continued to be produced until August 4, 1941. Almost 170,000 were built after car production stopped, as replacement engines were required to service the many existing vehicles. Racers and enthusiasts, forerunners of modern hot rodders, used the Model Ts' blocks to build popular and cheap racing engines, including Cragar, Navarro, and, famously, the Frontenacs ("Fronty Fords") of the Chevrolet brothers, among many others.
The Model T employed some advanced technology, for example, its use of vanadium steel alloy. Its durability was phenomenal, and some Model Ts and their parts are in running order over a century later. Although Henry Ford resisted some kinds of change, he always championed the advancement of materials engineering, and often mechanical engineering and industrial engineering.
In 2002, Ford built a final batch of six Model Ts as part of their 2003 centenary celebrations. These cars were assembled from remaining new components and other parts produced from the original drawings. The last of the six was used for publicity purposes in the UK.
Although Ford no longer manufactures parts for the Model T, many parts are still manufactured through private companies as replicas to service the thousands of Model Ts still in operation today.
On May 26, 1927, Henry Ford and his son Edsel drove the 15-millionth Model T out of the factory. This marked the famous automobile's official last day of production at the main factory.
=== Price and production ===
The moving assembly line system, which started on October 7, 1913, allowed Ford to reduce the price of his cars. As he continued to fine-tune the system, Ford kept cutting costs significantly. As volume increased, prices fell further, since fixed costs, including large supply-chain investments, were spread over a greater number of vehicles. Falling material costs and design changes reduced the price as well. Because Ford dominated the North American market during the 1910s, competitors lowered their prices to stay competitive while offering features unavailable on the Model T, such as a wide choice of colors, body styles, and interior options; they also benefited from cheaper raw materials and from the supply-chain and ancillary-manufacturing infrastructure that had grown up around the industry.
In 1909, the cost of the Runabout started at $825 (equivalent to $28,870 in 2024). By 1925 it had been lowered to $260 (equivalent to $4,660 in 2024).
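As a quick arithmetic illustration of the figures above (a sketch added for clarity; the prices are from the text, but the calculation and variable names are ours), the nominal drop in the Runabout's list price works out as follows:

```python
# Illustrative check of the nominal Runabout price drop quoted above.
# The dollar figures come from the article; the arithmetic is ours.
price_1909 = 825   # Runabout list price in 1909, in dollars
price_1925 = 260   # Runabout list price in 1925, in dollars

drop = price_1909 - price_1925
pct_drop = round(100 * drop / price_1909, 1)

print(f"Nominal price fell by ${drop}, about {pct_drop}% of the 1909 price.")
```

That is, the 1925 price was less than a third of the 1909 price in nominal terms, before even accounting for inflation.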
The figures below are US production numbers compiled by R. E. Houston, Ford Production Department, August 3, 1927. The figures between 1909 and 1920 are for Ford's fiscal year. From 1909 to 1913, the fiscal year was from October 1 to September 30 the following calendar year with the year number being the year in which it ended. For the 1914 fiscal year, the year was October 1, 1913, through July 31, 1914. Starting in August 1914, and through the end of the Model T era, the fiscal year was August 1 through July 31. Beginning with January 1920, the figures are for the calendar year.
The above tally includes a total of 14,689,525 vehicles. Ford said the last Model T was the 15 millionth vehicle produced.
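The gap between the two totals is easy to quantify (a sketch; attributing the difference to production outside the US figures is plausible given the foreign assembly described below, but is not stated in the tally itself):

```python
# US production tally vs. Ford's "15 millionth vehicle" claim, both quoted above.
us_tally = 14_689_525      # R. E. Houston's US production figure
ford_total = 15_000_000    # Ford's stated cumulative total

gap = ford_total - us_tally
print(f"The US tally falls {gap:,} vehicles short of the 15 million figure.")
```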
=== Recycling ===
Henry Ford used wood scraps from the production of Model Ts to make charcoal briquettes. Originally named Ford Charcoal, the name was changed to Kingsford Charcoal after the Iron Mountain Ford Plant closed in 1951 and the Kingsford Chemical Company was formed and continued the wood distillation process. E. G. Kingsford, Ford's cousin by marriage, brokered the selection of the new sawmill and wood distillation plant site. Lumber for Model T production came from the same site: the Iron Mountain Ford plant, built in 1920, incorporated a sawmill where lumber from Ford-owned land in the Upper Peninsula of Michigan was cut and dried. Scrap wood was distilled at the Iron Mountain plant for its wood chemicals, including methanol (wood alcohol), with the end by-product being lump charcoal. This lump charcoal was modified and pressed into briquettes and mass-marketed by Ford.
=== First global car ===
The Ford Model T was the first automobile built by multiple countries simultaneously, being produced in Walkerville, Canada, and in Trafford Park, Greater Manchester, England, starting in 1911. After World War I ended in 1918, they were assembled in Germany, Argentina, France, Spain, Denmark, Norway, Belgium, Brazil, Mexico, Australia and Japan. Furthermore, exports from the American factories reached 303,000 in 1925. The heavy losses of horses during World War I made the Model T attractive as a new power source for European farmers. They used the Model T to pull plows, tow wagons, and power farm machinery. It enabled them to transport their products to markets more efficiently.
The Aeroford was an English automobile manufactured in Bayswater, London, from 1920 to 1925. It was a Model T with a distinct hood and grille that made it appear to be a totally different design, an early example of what was later called badge engineering. The Aeroford sold from £288 in 1920, dropping to £168–214 by 1925. It was available as a two-seater, four-seater, or coupé.
== Advertising and marketing ==
Ford created a massive publicity machine in Detroit to ensure every newspaper carried stories and advertisements about the new product. Promotion began well in advance of the Model T's introduction, with advertisements appearing in newspapers in January 1908. Ford's network of local dealers made the car ubiquitous in virtually every city in North America. A large part of the Model T's success stems from this innovative strategy: a large network of sales hubs made the car easy to purchase. As independent dealers, the franchisees grew rich and publicized not just the Ford but the very concept of automobiling; local motor clubs sprang up to help new drivers and to explore the countryside. Ford was always eager to sell to farmers, who looked on the vehicle as a commercial device to help their business. Sales skyrocketed – several years posted around 100 percent gains on the previous year.
== "Jitney" taxi ==
In the early years of the 20th century, many Ford Model T owners in the US and Canada used their vehicles to provide a regulated or unregulated share taxi or illegal taxi operation. As a result, the Model T was often colloquially known at that time as a "jitney" when used as a cab or taxi.
== 24 Hours of Le Mans ==
Parisian Ford dealer Charles Montier and his brother-in-law Albert Ouriou entered a heavily modified version of the Model T (the "Montier Special") in the first three 24 Hours of Le Mans. They finished 14th in the inaugural 1923 race.
== Car clubs ==
Today, the preservation and restoration of these cars is supported by several main clubs, including the Model T Ford Club International, the Model T Ford Club of America and the combined clubs of Australia. With many chapters of clubs around the world, the Model T Ford Club of Victoria has a membership with a considerable number of uniquely Australian cars. (Australia produced its own car bodies, and therefore many differences occurred between the Australian bodied tourers and the US/Canadian cars.) In the UK, the Model T Ford Register of Great Britain celebrated its 50th anniversary in 2010. Many steel Model T parts are still manufactured today, and even fiberglass replicas of their distinctive bodies are produced, which are popular for T-bucket style hot rods (as immortalized in the Jan and Dean surf music song "Bucket T", which was later recorded by The Who). In 1949, more than twenty years after the end of production, 200,000 Model Ts were registered in the United States. In 2008, it was estimated that about 50,000 to 60,000 Ford Model Ts remain roadworthy.
== Gallery ==
Model T chronology
== See also ==
Lakeside Foundry
New Zealand RM class (Model T Ford) – a 1925 experimental railcar based on a Model T powertrain
Piper J-3 Cub, the 1930s/40s American light aircraft that developed a similar degree of ubiquity in general aviation circles to the Model T
== Notes and references ==
== Bibliography ==
Clymer, Floyd (1955). Henry's wonderful Model T, 1908–1927. New York, NY, U.S.: McGraw-Hill. LCCN 55010405.
Clymer, Floyd (1950). Treasury of Early American Automobiles, 1877–1925. New York, NY, U.S.: McGraw-Hill. LCCN 50010680.
Dutton, William S. (1942). Du Pont: One Hundred and Forty Years. Charles Scribner's Sons. LCCN 42011897.
Ford, Henry; Crowther, Samuel (1922), My Life and Work, Garden City, New York, USA: Garden City Publishing Company, Inc. Various republications, including ISBN 9781406500189. Original is public domain in U.S. Also available at Google Books.
Georgano, G. N. (1985). Cars: Early and Vintage, 1886–1930. London, UK: Grange-Universal.
Hounshell, David A. (1984), From the American System to Mass Production, 1800–1932: The Development of Manufacturing Technology in the United States, Baltimore, Maryland: Johns Hopkins University Press, ISBN 978-0-8018-2975-8, LCCN 83016269, OCLC 1104810110
Kimes, Beverly Rae; Clark, Henry Austin Jr. (1989). Standard Catalog of American Cars: 1805–1942 (2nd ed.). Krause Publications. ISBN 9780873411110.
Lacey, Robert (1986). Ford: The Men and the Machine. Boston, MA, U.S.: Little, Brown. ISBN 978-0-316-51166-7.
Leffingwell, Randy (2002) [1998]. Ford Tractors. Borders. ISBN 0-681-87878-9.
Lewis, David (1976). The Public Image of Henry Ford: An American Folk Hero and His Company. Detroit, MI, U.S.: Wayne State University Press. ISBN 978-0-8143-1553-8.
Manly, Harold P. (1919). The Ford Motor Car and Truck; Fordson Tractor: Their Construction, Care and Operation. Chicago, IL, US: Frederick J. Drake & Co.
McCalley, Bruce W. (1994). Model T Ford: The Car That Changed the World. Iola, WI, U.S.: Krause Publications. ISBN 0-87341-293-1.
Nevins, Allan (1954). Ford: The Times, the Man, the Company. New York: Charles Scribner's Sons. pp. 385–590. LCCN 54-6305.
Nevins, Allan; Hill, Frank Ernest (1957). Ford: Expansion and Challenge 1915–1933. New York: Charles Scribner's Sons. LCCN 57-9695.
Pripps, Robert N.; Morland, Andrew (photographer) (1993). Farmall Tractors: History of International McCormick-Deering Farmall Tractors. Farm Tractor Color History Series. Osceola, WI, U.S.: MBI. ISBN 978-0-87938-763-1.
Ross, Irwin (November 1974). "Ford's Fabulous Flivver". Gas Engine Magazine. Retrieved August 11, 2016.
Sedgwick, Michael (1972) [1962]. Early Cars. Octopus Books. ISBN 0-7064-0058-5.
Ward, Ian, ed. (1974). The World of Automobiles. Vol. 13. London, UK: Orbis.
Wik, Reynold M. (1972). Henry Ford and grass-roots America. Ann Arbor, MI, U.S.: University of Michigan Press. ISBN 978-0-472-97200-5.
== External links ==
FordModelT.net – Resource for Model T Owners and Enthusiasts
Model T Ford Club of America (USA)
Model T Ford Club International
Ford Model T at the Internet Movie Cars Database
First and second web pages of Old Rhinebeck Aerodrome's vintage vehicle collection, featuring five Model T-based vehicles
A model is an informative representation of an object, person or system, and serves as a substitute for the original. For example:
Machine learning model, a type of a mathematical model of reality in the context of machine learning
Model (person), a human representing, or to be imitated by, other humans, e.g. in art or commercial advertising
Model may also refer to:
== Film and television ==
Model (TV series), a 1997 South Korean television series
The Model (film), a 2016 Danish thriller drama film
Models, a 1999 Austrian drama film by Ulrich Seidl
== Literature ==
Model (manhwa), a 1999 series by Lee So-young
The Model, a 2005 novel by Lars Saabye Christensen
== Music ==
Model (band), a Turkish rock band
Models (band), an Australian rock band
The Models, an English punk rock band
"Model" (Gulddreng song), 2016
"Das Model", a 1978 song by Kraftwerk
Model (album), a 2024 album by Wallows
Models (album), a 2023 album by Lee Gamble
"Model", a 1994 song by Avail from Dixie
"Model", a 2015 song by Before You Exit
"Model", a 1991 song by Simply Red from Stars
== People ==
Model (surname), a surname frequently of Central European and occasionally English origins
The Model (wrestler), ring name of Rick Martel (born 1956)
Eddie Taubensee (born 1968), baseball player nicknamed "The Model"
Walter Model (1891–1945), German World War II field marshal
== Places ==
Model, Colorado, an unincorporated town in the United States
Model, Masovian Voivodeship, a village in east-central Poland
== Other uses ==
MODEL, Movement for Democracy in Liberia, a rebel group
== See also ==
All pages with titles beginning with Model
All pages with titles containing Model
All pages with titles beginning with Modeling
All pages with titles containing Modeling
Modell (disambiguation)
Modelo (disambiguation)
Model City (disambiguation)
Model School (disambiguation)
Model Town (disambiguation)
Scientific modelling, simplifying a complex system to make its problems easier to solve
Model theory, the study of classes of mathematical structures
Modeling (NLP), the process of adopting the behaviors, language, strategies and beliefs of another person or exemplar
Modeling (psychology), learning by imitating or observing a person's behavior
Remodeling (disambiguation)
Miniature faking, a photograph made to look like a photograph of a scale model
Fluid mosaic model, the proposal that biological cell membranes consist of a double layer of non-rigid biomolecules
Model lipid bilayer, an artificial chemical reconstruction of a biological cell membrane
Model Automobile Company, an early vehicle manufacturer in Peru, Indiana
Models (painting) or Les Poseuses, a c.1887 work by Georges Seurat
Soho walk-up, a type of apartment for prostitution signposted "model"