Network topology is the arrangement of the various elements (links, nodes, etc.) of a communication network. Network topology is the structure of a network and may be depicted physically or logically. Physical topology is the placement of the various components of a network, including device location and cable installation, while logical topology illustrates how data flows within a network.
Distances between nodes, physical interconnections, transmission rates, or signal types may differ between two networks, yet their topologies may be identical. An example is a local area network (LAN). Any given node in the LAN has one or more physical links to other devices in the network; graphically mapping these links results in a geometric shape that can be used to describe the physical topology of the network. Conversely, mapping the data flow between the components determines the logical topology of the network. Two basic categories of network topologies exist: physical topologies and logical topologies. The cabling layout used to link devices is the physical topology of the network.
This refers to the layout of cabling, the locations of nodes, and the links between the nodes and the cabling. The physical topology of a network is determined by the capabilities of the network access devices and media, the level of control or fault tolerance desired, and the cost associated with cabling or telecommunications circuits. In contrast, logical topology is the way that the signals act on the network media, or the way that the data passes through the network from one device to the next without regard to the physical interconnection of the devices. A network's logical topology is not necessarily the same as its physical topology. For example, the original twisted pair Ethernet using repeater hubs was a logical bus topology carried on a physical star topology. Token Ring is a logical ring topology, but it is wired as a physical star from the media access unit. Logical topologies are often closely associated with media access control methods and protocols.
Some networks are able to dynamically change their logical topology through configuration changes to their routers and switches. The transmission media (often referred to in the literature as the physical media) used to link devices to form a computer network include electrical cable (Ethernet, HomePNA, power line communication, G.hn), optical fiber (fiber-optic communication), and radio waves (wireless networking). In the OSI model, these are defined at layers 1 and 2 — the physical layer and the data link layer. A widely adopted family of transmission media used in local area network (LAN) technology is collectively known as Ethernet. The media and protocol standards that enable communication between networked devices over Ethernet are defined by IEEE 802.3. Ethernet transmits data over both copper and fiber cables.
Wireless LAN standards (e.g. those defined by IEEE 802.11) use radio waves, while others use infrared signals as a transmission medium. Power line communication uses a building's power cabling to transmit data. Wired technologies:
The orders of the following wired technologies are, roughly, from slowest to fastest transmission speed. • Coaxial cable is widely used for cable television systems, office buildings, and other work-sites for local area networks. The cables consist of copper or aluminum wire surrounded by an insulating layer (typically a flexible material with a high dielectric constant), which itself is surrounded by a conductive layer. The insulation helps minimize interference and distortion. Transmission speed ranges from 200 million bits per second to more than 500 million bits per second.
• ITU-T G.hn technology uses existing home wiring (coaxial cable, phone lines and power lines) to create a high-speed (up to 1 Gigabit/s) local area network. • Twisted pair wire is the most widely used medium for all telecommunication. Twisted-pair cabling consists of copper wires that are twisted into pairs. Ordinary telephone wires consist of two insulated copper wires twisted into pairs. Computer network cabling (wired Ethernet as defined by IEEE 802.3) consists of 4 pairs of copper cabling that can be utilized for both voice and data transmission. The use of two wires twisted together helps to reduce crosstalk and electromagnetic induction. The transmission speed ranges from 2 million bits per second to 10 billion bits per second.
Twisted pair cabling comes in two forms: unshielded twisted pair (UTP) and shielded twisted pair (STP). Each form comes in several category ratings, designed for use in various scenarios. • An optical fiber is a glass fiber. It carries pulses of light that represent data. Some advantages of optical fibers over metal wires are very low transmission loss and immunity from electrical interference.
Optical fibers can simultaneously carry multiple wavelengths of light, which greatly increases the rate that data can be sent, and helps enable data rates of up to trillions of bits per second. Optic fibers can be used for long runs of cable carrying very high data rates, and are used for undersea cables to interconnect continents.
Price is a main factor distinguishing wired- and wireless-technology options in a business. Wireless options command a price premium that can make purchasing wired computers, printers and other devices a financial benefit. Before making the decision to purchase hard-wired technology products, a review of the restrictions and limitations of the selections is necessary. Business and employee needs may override any cost considerations.
Types of networks: a wide-area network (WAN) is a network that connects two or more local-area networks over a potentially large geographic distance. Often one particular node on a LAN is set up to serve as a gateway to handle all communication going between that LAN and other networks. Before building a network, one must decide how all of the computers will be connected to one another; the arrangement chosen for these connections is the network's topology.
Wireless technologies: • Terrestrial microwave – Terrestrial microwave communication uses Earth-based transmitters and receivers resembling satellite dishes.
Terrestrial microwaves are in the low gigahertz range, which limits all communications to line-of-sight. Relay stations are spaced approximately 48 km (30 mi) apart. • Communications satellites – Satellites communicate via microwave radio waves, which are not deflected by the Earth's atmosphere.
The satellites are stationed in space, typically in geosynchronous orbit 35,400 km (22,000 mi) above the equator. These Earth-orbiting systems are capable of receiving and relaying voice, data, and TV signals. • Cellular and PCS systems use several radio communications technologies.
The systems divide the region covered into multiple geographic areas. Each area has a low-power transmitter or radio relay antenna device to relay calls from one area to the next area. • Radio and spread spectrum technologies – Wireless local area networks use a high-frequency radio technology similar to digital cellular and a low-frequency radio technology. Wireless LANs use spread spectrum technology to enable communication between multiple devices in a limited area. IEEE 802.11 defines a common flavor of open-standards wireless radio-wave technology known as Wi-Fi. • Free-space optical communication uses visible or invisible light for communications.
In most cases, line-of-sight propagation is used, which limits the physical positioning of communicating devices. Exotic technologies: There have been various attempts at transporting data over exotic media: • IP over Avian Carriers was a humorous April Fools' Request for Comments, issued as RFC 1149. It was implemented in real life in 2001.
• Extending the Internet to interplanetary dimensions via radio waves, the Interplanetary Internet. Both cases have a large round-trip delay time, which gives slow two-way communication, but doesn't prevent sending large amounts of information.
Network interfaces: A network interface may take the form of an accessory card, though many network interfaces are built in. A network interface controller (NIC) is computer hardware that provides a computer with the ability to access the transmission media, and has the ability to process low-level network information. For example, the NIC may have a connector for accepting a cable, or an aerial for wireless transmission and reception, and the associated circuitry.
The NIC responds to traffic addressed to a network address for either the NIC or the computer as a whole. In Ethernet networks, each network interface controller has a unique Media Access Control (MAC) address—usually stored in the controller's permanent memory. To avoid address conflicts between network devices, the Institute of Electrical and Electronics Engineers (IEEE) maintains and administers MAC address uniqueness. The size of an Ethernet MAC address is six octets. The three most significant octets are reserved to identify NIC manufacturers. These manufacturers, using only their assigned prefixes, uniquely assign the three least-significant octets of every Ethernet interface they produce.
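As a rough illustration of that structure, the short Java sketch below splits an address string into its manufacturer prefix and device-specific part; the class name and the sample address are invented for this example and are not part of any networking library.

```java
// Sketch: splitting a six-octet Ethernet MAC address into its
// manufacturer prefix (OUI, first three octets) and device-specific part.
public class MacAddressParts {
    public static void main(String[] args) {
        String mac = "00:1A:2B:3C:4D:5E";          // example address, not a real device
        String[] octets = mac.split(":");

        String oui    = String.join(":", octets[0], octets[1], octets[2]);
        String device = String.join(":", octets[3], octets[4], octets[5]);

        System.out.println("Manufacturer prefix (OUI): " + oui);
        System.out.println("Device-specific octets:    " + device);
    }
}
```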
Repeaters and hubs: A repeater is an electronic device that receives a network signal, cleans it of unnecessary noise, and regenerates it. The signal is retransmitted at a higher power level, or to the other side of an obstruction, so that the signal can cover longer distances without degradation. In most twisted pair Ethernet configurations, repeaters are required for cable that runs longer than 100 meters.
With fiber optics, repeaters can be tens or even hundreds of kilometers apart. A repeater with multiple ports is known as a hub. Repeaters work on the physical layer of the OSI model. Repeaters require a small amount of time to regenerate the signal. This can cause a propagation delay that affects network performance and may affect proper function. As a result, many network architectures limit the number of repeaters that can be used in a row, e.g., the Ethernet 5-4-3 rule. Hubs and repeaters in LANs have been mostly obsoleted by modern network switches.
Bridges: A network bridge connects and filters traffic between two network segments at the data link layer (layer 2) of the OSI model to form a single network. This breaks the network's collision domain but maintains a unified broadcast domain. Network segmentation breaks down a large, congested network into an aggregation of smaller, more efficient networks. Bridges come in three basic types: • Local bridges: Directly connect LANs. • Remote bridges: Can be used to create a wide area network (WAN) link between LANs. Remote bridges, where the connecting link is slower than the end networks, largely have been replaced with routers. • Wireless bridges: Can be used to join LANs or connect remote devices to LANs. Switches: A network switch is a device that forwards and filters OSI layer 2 datagrams (frames) between ports based on the destination MAC address in each frame.
A switch is distinct from a hub in that it only forwards the frames to the physical ports involved in the communication rather than all ports connected. It can be thought of as a multi-port bridge.
It learns to associate physical ports to MAC addresses by examining the source addresses of received frames. If an unknown destination is targeted, the switch broadcasts to all ports but the source. Switches normally have numerous ports, facilitating a star topology for devices and the cascading of additional switches. Multi-layer switches are capable of routing based on layer 3 addressing or additional logical levels.
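A minimal sketch of that learning behaviour is shown below; the class, port numbers, and shortened addresses are illustrative only, since real switches implement this logic in hardware.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a learning switch: remember which port each source MAC was seen on,
// forward frames to a known port, otherwise flood every port except the source.
public class LearningSwitch {
    private final Map<String, Integer> macToPort = new HashMap<>();
    private final int portCount;

    public LearningSwitch(int portCount) {
        this.portCount = portCount;
    }

    public void handleFrame(String srcMac, String dstMac, int inPort) {
        macToPort.put(srcMac, inPort);               // learn the source address

        Integer outPort = macToPort.get(dstMac);
        if (outPort != null) {
            System.out.println("Forward to port " + outPort);
        } else {
            for (int p = 0; p < portCount; p++) {     // unknown destination: flood
                if (p != inPort) {
                    System.out.println("Flood to port " + p);
                }
            }
        }
    }

    public static void main(String[] args) {
        LearningSwitch sw = new LearningSwitch(4);
        sw.handleFrame("AA:AA", "BB:BB", 0);   // destination unknown, so flood
        sw.handleFrame("BB:BB", "AA:AA", 2);   // source of the first frame is now known
    }
}
```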
The term switch is often used loosely to include devices such as routers and bridges, as well as devices that may distribute traffic based on load or based on application content (e.g., a Web URL identifier). Routers: A router is an internetworking device that forwards packets between networks by processing the routing information included in the packet or datagram (Internet protocol information from layer 3).
The routing information is often processed in conjunction with the routing table (or forwarding table). A router uses its routing table to determine where to forward packets. A destination in a routing table can include a 'null' interface, also known as the 'black hole' interface because data can go into it; however, no further processing is done for said data, i.e., the packets are dropped. Modems: Modems (MOdulator-DEModulator) are used to connect network nodes via wire not originally designed for digital network traffic, or for wireless. To do this, one or more carrier signals are modulated by the digital signal to produce an analog signal that can be tailored to give the required properties for transmission.
Modems are commonly used for telephone lines, using a digital subscriber line technology. Firewalls: A firewall is a network device for controlling network security and access rules. Firewalls are typically configured to reject access requests from unrecognized sources while allowing actions from recognized ones. The vital role firewalls play in network security grows in parallel with the constant increase in cyber attacks. Classification: The study of network topology recognizes eight basic topologies: point-to-point, bus, star, ring or circular, mesh, tree, hybrid, or daisy chain. Point-to-point:
The simplest topology is a dedicated link between two endpoints. Easiest to understand, of the variations of point-to-point topology, is a point-to-point communication channel that appears, to the user, to be permanently associated with the two endpoints. A child's tin can telephone is one example of a physical dedicated channel. Using circuit-switching or packet-switching technologies, a point-to-point circuit can be set up dynamically and dropped when no longer needed. Switched point-to-point topologies are the basic model of conventional telephony. The value of a permanent point-to-point network is unimpeded communications between the two endpoints.
The value of an on-demand point-to-point connection is proportional to the number of potential pairs of subscribers and has been expressed as Metcalfe's Law. Bus: In local area networks where bus topology is used, each node is connected to a single cable with the help of interface connectors. This central cable is the backbone of the network and is known as the bus (thus the name). A signal from the source travels in both directions to all machines connected on the bus cable until it finds the intended recipient. If the machine address does not match the intended address for the data, the machine ignores the data. Alternatively, if the data matches the machine address, the data is accepted.
Because the bus topology consists of only one wire, it is rather inexpensive to implement when compared to other topologies. However, the low cost of implementing the technology is offset by the high cost of managing the network. Additionally, because only one cable is utilized, it can be the single point of failure. In this topology data being transferred may be accessed by any node.
Linear bus: The type of network topology in which all of the nodes of the network are connected to a common transmission medium which has exactly two endpoints (this is the 'bus', which is also commonly referred to as the backbone, or trunk) – all data that is transmitted between nodes in the network is transmitted over this common transmission medium and is able to be received by all nodes in the network simultaneously. Note: When the electrical signal reaches the end of the bus, the signal is reflected back down the line, causing unwanted interference. As a solution, the two endpoints of the bus are normally terminated with a device called a terminator that prevents this reflection. Distributed bus: The type of network topology in which all of the nodes of the network are connected to a common transmission medium which has more than two endpoints that are created by adding branches to the main section of the transmission medium – the physical distributed bus topology functions in exactly the same fashion as the physical linear bus topology (i.e., all nodes share a common transmission medium). Star: In local area networks with a star topology, each network host is connected to a central hub with a point-to-point connection. So it can be said that every computer is indirectly connected to every other node with the help of the hub. In star topology, every node (computer workstation or any other peripheral) is connected to a central node called a hub, router or switch.
The switch is the server and the peripherals are the clients. The network does not necessarily have to resemble a star to be classified as a star network, but all of the nodes on the network must be connected to one central device.
All traffic that traverses the network passes through the central hub. The hub acts as a signal repeater. The star topology is considered the easiest topology to design and implement. An advantage of the star topology is the simplicity of adding additional nodes.
The primary disadvantage of the star topology is that the hub represents a single point of failure. Since all peripheral communication must flow through the central hub, the aggregate central bandwidth forms a network bottleneck for large clusters. Extended star: A type of network topology in which a network that is based upon the physical star topology has one or more repeaters between the central node and the peripheral or 'spoke' nodes, the repeaters being used to extend the maximum transmission distance of the point-to-point links between the central node and the peripheral nodes beyond that which is supported by the transmitter power of the central node or beyond that which is supported by the standard upon which the physical layer of the physical star network is based. If the repeaters in a network that is based upon the physical extended star topology are replaced with hubs or switches, then a hybrid network topology is created that is referred to as a physical hierarchical star topology, although some texts make no distinction between the two topologies. Distributed star: A type of network topology that is composed of individual networks that are based upon the physical star topology connected in a linear fashion – i.e., 'daisy-chained' – with no central or top-level connection point (e.g., two or more 'stacked' hubs, along with their associated star-connected nodes or 'spokes'). Ring: A ring topology is a daisy chain in a closed loop. Data travels around the ring in one direction.
When one node sends data to another, the data passes through each intermediate node on the ring until it reaches its destination. The intermediate nodes repeat (retransmit) the data to keep the signal strong. Every node is a peer; there is no hierarchical relationship of clients and servers. If one node is unable to retransmit data, it severs communication between the nodes before and after it in the ring. Advantages: • When the load on the network increases, its performance is better than that of bus topology. • There is no need of a network server to control the connectivity between workstations. Disadvantages: • Aggregate network bandwidth is bottlenecked by the weakest link between two nodes.
Mesh: In a partially connected network, certain nodes are connected to exactly one other node, but some nodes are connected to two or more other nodes with a point-to-point link. This makes it possible to make use of some of the redundancy of a mesh topology that is physically fully connected, without the expense and complexity required for a connection between every node in the network. Hybrid: Hybrid networks combine two or more topologies in such a way that the resulting network does not exhibit one of the standard topologies (e.g., bus, star, ring, etc.). For example, a tree network (or star-bus network) is a hybrid topology in which star networks are interconnected via bus networks. However, a tree network connected to another tree network is still topologically a tree network, not a distinct network type. A hybrid topology is always produced when two different basic network topologies are connected.
A star-ring network consists of two or more ring networks connected using a multistation access unit (MAU) as a centralized hub. Snowflake topology is a star network of star networks.
Two other hybrid network types are hybrid mesh and hierarchical star. Daisy chain: Except for star-based networks, the easiest way to add more computers into a network is by daisy-chaining, or connecting each computer in series to the next. If a message is intended for a computer partway down the line, each system bounces it along in sequence until it reaches the destination. A daisy-chained network can take two basic forms: linear and ring. • A linear topology puts a two-way link between one computer and the next. However, this was expensive in the early days of computing, since each computer (except for the ones at each end) required two receivers and two transmitters.
• By connecting the computers at each end, a ring topology can be formed. An advantage of the ring is that the number of transmitters and receivers can be cut in half, since a message will eventually loop all of the way around.
When a node sends a message, the message is processed by each computer in the ring. If the ring breaks at a particular link then the transmission can be sent via the reverse path, thereby ensuring that all nodes are always connected in the case of a single failure. Centralization: The star topology reduces the probability of a network failure by connecting all of the peripheral nodes (computers, etc.) to a central node. When the physical star topology is applied to a logical bus network such as Ethernet, this central node (traditionally a hub) rebroadcasts all transmissions received from any peripheral node to all peripheral nodes on the network, sometimes including the originating node. All nodes may thus communicate with all others by transmitting to, and receiving from, the central node only. The failure of a transmission line linking any peripheral node to the central node will result in the isolation of that peripheral node from all others, but the remaining peripheral nodes will be unaffected.
However, the disadvantage is that the failure of the central node will cause the failure of all of the peripheral nodes. If the central node is passive, the originating node must be able to tolerate the reception of an echo of its own transmission, delayed by the two-way round-trip transmission time (i.e., to and from the central node) plus any delay generated in the central node.
An active star network has an active central node that usually has the means to prevent echo-related problems. A tree topology (a.k.a. hierarchical topology) can be viewed as a collection of star networks arranged in a hierarchy.
This tree has individual peripheral nodes (e.g., leaves) which are required to transmit to and receive from one other node only and are not required to act as repeaters or regenerators. Unlike the star network, the functionality of the central node may be distributed. As in the conventional star network, individual nodes may thus still be isolated from the network by a single-point failure of a transmission path to the node. If a link connecting a leaf fails, that leaf is isolated; if a connection to a non-leaf node fails, an entire section of the network becomes isolated from the rest. To alleviate the amount of network traffic that comes from broadcasting all signals to all nodes, more advanced central nodes were developed that are able to keep track of the identities of the nodes that are connected to the network.
These switches will 'learn' the layout of the network by 'listening' on each port during normal data transmission, examining the data packets and recording the address/identifier of each connected node and which port it is connected to in a lookup table held in memory. This lookup table then allows future transmissions to be forwarded to the intended destination only.
Decentralization: In a partially connected mesh topology, there are at least two nodes with two or more paths between them to provide redundant paths in case the link providing one of the paths fails. Decentralization is often used to compensate for the single-point-failure disadvantage that is present when using a single device as a central node (e.g., in star and tree networks). A special kind of mesh, limiting the number of hops between two nodes, is a hypercube. The number of arbitrary forks in mesh networks makes them more difficult to design and implement, but their decentralized nature makes them very useful.
In 2012 the Institute of Electrical and Electronics Engineers (IEEE) published the Shortest Path Bridging protocol to ease configuration tasks and allow all paths to be active, which increases bandwidth and redundancy between all devices. This is similar in some ways to a grid network, where a linear or ring topology is used to connect systems in multiple directions. A multidimensional ring has a toroidal topology, for instance. A fully connected network, complete topology, or full mesh topology is a network topology in which there is a direct link between all pairs of nodes.
In a fully connected network with n nodes, there are n(n-1)/2 direct links. Networks designed with this topology are usually very expensive to set up, but provide a high degree of reliability due to the multiple paths for data that are provided by the large number of redundant links between nodes. This topology is mostly seen in military applications.
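As a quick illustration of how fast the link count grows: a full mesh of 10 nodes needs 10 × 9 / 2 = 45 links, while a full mesh of 50 nodes already needs 50 × 49 / 2 = 1,225 links, which is why the topology becomes expensive to scale.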
Object-Oriented Programming (OOP) is a popular programming paradigm that is widely used to develop reliable and scalable software solutions. OOP is a programming approach that focuses on creating objects that can interact with each other to achieve a given task. In this blog post, we will outline some essential tips on how to learn object-oriented programming quickly.
Learning OOP is essential for any programmer who wishes to develop sophisticated applications for any industry. We will discuss the key concepts of OOP, how to write OOP code and the best practices that can help you become an expert in OOP.
Additionally, we will share some resources that you can use to improve your OOP skills, such as online courses, books, and coding exercises. By the end of this blog post, you should have a good understanding of OOP and be confident in your ability to write OOP code. So, let’s dive into the world of OOP and discover how you can become a proficient OOP developer.
Basics of Object-Oriented Programming
Object-oriented programming (OOP) has become a popular programming paradigm that provides solutions to complex software problems by breaking them up into smaller, more manageable modules called objects. For beginners who are interested in learning OOP, there are four basic principles to master. Let’s take a closer look at each principle:
- Encapsulation is the process of hiding data within a class and providing methods to access and modify it.
- For example, let’s say you have a class called “Person,” with attributes like name, age, and gender. Using encapsulation, you can protect this data by making the attributes private and only accessible through methods like “setAge()” and “getAge().”
- Encapsulation is important because it minimizes errors in your program and allows you to maintain code more efficiently; a short code sketch follows.
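Here is a minimal Java sketch of encapsulation, using a hypothetical Person class like the one described above (the method bodies are invented for illustration):

```java
// Encapsulation: the fields are private, so outside code can only reach
// them through methods that can enforce the class's rules.
public class Person {
    private String name;
    private int age;

    public Person(String name, int age) {
        this.name = name;
        this.age = age;
    }

    public int getAge() {
        return age;
    }

    public void setAge(int age) {
        // The class, not the caller, decides which values are acceptable.
        if (age < 0) {
            throw new IllegalArgumentException("Age cannot be negative");
        }
        this.age = age;
    }
}
```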
- Abstraction is all about modeling only the essential details of an object and ignoring the rest.
- For example, when you drive a car, all you need to know is the gas pedal, brake pedal, and steering wheel. You don’t need to understand how the engine, transmission, and other parts work because those details are abstracted away from you.
- Abstraction is important because it simplifies complex systems for users and programmers; a brief sketch follows.
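One hedged way to express the car analogy in Java is sketched below; the class and method names are assumptions made for this example rather than code from the original post:

```java
// Abstraction: the public methods expose only what a driver needs,
// while the engine details stay hidden inside the class.
public class AbstractedCar {
    private double speed = 0;

    public void pressGasPedal() {
        injectFuel();                    // internal detail the driver never sees
        speed += 5;
    }

    public void pressBrakePedal() {
        speed = Math.max(0, speed - 5);
    }

    public double getSpeed() {
        return speed;
    }

    private void injectFuel() {
        // engine internals abstracted away from the caller
    }
}
```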
- Inheritance is the process where objects can take on the attributes and methods of another object.
- For example, you may have a class called “Vehicle,” with methods like “accelerate()” and “decelerate().” You can then create a subclass called “Car,” which takes on the methods of “Vehicle” and adds its own methods like “openDoors()” and “closeDoors().”
- Inheritance is important because it reduces code redundancy and allows for specialized classes to be created to meet specific needs; a brief sketch follows.
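A short Java sketch of the Vehicle/Car relationship described above (an assumed example, not production code):

```java
// Inheritance: Car reuses Vehicle's behaviour and adds methods of its own.
class Vehicle {
    protected int speed = 0;

    public void accelerate() { speed += 10; }
    public void decelerate() { speed = Math.max(0, speed - 10); }
}

class Car extends Vehicle {
    private boolean doorsOpen = false;

    public void openDoors()  { doorsOpen = true; }
    public void closeDoors() { doorsOpen = false; }
}
```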
- Polymorphism is the ability to present the same interface for different data types.
- For example, you may have a method called “draw()” used to display the shape of different objects like circles, squares, and triangles. Each object can have its own implementation of the “draw()” method, but they all share the same interface.
- Polymorphism is essential because it simplifies code by making it more flexible and adaptable to new requirements; a short sketch follows.
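A brief Java sketch of the draw() example, with the concrete shape classes invented for illustration:

```java
// Polymorphism: every shape supplies its own draw(), but callers only
// ever use the shared Shape interface.
interface Shape {
    void draw();
}

class Circle implements Shape {
    public void draw() { System.out.println("Drawing a circle"); }
}

class Square implements Shape {
    public void draw() { System.out.println("Drawing a square"); }
}

public class ShapeDemo {
    public static void main(String[] args) {
        Shape[] shapes = { new Circle(), new Square() };
        for (Shape s : shapes) {
            s.draw();   // the call site does not care which concrete type it is
        }
    }
}
```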
Understanding the basics of OOP is important because OOP is a powerful tool used in many applications, from web development to mobile app design. Mastery of these four principles will help you build applications that are easier to write, test, maintain, and modify.
Read: 10 Must-Have Qualities of a Good Software Developer
Choose a Programming Language
Object-oriented programming (OOP) can be learned using popular programming languages such as:
- Java: Known for its stability and broad usage, Java has a large library of frameworks and tools for OOP development. Resources for learning Java include Java Programming Masterclass for Software Developers on Udemy and Java Programming Basics on Coursera.
- Python: A popular language for data analysis, Python is also a great option to learn OOP concepts. Python.org offers a tutorial for beginners and the Introduction to Computer Science and Programming Using Python course on edX is a good resource for those looking for a structured learning program.
- C++: C++ is often used in systems programming and software development. For beginners, Codecademy offers a free C++ course and C++ Primer by Stanley B. Lippman is a good resource for those who prefer books.
- C#: Developed by Microsoft, C# is used in the development of Windows applications and video games. C# for Absolute Beginners on Microsoft Virtual Academy and C# Yellow Book by Rob Miles are both excellent resources.
When choosing a programming language to learn OOP, there are a few factors to consider:
- Level of Difficulty: Some languages, such as Java and Python, are easier to learn than others like C++. Choosing a language that is within your abilities is important for efficient learning.
- Personal Interests: Select a language that aligns with your interests and the type of software you want to develop. For example, if you’re interested in developing video games, C# may be a good choice.
- Compatibility: Consider the compatibility of the chosen language with tools, frameworks, and operating systems. For instance, if you want to build an iOS app, learning Swift or Objective-C may be more suitable.
Resources for learning a programming language include:
- Online Courses: Platforms such as Udemy, Coursera, and edX offer structured courses for learning programming languages at various skill levels.
- YouTube: There are numerous YouTube channels dedicated to providing tutorials and tips for learning programming languages in an engaging and entertaining way.
- Books: Books are an excellent way to dive deep into a language and learn it in detail. There are numerous books available for each programming language, from beginner to advanced levels.
- Online Communities: Joining online communities such as Reddit, Stack Overflow, or Quora can provide access to information, resources, and insights from experienced programmers.
To sum up, choosing a programming language for learning OOP depends on personal interests, job market demand, and compatibility. It’s important to assess the level of difficulty and choose the language that aligns with the type of software you want to develop. There are several resources available for learning each programming language, including online courses, YouTube, books, and online communities.
Read: The Importance of User Experience (UX) Design in Software Development
Practice, Practice, Practice
Learning object-oriented programming quickly can be challenging, but there are some effective strategies to help you achieve it. In addition to grasping the fundamental concepts and principles of OOP, practicing is undoubtedly crucial to mastering this programming style. As the old saying goes, practice makes perfect. Here’s why hands-on learning is critical to OOP proficiency:
Importance of Hands-On Learning
- Enables you to apply theoretical knowledge to real-world problems
- Improves memory retention and understanding of the concepts through active engagement
- Helps you identify your areas of weaknesses and strengths
- Builds confidence and motivation as you see your progress through trial and error
- Prepares you for OOP-related job interviews by showcasing your practical skills
Different Ways to Practice OOP
- Building a simple program from scratch using an OOP language like Java, C++, or Python
- Working on small coding challenges that focus on OOP-specific concepts like inheritance, polymorphism, and encapsulation
- Coding exercises that require you to refactor non-OOP code into an OOP structure (a short before-and-after sketch appears after this list)
- Contributing to open-source OOP projects to gain exposure to real-world issues and collaboration
- Learning OOP through game development, which provides interactive and engaging scenarios to apply OOP principles
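As a rough before-and-after illustration of that refactoring exercise, assuming made-up shape types and methods:

```java
// Before: a procedural-style helper that switches on a type code.
class AreaProcedural {
    static double area(String kind, double a, double b) {
        if (kind.equals("rectangle")) return a * b;
        if (kind.equals("triangle"))  return 0.5 * a * b;
        throw new IllegalArgumentException("Unknown shape: " + kind);
    }
}

// After: the same logic refactored into an OOP structure, where each
// class owns its own area calculation.
interface Figure {
    double area();
}

class Rectangle implements Figure {
    private final double width, height;
    Rectangle(double width, double height) { this.width = width; this.height = height; }
    public double area() { return width * height; }
}

class Triangle implements Figure {
    private final double base, height;
    Triangle(double base, double height) { this.base = base; this.height = height; }
    public double area() { return 0.5 * base * height; }
}
```

The object-oriented version removes the type-code check, so supporting a new shape means adding a class instead of editing an existing function.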
Resources to Find Practice Projects
- GitHub: The world’s largest community of developers is an excellent resource for OOP projects. With thousands of OOP projects posted daily, you will most likely find one that interests you and matches your skill level.
- HackerRank: A coding website that offers coding challenges and competitions for a variety of programming languages, including OOP, HackerRank helps you test your OOP problem-solving skills and learn from other community members.
- Codewars: Another online coding platform that offers code challenges, Codewars has a plethora of OOP-focused projects that are ranked according to their difficulty level, allowing you to progress at your own pace.
- StackOverflow: A popular developer community that features discussions and questions about coding topics, StackOverflow is an excellent resource for finding OOP-related issues and concerns that you can contribute to and learn from.
Practicing OOP is an essential aspect of mastering this programming style quickly. Not only does it help you apply your theoretical knowledge to real-world problems, but it also improves your memory retention, builds your confidence, and prepares you for job interviews. By using the different ways to practice OOP and the resources to find practice projects, you can accelerate your learning process and achieve your OOP proficiency faster. Happy coding!
Read: Skills and Steps You Need to Become a Software Developer
Faster Object-Oriented Programming Learning Hack: Read Programming Books
Object-Oriented Programming (OOP) is among the most popular programming paradigms of our time. It has become a vital part of the software development industry, and demand for programmers, skilled in its application, has risen significantly in recent years.
Many developers, both beginners and professionals, find it challenging to learn OOP quickly. However, reading programming books is an effective way to learn OOP’s concepts and principles and master it fast.
Importance of Reading Programming Books
Books are an essential tool in the arsenal of any programmer. Reading books on OOP enables you to gain knowledge and understanding of the essential principles and concepts of OOP. Programming books offer in-depth knowledge that helps both beginners and professionals to understand how to solve specific programming problems.
The benefits of reading programming books include the following:
- They help you acquire a better understanding of programming concepts
- They can teach you how to design better algorithms and write more efficient code
- They enable you to keep up with the latest programming trends and practices
- They can provide practical advice on programming best practices
Popular OOP Books for Beginners
The software development community has several OOP books worth reading. Below are some of the most popular OOP books for beginners:
- Head First Java, 2nd Edition – Kathy Sierra and Bert Bates
- Clean Code: A Handbook of Agile Software Craftsmanship – Robert C. Martin
- Head First Object-Oriented Analysis and Design – Brett D. McLaughlin, Gary Pollice, and David West
- Code Complete: A Practical Handbook of Software Construction – Steve McConnell
- The Pragmatic Programmer: From Journeyman to Master – Andrew Hunt and David Thomas
Getting the Most Out of Reading Programming Books
Learning OOP through programming books requires more than mere reading. Here’s how to get the most out of reading programming books:
- Set goals: Create goals before reading. What do you want to learn by reading the book? What topics do you need to understand?
- Take notes: Take detailed notes of the essential concepts, ideas, or points you’ve learned while reading.
- Practice: Writing practical examples of what you’ve learned helps you cement the knowledge in your mind and improve retention levels.
- Discuss the contents: Engage a peer or mentor to help you better understand the content or practice with programming tasks.
Reading programming books can become monotonous and lead to boredom. To prevent this, break up your reading sessions. Try reading a chapter or two a day and take short breaks in between. Keep your sessions short, focused, and straightforward.
Programming books offer a wealth of knowledge in OOP and are an excellent resource for anyone looking to learn this popular programming paradigm.
The practical advice, detailed explanations, and coding examples they provide ensure that you’ll quickly learn the essential concepts and principles required for OOP programming. Read programming books, set goals, take detailed notes, practice, and discuss the content with peers to get a more thorough understanding of OOP programming concepts.
Read: Is a Software Developer the Same as a Web Developer?
Join an Online Community or Forum
Learning Object-Oriented Programming (OOP) quickly can be a challenging task. However, there are several ways to make the process more manageable. One of the most effective ways is by joining an online community or forum.
Benefits of Joining an Online Community or Forum
- Access to a supportive community: Joining an online community or forum provides access to a supportive group of like-minded individuals who are also learning OOP.
- Opportunity to ask questions: A community or forum provides a platform to ask questions, get answers and engage in discussions that will help you learn OOP faster.
- Access to learning resources: Many communities and forums provide access to learning resources, including books, videos, articles, and tutorials, which can help you learn OOP more efficiently.
- Opportunity to network: Joining an online community or forum provides an opportunity to network with other professionals in the industry, which can come in handy in the future.
Read: Basic Things to Know About Software as a Service (SaaS)
Popular OOP Online Communities and Forums
- Stack Overflow: Stack Overflow is one of the most popular and largest online communities for programmers. It provides a platform to ask and answer OOP and other programming-related questions.
- Reddit: Reddit is a social platform with several subreddits devoted to programming topics. Joining subreddits like r/learnprogramming and r/programming can provide access to discussions and learning resources related to OOP.
- GitHub: GitHub is a platform that provides access to several open-source projects related to OOP. By participating in these projects, you can learn OOP while working on real-world projects.
- Codementor: Codementor is an online platform that connects learners with experienced developers who can mentor them. By joining Codementor, you can get access to OOP experts who can help you learn OOP quickly.
How to Make the Most of Online Communities
- Participate regularly: To make the most of online communities, you need to participate regularly. Ask questions, answer questions, and engage in discussions on a regular basis.
- Be respectful: Online communities and forums are made up of people from different backgrounds, cultures, and experiences. Always be respectful of others, their views, and their opinions.
- Contribute: Don’t just ask questions, try to contribute to the community by answering questions, sharing your knowledge, and helping other learners on their journey of learning OOP.
- Use the search function: Before asking a question, try to use the search function to see if the question has been asked before. This will save time and ensure that the community does not get flooded with the same questions repeatedly.
- Keep learning: Keep learning and growing your knowledge of OOP. This will ensure that you contribute effectively to the community and also help you become a better programmer.
Joining an online community or forum can be a great way to learn Object-Oriented Programming quickly. By leveraging the support, resources, and expertise available in these communities, you can accelerate your learning, become a better programmer, and deepen your understanding of OOP. So don’t hesitate to join an online community or forum today, and start your journey of learning OOP the easy way!
Read: Differences Between a Software Developer and Software Designer
Learning Object-Oriented Programming (OOP) can be challenging, but there are several ways to make it easier.
Here’s a quick recap of our tips to learn OOP quickly:
- Understand the basic principles of OOP
- Choose a programming language and stick to it
- Practice, practice, practice
- Read OOP books and articles
- Collaborate with others in the programming community
Remember, learning OOP takes time, effort, and dedication. But don’t be discouraged!
You can start small by practicing simple projects and gradually work your way up to more complex applications.
Remember that mistakes and challenges are part of the learning process, and don’t be afraid to seek help or advice from experienced developers.
So, take the plunge and begin your journey to becoming a proficient Object-Oriented Programmer. You’ve got this!
Read: Who is a Software Developer?
Before You Go…
Hey, thank you for reading this blog to the end. I hope it was helpful. Let me tell you a little bit about Nicholas Idoko Technologies. We help businesses and companies build an online presence by developing web, mobile, desktop, and blockchain applications.
We also help aspiring software developers and programmers learn the skills they need to have a successful career. Take your first step to becoming a programming boss by joining our Learn To Code academy today!
Be sure to contact us if you need more information or have any questions! We are readily available.
In special relativity, the Minkowski spacetime is a four-dimensional manifold, created by Hermann Minkowski. It has four dimensions: three dimensions of space (x, y, z) and one dimension of time. Minkowski spacetime has a metric signature of (-+++), and describes a flat surface when no mass is present. The convention in this article is to call Minkowski spacetime simply spacetime.
Spacetime can be thought of as a four-dimensional coordinate system in which the axes are given by (ct, x, y, z).
They can also be denoted by (x⁰, x¹, x², x³), where x⁰ = ct and c represents the speed of light. The reason for measuring time in units of the speed of light times the time coordinate is so that the units for time are the same as the units for space. Spacetime has the differential for arc length given by
ds² = −c² dt² + dx² + dy² + dz².
This implies that spacetime has a metric tensor given by
η = diag(−1, 1, 1, 1),
a 4 × 4 matrix with −1, 1, 1, 1 on the diagonal and zeros everywhere else.
As before stated, spacetime is flat everywhere; to some extent, it can be thought of as a plane.
Spacetime can be thought of as the "arena" in which all of the events in the universe take place. All that one needs to specify a point in spacetime is a certain time and a typical spatial orientation. It is hard (virtually impossible) to visualize four dimensions, but some analogy can be made, using the method below.
Hermann Minkowski introduced a method for graphing coordinate systems in Minkowski spacetime. In such a diagram, different coordinate systems will disagree on an object's spatial orientation and/or position in time. A basic Minkowski diagram has only one spatial axis (the x-axis) and one time axis (the ct-axis). If need be, one can introduce an extra spatial dimension (the y-axis); unfortunately, this is the limit to the number of dimensions: graphing in four dimensions is impossible. The rules for graphing in Minkowski spacetime go as follows:
1) The angle θ between the x-axis and the x'-axis is given by tan θ = v/c, where v is the velocity of the object.
2) The speed of light through spacetime always makes an angle of 45 degrees with either axis.
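A quick check shows why light rays sit at exactly 45 degrees: for a light signal moving along the x-axis, x = ct, so dx = c dt and the interval becomes ds² = −c² dt² + dx² = −c² dt² + c² dt² = 0. A path with ds² = 0 therefore rises one unit of ct for every unit of x, which is a 45-degree line on the diagram.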
Spacetime in general relativity
In the general theory of relativity, Einstein used the Einstein field equations,
Gμν = (8πG/c⁴) Tμν,
to allow for spacetime to actually curve; the resulting effects are those of gravity.
Once cosmologists had hit on the Copernican Principle (Earth cannot be special by definition), it was instant orthodoxy. Michael Rowan-Robinson, president of the Royal Astronomical Society (2006-2008), opined in 1996:
It is evident that in the post-Copernican era of human history, no well-informed and rational person can imagine that the Earth occupies a unique position in the universe.1
For one thing, the Principle, also called the Principle of Mediocrity, solved so many problems around lack of evidence. What could not be demonstrated could merely be asserted.
In 2010, NASA’s Deep Space Camera had, we were told, located a “host of ‘Earths'” — up to 140 in six weeks, orbiting stars in our galaxy.
“The figures suggest our galaxy, the Milky Way [which has more than 100 billion stars], will contain 100 million habitable planets, and soon we will be identifying the first of them,” said Dimitar Sasselov, professor of astronomy at Harvard University and a scientist on the Kepler Mission. “There is a lot more work we need to do with this, but the statistical result is loud and clear, and it is that planets like our own Earth are out there.”
Actually, from present data, these “100 million habitable planets” are assumed to be Earth-like only in size and rockiness. A current official definition of habitable planets is “in the zone around the star where liquid water could exist,” but the ones discovered so far are unsuitable in many other ways.
Then a new cosmology term hit the media, “super-Earths.” It means “bigger than Earth,” but smaller than gas giant Neptune. Super-Earths could be the most numerous type of planet, in tight orbits around their star — which is actually bad news for life. Nonetheless, some insist, they may be more attractive to life than Earth is. Indeed, the Copernican Principle allows us to assume that some are inhabited already.
In reality, even the rocky exoplanets (known as of early 2013) that are Earth-sized are not Earth-like. For example, the Kepler mission’s first rocky planet find is described as follows: “Although similar in size to Earth, its orbit lasts just 0.84 days, making it likely that the planet is a scorched, waterless world with a sea of lava on its starlit side.” As space program physicist Rob Sheldon puts it, “Earth is a rocky planet but so is a solid chunk of iron at 1300 degrees orbiting a few solar radii above the star. In any event, a planet may look Earth-like but have a very different internal structure and atmosphere.” Could exoplanets support life that has a different chemical composition? Absent details about the composition, who knows? Despite all this, an Earth Similarity Index has been compiled, offered with the caution that life might also exist under unearthly conditions, a caution that renders the Index’s value uncertain.
All this aside, planet scientists felt free to assert in 2011 that there were billions of worlds. Some bid as high as ten billion or tens of billions: We learn that “Every star twinkling in the night sky plays host to an average of 1.6 planets, a new study suggests.” “That implies there are some 10 billion Earth-sized planets in our galaxy.” And “Using a technique called gravitational microlensing, an international team found a handful of exoplanets that imply the existence of billions more.”2
Inspired by the Kepler mission’s science chief William Borucki, one reporter enthused:
How’s this for an astronomical estimate? There are at least 50 billion exoplanets in our galaxy. What’s more, astronomers estimate that 500 million of these alien worlds are probably sitting inside the habitable zones of their parent stars.
A 2013 estimate pegged alien planets that could support life at 60 billion. And we are getting ever closer to the “Goldilocks planet,” one space scientist says, momentarily forgetting that we are very close indeed to the one known Goldilocks planet, the one under our feet.
As journalist Tom Bethell put it in 2007, “This is dogma, lacking any justification.”
He reminds us that longshoreman philosopher Eric Hoffer, who worked on the San Francisco docks for 25 years, noted that intellectuals of the past century had done all in their power
“to denude the human entity of its uniqueness”; to demonstrate that we are “not essentially distinct from other forms of life.” He contrasted Pascal’s comment that “the firmament, the stars, the earth are not equal in value to the lowest human being,” with that of “the humanitarian” Bertrand Russell: “the stars, the wind in waste places mean more to me than even the human beings I love best.” Somehow, we take that as a sign of our maturity. Our philosophers want to rub our noses in the dust. Thou art dust!”
But some prominent scientists want to rub our noses too. And, like the conjurors of old, they despise mere lack of evidence. Indeed, they immediately proceeded to conjure the inhabitants of these planets.
(1) Rowan-Robinson, Michael (1996). Cosmology (3rd ed.). Oxford University Press. pp. 62-63.
(2) See also “‘Tens of billions’ of planets in habitable zones,” Yahoo News, Mar 29, 2012.
Photo credit: Earth Rise; NASA, Apollo 8 Crew.
Finding the center of a circle can help you perform basic geometric tasks like finding the circumference or area. There are several ways to find the center point! You can draw crossed lines; you can draw overlapping circles; or you can use a straightedge and ruler.
Drawing Crossed Lines
1. Draw a circle. Use a compass, or trace any circular object. The size of the circle does not matter. If you're finding the center of an existing circle, then you don't need to draw a new circle.
- A geometry compass is a tool specifically designed to draw and measure circles. Buy one in a school or office supply store!
2. Sketch a chord between two points. A chord is a straight line segment that links any two points along the edge of a curve. Name the chord AB.
- Consider using a pencil to sketch your lines. This way, you can erase the marks once you've found the center. Draw with a light touch so that it'll be easier to erase.
3. Draw a second chord. This line should be parallel and equal in length to the first chord that you drew. Name this new chord CD.
4. Make another line between A and C. This third chord (AC) should stretch through the center of the circle – but you will need to draw one more line to find the exact center point.
5. Join B and D. Draw one final chord (BD) across the circle between Point B and Point D. This new line should cross over the third chord (AC) that you drew.
6. Find the center. If you have drawn straight and accurate lines, then the center of the circle lies at the intersection of the crossed lines AC and BD. Mark the center point with a pen or pencil. If you only want the center point marked, then erase the four chords that you drew.
Using Overlapping Circles
1. Draw a chord between two points. Use a ruler or straightedge to draw a straight line inside the circle, from one edge to another. The points that you use don't matter. Label the two points A and B.
2. Use a compass to draw two overlapping circles. The circles should be the exact same size. Make A the center of one circle, and B the center of the other. Space the two circles so that they overlap like a Venn diagram.
- Draw these circles in pencil, not pen. The process will be simpler if you are able to erase these circles later on.
3. Draw a vertical line through the two points at which the circles intersect. There will be a point at the top and a point at the bottom of the "Venn diagram" space created between the overlap of the circles. Use a ruler to make sure that the line protrudes straight through these points. Finally, label the two points (C and D) at which this new line crosses the rim of the original circle. This line marks the diameter of the original circle.
4. Erase the two overlapping circles. This should clear up your work space for the next step of the process. Now, you should have a circle with two perpendicular lines running through it. Do not erase the center points (A and B) of these circles! You will be drawing two new circles.
5. Sketch two new circles. Use your compass to draw two equal circles: one with the point C at its center, and one with the point D. These circles, too, should overlap like a Venn diagram. Remember: C and D are the points at which the vertical line intersects the main circle.
6. Draw a line through the points at which these new circles intersect. This straight, horizontal line should cut through the overlap space of the two new circles. This line is the second diameter of your original circle, and it should be exactly perpendicular to the first diameter line.
7. Find the center. The intersection point of the two straight diameter lines is the exact center of the circle! Mark this center point for reference. If you want to clean up the page, feel free to erase the diameter lines and the non-original circles.
Using a Straightedge and a Triangular Ruler
1. Draw two straight, intersecting tangent lines onto the circle. The lines can be completely random. However, the process will be easier if you make them roughly square or rectangular.
2. Translate both of the lines to the other side of the circle. You will end up with four tangent lines forming a parallelogram or a rough rectangle.
3. Draw the diagonals of the parallelogram. The point where these diagonal lines intersect is the circle's center.
4. Check the accuracy of the center with a compass. The center should be on target as long as you didn't slip while translating the lines or when drawing the diagonals. Feel free to erase the parallelogram and diagonal lines.
- Try using graph paper instead of blank or ruled paper. It might help to have the perpendicular lines and boxes for guidance.
- You can also find the center of a circle mathematically by "completing the square." This is useful if you are given a circle equation but aren't working with a physical circle; a short worked sketch follows these tips.
- A straightedge is not the same as a ruler. A straightedge can be any straight and even surface, but a ruler shows measurements. You can turn a straightedge into a functional ruler by marking it with inch or centimeter increments.
- In order to find the true center of a circle, you must use a geometry compass and a straightedge.
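For the "completing the square" tip above, a circle written as x² + y² + Dx + Ey + F = 0 can be rearranged to (x + D/2)² + (y + E/2)² = D²/4 + E²/4 − F, so the center is (−D/2, −E/2). The short Python sketch below simply automates that arithmetic; the example equation is made up for illustration.

```python
import math

def center_and_radius(D, E, F):
    """Center and radius of the circle x^2 + y^2 + D*x + E*y + F = 0,
    found by completing the square in x and in y."""
    cx, cy = -D / 2, -E / 2
    r_squared = cx ** 2 + cy ** 2 - F
    if r_squared <= 0:
        raise ValueError("The equation does not describe a real circle.")
    return (cx, cy), math.sqrt(r_squared)

# Example: x^2 + y^2 - 4x + 6y - 12 = 0  ->  (x - 2)^2 + (y + 3)^2 = 25
print(center_and_radius(-4, 6, -12))  # -> ((2.0, -3.0), 5.0)
```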
Things You’ll Need
- Geometry compass
|
Math Curriculum » Fourth Grade
Fourth grade math students develop fluency with multiplication and division problems. They use various models, strategies, and representations to help them in their solutions. They also extend their knowledge of the base-ten number system, along with adding and subtracting large numbers. They explore equivalencies among fractions, and basic decimal concepts are also developed. In geometry, students study concepts of polygons, lines, and angles.
Grade 4 math instruction develops strong number concepts before introducing the traditional methods or "short cuts" of adding, subtracting, multiplying, or dividing. With this deeper understanding of how numbers work, students are more able to make connections with more complex problems and topics in later grades.
The fourth grade math curriculum follows the Investigations in Number, Data, and Space program with supplementation to meet the New York P-12 Common Core Learning Standards for Mathematics. Listed below are the fourth grade units. Specific skills taught in each unit are given in our Grade 4 District Math Curriculum Maps.
Factors, Multiples, and Arrays--Multiplication and Division-1
Multiple Towers and Division Stories--Multiplication and Division-2
Geometry--Polygons, Lines, and Angles
Landmarks and Large Numbers--Addition, Subtraction, and the Number System
Fraction Cards and Decimal Squares--Fractions and Decimals
How Many Packages? How Many Groups?--Multiplication and Division-3
Penny Jars--Patterns, Functions, and Change
Grade 4 Common Core
(pdf file - 128 KB)
||"pdf file: You need Adobe Acrobat Reader (version 5 or higher) to view this file. Download the free Adobe Acrobat Reader for PC or Macintosh."
Grade 4 District Math Curriculum Map
NY Common Core Learning Standards for Mathematics
TERC Investigations Grade 4 Math Content
Math Resource, K-4
Fayetteville-Manlius School District
|
standing wave ratio (SWR)
What is standing wave ratio (SWR)?
Standing wave ratio (SWR) is the ratio of the maximum magnitude or amplitude of a standing wave to its minimum magnitude. It indicates whether there is an impedance mismatch between the load and the internal impedance on a radio frequency (RF) transmission line, or waveguide. Such mismatches indicate that there are standing waves along the line that can reduce its power transmission efficiency.
What are standing waves?
Standing waves, first observed by scientist Michael Faraday in the 19th century, represent power that the load has not accepted. These unwanted waves are reflected along the line, affecting its transmission efficiency and reducing the power that finally gets transmitted.
A standing wave is the combination of these waves moving in opposite directions and with the same amplitude and frequency. Since the waves are "superimposed" on each other, interference is created and their energies (voltage or current) are either added or canceled out by each other.
Standing waves are created when a transmission line does not terminate correctly. As a result, the traveling wave (also known as the incident wave) gets reflected -- completely or partially -- at the receiving end. Together, the incident and reflected waves give rise to standing waves along the line.
At some points along the transmission line, the two waves (or signals) are in phase, so they add together, leading to maximum voltage and current. These points are known as the voltage or current maxima.
At other points, the two waves are out of phase, so the resultant voltage and current will fall to the minimum (voltage or current minima). Ultimately, the amplitude of standing waves indicates the amount of reflection along the transmission line.
How standing waves work
In real-world conditions, the load on the transmission line, or waveguide, does not absorb all the RF power that reaches it. This power is known as the forward power.
Instead, some power is sent back toward the signal source when the signal reaches the point where the line connects to the load. This is the reflected power or reverse power. The reflected power and forward power together set up a pattern of voltage and current maxima (loops) and minima (nodes) on the line. These patterns are known as standing waves.
Standing wave ratio (SWR) explained
The SWR, then, is the ratio of the following:
- the RF voltage at a loop (maxima) to the RF voltage at a node (minima); or
- the ratio of the RF current at a loop (maxima) to the RF current at a node (minima).
The highest SWR value -- an infinite and, thus, undesirable value -- occurs when no load is connected to the end of the line, meaning the line is unterminated. This can happen when the end of the line is either short-circuited or simply left open (open circuit). A high SWR indicates that there are extreme voltages and currents at certain points on the line.
Sometimes, a moderately high SWR does not produce significant power loss or line overheating and can be tolerated. Such situations may occur at relatively low RF frequencies and low RF power levels as well as at short lengths of low-loss transmission lines.
Mathematical expression of standing wave ratio
When SWR refers to the ratio of forward and reflected voltages, it is known as voltage standing wave ratio, or VSWR. VSWR is the ratio of the maximum voltage magnitude to minimum voltage magnitude on a lossless line.
If the following is true:
i = incident wave and r = reflected wave
And the following is true:
V = voltage and |V| = voltage magnitude
Then the following is also true:
|Vmax| = |Vi| + |Vr| and |Vmin| = |Vi| - |Vr|
|Imax| = |Ii| + |Ir| and |Imin| = |Ii| - |Ir|
VSWR = |Vmax| / |Vmin|
Similarly, the current standing wave ratio (ISWR) is expressed as the following:
ISWR = |Imax| / |Imin|
The SWR value is expressed as 2:1, 5:1, etc. Since it is a numerical ratio, it has no units. When there is a perfect impedance match between the transmission line and the load, the SWR will be 1:1. But when there is a complete mismatch -- that is, if there is a short or open circuit -- the SWR value is ∞:1.
Although SWR is a general way to describe both current and voltage standing waves, the VSWR is more commonly used for RF applications and, in general, for any feeder system.
Standing waves and impedance
In communications and other RF systems, all transmission lines, feeders and loads have some characteristic impedance. For RF applications, 50Ω is a common impedance value. To ensure the maximum power transfer from the source or signal generator to the transmission line or the transmission line to the load, their impedance levels must match. For a 50Ω system, the source impedance, the impedance of the transmission line and the impedance of the load must all be 50Ω.
However, such matches don't always happen. In fact, impedance mismatches between the line and load can and do happen. In such situations, all the power that is transferred into the line and traveling toward the load does not get transferred to the load. Since this power must go somewhere (it cannot disappear), it travels back down the line toward the source. When this happens, the voltages and currents of the forward and reflected waves add or subtract at different points, resulting in standing waves and transmission efficiency loss.
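To put a number on a mismatch like this, the usual textbook shorthand (not spelled out in this article) is the reflection coefficient Γ = (ZL − Z0)/(ZL + Z0), where Z0 is the line's characteristic impedance and ZL is the load impedance. The Python sketch below combines that relation with the |Vmax| and |Vmin| expressions given earlier to estimate the VSWR; the example impedances are arbitrary and the line is assumed lossless.

```python
def vswr_from_impedances(z_load, z_line=50.0):
    """Rough VSWR estimate for a load on a lossless transmission line.

    Uses the standard reflection coefficient gamma = (ZL - Z0) / (ZL + Z0),
    then |Vmax| = |Vi| + |Vr| and |Vmin| = |Vi| - |Vr| with |Vr| = |gamma| * |Vi|.
    """
    gamma = (z_load - z_line) / (z_load + z_line)
    vi = 1.0                      # incident-wave magnitude (arbitrary reference)
    vr = abs(gamma) * vi          # reflected-wave magnitude
    return (vi + vr) / (vi - vr)  # VSWR = |Vmax| / |Vmin|

print(vswr_from_impedances(50))    # perfect 50-ohm match          -> 1.0
print(vswr_from_impedances(100))   # 100-ohm load on a 50-ohm line -> 2.0
print(vswr_from_impedances(150))   # 150-ohm load on a 50-ohm line -> 3.0
```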
Standing wave ratio possible values
Under ideal conditions, the RF voltage on a signal transmission line is the same at all points on the line; therefore, the SWR is 1. When this happens, the load uses all the RF power that reaches it from the transmission line.
However, this optimum condition can exist only when the load -- for example, an antenna or a wireless receiver -- into which the RF power is delivered has the same impedance as the impedance of the transmission line. Simply put, the line and load impedances are identical. Also, the load resistance is the same as the characteristic impedance of the transmission line, and the load contains no reactance. (It is free of inductance or capacitance.)
In real-world conditions, the voltage and current fluctuate at various points along the line. Therefore, the line and load impedances cannot be identical, which is why the SWR cannot be 1. In theory, SWR can reach any value, and while it can exceed 100, in practice it is usually limited by line losses. An SWR of less than 2 is considered a "good match" for many applications; some applications require an SWR of 1.1 or better, while others can tolerate an SWR of 3 or greater.
Standing wave ratio vs. ratio of reflected power to forward power
The SWR on a transmission line is not the same as the ratio of reflected power to forward power. However, the two ratios are related. In general, the higher the ratio of reflected power to forward power, the greater the SWR. The converse is also true.
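For a lossless line the two ratios are tied together through the magnitude of the reflection coefficient: |Γ| = (SWR − 1)/(SWR + 1), and the reflected-to-forward power ratio is |Γ|². This is a standard relation rather than something stated above; the short Python sketch below just tabulates it for a few SWR values.

```python
def reflected_power_fraction(swr):
    """Fraction of the forward power reflected back toward the source on a
    lossless line: |gamma|^2, with |gamma| = (SWR - 1) / (SWR + 1)."""
    gamma = (swr - 1.0) / (swr + 1.0)
    return gamma ** 2

for swr in (1.0, 1.1, 2.0, 3.0, 10.0):
    print(f"SWR {swr:4.1f}:1 -> {reflected_power_fraction(swr):5.1%} of forward power reflected")
# SWR 1.0 -> 0.0% (perfect match), SWR 2.0 -> 11.1%, SWR 3.0 -> 25.0%, SWR 10.0 -> 66.9%
```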
When the SWR is high, the power loss in the line is greater than the loss that occurs when the SWR is 1. This exaggerated loss is known as SWR loss and can be significant, especially when the SWR exceeds 2 and the transmission line has significant loss to begin with.
A high SWR can also have other undesirable effects. For one, it may lead to the overheating of the transmission line. It may also cause a breakdown of the dielectric material separating the line conductors. For all these reasons, RF engineers try to minimize the SWR on communications transmission lines.
Effect of standing wave ratio on real-world applications
Standing waves affect the power transmission efficiency in any system that uses RF and relies on matched impedances. Simply put, there is a significant loss in the transmitted power. Since standing waves result in increased levels of voltage and current at some points along a line, they can damage the transmitter's output transistors. These high levels can also damage the feeder through excessive local heating or arcing.
Impedance mismatches can cause a signal to reflect back toward the source and the antenna. This can cause transmission delays as well as inter-signal interference. In analog applications, such as legacy analog TVs, the interference can result in a "ghost" image being reflected on the screen.
|
Applied Math Lesson Plans
- Addition Math Trail Maker
- Addition Math Crossnumbers
- Equation Worksheet Maker
- Fraction Worksheet Maker
- Math Puzzle Maker
- Math Rubrics
- Multiple Math Skills
- Problem Solving, Volume 1
- Problem Solving, Volume 2
- Attribute Blocks - Familiarizing the students with the different shapes and sizes found in the shapes provided.
- Bridge Building - This activity was the culmination of a several-week-long lesson on bridges and the forces acting upon them.
- Collecting and Analyzing Data - This project requires students to conduct a statistical investigation to determine some typical characteristics of students in their class.
- Counting Principles and Probability - Calculate the odds of an event occurring. Determine and justify the validity of a probability.
- Extending the Number System - Model situations with integers.
- Graphing and Probability - Students will graph, classify, count, round, use probability, ratios, and estimating.
- Graphing Your Students' Favorite Music - The student will be able to demonstrate understanding of collecting, organizing, and analyzing information.
- Input Output Rules and Functions - Students will use story problems to solve for missing numbers and rules/functions in an input/output table.
- Insect/Spider Place Value - Continuing to use insects and spiders as a theme, the students learn about place value. Since they are so connected to this theme, the concept will become more meaningful and thus, be more easily internalized.
- Let's Do Business - The topic of this lesson is to create an entrepreneurial mindset in the students. (2-week lesson)
- Let's Do Math! - Offers a variety of ways to make math fun!
- Manipulating Formulae... Using Recipes to Understand - The student will develop a tentative understanding of writing and using formulae.
- Math in Daily Life - In this exhibit, you'll look at the language of numbers through common situations, such as playing games or cooking. Put your decision-making skills to the test by deciding whether buying or leasing a new car is right for you, and predict how much money you can save for your retirement by using an interest calculator.
- Microsoft Excel Budget Project - Students will use Microsoft Excel to create a spreadsheet to calculate the information.
- Place Value In Number Systems - Use models to explain how the number system is based on 10 and identify the place value of each digit in a multi-digit numeral.
- Percent: Using the Proportion Method - Students will apply prior knowledge of solving a proportion problem to solving a percent problem: finding the part.
- Processes of Design/Engineering 1 - Students will use drafting tools appropriately with increasing skill and precision.
- Scale Factor - The learner will understand and apply scale factor in all applicable situations.
- Who Wants Pizza? Fractions - Using technology, have the children properly identify how they have divided their pizza, both orally and in writing, using fractions. |
Carbon exists in various forms. In addition to diamond and graphite, there are recently discovered forms with astonishing properties. For example graphene, with a thickness of just one atomic layer, is the thinnest known material, and its unusual properties make it an extremely exciting candidate for applications like future electronics and high-tech engineering. In graphene, each carbon atom is linked to three neighbours, forming hexagons arranged in a honeycomb network. Theoretical studies have shown that carbon atoms can also arrange in other flat network patterns, while still binding to three neighbours, but none of these predicted networks had been realized until now.
Researchers at the University of Marburg in Germany and Aalto University in Finland have now discovered a new carbon network, which is atomically thin like graphene, but is made up of squares, hexagons, and octagons forming an ordered lattice. They confirmed the unique structure of the network using high-resolution scanning probe microscopy and interestingly found that its electronic properties are very different from those of graphene.
In contrast to graphene and other forms of carbon, the new biphenylene network — as the new material is named — has metallic properties. Narrow stripes of the network, only 21 atoms wide, already behave like a metal, while graphene is a semiconductor at this size. “These stripes could be used as conducting wires in future carbon-based electronic devices,” said Professor Michael Gottfried of the University of Marburg, who leads the team that developed the idea. The lead author of the study, Qitang Fan from Marburg, continues, “This novel carbon network may also serve as a superior anode material in lithium-ion batteries, with a larger lithium storage capacity compared to that of the current graphene-based materials.”
The team at Aalto University helped image the material and decipher its properties. The group of Professor Peter Liljeroth carried out the high-resolution microscopy that showed the structure of the material, while researchers led by Professor Adam Foster used computer simulations and analysis to understand the exciting electrical properties of the material.
The new material is made by assembling carbon-containing molecules on an extremely smooth gold surface. These molecules first form chains, which consist of linked hexagons, and a subsequent reaction connects these chains together to form the squares and octagons. An important feature of the chains is that they are chiral, which means that they exist in two mirroring types, like left and right hands. Only chains of the same type aggregate on the gold surface, forming well-ordered assemblies, before they connect. This is critical for the formation of the new carbon material, because the reaction between two different types of chains leads only to graphene. “The new idea is to use molecular precursors that are tweaked to yield biphenylene instead of graphene,” explains Linghao Yan, who carried out the high-resolution microscopy experiments at Aalto University.
For now, the teams work to produce larger sheets of the material, so that its application potential can be further explored. However, “We are confident that this new synthesis method will lead to the discovery of other novel carbon networks.” said Professor Liljeroth.
Breakthrough by uOttawa researchers sees creation of light-emitting solid carbon from CO2 gas
A team of researchers at the University of Ottawa has found a way to use visible light to transform carbon dioxide gas, or CO2, into solid carbon forms that emit light. This development creates a new, low-energy CO2 reduction pathway to solid carbon that will have implications across many fields.
We talked to lead author Dr. Jaspreet Walia, Post-Doctoral Fellow in the School of Electrical Engineering and Computer Science at the University of Ottawa, and research lead Dr. Pierre Berini, uOttawa Distinguished Professor and University Research Chair in Surface Plasmon Photonics, to learn more.
Please tell us about your team’s discovery.
Pierre Berini: “We have reduced carbon dioxide, a greenhouse gas, to solid carbon on a nanostructured silver surface illuminated with green light, without the need for any other reagents. Energetic electrons excited on the silver surface by green light transfer to carbon dioxide molecules, initiating dissociation. The carbon deposits were also found to emit intense yellow light in a process known as photoluminescence.”
How did you come to these conclusions?
Jaspreet Walia: “We used a technique known as Raman Scattering to probe the reaction in real time to determine which products, if any, were forming. To our surprise, we consistently observed signatures of carbon forming on the surface, as well as bright and visible yellow light emanating from the sample.”
Why is it important?
Pierre Berini: “Recently, there has been considerable global research effort devoted to developing technologies that can transform CO2 using visible light. Our work not only demonstrates that this is possible, but also that light emitting solid carbon can be formed.”
What are the applications of this discovery in our lives?
Jaspreet Walia: “This fixed pathway for reagent-less CO2 reduction to light emitting solid carbon, driven by visible light, will be of interest to researchers involved in the development of solar driven chemical transformations, industrial scale catalytic processes, and light-emitting metasurfaces.”
“More specifically, with respect to the creation of carbon directly from CO2 gas, our findings will have an impact on research involving plasmon assisted reactions and I would expect the emergence of applications in the oil and gas industries, where catalytic transformations involving carbon-based compounds is a key focus area.”
“Next-generation reactions involving CO2 and light could also lead to other useful outcomes, such as the potential for artificial photosynthesis. Our findings could be used for light control and manipulation at the nanoscale, or to possibly realize flat light sources due to the light-emitting aspect of our discovery. The nanostructured carbon itself could also be used in catalysis.”
“Finally, the wavelength (color) of the light emitted from carbon dots on a silver surface could be very sensitive to the local environment, making it an attractive sensing platform for pollutants, for example.”
Is there anything you would like to add?
Pierre Berini: “We have learned how to form solid carbon deposits that emit light “out of thick air”, in a breakthrough enabled by light-assisted transformation of CO2 gas driven by energetic electrons. The project was entirely driven by curiosity, with no set expectations on outcomes, and benefitted from close collaboration with graduate students Sabaa Rashid and Graham Killaire, as well as Professors Fabio Variola and Arnaud Weck.”
At an ultracold temperature of 4 kelvins, the carbon increased cooling capacity by more than 30%.
Cryocoolers are ultracold refrigeration units used in surgery and drug development, semiconductor fabrication, and spacecraft. They can be tubes, pumps, tabletop sizes, or larger refrigerator systems.
The regenerative heat exchanger, or regenerator, is a core component of cryocoolers. At temperatures below 10 kelvins (-441.67 degrees Fahrenheit), performance drops precipitously, with maximum regenerator loss of more than 50%.
In their paper, published in Applied Physics Letters by AIP Publishing, researchers at the University of Chinese Academy of Sciences used superactivated carbon particles as an alternative regenerator material to increase cooling capability at temperatures as low as 4 kelvins.
In most cryocoolers, a compressor drives room temperature gas through the regenerator. The regenerator soaks up heat from the compression, and the cooled gas expands. The oscillating ultracold gas absorbs the heat trapped in the regenerator, and the process repeats.
Nitrogen is the most commonly used gas in cryocoolers. But for applications requiring temperatures below 10 kelvins, such as space telescope instruments and magnetic resonance imaging systems, helium is used, because it has the lowest boiling point of any gas, enabling the coldest attainable temperatures.
However, helium’s high specific heat (the amount of heat transfer needed to change the temperature of a substance) results in large temperature fluctuations during the compression and expansion cycle at low temperatures, which seriously affects cooling efficiency.
To address this problem, researchers replaced the regenerator’s conventional rare-earth metals with activated carbon, which is carbon treated with carbon dioxide or superheated steam at high temperatures. This creates a matrix of micron-size pores that increases the carbon’s surface area, enabling the regenerator to hold more helium at low temperatures and remove more heat.
The researchers used a 4 kelvins Gifford-McMahon cryocooler to test the helium adsorption capacity in superactivated carbon particles with a porosity of 0.65 within varying temperature ranges of 3-10 kelvins.
They found when they filled the regenerator with 5.6% of carbon with diameters between 50 and 100 microns, the obtained no-load temperature of 3.6 kelvins was the same as using precious metals. However, at 4 kelvins, cooling capacity increased by more than 30%.
They confirmed improved performance by placing coconut shell-activated carbon into an experimental pulse tube they built and using a thermodynamic calculation model.
“In addition to providing increased cooling capacity, the activated carbon can serve as a low-cost alternative to precious metals and could also benefit low-temperature detectors that are sensitive to magnetism,” author Liubiao Chen said.
The article “Study on the use of porous materials with adsorbed helium as the regenerator of cryocooler at temperatures below 10 K” is authored by Xiaotong Xi, Biao Yang, Yuanheng Zhao, Liubiao Chen, and Junjie Wang. The article will appear in Applied Physics Letters on April 6, 2021 (DOI: 10.1063/5.0044221). After that date, it can be accessed at https://aip.scitation.org/doi/10.1063/5.0044221.
We are made of stardust, the saying goes, and a pair of studies including University of Michigan research finds that may be more true than we previously thought.
The first study, led by U-M researcher Jie (Jackie) Li and published in Science Advances, finds that most of the carbon on Earth was likely delivered from the interstellar medium, the material that exists in space between stars in a galaxy. This likely happened well after the protoplanetary disk, the cloud of dust and gas that circled our young sun and contained the building blocks of the planets, formed and warmed up.
Carbon was also likely sequestered into solids within one million years of the sun’s birth—which means that carbon, the backbone of life on earth, survived an interstellar journey to our planet.
Previously, researchers thought carbon in the Earth came from molecules that were initially present in nebular gas, which then accreted into a rocky planet when the gases were cool enough for the molecules to precipitate. Li and her team, which includes U-M astronomer Edwin Bergin, Geoffrey Blake of Caltech, Fred Ciesla of the University of Chicago and Marc Hirschmann of the University of Minnesota, point out in this study that the gas molecules that carry carbon wouldn’t be available to build the Earth because once carbon vaporizes, it does not condense back into a solid.
“The condensation model has been widely used for decades. It assumes that during the formation of the sun, all of the planet’s elements got vaporized, and as the disk cooled, some of these gases condensed and supplied chemical ingredients to solid bodies. But that doesn’t work for carbon,” said Li, a professor in the U-M Department of Earth and Environmental Sciences.
Much of the carbon was delivered to the disk in the form of organic molecules. However, when carbon is vaporized, it produces much more volatile species that require very low temperatures to form solids. More importantly, carbon does not condense back into an organic form. Because of this, Li and her team inferred that most of Earth's carbon was likely inherited directly from the interstellar medium, avoiding vaporization entirely.
To better understand how Earth acquired its carbon, Li estimated the maximum amount of carbon Earth could contain. To do this, she compared how quickly a seismic wave travels through the core to the known sound velocities of the core. This told the researchers that carbon likely makes up less than half a percent of Earth’s mass. Understanding the upper bounds of how much carbon the Earth might contain tells the researchers information about when the carbon might have been delivered here.
“We asked a different question: We asked how much carbon could you stuff in the Earth’s core and still be consistent with all the constraints,” said Bergin, professor and chair of the U-M Department of Astronomy. “There’s uncertainty here. Let’s embrace the uncertainty to ask what are the true upper bounds for how much carbon is very deep in the Earth, and that will tell us the true landscape we’re within.”
A planet’s carbon must exist in the right proportion to support life as we know it. Too much carbon, and the Earth’s atmosphere would be like Venus, trapping heat from the sun and maintaining a temperature of about 880 degrees Fahrenheit. Too little carbon, and Earth would resemble Mars: an inhospitable place unable to support water-based life, with temperatures around minus 60.
In a second study by the same group of authors, but led by Hirschmann of the University of Minnesota, the researchers looked at how carbon is processed when the small precursors of planets, known as planetesimals, retain carbon during their early formation. By examining the metallic cores of these bodies, now preserved as iron meteorites, they found that during this key step of planetary origin, much of the carbon must be lost as the planetesimals melt, form cores and lose gas. This upends previous thinking, Hirschmann says.
“Most models have the carbon and other life-essential materials such as water and nitrogen going from the nebula into primitive rocky bodies, and these are then delivered to growing planets such as Earth or Mars,” said Hirschmann, professor of earth and environmental sciences. “But this skips a key step, in which the planetesimals lose much of their carbon before they accrete to the planets.”
Hirschmann’s study was recently published in Proceedings of the National Academy of Sciences.
“The planet needs carbon to regulate its climate and allow life to exist, but it’s a very delicate thing,” Bergin said. “You don’t want to have too little, but you don’t want to have too much.”
Bergin says the two studies both describe two different aspects of carbon loss—and suggest that carbon loss appears to be a central aspect in constructing the Earth as a habitable planet.
“Answering whether or not Earth-like planets exist elsewhere can only be achieved by working at the intersection of disciplines like astronomy and geochemistry,” said Ciesla, a U. of C. professor of geophysical sciences. “While approaches and the specific questions that researchers work to answer differ across the fields, building a coherent story requires identifying topics of mutual interest and finding ways to bridge the intellectual gaps between them. Doing so is challenging, but the effort is both stimulating and rewarding.”
Blake, a co-author on both studies and a Caltech professor of cosmochemistry and planetary science, and of chemistry, says this kind of interdisciplinary work is critical.
“Over the history of our galaxy alone, rocky planets like the Earth or a bit larger have been assembled hundreds of millions of times around stars like the Sun,” he said. “Can we extend this work to examine carbon loss in planetary systems more broadly? Such research will take a diverse community of scholars.”
Funding sources for this collaborative research include the National Science Foundation, NASA’s Exoplanets Research Program, NASA’s Emerging Worlds Program and the NASA Astrobiology Program.
J. Li, E. A. Bergin, G. A. Blake, F. J. Ciesla, M. M. Hirschmann. Earth’s carbon deficit caused by early loss through irreversible sublimation. Science Advances, 2021; 7 (14): eabd3632 DOI: 10.1126/sciadv.abd3632
Marc M. Hirschmann, Edwin A. Bergin, Geoff A. Blake, Fred J. Ciesla, Jie Li. Early volatile depletion on planetesimals inferred from C–S systematics of iron meteorite parent bodies. Proceedings of the National Academy of Sciences, 2021; 118 (13): e2026779118 DOI: 10.1073/pnas.2026779118
Carbon is one of the most ubiquitous elements in existence. As the fourth most abundant element in the universe, a building block for all known life and a material that sits in the interior of carbon-rich exoplanets, the element has been subject to intense investigation by scientists.
Decades of studies have shown that carbon’s crystal structure has a significant impact on material properties. In addition to graphite and diamond, the most common carbon structures found at ambient pressures, scientists have predicted several new structures of carbon that could be found above 1,000 gigapascals (GPa). These pressures, approximately 2.5 times the pressure in Earth’s core, are relevant for modeling exoplanet interiors but have historically been impossible to achieve in the laboratory.
That is, until now. Under the Discovery Science program, which allows academic scientists access to Lawrence Livermore National Laboratory’s (LLNL) flagship National Ignition Facility (NIF), an international team of researchers led by LLNL and the University of Oxford has successfully measured carbon at pressures reaching 2,000 GPa (five times the pressure in Earth’s core), nearly doubling the maximum pressure at which a crystal structure has ever been directly probed. The results were reported today in Nature.
“We discovered that, surprisingly, under these conditions carbon does not transform to any of the predicted phases but retains the diamond structure up to the highest pressure,” said Amy Jenei, LLNL physicist and lead author on the study. “The same ultra-strong interatomic bonds (requiring high energies to break), which are responsible for the metastable diamond structure of carbon persisting indefinitely at ambient pressure, are also likely impeding its transformation above 1,000 GPa in our experiments.”
The academic component of the collaboration was led by Professor Justin Wark from the University of Oxford, who praised the Lab’s open access policy. “The NIF Discovery Science program is immensely beneficial to the academic community — it not only allows established faculty the chance to put forward proposals for experiments that would be impossible to do elsewhere, but importantly also gives graduate students, who are the senior scientists of the future, the chance to work on a completely unique facility,” he said.
The team — which also included scientists from the University of Rochester’s Laboratory for Laser Energetics and the University of York — leveraged the unique high power and energy and accurate laser pulse-shaping of LLNL’s National Ignition Facility to compress solid carbon to 2,000 GPa using ramp-shaped laser pulses, simultaneously measuring the crystal structure using an X-ray diffraction platform to capture a nanosecond-duration snapshot of the atomic lattice. These experiments nearly double the record high pressure at which X-ray diffraction has been recorded on any material.
The researchers found that even when subjected to these intense conditions, solid carbon retains its diamond structure far beyond its regime of predicted stability, confirming predictions that the strength of the molecular bonds in diamond persists under enormous pressure, resulting in large energy barriers that hinder conversion to other carbon structures.
“Whether nature has found a way to surmount the high energy barrier to formation of the predicted phases in the interiors of exoplanets is still an open question,” Jenei said. “Further measurements using an alternate compression pathway or starting from an allotrope of carbon with an atomic structure that requires less energy to rearrange will provide further insight.”
Co-authors include David Braun, Damian Swift, Martin Gorman, Ray Smith, Dayne Fratanduono, Federica Coppari, Christopher Wehrenberg, Rick Kraus, David Erskine, Joel Bernier, James McNaney, Robert Rudd and Jon Eggert of LLNL; David McGonegle, Patrick Heighway, Matthew Suggit and Justin Wark of the University of Oxford; Ryan Rygg and Gilbert Collins of the University of Rochester’s Laboratory for Laser Energetics; and Andrew Higginbotham of the University of York.
Featured image: An artist’s rendering of 55 Cancri e, a carbon-rich exoplanet. For the first time in a laboratory setting, experiments conducted at the National Ignition Facility through the Discovery Science program reach the extreme pressures relevant to understanding the structure of carbon that sits in the interior of exoplanets. Credit: ESA/Hubble/M. Kornmesser.
NIF, the world’s highest-energy laser system, is designed to create the extreme conditions — temperatures exceeding 100 million degrees and pressures more than 100 billion times that of the Earth’s atmosphere — similar to those in stars and in detonating nuclear weapons. As the only facility that can create the conditions that are relevant to understanding the operation of modern nuclear weapons, NIF is a crucial element of the National Nuclear Security Administration’s science-based Stockpile Stewardship Program. In addition to helping ensure the reliability of the nuclear deterrent, NIF opens new frontiers in laboratory astrophysics, materials science, hydrodynamics and many other scientific disciplines.
Carbon (C), nitrogen (N), and phosphorus (P) are three bioelements with maximal accumulations in areas of abundant life. C:N:P stoichiometry in soils greatly determines nutrient availability for plants and soil microorganisms, and further reflects the functioning of terrestrial ecosystems.
Soil C:N:P ratios are very susceptible to human activities (e.g., fertilization) and climate factors (e.g., temperature and precipitation). However, how the soil C:N:P stoichiometry is affected by upland and paddy cropping over broad geographical scale remains largely unknown.
A research group led by Prof. SU Yirong from the Institute of Subtropical Agriculture (ISA) of the Chinese Academy of Sciences conducted a study to examine the soil C:N:P stoichiometry in woodland (as control), agricultural upland and paddy from four climate zones (tropics, subtropics, warm temperate, and mid-temperate) across eastern China. The study was published in Soil and Tillage Research on Dec. 30.
The researchers collected 720 surface soil samples from 240 sites with adjacent woodland, agricultural upland, and paddy at a depth of 0-15 cm. Total C, N, and P contents and their ratios were determined.
They found that among climate zones, C and N contents and C:N ratios decreased in the order of mid-temperate > tropics > subtropics > warm temperate, whereas C:P and N:P ratios followed the order of subtropics > mid-temperate and tropics > warm-temperate.
“Compared to woodland, upland agriculture decreased the C content, but increased P content, resulting in the decreases of C:N, C:P, and N:P ratios. Hence, uplands are relatively limited by C and N but enriched with P, particularly in warm temperate zone,” said Prof. SU.
By contrast, the C, N, and P contents in paddy soils were all increased compared to woodland soils, but the larger increases in N and P lead to decreases in the C:N and C:P ratios. The higher P content, and consequently lower C:N:P ratios, in both agricultural soils are consequences of intensive fertilization.
As a whole, the direction of soil C, N, and P contents and their stoichiometric ratios in response to agricultural use was similar in the four climate zones: P increased, but C:N:P ratios decreased. The effects of agricultural use on C:N:P stoichiometry were greater in warmer and wetter zones.
This study provides a comparable dataset on the alteration of soil C, N, and P balances in the main Chinese grain-producing areas subjected to long-term intensive cultivation, which is useful to optimize future agricultural management.
An international team of scientists has defied nature to make diamonds in minutes in a laboratory at room temperature – a process that normally requires billions of years, huge amounts of pressure and super-hot temperatures.
The team, led by The Australian National University (ANU) and RMIT University, made two types of diamonds: the kind found on an engagement ring and another type of diamond called Lonsdaleite, which is found in nature at the site of meteorite impacts such as Canyon Diablo in the US.
One of the lead researchers, ANU Professor Jodie Bradby, said their breakthrough shows that Superman may have had a similar trick up his sleeve when he crushed coal into diamond, without using his heat ray.
“Natural diamonds are usually formed over billions of years, about 150 kilometres deep in the Earth where there are high pressures and temperatures above 1,000 degrees Celsius,” said Professor Bradby from the ANU Research School of Physics.
The team, including former ANU PhD scholar Tom Shiell now at Carnegie Institution for Science, previously created Lonsdaleite in the lab only at high temperatures.
This new unexpected discovery shows both Lonsdaleite and regular diamond can also form at normal room temperatures by just applying high pressures – equivalent to 640 African elephants on the tip of a ballet shoe.
“The twist in the story is how we apply the pressure. As well as very high pressures, we allow the carbon to also experience something called ‘shear’ – which is like a twisting or sliding force. We think this allows the carbon atoms to move into place and form Lonsdaleite and regular diamond,” Professor Bradby said.
Co-lead researcher Professor Dougal McCulloch and his team at RMIT used advanced electron microscopy techniques to capture solid and intact slices from the experimental samples to create snapshots of how the two types of diamonds formed.
“Our pictures showed that the regular diamonds only form in the middle of these Lonsdaleite veins under this new method developed by our cross-institutional team,” Professor McCulloch said.
“Seeing these little ‘rivers’ of Lonsdaleite and regular diamond for the first time was just amazing and really helps us understand how they might form.”
Lonsdaleite, named after the crystallographer Dame Kathleen Lonsdale, the first woman elected as a Fellow to the Royal Society, has a different crystal structure to regular diamond. It is predicted to be 58 per cent harder.
“Lonsdaleite has the potential to be used for cutting through ultra-solid materials on mining sites,” Professor Bradby said.
“Creating more of this rare but super useful diamond is the long-term aim of this work.”
Ms Xingshuo Huang is an ANU PhD scholar working in Professor Bradby’s lab.
“Being able to make two types of diamonds at room temperature was exciting to achieve for the first time in our lab,” Ms Huang said.
The team, which involved University of Sydney and Oak Ridge National Laboratory in the US, have published the research findings in the journal Small.
References : McCulloch, D. G., Wong, S., Shiell, T. B., Haberl, B., Cook, B. A., Huang, X., Boehler, R., McKenzie, D. R., Bradby, J. E., Investigation of Room Temperature Formation of the Ultra‐Hard Nanocarbons Diamond and Lonsdaleite. Small 2020, 2004695. https://doi.org/10.1002/smll.202004695
The capacity of the Amazon forest to store carbon in a changing climate will ultimately be determined by how fast trees die – and what kills them. Now, a huge new study has unravelled what factors control tree mortality rates in Amazon forests and helps to explain why tree mortality is increasing across the Amazon basin.
This large analysis found that the mean growth rate of the tree species is the main risk factor behind Amazon tree death, with faster-growing trees dying off at a younger age. These findings have important consequences for our understanding of the future of these forests. Climate change tends to select fast-growing species. If the forests selected by climate change are more likely to die younger, they will also store less carbon.
The study, co-led by the Universities of Birmingham and Leeds in collaboration with more than 100 scientists, is the first large scale analysis of the causes of tree death in the Amazon and uses long-term records gathered by the international RAINFOR network.
“Understanding the main drivers of tree death allows us to better predict and plan for future trends – but this is a huge undertaking as there are more than 15,000 different tree species in the Amazon,” said lead author Dr Adriane Esquivel-Muelbert, of the Birmingham Institute of Forest Research.
Dr David Galbraith, from the University of Leeds added “We found a strong tendency for faster-growing species to die more, meaning they have shorter life spans. While climate change has provided favourable conditions for these species, because they also die more quickly the carbon sequestration service provided by Amazon trees is declining.”
Tree mortality is a rare event, so truly understanding it requires huge amounts of data. The RAINFOR network has assembled more than 30 years of contributions from more than 100 scientists. It includes records from 189 one-hectare plots, each visited and monitored on average every 3 years. At each visit, researchers measure all trees above 10 cm in diameter and assess the condition of every tree.
In total more than 124,000 living trees were followed, and 18,000 tree deaths recorded and analysed. When trees die, the researcher follows a fixed protocol to unravel the actual cause of death. “This involves detailed, forensic work and amounts to a massive ‘CSI Amazon’ effort conducted by skilled investigators from a dozen nations”, noted Prof. Oliver Phillips, from the University of Leeds.
Dr Beatriz Marimon, from UNEMAT, who coordinates multiple plots in central Brazil added: “Now that we can see more clearly what is going on across the whole forest, there are clear opportunities for action. We find that drought is also driving tree death, but so far only in the South of the Amazon. What is happening here should serve as an early warning system as we need to prevent the same fate overtaking trees elsewhere.”
White dwarfs are the most common fossil stars within the stellar graveyard. It is well known that more than 95% of all main-sequence stars will finish their lives as white dwarfs, Earth-sized objects less massive than ~1.4 Mo — the Chandrasekhar limiting mass — supported by electron degeneracy. A remarkable property of the white-dwarf population is its mass distribution, which exhibits a main peak at ~0.6 Mo, a smaller peak at the tail of the distribution around ~0.82 Mo, and a low-mass excess near ~0.4 Mo. White dwarfs with masses lower than 1.05 Mo are expected to harbour carbon (C)-oxygen (O) cores, enveloped by a shell of helium which is surrounded by a layer of hydrogen. Traditionally, white dwarfs more massive than 1.05 Mo (ultra-massive white dwarfs) are thought to contain an oxygen-neon (Ne) core, and their formation is theoretically predicted as the end product of the isolated evolution of intermediate-mass stars with an initial mass larger than 6-9 Mo, depending on the metallicity and the treatment of convective boundaries during core hydrogen burning. Once the helium in the core has been exhausted, these stars evolve to the super asymptotic giant branch (SAGB) phase, where they reach temperatures high enough to start off-centre carbon ignition under partially degenerate conditions. A violent carbon ignition eventually leads to the formation of an ONe core, which is not hot enough to develop further nuclear burning. During the SAGB phase, the star loses most of its outer envelope by the action of stellar winds, ultimately becoming an ultra-massive ONe-core white dwarf star.
On the other hand, the existence of a fraction of ultra-massive white dwarfs harbouring CO cores is supported by different pieces of evidence. This population could be formed through binary evolution channels, involving the merger of two white dwarfs. Assuming that C is not ignited in the merger event, the merger of two CO-core white dwarfs with a combined mass below the Chandrasekhar limit is expected to lead to the formation of a single CO-core white dwarf substantially more massive than any CO-core white dwarf that can form from single-star evolution (≳1.05 Mo). To date, it has not been possible to distinguish a CO-core from an ONe-core ultra-massive white dwarf from their observed properties, although a promising avenue to accomplish this is by means of white dwarf asteroseismology. Recent studies reveal that 10%-30% of all white dwarfs are expected to be formed as a result of merger events of any kind, and that this percentage rises to 50% for massive white dwarfs (M > 0.9 Mo). In particular, double white dwarf mergers contribute to the formation of massive white dwarfs at the 20-30% level. These results are in line with the existence of an excess of massive white dwarfs in the mass distribution. However, the existence of ultra-massive CO white dwarfs remains to be proven, and their exact percentage is still unclear.
The formation of white dwarfs as a result of stellar mergers is particularly important given the persistent historical interest in the study of the channels that lead to the occurrence of type Ia Supernovae, which are thought to be the violent explosion of a white dwarf exceeding the Chandrasekhar limiting mass. The main pathways to type Ia Supernovae involve binary evolution, namely the single-degenerate channel in which a white dwarf gains mass from a non-degenerate companion, or the double-degenerate channel involving the merger of two white dwarfs. Moreover, ultra-massive white dwarfs resulting from merger episodes are of utmost importance in connection with the formation of rapidly spinning neutron stars/magnetars. Mergers of white dwarfs have also been invoked as the most likely mechanism for the formation of Fast Radio Bursts, which are transient intense radio pulses with duration of milliseconds. Since they have been localized at redshifts z > 0.3, it is thought that Fast Radio Bursts could replace Supernovae of type Ia to probe the expansion of the universe to higher redshifts.
Recent observations provided by the Gaia space mission indicate that a fraction of the ultra-massive white dwarfs experience a strong delay in their cooling, which cannot be attributed only to the occurrence of crystallization, thus requiring an unknown energy source able to prolong their life for long periods of time.
In this study, Maria Camisassa and colleagues showed that these strong delays in the cooling times reported for a selected population of the ultra-massive white dwarfs are caused by the energy released by the sedimentation process of ²²Ne occurring in the interior of CO-core ultra-massive white dwarfs with high ²²Ne content, providing strong support for the formation of CO-core ultra-massive white dwarfs through merger events.
In order to demonstrate the possible existence of these "eternal youth" ultra-massive CO-core white dwarfs, they have analyzed the effect of ²²Ne sedimentation on the local white dwarf population revealed by Gaia observations by means of an up-to-date population synthesis code. The code, based on Monte Carlo techniques, incorporates the different theoretical white dwarf cooling sequences under study, as well as an accurate modeling of the local white dwarf population and observational biases. They have performed a population synthesis analysis of the Galactic thin disk white dwarf population within 100 pc from the Sun under different input models. In order to minimize selection effects, they chose the 100 pc sample, given that this is roughly the maximum distance out to which the sample can be considered volume-limited and thus practically complete.
First, they considered that all the ultra-massive white dwarfs in the simulated sample have ONe core composition. This first synthetic population is shown in the Gaia HR diagram in the upper left panel of Figure 1. The histogram of this synthetic population is shown in the upper right panel (black steps), together with the Gaia 100 pc white dwarf sample (red steps). The Q branch can easily be regarded as the main peak in the histogram of the Gaia 100 pc white dwarf sample, between 13.0 and 13.4. A first glance at these three histograms reveals that ultra-massive ONe white dwarfs fail to account for the pile-up in the Q branch, even though the ONe white dwarf sequences used include all the energy sources resulting from the crystallization process. A quantitative statistical reduced χ²-test analysis of the synthetic population distribution in the Q branch reveals a value of 11.18 when compared to the observed distribution.
The middle left panel of Figure 1 illustrates the HR diagram of a typical synthetic white dwarf population realization, considering that 20% of the ultra-massive white dwarfs come from merger events. That is, 20% of the ultra-massive white dwarfs harbour a CO core. In this model they have also assumed white dwarfs with a high ²²Ne abundance, X²²Ne=0.06. The histogram of this synthetic population, shown in black steps in the middle right panel, reveals that, although a mixed white dwarf population with both CO-core and ONe-core white dwarfs is in better agreement with the observations, the pile-up is still not fully reproduced. The reduced χ² value of this synthetic population is 2.60.
Finally, they have also performed a population synthesis realization in which 50% of the ultra-massive white dwarfs have a merger origin and their core chemical composition is CO. As in the previous model, the ²²Ne content of the white dwarf sequences with merger origin was set to X²²Ne=0.06. The results of this synthetic population are shown in the lower panels of Figure 1. They found that this simulation is in perfect agreement with the observed white dwarf sample, with a reduced χ²-test value of 1.36.
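For readers unfamiliar with the statistic quoted in these comparisons: a reduced χ² measures how far the synthetic star counts per magnitude bin sit from the observed counts, per degree of freedom. The binning and normalization actually used by Camisassa and colleagues are not given in this summary, so the Python sketch below, with made-up bin counts, only illustrates the general form of such a test.

```python
def reduced_chi_square(observed, model, n_fit_params=0):
    """Pearson reduced chi-square between observed bin counts and model
    (synthetic) bin counts; degrees of freedom = usable bins - fitted parameters."""
    terms = [(o - m) ** 2 / m for o, m in zip(observed, model) if m > 0]
    dof = len(terms) - n_fit_params
    return sum(terms) / dof

# Hypothetical star counts per Q-branch magnitude bin (illustration only).
gaia_counts      = [12, 25, 41, 30, 14]   # "observed" histogram
synthetic_counts = [10, 22, 38, 33, 16]   # one population-synthesis realization
print(reduced_chi_square(gaia_counts, synthetic_counts))  # ~0.31: a close match
```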
The better agreement with the Gaia observations shown by the synthetic populations that include CO-core ultra-massive white dwarf sequences is in line with the longer cooling times that characterize these stars due to the ²²Ne sedimentation process. They have also simulated a synthetic population that considers a merger fraction of 50% and a ²²Ne abundance of 0.02, finding a better agreement when compared to simulations computed with only ONe-core white dwarfs, but not as good as the agreement they found for a population with high ²²Ne content (the reduced χ² value of this synthetic population is 4.86). They have also generated synthetic populations considering different ²²Ne abundances in the white dwarf models and found that the best-fit models are obtained for a high ²²Ne abundance (X²²Ne=0.06), such as the one shown in the middle and lower panels of Figure 1. Such a high ²²Ne abundance is not consistent with the standard isolated evolutionary channel, because it would imply that these white dwarfs come from high-metallicity progenitors. However, merger events provide a possible scenario to create such a high ²²Ne abundance. If H were burnt in C-rich layers during the merger event, it would create a high amount of ¹⁴N that could later capture He ions, creating a high ²²Ne abundance before the ultra-massive white dwarf is born.
The analysis of the ultra-massive white dwarf population revealed by Gaia shows that ONe-core white dwarfs alone are not able to account for the pile-up in the ultra-massive Q branch. Indeed, energy sources such as latent heat and the phase separation process due to crystallization, and ²²Ne sedimentation, cannot prevent the fast cooling of these stars. Their study finds that CO-core ultra-massive white dwarfs with high ²²Ne content are long-living objects that should stay on the Q branch for long periods of time. Indeed, their CO core composition, combined with a high ²²Ne abundance, provides a favorable scenario for ²²Ne sedimentation to operate effectively, producing strong delays in the cooling times due to the combination of three effects -- crystallization, ²²Ne sedimentation, and higher thermal content -- and acting as a source of eternal youth.
Their study indicates that the observed evidence of these delays from Gaia provides valuable support for their CO chemical composition, and their past history involving merger events, whilst ONe-core white dwarfs are unable to predict these delays. Moreover, the high percentage of observed carbon-rich atmosphere stars (DQ white dwarfs) on the Q branch supports the hypothesis that a large fraction of the white dwarfs on the Q branch would have been formed through merger events.
References: María E. Camisassa, Leandro G. Althaus, Santiago Torres, Alejandro H. Córsico, Sihao Cheng, Alberto Rebassa-Mansergas, "Forever young white dwarfs: when stellar ageing stops", arXiv:2008.03028, 2020. https://arxiv.org/abs/2008.03028
Permian–Triassic extinction event
The Permian–Triassic (P–Tr) extinction event, colloquially known as the Great Dying, occurred about 252 Ma (million years) ago, forming the boundary between the Permian and Triassic geologic periods, as well as between the Paleozoic and Mesozoic eras. It is the Earth's most severe known extinction event, with up to 96% of all marine species and 70% of terrestrial vertebrate species becoming extinct. It is the only known mass extinction of insects. Some 57% of all families and 83% of all genera became extinct. Because so much biodiversity was lost, the recovery of life on Earth took significantly longer than after any other extinction event, possibly up to 10 million years.
Researchers have variously suggested that there were from one to three distinct pulses, or phases, of extinction. There are several proposed mechanisms for the extinctions; the earlier phase was probably due to gradual environmental change, while the latter phase has been argued to be due to a catastrophic event. Suggested mechanisms for the latter include one or more large bolide impact events, massive volcanism, coal or gas fires and explosions from the Siberian Traps, and a runaway greenhouse effect triggered by sudden release of methane from the sea floor due to methane clathrate dissociation or methane-producing microbes; possible contributing gradual changes include sea-level change, increasing anoxia, increasing aridity, and a shift in ocean circulation driven by climate change.
- 1 Dating the extinction
- 2 Extinction patterns
- 3 Biotic recovery
- 4 Causes of the extinction event
- 5 Methanosarcina
- 6 References
- 7 Further reading
- 8 External links
Dating the extinction
Until 2000, it was thought that rock sequences spanning the Permian–Triassic boundary were too few and contained too many gaps for scientists to determine reliably its details. Uranium-lead dating of zircons from rock sequences in multiple locations in southern China dates the extinction to 252.28±0.08 Ma; an earlier study of rock sequences near Meishan in Changxing County of Zhejiang Province, China dates the extinction to 251.4±0.3 Ma, with an ongoing elevated extinction rate occurring for some time thereafter. A large (approximately 0.9%), abrupt global decrease in the ratio of the stable isotope 13C to that of 12C coincides with this extinction, and is sometimes used to identify the Permian–Triassic boundary in rocks that are unsuitable for radiometric dating. Further evidence for environmental change around the P–Tr boundary suggests an 8 °C (14 °F) rise in temperature, and an increase in CO2 levels by 2000 ppm (by contrast, the concentration immediately before the Industrial Revolution was 280 ppm). There is also evidence of increased ultraviolet radiation reaching the Earth, causing the mutation of plant spores.
It has been suggested that the Permian–Triassic boundary is associated with a sharp increase in the abundance of marine and terrestrial fungi, caused by the sharp increase in the amount of dead plants and animals fed upon by the fungi. For a while this "fungal spike" was used by some paleontologists to identify the Permian–Triassic boundary in rocks that are unsuitable for radiometric dating or lack suitable index fossils, but even the proposers of the fungal spike hypothesis pointed out that "fungal spikes" may have been a repeating phenomenon created by the post-extinction ecosystem in the earliest Triassic. The very idea of a fungal spike has been criticized on several grounds, including that: Reduviasporonites, the most common supposed fungal spore, was actually a fossilized alga; the spike did not appear worldwide; and in many places it did not fall on the Permian–Triassic boundary. The algae, which were misidentified as fungal spores, may even represent a transition to a lake-dominated Triassic world rather than an earliest Triassic zone of death and decay in some terrestrial fossil beds. Newer chemical evidence agrees better with a fungal origin for Reduviasporonites, diluting these critiques.
Uncertainty exists regarding the duration of the overall extinction and about the timing and duration of various groups' extinctions within the greater process. Some evidence suggests that there were multiple extinction pulses or that the extinction was spread out over a few million years, with a sharp peak in the last million years of the Permian. Statistical analyses of some highly fossiliferous strata in Meishan, Zhejiang Province, southeastern China, suggest that the main extinction was clustered around one peak. Recent research shows that different groups became extinct at different times; for example, while difficult to date absolutely, ostracod and brachiopod extinctions were separated by 670 to 1170 thousand years. In a well-preserved sequence in east Greenland, the decline of animals is concentrated in a period 10 to 60 thousand years long, with plants taking several hundred thousand additional years to show the full impact of the event. An older theory, still supported in some recent papers, is that there were two major extinction pulses 9.4 million years apart, separated by a period of extinctions well above the background level, and that the final extinction killed off only about 80% of marine species alive at that time while the other losses occurred during the first pulse or the interval between pulses. According to this theory one of these extinction pulses occurred at the end of the Guadalupian epoch of the Permian. For example, all but one of the surviving dinocephalian genera died out at the end of the Guadalupian, as did the Verbeekinidae, a family of large-size fusuline foraminifera. The impact of the end-Guadalupian extinction on marine organisms appears to have varied between locations and between taxonomic groups—brachiopods and corals had severe losses.
| Marine extinctions | Genera extinct | Notes |
|---|---|---|
| Foraminifera | 97% | Fusulinids died out, but were almost extinct before the catastrophe |
| Anthozoa (corals, sea anemones) | 96% | Tabulate and rugose corals died out |
| Bryozoans | 79% | Fenestrates, trepostomes, and cryptostomes died out |
| Brachiopods | 96% | Orthids and productids died out |
| Crinoids ("sea lilies") | 98% | Inadunates and camerates died out |
| Blastoids | 100% | May have become extinct shortly before the P–Tr boundary |
| Trilobites | 100% | In decline since the Devonian; only 2 genera living before the extinction |
| Eurypterids ("sea scorpions") | 100% | May have become extinct shortly before the P–Tr boundary |
| Acanthodians | 100% | In decline since the Devonian, with only one living family |
Marine invertebrates suffered the greatest losses during the P–Tr extinction. In the intensively sampled south China sections at the P–Tr boundary, for instance, 286 out of 329 marine invertebrate genera disappear within the final 2 sedimentary zones containing conodonts from the Permian.
Statistical analysis of marine losses at the end of the Permian suggests that the decrease in diversity was caused by a sharp increase in extinctions rather than by a decrease in speciation. The extinction primarily affected organisms with calcium carbonate skeletons, especially those reliant on stable CO2 levels to produce their skeletons, because the excursion in atmospheric CO2 was inherently linked to ocean acidification.
Among benthic organisms, the extinction event multiplied background extinction rates, and therefore caused most damage to taxa that had a high background extinction rate (by implication, taxa with a high turnover). The extinction rate of marine organisms was catastrophic.
Surviving marine invertebrate groups include: articulate brachiopods (those with a hinge), which have suffered a slow decline in numbers since the P–Tr extinction; the Ceratitida order of ammonites; and crinoids ("sea lilies"), which very nearly became extinct but later became abundant and diverse.
The groups with the highest survival rates generally had active control of circulation, elaborate gas exchange mechanisms, and light calcification; more heavily calcified organisms with simpler breathing apparatus were the worst hit. In the case of the brachiopods at least, surviving taxa were generally small, rare members of a diverse community.
The ammonoids, which had been in a long-term decline for the 30 million years since the Roadian (middle Permian), suffered a selective extinction pulse 10 million years before the main event, at the end of the Capitanian stage. This preliminary extinction, which greatly reduced disparity (the range of different ecological guilds), was apparently driven by environmental factors. Diversity and disparity fell further until the P–Tr boundary; the extinction at the boundary was non-selective, consistent with a catastrophic initiator. During the Triassic, diversity rose rapidly, but disparity remained low.
The range of morphospace occupied by the ammonoids, that is, the range of possible forms, shapes, or structures, became more restricted as the Permian progressed. Just a few million years into the Triassic, the original range of ammonoid structures was once again reoccupied, but the parameters were now shared differently among clades.
The Permian had great diversity in insect and other invertebrate species, including the largest insects ever to have existed. The end-Permian is the only known mass extinction of insects, with eight or nine insect orders becoming extinct and ten more greatly reduced in diversity. Palaeodictyopteroids (insects with piercing and sucking mouthparts) began to decline during the mid-Permian; these extinctions have been linked to a change in flora. The greatest decline occurred in the Late Permian and was probably not directly caused by weather-related floral transitions.
Most fossil insect groups found after the Permian–Triassic boundary differ significantly from those that lived prior to the P–Tr extinction. With the exception of the Glosselytrodea, Miomoptera, and Protorthoptera, Paleozoic insect groups have not been discovered in deposits dating to after the P–Tr boundary. The caloneurodeans, monurans, paleodictyopteroids, protelytropterans, and protodonates became extinct by the end of the Permian. In well-documented Late Triassic deposits, fossils overwhelmingly consist of modern fossil insect groups.
Plant ecosystem response
The geological record of terrestrial plants is sparse, and based mostly on pollen and spore studies. Interestingly, plants are relatively immune to mass extinction, with the impact of all the major mass extinctions "insignificant" at a family level. Even the reduction observed in species diversity (of 50%) may be mostly due to taphonomic processes. However, a massive rearrangement of ecosystems does occur, with plant abundances and distributions changing profoundly and all the forests virtually disappearing; the Palaeozoic flora scarcely survived this extinction.
At the P–Tr boundary, the dominant floral groups changed, with many groups of land plants entering abrupt decline, such as Cordaites (gymnosperms) and Glossopteris (seed ferns). Dominant gymnosperm genera were replaced post-boundary by lycophytes—extant lycophytes are recolonizers of disturbed areas.
Palynological or pollen studies from East Greenland of sedimentary rock strata laid down during the extinction period indicate dense gymnosperm woodlands before the event. At the same time that marine invertebrate macrofauna are in decline, these large woodlands die out and are followed by a rise in diversity of smaller herbaceous plants including Lycopodiophyta, both Selaginellales and Isoetales. Later on, other groups of gymnosperms again become dominant but again suffer major die-offs; these cyclical flora shifts occur a few times over the course of the extinction period and afterwards. These fluctuations of the dominant flora between woody and herbaceous taxa indicate chronic environmental stress resulting in a loss of most large woodland plant species. The successions and extinctions of plant communities do not coincide with the shift in δ13C values, but occur many years later. The recovery of gymnosperm forests took 4–5 million years.
No coal deposits are known from the Early Triassic, and those in the Middle Triassic are thin and low-grade. This "coal gap" has been explained in many ways. It has been suggested that new, more aggressive fungi, insects and vertebrates evolved, and killed vast numbers of trees. These decomposers themselves suffered heavy losses of species during the extinction, and are not considered a likely cause of the coal gap. It could simply be that all coal-forming plants were rendered extinct by the P–Tr extinction, and that it took 10 million years for a new suite of plants to adapt to the moist, acid conditions of peat bogs. On the other hand, abiotic factors (not caused by organisms), such as decreased rainfall or increased input of clastic sediments, may also be to blame. Finally, it is also true that there are very few sediments of any type known from the Early Triassic, and the lack of coal may simply reflect this scarcity. This opens the possibility that coal-producing ecosystems may have responded to the changed conditions by relocating, perhaps to areas where we have no sedimentary record for the Early Triassic. For example, in eastern Australia a cold climate had been the norm for a long period of time, with a peat mire ecosystem specialising to these conditions. Approximately 95% of these peat-producing plants went locally extinct at the P–Tr boundary; interestingly, coal deposits in Australia and Antarctica disappear significantly before the P–Tr boundary.
There is enough evidence to indicate that over two-thirds of terrestrial labyrinthodont amphibians, sauropsid ("reptile") and therapsid ("mammal-like reptile") families became extinct. Large herbivores suffered the heaviest losses. All Permian anapsid reptiles died out except the procolophonids (testudines have anapsid skulls but are most often thought to have evolved later, from diapsid ancestors). Pelycosaurs died out before the end of the Permian. Too few Permian diapsid fossils have been found to support any conclusion about the effect of the Permian extinction on diapsids (the "reptile" group from which lizards, snakes, crocodilians, and dinosaurs [including birds] evolved). Even the groups that survived suffered extremely heavy losses of species, and some terrestrial vertebrate groups very nearly became extinct at the end-Permian. Some of the surviving groups did not persist for long past this period, while others that barely survived went on to produce diverse and long-lasting lineages.
Possible explanations of these patterns
An analysis of marine fossils from the Permian's final Changhsingian stage found that marine organisms with low tolerance for hypercapnia (high concentration of carbon dioxide) had high extinction rates, while the most tolerant organisms had very slight losses.
The most vulnerable marine organisms were those that produced calcareous hard parts (i.e., from calcium carbonate) and had low metabolic rates and weak respiratory systems—notably calcareous sponges, rugose and tabulate corals, calcite-depositing brachiopods, bryozoans, and echinoderms; about 81% of such genera became extinct. Close relatives without calcareous hard parts suffered only minor losses, for example sea anemones, from which modern corals evolved. Animals with high metabolic rates, well-developed respiratory systems, and non-calcareous hard parts had negligible losses—except for conodonts, in which 33% of genera died out.
This pattern is consistent with what is known about the effects of hypoxia, a shortage but not a total absence of oxygen. However, hypoxia cannot have been the only killing mechanism for marine organisms. Nearly all of the continental shelf waters would have had to become severely hypoxic to account for the magnitude of the extinction, but such a catastrophe would make it difficult to explain the very selective pattern of the extinction. Models of the Late Permian and Early Triassic atmospheres show a significant but protracted decline in atmospheric oxygen levels, with no acceleration near the P–Tr boundary. Minimum atmospheric oxygen levels in the Early Triassic are never less than present day levels—the decline in oxygen levels does not match the temporal pattern of the extinction.
Marine organisms are more sensitive to changes in CO2 levels than are terrestrial organisms for a variety of reasons. CO2 is 28 times more soluble in water than is oxygen. Marine animals normally function with lower concentrations of CO2 in their bodies than land animals, as the removal of CO2 in air-breathing animals is impeded by the need for the gas to pass through the respiratory system's membranes (lungs, tracheae, and the like). In marine organisms, relatively modest but sustained increases in CO2 concentrations hamper the synthesis of proteins, reduce fertilization rates, and produce deformities in calcareous hard parts. In addition, an increase in CO2 concentration is inevitably linked to ocean acidification, consistent with the preferential extinction of heavily calcified taxa and other signals in the rock record that suggest a more acidic ocean.
It is difficult to analyze extinction and survival rates of land organisms in detail, because few terrestrial fossil beds span the Permian–Triassic boundary. Triassic insects are very different from those of the Permian, but a gap in the insect fossil record spans approximately 15 million years from the late Permian to early Triassic. The best-known record of vertebrate changes across the Permian–Triassic boundary occurs in the Karoo Supergroup of South Africa, but statistical analyses have so far not produced clear conclusions. However, analysis of the fossil river deposits of the floodplains indicates a shift from meandering to braided river patterns, suggesting an abrupt drying of the climate. The climate change may have taken as little as 100,000 years, prompting the extinction of the unique Glossopteris flora and its herbivores, followed by the carnivorous guild.
Earlier analyses indicated that life on Earth recovered quickly after the Permian extinctions, but this was mostly in the form of disaster taxa, opportunist organisms such as the hardy Lystrosaurus. Research published in 2006 indicates that the specialized animals that formed complex ecosystems, with high biodiversity, complex food webs and a variety of niches, took much longer to recover. It is thought that this long recovery was due to successive waves of extinction, which inhibited recovery, and to prolonged environmental stress on organisms, which continued into the Early Triassic. Research indicates that recovery did not begin until the start of the mid-Triassic, 4 to 6 million years after the extinction; and some writers estimate that the recovery was not complete until 30 million years after the P–Tr extinction, i.e. in the late Triassic.
A study published in the journal Science found that during the Great Dying the oceans' surface temperatures reached 40 °C (104 °F), which helps explain the long time frame for recovery: it was simply too hot for much life to survive.
During the early Triassic (4 to 6 million years after the P–Tr extinction), the plant biomass was insufficient to form coal deposits, which implies a limited food mass for herbivores. River patterns in the Karoo changed from meandering to braided, indicating that vegetation there was very sparse for a long time.
Each major segment of the early Triassic ecosystem—plant and animal, marine and terrestrial—was dominated by a small number of genera, which appeared virtually worldwide, for example: the herbivorous therapsid Lystrosaurus (which accounted for about 90% of early Triassic land vertebrates) and the bivalves Claraia, Eumorphotis, Unionites and Promyalina. A healthy ecosystem has a much larger number of genera, each living in a few preferred types of habitat.
Disaster taxa took advantage of the devastated ecosystems and enjoyed a temporary population boom and increase in their territory, for example: Lingula (a brachiopod); stromatolites, which had been confined to marginal environments since the Ordovician; Pleuromeia (a small, weedy plant); and Dicroidium (a seed fern). Microconchids are the dominant component of otherwise impoverished Early Triassic encrusting assemblages.
Changes in marine ecosystems
Prior to the extinction, about two-thirds of marine animals were sessile and attached to the sea floor but, during the Mesozoic, only about half of the marine animals were sessile while the rest were free-living. Analysis of marine fossils from the period indicated a decrease in the abundance of sessile epifaunal suspension feeders such as brachiopods and sea lilies and an increase in more complex mobile species such as snails, sea urchins and crabs.
Before the Permian mass extinction event, both complex and simple marine ecosystems were equally common; after the recovery from the mass extinction, the complex communities outnumbered the simple communities by nearly three to one, and the increase in predation pressure led to the Mesozoic Marine Revolution.
Bivalves were fairly rare before the P–Tr extinction but became numerous and diverse in the Triassic, and one group, the rudist clams, became the Mesozoic's main reef-builders. Some researchers think much of this change happened in the 5 million years between the two major extinction pulses.
Crinoids ("sea lilies") suffered a selective extinction, resulting in a decrease in the variety of their forms. Their ensuing adaptive radiation was brisk, and resulted in forms possessing flexible arms becoming widespread; motility, predominantly a response to predation pressure, also became far more prevalent.
Lystrosaurus, a pig-sized herbivorous dicynodont therapsid, constituted as much as 90% of some earliest Triassic land vertebrate fauna. Smaller carnivorous cynodont therapsids also survived, including the ancestors of mammals. In the Karoo region of southern Africa, the therocephalians Tetracynodon, Moschorhinus and Ictidosuchoides survived, but do not appear to have been abundant in the Triassic.
Archosaurs (which included the ancestors of dinosaurs and crocodilians) were initially rarer than therapsids, but they began to displace therapsids in the mid-Triassic. In the mid to late Triassic, the dinosaurs evolved from one group of archosaurs, and went on to dominate terrestrial ecosystems during the Jurassic and Cretaceous. This "Triassic Takeover" may have contributed to the evolution of mammals by forcing the surviving therapsids and their mammaliform successors to live as small, mainly nocturnal insectivores; nocturnal life probably forced at least the mammaliforms to develop fur and higher metabolic rates.
Some temnospondyl amphibians made a relatively quick recovery, in spite of nearly becoming extinct. Mastodonsaurus and trematosaurians were the main aquatic and semiaquatic predators during most of the Triassic, some preying on tetrapods and others on fish.
Land vertebrates took an unusually long time to recover from the P–Tr extinction; palaeontologist M. J. Benton estimated the recovery was not complete until 30 million years after the extinction, i.e. not until the Late Triassic, in which dinosaurs, pterosaurs, crocodiles, archosaurs, amphibians, and mammaliforms were abundant and diverse.
Causes of the extinction event
Pinpointing the exact cause or causes of the Permian–Triassic extinction event is difficult, mostly because the catastrophe occurred over 250 million years ago, and much of the evidence that would have pointed to the cause either has been destroyed by now or is concealed deep within the Earth under many layers of rock. The sea floor is also completely recycled every 200 million years by the ongoing process of plate tectonics and seafloor spreading, leaving no useful indications beneath the ocean. With the fairly significant evidence that scientists have accumulated, several mechanisms have been proposed for the extinction event, including both catastrophic and gradual processes (similar to those theorized for the Cretaceous–Paleogene extinction event). The former group includes one or more large bolide impact events, increased volcanism, and sudden release of methane from the sea floor, either due to dissociation of methane hydrate deposits or metabolism of organic carbon deposits by methanogenic microbes. The latter group includes sea level change, increasing anoxia, and increasing aridity. Any hypothesis about the cause must explain the selectivity of the event, which affected organisms with calcium carbonate skeletons most severely; the long period (4 to 6 million years) before recovery started, and the minimal extent of biological mineralization (despite inorganic carbonates being deposited) once the recovery began.
Evidence that an impact event may have caused the Cretaceous–Paleogene extinction event has led to speculation that similar impacts may have been the cause of other extinction events, including the P–Tr extinction, and therefore to a search for evidence of impacts at the times of other extinctions and for large impact craters of the appropriate age.
Reported evidence for an impact event from the P–Tr boundary level includes rare grains of shocked quartz in Australia and Antarctica; fullerenes trapping extraterrestrial noble gases; meteorite fragments in Antarctica; and grains rich in iron, nickel and silicon, which may have been created by an impact. However, the accuracy of most of these claims has been challenged. Quartz from Graphite Peak in Antarctica, for example, once considered "shocked", has been re-examined by optical and transmission electron microscopy. The observed features were concluded to be not due to shock, but rather to plastic deformation, consistent with formation in a tectonic environment such as volcanism.
An impact crater on the sea floor would be evidence of a possible cause of the P–Tr extinction, but such a crater would by now have disappeared. As 70% of the Earth's surface is currently sea, an asteroid or comet fragment is now perhaps more than twice as likely to hit ocean as it is to hit land. However, Earth has no ocean-floor crust more than 200 million years old, because the "conveyor belt" process of seafloor spreading and subduction destroys it within that time. Craters produced by very large impacts may be masked by extensive flood basalting from below after the crust is punctured or weakened. Subduction should not, however, be entirely accepted as an explanation of why no firm evidence can be found: as with the K-T event, an ejecta blanket stratum rich in siderophilic elements (e.g. iridium) would be expected to be seen in formations from the time.
One attraction of large impact theories is that, in principle, an impact could also trigger other phenomena that have been proposed as causes, such as the Siberian Traps eruptions (see below), either at the impact site or at its antipode. The abruptness of an impact would also explain why more species did not rapidly evolve to survive, as might be expected if the Permian–Triassic event had been slower and less global than a meteorite impact.
Possible impact sites
Several possible impact craters have been proposed as the site of an impact causing the P–Tr extinction, including the Bedout structure off the northwest coast of Australia and the hypothesized Wilkes Land crater of East Antarctica. In each of these cases, the idea that an impact was responsible has not been proven, and has been widely criticized. In the case of Wilkes Land, the age of this sub-ice geophysical feature is very uncertain – it may be later than the Permian–Triassic extinction.
The Araguainha crater has been most recently dated to 254.7 ± 2.5 million years ago, overlapping with estimates for the Permo-Triassic boundary. Much of the local rock was oil shale. The estimated energy released by the Araguainha impact is insufficient to be a direct cause of the global mass extinction, but the colossal local earth tremors would have released huge amounts of oil and gas from the shattered rock. The resulting sudden global warming might have precipitated the Permian–Triassic extinction event.
The final stages of the Permian had two flood basalt events. A small one, the Emeishan Traps in China, occurred at the same time as the end-Guadalupian extinction pulse, in an area close to the equator at the time. The flood basalt eruptions that produced the Siberian Traps constituted one of the largest known volcanic events on Earth and covered over 2,000,000 square kilometres (770,000 sq mi) with lava. The Siberian Traps eruptions were formerly thought to have lasted for millions of years, but recent research dates them to 251.2 ± 0.3 Ma — immediately before the end of the Permian.
The Emeishan and Siberian Traps eruptions may have caused dust clouds and acid aerosols—which would have blocked out sunlight and thus disrupted photosynthesis both on land and in the photic zone of the ocean, causing food chains to collapse. These eruptions may also have caused acid rain when the aerosols washed out of the atmosphere. This may have killed land plants and molluscs and planktonic organisms which had calcium carbonate shells. The eruptions would also have emitted carbon dioxide, causing global warming. When all of the dust clouds and aerosols washed out of the atmosphere, the excess carbon dioxide would have remained and the warming would have proceeded without any mitigating effects.
The Siberian Traps had unusual features that made them even more dangerous. Pure flood basalts produce fluid, low-viscosity lava and do not hurl debris into the atmosphere. It appears, however, that 20% of the output of the Siberian Traps eruptions was pyroclastic, i.e. consisted of ash and other debris thrown high into the atmosphere, increasing the short-term cooling effect. The basalt lava erupted or intruded into carbonate rocks and into sediments that were in the process of forming large coal beds, both of which would have emitted large amounts of carbon dioxide, leading to stronger global warming after the dust and aerosols settled.
There is doubt, however, about whether these eruptions were enough on their own to cause a mass extinction as severe as the end-Permian. Equatorial eruptions are necessary to produce sufficient dust and aerosols to affect life worldwide, whereas the much larger Siberian Traps eruptions were inside or near the Arctic Circle. Furthermore, if the Siberian Traps eruptions occurred within a period of 200,000 years, the atmosphere's carbon dioxide content would have doubled. Recent climate models suggest such a rise in CO2 would have raised global temperatures by 1.5 to 4.5 °C (2.7 to 8.1 °F), which is unlikely to cause a catastrophe as great as the P–Tr extinction.
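As a rough illustration of that last point, the sketch below applies the standard logarithmic CO2-forcing approximation, ΔT = S · log2(C/C0), with an equilibrium climate sensitivity S in the 1.5–4.5 °C-per-doubling range quoted above. This is a textbook approximation used for illustration, not the specific climate models cited.

```python
import math

def warming_from_co2(c_final, c_initial, sensitivity_per_doubling):
    """Equilibrium warming (degC) from a CO2 change, using the standard
    logarithmic forcing approximation: dT = S * log2(C / C0)."""
    return sensitivity_per_doubling * math.log2(c_final / c_initial)

# A doubling of atmospheric CO2 under the 1.5-4.5 degC/doubling range:
for s in (1.5, 4.5):
    dt = warming_from_co2(2.0, 1.0, s)
    print(f"S = {s} degC per doubling -> {dt:.1f} degC of warming")
```

Because log2 of a doubling is exactly 1, the warming simply equals the sensitivity, giving the 1.5 to 4.5 °C range argued to be too small to explain the severity of the extinction on its own.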
In January 2011, a team led by Stephen Grasby of the Geological Survey of Canada—Calgary, reported evidence that volcanism caused massive coal beds to ignite, possibly releasing more than 3 trillion tons of carbon. The team found ash deposits in deep rock layers near what is now Buchanan Lake. According to their article, "... coal ash dispersed by the explosive Siberian Trap eruption would be expected to have an associated release of toxic elements in impacted water bodies where fly ash slurries developed ...", and "Mafic megascale eruptions are long-lived events that would allow significant build-up of global ash clouds". In a statement, Grasby said, "In addition to these volcanoes causing fires through coal, the ash it spewed was highly toxic and was released in the land and water, potentially contributing to the worst extinction event in earth history."
Methane hydrate gasification
Scientists have found worldwide evidence of a swift decrease of about 1% in the 13C/12C isotope ratio in carbonate rocks from the end-Permian. This is the first, largest, and most rapid of a series of negative and positive excursions (decreases and increases in 13C/12C ratio) that continues until the isotope ratio abruptly stabilised in the middle Triassic, followed soon afterwards by the recovery of calcifying life forms (organisms that use calcium carbonate to build hard parts such as shells).
- Gases from volcanic eruptions have a 13C/12C ratio about 0.5 to 0.8% below standard (δ13C about −0.5 to −0.8%), but the amount required to produce a reduction of about 1.0% worldwide requires eruptions greater by orders of magnitude than any for which evidence has been found.
- A reduction in organic activity would extract 12C more slowly from the environment and leave more of it to be incorporated into sediments, thus reducing the 13C/12C ratio. Biochemical processes preferentially use the lighter isotopes, since chemical reactions are ultimately driven by electromagnetic forces between atoms and lighter isotopes respond more quickly to these forces. But a study of a smaller drop of 0.3 to 0.4% in 13C/12C (δ13C −3 to −4 ‰) at the Paleocene-Eocene Thermal Maximum (PETM) concluded that even transferring all the organic carbon (in organisms, soils, and dissolved in the ocean) into sediments would be insufficient: even such a large burial of material rich in 12C would not have produced the 'smaller' drop in the 13C/12C ratio of the rocks around the PETM.
- Buried sedimentary organic matter has a 13C/12C ratio 2.0 to 2.5% below normal (δ13C −2.0 to −2.5%). Theoretically, if the sea level fell sharply, shallow marine sediments would be exposed to oxidization. But 6,500–8,400 gigatons (1 gigaton = 109 metric tons) of organic carbon would have to be oxidized and returned to the ocean-atmosphere system within less than a few hundred thousand years to reduce the 13C/12C ratio by 1.0%. This is not thought to be a realistic possibility.
- Rather than a sudden decline in sea level, intermittent periods of ocean-bottom hyperoxia and anoxia (high-oxygen and low- or zero-oxygen conditions) may have caused the 13C/12C ratio fluctuations in the Early Triassic; and global anoxia may have been responsible for the end-Permian blip. The continents of the end-Permian and early Triassic were more clustered in the tropics than they are now, and large tropical rivers would have dumped sediment into smaller, partially enclosed ocean basins at low latitudes. Such conditions favor oxic and anoxic episodes; oxic/anoxic conditions would result in a rapid release/burial, respectively, of large amounts of organic carbon, which has a low 13C/12C ratio because biochemical processes use the lighter isotopes more. This, or another organic-based reason, may have been responsible for both this and a late Proterozoic/Cambrian pattern of fluctuating 13C/12C ratios.
The only proposed mechanism sufficient to cause a global 1.0% reduction in the 13C/12C ratio is the release of methane from methane clathrates. Carbon-cycle models confirm it would have had enough effect to produce the observed reduction. Methane clathrates, also known as methane hydrates, consist of methane molecules trapped in cages of water molecules. The methane, produced by methanogens (microscopic single-celled organisms), has a 13C/12C ratio about 6.0% below normal (δ13C −6.0%). At the right combination of pressure and temperature, it gets trapped in clathrates fairly close to the surface of permafrost and in much larger quantities at continental margins (continental shelves and the deeper seabed close to them). Oceanic methane hydrates are usually found buried in sediments where the seawater is at least 300 m (980 ft) deep. They can be found up to about 2,000 m (6,600 ft) below the sea floor, but usually only about 1,100 m (3,600 ft) below the sea floor.
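The argument running through the list above is a simple two-component isotopic mass balance: how much isotopically light carbon must be added to the ocean–atmosphere reservoir to pull its 13C/12C ratio down by about 1% (roughly −10‰ in δ13C). The sketch below works that balance through for two candidate sources. The reservoir size (a modern-like 40,000 GtC) and the source δ13C values are assumptions chosen for illustration, not figures taken from the article.

```python
def carbon_needed_for_excursion(reservoir_gtc, delta_initial, delta_final, delta_source):
    """Mass of light carbon (GtC) that must be added to a well-mixed reservoir
    to shift its delta-13C from delta_initial to delta_final, using simple
    two-component mixing:
        delta_final = (M_res*delta_initial + M_add*delta_source) / (M_res + M_add)
    """
    return reservoir_gtc * (delta_initial - delta_final) / (delta_final - delta_source)

# Assumed reservoir: ~40,000 GtC of ocean-atmosphere carbon (illustrative only).
# Target excursion: 0 permil -> -10 permil (the ~1.0% ratio drop discussed above).
sources = [("methane clathrate, delta-13C ~ -60 permil", -60.0),
           ("buried organic matter, delta-13C ~ -25 permil", -25.0)]
for name, delta_src in sources:
    mass = carbon_needed_for_excursion(40_000, 0.0, -10.0, delta_src)
    print(f"{name}: ~{mass:,.0f} GtC required")
```

Under these assumed numbers, the strongly 13C-depleted methane source requires several times less carbon than buried organic matter to produce the same excursion, which is the quantitative reason clathrate release is singled out above.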
The area covered by lava from the Siberian Traps eruptions is about twice as large as was originally thought, and most of the additional area was shallow sea at the time. The seabed probably contained methane hydrate deposits, and the lava caused the deposits to dissociate, releasing vast quantities of methane. A vast release of methane might cause significant global warming, since methane is a very powerful greenhouse gas. Strong evidence suggests the global temperatures increased by about 6 °C (10.8 °F) near the equator and therefore by more at higher latitudes: a sharp decrease in oxygen isotope ratios (18O/16O); the extinction of Glossopteris flora (Glossopteris and plants that grew in the same areas), which needed a cold climate, and its replacement by floras typical of lower paleolatitudes.
However, the pattern of isotope shifts expected to result from a massive release of methane does not match the patterns seen throughout the early Triassic. Not only would a methane cause require the release of five times as much methane as postulated for the PETM, but it would also have to be reburied at an unrealistically high rate to account for the rapid increases in the 13C/12C ratio (episodes of high positive δ13C) throughout the early Triassic, before being released again several times.
A 2014 paper has found strong evidence for a microbial source of the carbon-cycle disruption: the methanogenic archaeal genus Methanosarcina. Three lines of chronology converge at 250 mya, supporting a scenario in which a single-gene transfer created a metabolic pathway for efficient methane production in these archaea, nourished by volcanic nickel. According to the theory, the resultant super-exponential microbial bloom suddenly freed carbon from ocean-bottom organic sediments into the water and air.
Evidence for widespread ocean anoxia (severe deficiency of oxygen) and euxinia (presence of hydrogen sulfide) is found from the Late Permian to the Early Triassic. Throughout most of the Tethys and Panthalassic Oceans, evidence for anoxia, including fine laminations in sediments, small pyrite framboids, high uranium/thorium ratios, and biomarkers for green sulfur bacteria, appear at the extinction event. However, in some sites, including Meishan, China, and eastern Greenland, evidence for anoxia precedes the extinction. Biomarkers for green sulfur bacteria, such as isorenieratane, the diagenetic product of isorenieratene, are widely used as indicators of photic zone euxinia, because green sulfur bacteria require both sunlight and hydrogen sulfide to survive. Their abundance in sediments from the P-T boundary indicates hydrogen sulfide was present even in shallow waters.
This spread of toxic, oxygen-depleted water would have been devastating for marine life, producing widespread die-offs. Models of ocean chemistry show that anoxia and euxinia would have been closely associated with high levels of carbon dioxide. This suggests that poisoning from hydrogen sulfide, anoxia, and hypercapnia acted together as a killing mechanism. Hypercapnia best explains the selectivity of the extinction, but anoxia and euxinia probably contributed to the high mortality of the event. The persistence of anoxia through the Early Triassic may explain the slow recovery of marine life after the extinction. Models also show that anoxic events can cause catastrophic hydrogen sulfide emissions into the atmosphere (see below).
The sequence of events leading to anoxic oceans may have been triggered by carbon dioxide emissions from the eruption of the Siberian Traps. In this scenario, warming from the enhanced greenhouse effect would reduce the solubility of oxygen in seawater, causing the concentration of oxygen to decline. Increased weathering of the continents due to warming and the acceleration of the water cycle would increase the riverine flux of phosphate to the ocean. This phosphate would have supported greater primary productivity in the surface oceans. This increase in organic matter production would have caused more organic matter to sink into the deep ocean, where its respiration would further decrease oxygen concentrations. Once anoxia became established, it would have been sustained by a positive feedback loop because deep water anoxia tends to increase the recycling efficiency of phosphate, leading to even higher productivity.
Hydrogen sulfide emissions
A severe anoxic event at the end of the Permian would have allowed sulfate-reducing bacteria to thrive, causing the production of large amounts of hydrogen sulfide in the anoxic ocean. Upwelling of this water may have released massive hydrogen sulfide emissions into the atmosphere. This would poison terrestrial plants and animals, as well as severely weaken the ozone layer, exposing much of the life that remained to fatal levels of UV radiation. Indeed, biomarker evidence for anaerobic photosynthesis by Chlorobiaceae (green sulfur bacteria) from the Late-Permian into the Early Triassic indicates that hydrogen sulfide did upwell into shallow waters because these bacteria are restricted to the photic zone and use sulfide as an electron donor.
This hypothesis has the advantage of explaining the mass extinction of plants, which would have added to the methane levels and which ought otherwise to have thrived in an atmosphere with a high level of carbon dioxide. Fossil spores from the end-Permian further support the theory: many show deformities that could have been caused by ultraviolet radiation, which would have been more intense after hydrogen sulfide emissions weakened the ozone layer.
The supercontinent Pangaea
About halfway through the Permian (in the Kungurian age of the Permian's Cisuralian epoch), all the continents joined to form the supercontinent Pangaea, surrounded by the superocean Panthalassa, although blocks that are now parts of Asia did not join the supercontinent until very late in the Permian. This configuration severely decreased the extent of shallow aquatic environments, the most productive part of the seas, and exposed formerly isolated organisms of the rich continental shelves to competition from invaders. Pangaea's formation would also have altered both oceanic circulation and atmospheric weather patterns, creating seasonal monsoons near the coasts and an arid climate in the vast continental interior.
Marine life suffered very high but not catastrophic rates of extinction after the formation of Pangaea (see the diagram "Marine genus biodiversity" at the top of this article)—almost as high as in some of the "Big Five" mass extinctions. The formation of Pangaea seems not to have caused a significant rise in extinction levels on land, and, in fact, most of the advance of the therapsids and increase in their diversity seems to have occurred in the late Permian, after Pangaea was almost complete. So it seems likely that Pangaea initiated a long period of increased marine extinctions, but was not directly responsible for the "Great Dying" and the end of the Permian.
According to a theory published in 2014 (see also above), a genus of anaerobic methanogenic archaea known as Methanosarcina may have been largely responsible for the event. Evidence suggests that these microbes acquired a new metabolic pathway via gene transfer at about that time, enabling them to efficiently metabolize acetate into methane. This would have led to their exponential reproduction, allowing them to rapidly consume vast deposits of organic carbon that had accumulated in marine sediment. The result would have been a sharp buildup of methane and carbon dioxide in the Earth's oceans and atmosphere. Massive volcanism facilitated this process by releasing large amounts of nickel, a scarce metal which is a cofactor for one of the enzymes involved in producing methane.
Combination of causes
Possible causes supported by strong evidence appear to describe a sequence of catastrophes, each one worse than the last: the Siberian Traps eruptions were bad enough in their own right, but because they occurred near coal beds and the continental shelf, they also triggered very large releases of carbon dioxide and methane. The resultant global warming may have caused perhaps the most severe anoxic event in the oceans' history: according to this theory, the oceans became so anoxic that anaerobic sulfate-reducing organisms dominated the chemistry of the oceans and caused massive emissions of toxic hydrogen sulfide.
However, there may be some weak links in this chain of events: the changes in the 13C/12C ratio expected to result from a massive release of methane do not match the patterns seen throughout the early Triassic; and the types of oceanic thermohaline circulation, which may have existed at the end of the Permian, are not likely to have supported deep-sea anoxia.
- Rohde, R.A. & Muller, R.A. (2005). "Cycles in fossil diversity". Nature 434: 209–210. Bibcode:2005Natur.434..208R. doi:10.1038/nature03339. PMID 15758998.
- ""Great Dying" Lasted 200,000 Years". National Geographic. 23 November 2011. Retrieved 1 April 2014.
- "How a Single Act of Evolution Nearly Wiped Out All Life on Earth". ScienceDaily. 1 April 2014. Retrieved 1 April 2014.
- Shen S.-Z. et al. (2011). "Calibrating the End-Permian Mass Extinction". Science. Bibcode:2011Sci...334.1367S. doi:10.1126/science.1213454.
- Benton M J (2005). When life nearly died: the greatest mass extinction of all time. London: Thames & Hudson. ISBN 0-500-28573-X.
- Sahney S and Benton M.J (2008). "Recovery from the most profound mass extinction of all time". Proceedings of the Royal Society B 275 (1636): 759–765. doi:10.1098/rspb.2007.1370. PMC 2596898. PMID 18198148.
- Labandeira CC, Sepkoski JJ (1993). "Insect diversity in the fossil record". Science 261 (5119): 310–315. Bibcode:1993Sci...261..310L. doi:10.1126/science.11536548. PMID 11536548.
- Sole RV, Newman M (2003). "Extinctions and Biodiversity in the Fossil Record". In Canadell JG, Mooney, HA. Encyclopedia of Global Environmental Change, The Earth System - Biological and Ecological Dimensions of Global Environmental Change (Volume 2). New York: Wiley. pp. 297–391. ISBN 0-470-85361-1.
- "It Took Earth Ten Million Years to Recover from Greatest Mass Extinction". ScienceDaily. 27 May 2012. Retrieved 28 May 2012.
- Jin YG, Wang Y, Wang W, Shang QH, Cao CQ, Erwin DH (2000). "Pattern of Marine Mass Extinction Near the Permian–Triassic Boundary in South China". Science 289 (5478): 432–436. Bibcode:2000Sci...289..432J. doi:10.1126/science.289.5478.432. PMID 10903200.
- Yin H, Zhang K, Tong J, Yang Z, Wu S. "The Global Stratotype Section and Point (GSSP) of the Permian-Triassic Boundary". Episodes 24 (2). pp. 102–114.
- Yin HF, Sweets WC, Yang ZY, Dickins JM (1992). "Permo-Triassic events in the eastern Tethys–an overview". In Sweet WC. Permo-Triassic events in the eastern Tethys: stratigraphy, classification, and relations with the western Tethys. Cambridge, UK: Cambridge University Press. pp. 1–7. ISBN 0-521-54573-0.
- Darcy E. Ogden and Norman H. Sleep (2011). "Explosive eruption of coal and basalt and the end-Permian mass extinction". Proceedings of the National Academy of Sciences of the United States of America. Bibcode:2012PNAS..109...59O. doi:10.1073/pnas.1118675109.
- Ancient whodunit may be solved: The microbes did it!, MIT News Office March 31, 2014
- Payne, J. L.; Lehrmann, D. J.; Wei, J.; Orchard, M. J.; Schrag, D. P.; Knoll, A. H. (2004). "Large Perturbations of the Carbon Cycle During Recovery from the End-Permian Extinction". Science 305 (5683): 506–9. doi:10.1126/science.1097023. PMID 15273391.
- Benton, M. J. (2012). "No gap in the Middle Permian record of terrestrial vertebrates". Geology 40.
- McElwain, J. C.; Punyasena, S. W. (2007). "Mass extinction events and the plant fossil record". Trends in Ecology & Evolution 22 (10): 548–557. doi:10.1016/j.tree.2007.09.003. PMID 17919771.
- Retallack, G. J.; Veevers, J. J.; Morante, R. (1996). "Global coal gap between Permian–Triassic extinctions and middle Triassic recovery of peat forming plants". GSA Bulletin 108 (2): 195–207. doi:10.1130/0016-7606(1996)108<0195:GCGBPT>2.3.CO;2.
- Erwin, D.H (1993). The Great Paleozoic Crisis: Life and Death in the Permian. New York: Columbia University Press. ISBN 0-231-07467-0.
- Bowring SA, Erwin DH, Jin YG, Martin MW, Davidek K, Wang W (1998). "U/Pb Zircon Geochronology and Tempo of the End-Permian Mass Extinction". Science 280 (5366): 1039–1045. Bibcode:1998Sci...280.1039B. doi:10.1126/science.280.5366.1039.
- Magaritz M (1989). "13C minima follow extinction events: a clue to faunal radiation". Geology 17 (4): 337–340. Bibcode:1989Geo....17..337M. doi:10.1130/0091-7613(1989)017<0337:CMFEEA>2.3.CO;2.
- Krull SJ, Retallack JR (2000). "13C depth profiles from paleosols across the Permian–Triassic boundary: Evidence for methane release". GSA Bulletin 112 (9): 1459–1472. Bibcode:2000GSAB..112.1459K. doi:10.1130/0016-7606(2000)112<1459:CDPFPA>2.0.CO;2. ISSN 0016-7606.
- Dolenec T, Lojen S, Ramovs A (2001). "The Permian–Triassic boundary in Western Slovenia (Idrijca Valley section): magnetostratigraphy, stable isotopes, and elemental variations". Chemical Geology 175 (1): 175–190. doi:10.1016/S0009-2541(00)00368-5.
- Musashi M, Isozaki Y, Koike T, Kreulen R (2001). "Stable carbon isotope signature in mid-Panthalassa shallow-water carbonates across the Permo–Triassic boundary: evidence for 13C-depleted ocean". Earth Planet. Sci. Lett. 193: 9–20. Bibcode:2001E&PSL.191....9M. doi:10.1016/S0012-821X(01)00398-3.
- H Visscher, H Brinkhuis, D L Dilcher, W C Elsik, Y Eshet, C V Looy, M R Rampino, and A Traverse (1996). "The terminal Paleozoic fungal event: Evidence of terrestrial ecosystem destabilization and collapse". Proceedings of the National Academy of Sciences 93 (5): 2155–2158. Bibcode:1996PNAS...93.2155V. doi:10.1073/pnas.93.5.2155. PMC 39926. PMID 11607638.
- Foster, C.B.; Stephenson, M.H.; Marshall, C.; Logan, G.A.; Greenwood, P.F. (2002). "A Revision Of Reduviasporonites Wilson 1962: Description, Illustration, Comparison And Biological Affinities". Palynology 26 (1): 35–58. doi:10.2113/0260035.
- López-Gómez, J. and Taylor, E.L. (2005). "Permian-Triassic Transition in Spain: A multidisciplinary approach". Palaeogeography, Palaeoclimatology, Palaeoecology 229 (1–2): 1–2. doi:10.1016/j.palaeo.2005.06.028.
- Looy, C.V.; Twitchett, R.J.; Dilcher, D.L.; Van Konijnenburg-van Cittert, J.H.A.; Visscher, H. (2001). "Life in the end-Permian dead zone". Proceedings of the National Academy of Sciences 98: 7879–7883. Bibcode:2001PNAS...98.7879L. doi:10.1073/pnas.131218098. PMC 35436. PMID 11427710. "See image 2"
- Ward PD, Botha J, Buick R, De Kock MO, Erwin DH, Garrison GH, Kirschvink JL & Smith R (2005). "Abrupt and Gradual Extinction Among Late Permian Land Vertebrates in the Karoo Basin, South Africa". Science 307 (5710): 709–714. Bibcode:2005Sci...307..709W. doi:10.1126/science.1107068. PMID 15661973.
- Retallack, G.J.; Smith, R.M.H.; Ward, P.D. (2003). "Vertebrate extinction across Permian-Triassic boundary in Karoo Basin, South Africa". Bulletin of the Geological Society of America 115 (9): 1133–1152. Bibcode:2003GSAB..115.1133R. doi:10.1130/B25215.1.
- Sephton, Mark A.; Visscher, Henk; Looy, Cindy V.; Verchovsky, Alexander B.; Watson, Jonathon S. (2009). "Chemical constitution of a Permian-Triassic disaster species". Geology 37 (10): 875–878. doi:10.1130/G30096A.1.
- Rampino MR, Prokoph A & Adler A (2000). "Tempo of the end-Permian event: High-resolution cyclostratigraphy at the Permian–Triassic boundary". Geology 28 (7): 643–646. Bibcode:2000Geo....28..643R. doi:10.1130/0091-7613(2000)28<643:TOTEEH>2.0.CO;2. ISSN 0091-7613.
- Wang, S.C.; Everson, P.J. (2007). "Confidence intervals for pulsed mass extinction events". Paleobiology 33 (2): 324–336. doi:10.1666/06056.1.
- Twitchett RJ Looy CV Morante R Visscher H & Wignall PB (2001). "Rapid and synchronous collapse of marine and terrestrial ecosystems during the end-Permian biotic crisis". Geology 29 (4): 351–354. Bibcode:2001Geo....29..351T. doi:10.1130/0091-7613(2001)029<0351:RASCOM>2.0.CO;2. ISSN 0091-7613.
- Retallack, G.J.; Metzger, C.A.; Greaver, T.; Jahren, A.H.; Smith, R.M.H.; Sheldon, N.D. (2006). "Middle-Late Permian mass extinction on land". Bulletin of the Geological Society of America 118 (11–12): 1398–1411. Bibcode:2006GSAB..118.1398R. doi:10.1130/B26011.1.
- Stanley SM & Yang X (1994). "A Double Mass Extinction at the End of the Paleozoic Era". Science 266 (5189): 1340–1344. Bibcode:1994Sci...266.1340S. doi:10.1126/science.266.5189.1340. PMID 17772839.
- Ota, A, and Isozaki, Y. (March 2006). "Fusuline biotic turnover across the Guadalupian–Lopingian (Middle–Upper Permian) boundary in mid-oceanic carbonate buildups: Biostratigraphy of accreted limestone in Japan". Journal of Asian Earth Sciences 26 (3–4): 353–368. Bibcode:2006JAESc..26..353O. doi:10.1016/j.jseaes.2005.04.001.
- Shen, S., and Shi, G.R. (2002). "Paleobiogeographical extinction patterns of Permian brachiopods in the Asian-western Pacific region". Paleobiology 28 (4): 449–463. doi:10.1666/0094-8373(2002)028<0449:PEPOPB>2.0.CO;2. ISSN 0094-8373.
- Wang, X-D, and Sugiyama, T. (December 2000). "Diversity and extinction patterns of Permian coral faunas of China". Lethaia 33 (4): 285–294. doi:10.1080/002411600750053853.
- Racki G (1999). "Silica-secreting biota and mass extinctions: survival processes and patterns". Palaeogeography, Palaeoclimatology, Palaeoecology 154 (1–2): 107–132. doi:10.1016/S0031-0182(99)00089-9.
- Bambach, R.K.; Knoll, A.H.; Wang, S.C. (December 2004). "Origination, extinction, and mass depletions of marine diversity". Paleobiology 30 (4): 522–542. doi:10.1666/0094-8373(2004)030<0522:OEAMDO>2.0.CO;2. ISSN 0094-8373.
- Knoll, A.H. (2004). "Biomineralization and evolutionary history. In: P.M. Dove, J.J. DeYoreo and S. Weiner (Eds), Reviews in Mineralogy and Geochemistry,".
- Stanley, S.M. (2008). "Predation defeats competition on the seafloor". Paleobiology 34 (1): 1–21. doi:10.1666/07026.1. Retrieved 2008-05-13.
- Stanley, S.M. (2007). "An Analysis of the History of Marine Animal Diversity". Paleobiology 33 (sp6): 1–55. doi:10.1666/06020.1.
- Erwin DH (1993). The great Paleozoic crisis; Life and death in the Permian. Columbia University Press. ISBN 0-231-07467-0.
- McKinney, M.L. (1987). "Taxonomic selectivity and continuous variation in mass and background extinctions of marine taxa". Nature 325 (6100): 143–145. Bibcode:1987Natur.325..143M. doi:10.1038/325143a0.
- Knoll, A.H.; Bambach, R.K.; Canfield, D.E.; Grotzinger, J.P. (1996). "Comparative Earth history and Late Permian mass extinction". Science(Washington) 273 (5274): 452–457. Bibcode:1996Sci...273..452K. doi:10.1126/science.273.5274.452. PMID 8662528.
- Leighton, L.R.; Schneider, C.L. (2008). "Taxon characteristics that promote survivorship through the Permian–Triassic interval: transition from the Paleozoic to the Mesozoic brachiopod fauna". Paleobiology 34 (1): 65–79. doi:10.1666/06082.1.
- Villier, L.; Korn, D. (Oct 2004). "Morphological Disparity of Ammonoids and the Mark of Permian Mass Extinctions". Science 306 (5694): 264–266. Bibcode:2004Sci...306..264V. doi:10.1126/science.1102127. ISSN 0036-8075. PMID 15472073.
- Saunders, W. B.; Greenfest-Allen, E.; Work, D. M.; Nikolaeva, S. V. (2008). "Morphologic and taxonomic history of Paleozoic ammonoids in time and morphospace". Paleobiology 34 (1): 128–154. doi:10.1666/07053.1.
- End-Permian mass extinction (the Great Dying) | Natural History Museum
- Cascales-Miñana, B.; Cleal, C. J. (2011). "Plant fossil record and survival analyses". Lethaia: no–no. doi:10.1111/j.1502-3931.2011.00262.x.
- Retallack, GJ (1995). "Permian–Triassic life crisis on land". Science 267 (5194): 77–80. Bibcode:1995Sci...267...77R. doi:10.1126/science.267.5194.77. PMID 17840061.
- Looy, CV Brugman WA Dilcher DL & Visscher H (1999). "The delayed resurgence of equatorial forests after the Permian–Triassic ecologic crisis". Proceedings of the National Academy of Sciences of the United States of America 96 (24): 13857–13862. Bibcode:1999PNAS...9613857L. doi:10.1073/pnas.96.24.13857. PMC 24155. PMID 10570163.
- Michaelsen P (2002). "Mass extinction of peat-forming plants and the effect on fluvial styles across the Permian–Triassic boundary, northern Bowen Basin, Australia". Palaeogeography, Palaeoclimatology, Palaeoecology 179 (3–4): 173–188. doi:10.1016/S0031-0182(01)00413-8.
- Maxwell, W. D. (1992). "Permian and Early Triassic extinction of non-marine tetrapods". Palaeontology 35: 571–583.
- Erwin DH (1990). "The End-Permian Mass Extinction". Annual Review of Ecology and Systematics 21: 69–91. doi:10.1146/annurev.es.21.110190.000441.
- Knoll, A.H., Bambach, R.K., Payne, J.L., Pruss, S., and Fischer, W.W. (2007). "Paleophysiology and end-Permian mass extinction". Earth and Planetary Science Letters 256 (3–4): 295–313. Bibcode:2007E&PSL.256..295K. doi:10.1016/j.epsl.2007.02.018. Retrieved 2008-07-04.
- Payne, J.; Turchyn, A.; Paytan, A.; Depaolo, D.; Lehrmann, D.; Yu, M.; Wei, J. (2010). "Calcium isotope constraints on the end-Permian mass extinction". Proceedings of the National Academy of Sciences of the United States of America 107 (19): 8543–8548. Bibcode:2010PNAS..107.8543P. doi:10.1073/pnas.0914065107. PMC 2889361. PMID 20421502.
- Smith, R.M.H. (16 November 1999). "Changing fluvial environments across the Permian-Triassic boundary in the Karoo Basin, South Africa and possible causes of tetrapod extinctions". Palaeogeography, Palaeoclimatology, Palaeoecology 117 (1-2): 81–104. doi:10.1016/0031-0182(94)00119-S. Retrieved 21 February 2012.
- Chinsamy-Turan (2012). Anusuya, ed. Forerunners of mammals : radiation, histology, biology. Bloomington: Indiana University Press. ISBN 978-0-253-35697-0.
- Lehrmann, D.J., Ramezan, J., Bowring, S.A. et al. (December 2006). "Timing of recovery from the end-Permian extinction: Geochronologic and biostratigraphic constraints from south China". Geology 34 (12): 1053–1056. Bibcode:2006Geo....34.1053L. doi:10.1130/G22827A.1.
- Yadong Sun1,2,*, Michael M. Joachimski3, Paul B. Wignall2, Chunbo Yan1, Yanlong Chen4, Haishui Jiang1, Lina Wang1, Xulong Lai1. "Lethally Hot Temperatures During the Early Triassic Greenhouse". Science 338 (6105): 366–370. Bibcode:2012Sci...338..366S. doi:10.1126/science.1224126.
- During the greatest mass extinction in Earth’s history the world’s oceans reached 40 °C (104 °F) – lethally hot.
- Ward PD, Montgomery DR, & Smith R (2000). "Altered river morphology in South Africa related to the Permian–Triassic extinction". Science 289 (5485): 1740–1743. Bibcode:2000Sci...289.1740W. doi:10.1126/science.289.5485.1740. PMID 10976065.
- Hallam A & Wignall PB (1997). Mass Extinctions and their Aftermath. Oxford University Press. ISBN 978-0-19-854916-1.
- Rodland, DL & Bottjer, DJ (2001). "Biotic Recovery from the End-Permian Mass Extinction: Behavior of the Inarticulate Brachiopod Lingula as a Disaster Taxon". PALAIOS 16 (1): 95–101. doi:10.1669/0883-1351(2001)016<0095:BRFTEP>2.0.CO;2. ISSN 0883-1351.
- Zi-qiang W (1996). "Recovery of vegetation from the terminal Permian mass extinction in North China". Review of Palaeobotany and Palynology 91 (1–4): 121–142. doi:10.1016/0034-6667(95)00069-0.
- Wagner PJ, Kosnik MA, & Lidgard S (2006). "Abundance Distributions Imply Elevated Complexity of Post-Paleozoic Marine Ecosystems". Science 314 (5803): 1289–1292. Bibcode:2006Sci...314.1289W. doi:10.1126/science.1133795. PMID 17124319.
- Clapham, M.E., Bottjer, D.J. and Shen, S. (2006). "Decoupled diversity and ecology during the end-Guadalupian extinction (late Permian)". Geological Society of America Abstracts with Programs 38 (7): 117. Retrieved 2008-03-28.
- Foote, M. (1999). "Morphological diversity in the evolutionary radiation of Paleozoic and post-Paleozoic crinoids". Paleobiology (PDFdoi:10.1666/0094-8373(1999)25[1:MDITER]2.0.CO;2. ISSN 0094-8373. JSTOR 2666042.) 25 (sp1): 1–116.
- Baumiller, T. K. (2008). "Crinoid Ecological Morphology". Annual Review of Earth and Planetary Sciences 36 (1): 221–249. Bibcode:2008AREPS..36..221B. doi:10.1146/annurev.earth.36.031207.124116.
- Botha, J., and Smith, R.M.H. (2007). "Lystrosaurus species composition across the Permo–Triassic boundary in the Karoo Basin of South Africa". Lethaia 40 (2): 125–137. doi:10.1111/j.1502-3931.2007.00011.x. Retrieved 2008-07-02. Full version online at "Lystrosaurus species composition across the Permo–Triassic boundary in the Karoo Basin of South Africa" (PDF). Retrieved 2008-07-02.
- Benton, M.J. (2004). Vertebrate Paleontology. Blackwell Publishers. xii–452. ISBN 0-632-05614-2.
- Ruben, J.A., and Jones, T.D. (2000). "Selective Factors Associated with the Origin of Fur and Feathers". American Zoologist 40 (4): 585–596. doi:10.1093/icb/40.4.585.
- Yates AM & Warren AA (2000). "The phylogeny of the 'higher' temnospondyls (Vertebrata: Choanata) and its implications for the monophyly and origins of the Stereospondyli". Zoological Journal of the Linnean Society 128 (1): 77–121. doi:10.1111/j.1096-3642.2000.tb00650.x. Archived from the original on 2007-10-01. Retrieved 2008-01-18.
- Retallack GJ, Seyedolali A, Krull ES, Holser WT, Ambers CP, Kyte FT (1998). "Search for evidence of impact at the Permian–Triassic boundary in Antarctica and Australia". Geology 26 (11): 979–982. Bibcode:1998Geo....26..979R. doi:10.1130/0091-7613(1998)026<0979:SFEOIA>2.3.CO;2.
- Becker L, Poreda RJ, Basu AR, Pope KO, Harrison TM, Nicholson C, Iasky R (2004). "Bedout: a possible end-Permian impact crater offshore of northwestern Australia". Science 304 (5676): 1469–1476. Bibcode:2004Sci...304.1469B. doi:10.1126/science.1093925. PMID 15143216.
- Becker L, Poreda RJ, Hunt AG, Bunch TE, Rampino M (2001). "Impact event at the Permian–Triassic boundary: Evidence from extraterrestrial noble gases in fullerenes". Science 291 (5508): 1530–1533. Bibcode:2001Sci...291.1530B. doi:10.1126/science.1057243. PMID 11222855.
- Basu AR, Petaev MI, Poreda RJ, Jacobsen SB, Becker L (2003). "Chondritic meteorite fragments associated with the Permian–Triassic boundary in Antarctica". Science 302 (5649): 1388–1392. Bibcode:2003Sci...302.1388B. doi:10.1126/science.1090852. PMID 14631038.
- Kaiho K, Kajiwara Y, Nakano T, Miura Y, Kawahata H, Tazaki K, Ueshima M, Chen Z, Shi GR (2001). "End-Permian catastrophe by a bolide impact: Evidence of a gigantic release of sulfur from the mantle". Geology 29 (9): 815–818. Bibcode:2001Geo....29..815K. doi:10.1130/0091-7613(2001)029<0815:EPCBAB>2.0.CO;2. ISSN 0091-7613. Retrieved 2007-10-22.
- Farley KA, Mukhopadhyay S, Isozaki Y, Becker L, Poreda RJ (2001). "An extraterrestrial impact at the Permian–Triassic boundary?". Science 293 (5539): 2343a–2343. doi:10.1126/science.293.5539.2343a. PMID 11577203.
- Koeberl C, Gilmour I, Reimold WU, Philippe Claeys P, Ivanov B (2002). "End-Permian catastrophe by bolide impact: Evidence of a gigantic release of sulfur from the mantle: Comment and Reply". Geology 30 (9): 855–856. Bibcode:2002Geo....30..855K. doi:10.1130/0091-7613(2002)030<0855:EPCBBI>2.0.CO;2. ISSN 0091-7613.
- Isbell JL, Askin RA, Retallack GR (1999). "Search for evidence of impact at the Permian–Triassic boundary in Antarctica and Australia; discussion and reply". Geology 27 (9): 859–860. Bibcode:1999Geo....27..859I. doi:10.1130/0091-7613(1999)027<0859:SFEOIA>2.3.CO;2.
- Koeberl K, Farley KA, Peucker-Ehrenbrink B, Sephton MA (2004). "Geochemistry of the end-Permian extinction event in Austria and Italy: No evidence for an extraterrestrial component". Geology 32 (12): 1053–1056. Bibcode:2004Geo....32.1053K. doi:10.1130/G20907.1.
- Langenhorst F, Kyte FT & Retallack GJ (2005). "Reexamination of quartz grains from the Permian–Triassic boundary section at Graphite Peak, Antarctica" (PDF). Lunar and Planetary Science Conference XXXVI. Retrieved 2007-07-13.
- Jones AP, Price GD, Price NJ, DeCarli PS, Clegg RA (2002). "Impact induced melting and the development of large igneous provinces". Earth and Planetary Science Letters 202 (3): 551–561. Bibcode:2002E&PSL.202..551J. doi:10.1016/S0012-821X(02)00824-5.
- White RV (2002). "Earth's biggest 'whodunnit': unravelling the clues in the case of the end-Permian mass extinction" (PDF). Phil. Trans. Royal Society of London 360 (1801): 2963–2985. Bibcode:2002RSPTA.360.2963W. doi:10.1098/rsta.2002.1097. PMID 12626276. Retrieved 2008-01-12.
- AHager, Bradford H (2001). "Giant Impact Craters Lead To Flood Basalts: A Viable Model". CCNet 33/2001: Abstract 50470.
- Hagstrum, Jonathan T (2001). "Large Oceanic Impacts As The Cause Of Antipodal Hotspots And Global Mass Extinctions". CCNet 33/2001: Abstract 50288.
- von Frese RR, Potts L, Gaya-Pique L, Golynsky AV, Hernandez O, Kim J, Kim H & Hwang J (2006). Abstract "Permian–Triassic mascon in Antarctica". Eos Trans. AGU, Jt. Assem. Suppl. 87 (36): Abstract T41A–08. Retrieved 2007-10-22.
- Von Frese, R.R.B.; L. V. Potts; S. B. Wells; T. E. Leftwich; H. R. Kim; J. W. Kim; A. V. Golynsky; O. Hernandez; L. R. Gaya-Piqué (2009). "GRACE gravity evidence for an impact basin in Wilkes Land, Antarctica". Geochem. Geophys. Geosyst. 10 (2): Q02014. Bibcode:2009GGG....1002014V. doi:10.1029/2008GC002149.
- Geochronological constraints on the age of a Permo–Triassic impact event: U–Pb and 40Ar/39Ar results for the 40 km Araguainha structure of central Brazil. E. Tohver, C. Lana, P.A. Cawood, I.R. Fletcher, F. Jourdan, S. Sherlock, B. Rasmussen, R.I.F. Trindade, E. Yokoyama, C.R. Souza Filho, Y. Marangoni. Geochimica et Cosmochimica Acta. Volume 86, 1 June 2012, Pages 214–227. sciencedirect.com
- Biggest extinction in history caused by climate-changing meteor. University of Western Australia University News Wednesday, 31 July 2013. http://www.news.uwa.edu.au/201307315921/international/biggest-extinction-history-caused-climate-changing-meteor
- Zhou, M-F., Malpas, J, Song, X-Y, Robinson, PT, Sun, M, Kennedy, AK, Lesher, CM & Keays, RR (2002). "A temporal link between the Emeishan large igneous province (SW China) and the end-Guadalupian mass extinction". Earth and Planetary Science Letters 196 (3–4): 113–122. Bibcode:2002E&PSL.196..113Z. doi:10.1016/S0012-821X(01)00608-2.
- Wignall, Paul B. et al. (2009). "Volcanism, Mass Extinction, and Carbon Isotope Fluctuations in the Middle Permian of China". Science 324 (5931): 1179–1182. Bibcode:2009Sci...324.1179W. doi:10.1126/science.1171956. PMID 19478179.
- Andy Saunders, Marc Reichow (2009). "The Siberian Traps - Area and Volume". Retrieved 2009-10-18.
- Andy Saunders and Marc Reichow (January 2009). "The Siberian Traps and the End-Permian mass extinction: a critical review". Chinese Science Bulletin (Springer) 54 (1): 20–37. doi:10.1007/s11434-008-0543-7. Retrieved 09-04-2010.
- Reichow, Marc K.; M.S. Pringle, A.I. Al'Mukhamedov, M.B. Allen, V.L. Andreichev, M.M. Buslov, C.E. Davies, G.S. Fedoseev, J.G. Fitton, S. Inger, A.Ya. Medvedev, C. Mitchell, V.N. Puchkov, I.Yu. Safonova, R.A. Scott, A.D. Saunders (2009). "The timing and extent of the eruption of the Siberian Traps large igneous province: Implications for the end-Permian environmental crisis". Earth and Planetary Science Letters 277: 9–20. Bibcode:2009E&PSL.277....9R. doi:10.1016/j.epsl.2008.09.030. Retrieved 09-04-2010.
- Mundil, R., Ludwig, K.R., Metcalfe, I. & Renne, P.R (2004). "Age and Timing of the Permian Mass Extinctions: U/Pb Dating of Closed-System Zircons". Science 305 (5691): 1760–1763. Bibcode:2004Sci...305.1760M. doi:10.1126/science.1101012. PMID 15375264.
- "Permian–Triassic Extinction - Volcanism"
- Dan Verango (January 24, 2011). "Ancient mass extinction tied to torched coal". USA Today.
- Stephen E. Grasby, Hamed Sanei & Benoit Beauchamp (January 23, 2011). "Catastrophic dispersion of coal fly ash into oceans during the latest Permian extinction". Nature Geoscience 4 (2): 104–107. Bibcode:2011NatGe...4..104G. doi:10.1038/ngeo1069.
- "Researchers find smoking gun of world's biggest extinction; Massive volcanic eruption, burning coal and accelerated greenhouse gas choked out life". University of Calgary. January 23, 2011. Retrieved 2011-01-26.
- Palfy J, Demeny A, Haas J, Htenyi M, Orchard MJ, & Veto I (2001). "Carbon isotope anomaly at the Triassic– Jurassic boundary from a marine section in Hungary". Geology 29 (11): 1047–1050. Bibcode:2001Geo....29.1047P. doi:10.1130/0091-7613(2001)029<1047:CIAAOG>2.0.CO;2. ISSN 0091-7613.
- Berner, R.A. (2002). "Examination of hypotheses for the Permo-Triassic boundary extinction by carbon cycle modeling". Proceedings of the National Academy of Sciences 99 (7): 4172–4177. Bibcode:2002PNAS...99.4172B. doi:10.1073/pnas.032095199. PMC 123621. PMID 11917102.
- Dickens GR, O'Neil JR, Rea DK & Owen RM (1995). "Dissociation of oceanic methane hydrate as a cause of the carbon isotope excursion at the end of the Paleocene". Paleoceanography 10 (6): 965–71. Bibcode:1995PalOc..10..965D. doi:10.1029/95PA02087.
- Schrag, D.P., Berner, R.A., Hoffman, P.F., and Halverson, G.P. (2002). "On the initiation of a snowball Earth". Geochemistry Geophysics Geosystems 3 (6): 1036. Bibcode:2002GGG....3fQ...1S. doi:10.1029/2001GC000219. Preliminary abstract at Schrag, D.P. (June 2001). "On the initiation of a snowball Earth". Geological Society of America.
- Benton, M.J.; Twitchett, R.J. (2003). "How to kill (almost) all life: the end-Permian extinction event". Trends in Ecology & Evolution 18 (7): 358–365. doi:10.1016/S0169-5347(03)00093-4.
- Dickens GR (2001). "The potential volume of oceanic methane hydrates with variable external conditions". Organic Geochemistry 32 (10): 1179–1193. doi:10.1016/S0146-6380(01)00086-9.
- Reichow MK, Saunders AD, White RV, Pringle MS, Al'Muhkhamedov AI, Medvedev AI & Kirda NP (2002). "40Ar/39Ar Dates from the West Siberian Basin: Siberian Flood Basalt Province Doubled". Science 296 (5574): 1846–1849. Bibcode:2002Sci...296.1846R. doi:10.1126/science.1071671. PMID 12052954.
- Holser WT, Schoenlaub H-P, Attrep Jr M, Boeckelmann K, Klein P, Magaritz M, Orth CJ, Fenninger A, Jenny C, Kralik M, Mauritsch H, Pak E, Schramm J-F, Stattegger K & Schmoeller R (1989). "A unique geochemical record at the Permian/Triassic boundary". Nature 337 (6202): 39–44. Bibcode:1989Natur.337...39H. doi:10.1038/337039a0.
- Dobruskina IA (1987). "Phytogeography of Eurasia during the early Triassic". Palaeogeography, Palaeoclimatology, Palaeoecology 58 (1–2): 75–86. doi:10.1016/0031-0182(87)90007-1.
- Rothman, D. H.; Fournier, G. P.; French, K. L.; Alm, E. J.; Boyle, E. A.; Cao, C.; Summons, R. E. (2014-03-31). "Methanogenic burst in the end-Permian carbon cycle". Proceedings of the National Academy of Sciences 111 (15): 5462–7. doi:10.1073/pnas.1318106111. PMC 3992638. PMID 24706773.
- Wignall, P.B.; Twitchett, R.J. (2002). "Extent, duration, and nature of the Permian-Triassic superanoxic event". Geological Society of America Special Papers 356: 395–413. doi:10.1130/0-8137-2356-6.395.
- Cao, Changqun; Gordon D. Love; Lindsay E. Hays; Wei Wang; Shuzhong Shen; Roger E. Summons (2009). "Biogeochemical evidence for euxinic oceans and ecological disturbance presaging the end-Permian mass extinction event". Earth and Planetary Science Letters 281: 188–201. Bibcode:2009E&PSL.281..188C. doi:10.1016/j.epsl.2009.02.012.
- Hays, Lindsay; Kliti Grice; Clinton B. Foster; Roger E. Summons (2012). "Biomarker and isotopic trends in a Permian–Triassic sedimentary section at Kap Stosch, Greenland". Organic Geochemistry 43: 67–82. doi:10.1016/j.orggeochem.2011.10.010.
- Meyers, Katja; L.R. Kump, A. Ridgwell (September 2008). "Biogeochemical controls on photic-zone euxinia during the end-Permian mass extinction". Geology 36 (9): 747–750. doi:10.1130/G24618A.
- Kump, Lee; Alexander Pavlov and Michael A. Arthur (2005). "Massive release of hydrogen sulfide to the surface ocean and atmosphere during intervals of oceanic anoxia". Geology 33: 397–400. Bibcode:2005Geo....33..397K. doi:10.1130/G21295.1.
- The Permian - Palaeos
- Chandler, David L.; Massachusetts Institute of Technology (March 31, 2014). "Ancient whodunit may be solved: Methane-producing microbes did it!". Science Daily.
- Zhang R, Follows, MJ, Grotzinger, JP, & Marshall J (2001). "Could the Late Permian deep ocean have been anoxic?". Paleoceanography 16 (3): 317–329. Bibcode:2001PalOc..16..317Z. doi:10.1029/2000PA000522.
- Over, Jess (editor), Understanding Late Devonian and Permian–Triassic Biotic and Climatic Events, (Volume 20 in series Developments in Palaeontology and Stratigraphy (2006). The state of the inquiry into the extinction events.
- Sweet, Walter C. (editor), Permo–Triassic Events in the Eastern Tethys : Stratigraphy Classification and Relations with the Western Tethys (in series World and Regional Geology)
- "Siberian Traps". Retrieved 2011-04-30.
- "Big Bang In Antarctica: Killer Crater Found Under Ice". Retrieved 2011-04-30.
- "Global Warming Led To Atmospheric Hydrogen Sulfide And Permian Extinction". Retrieved 2011-04-30.
- Morrison D. "Did an Impact Trigger the Permian-Triassic Extinction?". NASA. Retrieved 2011-04-30.
- "Permian Extinction Event". Retrieved 2011-04-30.
- "Explosive eruption of coal and basalt and the end-Permian mass extinction". Retrieved 2011-12-25.
- "BBC Radio 4 In Our Time discussion of the Permian-Triassic boundary". Retrieved 2012-02-01. Podcast available. |
The difference between an open and closed economy lies in a country's policies on international trade and financial markets. An open economy allows its businesses and individuals to trade with businesses and individuals in other economies and participate in foreign capital markets. A closed economy prevents its businesses and individuals from interacting with foreign economies in an effort to remain isolated and self-sufficient. The basic distinction between an open and closed economy concerns whether a country's government allows its citizens to participate in the global marketplace.
Interaction with foreign countries is the basis of international trade. Trading between countries happens through the export, or sale, of goods and services by parties in one country and the import, or purchase, of those goods and services by parties in another country. On the surface, the ability to conduct trade across international borders may seem a luxury rather than a necessity, but the ability is incredibly important to the health of a country's economy. International trade expands the market for goods and services, allowing businesses to employ more people to make a quantity of goods that exceeds the demand in their home country.
Open and closed economies differ in how they handle international trade. Open economies allow the importing and exporting of goods. Closed economies prevent importing and exporting and instead rely solely on goods and services produced within the country to satisfy domestic demand. An economy whose production must equal its consumption is practicing autarky, a policy of self-sufficiency.
The other distinction between open and closed economies is participation in capital markets. The international capital market consists of stock exchanges that enable a country's corporations to raise money from the public. It also includes the ability of governments to raise money by selling debt instruments, such as treasury bonds, and to invest in foreign currencies. In an open economy, a person can buy stock in a corporation located in a foreign country or purchase foreign currency for a vacation. Closed economies, however, prevent businesses and individuals from using the country's money to make purchases outside its borders.
No country today has a completely closed economy. Some countries, like North Korea, restrict their trade to a limited bloc of countries, but their economies are not completely closed. The only instances in world history in which countries have implemented a classic closed economy for a time have been when a totalitarian regime isolated the country to maintain political or military control. Globalization of world markets means that countries generally prefer to operate under an open economy system, but openness has its limits. For example, the U.S. might seem a classic example of an open economy, yet it restricts its citizens from trading with Cuba.
Diameter Calculator is a free online tool that displays the diameter of a circle when its circumference is given. BYJU’S online diameter calculator tool makes the calculation faster, displaying the diameter in a fraction of a second.
How to Use the Diameter Calculator?
The procedure to use the diameter calculator is as follows:
Step 1: Enter the circumference in the respective input field
Step 2: Now click the button “Solve” to get the diameter
Step 3: Finally, the diameter of the circle for the given circumference will be displayed in the output field
What is Meant by the Diameter?
In mathematics, the diameter is a term used most frequently in geometry, and it is closely related to the circle. A circle is a closed two-dimensional figure in which every point on the boundary is equidistant from the centre point. The distance from the centre to any point on the circle is the radius R. The diameter of the circle, D, is defined as twice the radius: it is the straight line segment that passes from one side of the circle to the other through the centre. The formula to calculate the diameter when the radius is given is D = 2R; when the circumference C is given, the diameter is D = C/π.
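For readers who want to verify the result by hand, the following is a minimal Python sketch of the same calculation; the function name, the input check, and the rounding shown are illustrative choices and are not part of BYJU’S tool.

import math

def diameter_from_circumference(circumference: float) -> float:
    """Return the diameter D of a circle given its circumference C.

    Uses C = pi * D, so D = C / pi; the radius follows as R = D / 2.
    """
    if circumference < 0:
        raise ValueError("circumference must be non-negative")
    return circumference / math.pi

# Example: a circle with circumference 31.4 has a diameter of roughly 10.
print(round(diameter_from_circumference(31.4), 2))  # -> 9.99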
A wildfire or wildland fire is a fire in an area of combustible vegetation that occurs in the countryside or rural area. Depending on the type of vegetation where it occurs, a wildfire can also be classified more specifically as a brush fire, bush fire, desert fire, forest fire, grass fire, hill fire, peat fire, vegetation fire, or veld fire. Fossil charcoal indicates that wildfires began soon after the appearance of terrestrial plants 420 million years ago. Wildfire’s occurrence throughout the history of terrestrial life invites conjecture that fire must have had pronounced evolutionary effects on most ecosystems' flora and fauna. Earth is an intrinsically flammable planet owing to its cover of carbon-rich vegetation, seasonally dry climates, atmospheric oxygen, and widespread lightning and volcano ignitions.
Wildfires can be characterized in terms of the cause of ignition, their physical properties, the combustible material present, and the effect of weather on the fire. Wildfires can cause damage to property and human life, but they have many beneficial effects on native vegetation, animals, and ecosystems that have evolved with fire. Many plant species depend on the effects of fire for growth and reproduction. However, wildfire in ecosystems where wildfire is uncommon or where non-native vegetation has encroached may have negative ecological effects. Wildfire behaviour and severity result from the combination of factors such as available fuels, physical setting, and weather. Analyses of historical meteorological data and national fire records in western North America show the primacy of climate in driving large regional fires via wet periods that create substantial fuels or drought and warming that extend conducive fire weather.
Strategies of wildfire prevention, detection, and suppression have varied over the years. One common and inexpensive technique is controlled burning: permitting or even igniting smaller fires to minimize the amount of flammable material available for a potential wildfire. Vegetation may be burned periodically to maintain high species diversity, and frequent burning of surface fuels limits fuel accumulation. Wildland fire use is the cheapest and most ecologically appropriate policy for many forests. Fuels may also be removed by logging, but fuel treatments and thinning have no effect on severe fire behavior. Wildfire itself is reportedly "the most effective treatment for reducing a fire's rate of spread, fireline intensity, flame length, and heat per unit of area" according to Jan Van Wagtendonk, a biologist at the Yellowstone Field Station. Building codes in fire-prone areas typically require that structures be built of flame-resistant materials and that a defensible space be maintained by clearing flammable materials within a prescribed distance from the structure.
The most common direct human causes of wildfire ignition include arson, discarded cigarettes, power-line arcs (as detected by arc mapping), and sparks from equipment. Ignition of wildland fires via contact with hot rifle-bullet fragments is also possible under the right conditions. Wildfires can also be started in communities practicing shifting cultivation, where land is cleared quickly and farmed until the soil loses fertility, and by slash-and-burn clearing. Forested areas cleared by logging encourage the dominance of flammable grasses, and abandoned logging roads overgrown by vegetation may act as fire corridors. Annual grassland fires in southern Vietnam stem in part from the destruction of forested areas by US military herbicides, explosives, and mechanical land-clearing and -burning operations during the Vietnam War.
The most common cause of wildfires varies throughout the world. In Canada and northwest China, for example, lightning operates as the major source of ignition. In other parts of the world, human involvement is a major contributor. In Africa, Central America, Fiji, Mexico, New Zealand, South America, and Southeast Asia, wildfires can be attributed to human activities such as agriculture, animal husbandry, and land-conversion burning. In China and in the Mediterranean Basin, human carelessness is a major cause of wildfires. In the United States and Australia, the source of wildfires can be traced both to lightning strikes and to human activities (such as machinery sparks, cast-away cigarette butts, or arson). Coal seam fires burn in the thousands around the world, such as those in Burning Mountain, New South Wales; Centralia, Pennsylvania; and several coal-sustained fires in China. They can also flare up unexpectedly and ignite nearby flammable material.
The spread of wildfires varies based on the flammable material present, its vertical arrangement and moisture content, and weather conditions. Fuel arrangement and density is governed in part by topography, as land shape determines factors such as available sunlight and water for plant growth. Overall, fire types can be generally characterized by their fuels as follows:
- Ground fires are fed by subterranean roots, duff and other buried organic matter. This fuel type is especially susceptible to ignition due to spotting. Ground fires typically burn by smoldering, and can burn slowly for days to months, such as peat fires in Kalimantan and Eastern Sumatra, Indonesia, which resulted from a riceland creation project that unintentionally drained and dried the peat.
- Crawling or surface fires are fueled by low-lying vegetation such as leaf and timber litter, debris, grass, and low-lying shrubbery.
- Ladder fires consume material between low-level vegetation and tree canopies, such as small trees, downed logs, and vines. Kudzu, Old World climbing fern, and other invasive plants that scale trees may also encourage ladder fires.
- Crown, canopy, or aerial fires burn suspended material at the canopy level, such as tall trees, vines, and mosses. The ignition of a crown fire, termed crowning, is dependent on the density of the suspended material, canopy height, canopy continuity, sufficient surface and ladder fires, vegetation moisture content, and weather conditions during the blaze. Stand-replacing fires lit by humans can spread into the Amazon rain forest, damaging ecosystems not particularly suited for heat or arid conditions.
Wildfires occur when all of the necessary elements of a fire triangle come together in a susceptible area: an ignition source is brought into contact with a combustible material such as vegetation that is subjected to sufficient heat and has an adequate supply of oxygen from the ambient air. A high moisture content usually prevents ignition and slows propagation, because higher temperatures are required to evaporate any water within the material and heat the material to its fire point. Dense forests usually provide more shade, resulting in lower ambient temperatures and greater humidity, and are therefore less susceptible to wildfires. Less dense materials such as grasses and leaves are easier to ignite because they contain less water than denser materials such as branches and trunks. Plants continuously lose water by evapotranspiration, but water loss is usually balanced by water absorbed from the soil, humidity, or rain. When this balance is not maintained, plants dry out and are therefore more flammable, often a consequence of droughts.
A wildfire front is the portion sustaining continuous flaming combustion, where unburned material meets active flames, or the smoldering transition between unburned and burned material. As the front approaches, the fire heats both the surrounding air and woody material through convection and thermal radiation. First, wood is dried as water is vaporized at a temperature of 100 °C (212 °F). Next, the pyrolysis of wood at 230 °C (450 °F) releases flammable gases. Finally, wood can smoulder at 380 °C (720 °F) or, when heated sufficiently, ignite at 590 °C (1,000 °F). Even before the flames of a wildfire arrive at a particular location, heat transfer from the wildfire front warms the air to 800 °C (1,470 °F), which pre-heats and dries flammable materials, causing materials to ignite faster and allowing the fire to spread faster. High-temperature and long-duration surface wildfires may encourage flashover or torching: the drying of tree canopies and their subsequent ignition from below.
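The staged response of wood to heating described above can be summarized with a small lookup. The temperature thresholds below are taken directly from the paragraph; the function itself is only an illustrative sketch, not a fire-behaviour model.

def wood_state(temp_c: float) -> str:
    """Rough staging of wood behaviour by temperature in degrees Celsius,
    using the thresholds quoted above: drying at 100, pyrolysis at 230,
    smouldering at 380, and ignition at 590."""
    if temp_c >= 590:
        return "ignition"
    if temp_c >= 380:
        return "smouldering possible"
    if temp_c >= 230:
        return "pyrolysis (flammable gases released)"
    if temp_c >= 100:
        return "drying (water vaporising)"
    return "pre-heating"

for t in (90, 150, 300, 450, 800):
    print(t, "->", wood_state(t))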
Wildfires have a rapid forward rate of spread (FROS) when burning through dense, uninterrupted fuels. They can move as fast as 10.8 kilometres per hour (6.7 mph) in forests and 22 kilometres per hour (14 mph) in grasslands. Wildfires can advance tangential to the main front to form a flanking front, or burn in the opposite direction of the main front by backing. They may also spread by jumping or spotting as winds and vertical convection columns carry firebrands (hot wood embers) and other burning materials through the air over roads, rivers, and other barriers that may otherwise act as firebreaks. Torching and fires in tree canopies encourage spotting, and dry ground fuels that surround a wildfire are especially vulnerable to ignition from firebrands. Spotting can create spot fires as hot embers and firebrands ignite fuels downwind from the fire. In Australian bushfires, spot fires are known to occur as far as 20 kilometres (12 mi) from the fire front.
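To put these rates of spread in perspective, the short arithmetic sketch below converts the quoted maxima into metres per minute and the approximate time needed to advance one kilometre; the figures come from the paragraph above, and the code is purely illustrative.

# Maximum forward rates of spread quoted above, in kilometres per hour.
rates_kmh = {"forest": 10.8, "grassland": 22.0}

for terrain, kmh in rates_kmh.items():
    metres_per_minute = kmh * 1000 / 60
    minutes_per_km = 60 / kmh
    print(f"{terrain}: about {metres_per_minute:.0f} m/min, "
          f"roughly {minutes_per_km:.1f} minutes to advance 1 km")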
Especially large wildfires may affect air currents in their immediate vicinities by the stack effect: air rises as it is heated, and large wildfires create powerful updrafts that will draw in new, cooler air from surrounding areas in thermal columns. Great vertical differences in temperature and humidity encourage pyrocumulus clouds, strong winds, and fire whirls with the force of tornadoes at speeds of more than 80 kilometres per hour (50 mph). Rapid rates of spread, prolific crowning or spotting, the presence of fire whirls, and strong convection columns signify extreme conditions.
Effect of weather
Heat waves, droughts, cyclical climate changes such as El Niño, and regional weather patterns such as high-pressure ridges can increase the risk and alter the behavior of wildfires dramatically. Years of precipitation followed by warm periods can encourage more widespread fires and longer fire seasons. Since the mid-1980s, earlier snowmelt and associated warming has also been associated with an increase in length and severity of the wildfire season in the Western United States. Global warming may increase the intensity and frequency of droughts in many areas, creating more intense and frequent wildfires. A 2015 study indicates that the increase in fire risk in California may be attributable to human-induced climate change. A study of alluvial sediment deposits going back over 8,000 years found warmer climate periods experienced severe droughts and stand-replacing fires and concluded climate was such a powerful influence on wildfire that trying to recreate presettlement forest structure is likely impossible in a warmer future.
Intensity also increases during daytime hours. Burn rates of smoldering logs are up to five times greater during the day due to lower humidity, increased temperatures, and increased wind speeds. Sunlight warms the ground during the day, creating air currents that travel uphill. At night the land cools, creating air currents that travel downhill. Wildfires are fanned by these winds and often follow the air currents over hills and through valleys. Fires in Europe occur frequently between the hours of 12:00 p.m. and 2:00 p.m. Wildfire suppression operations in the United States revolve around a 24-hour fire day that begins at 10:00 a.m. due to the predictable increase in intensity resulting from the daytime warmth.
Wildfire’s occurrence throughout the history of terrestrial life invites conjecture that fire must have had pronounced evolutionary effects on most ecosystems' flora and fauna. Wildfires are common in climates that are sufficiently moist to allow the growth of vegetation but feature extended dry, hot periods. Such places include the vegetated areas of Australia and Southeast Asia, the veld in southern Africa, the fynbos in the Western Cape of South Africa, the forested areas of the United States and Canada, and the Mediterranean Basin.
High-severity wildfire creates complex early seral habitat (also called “snag forest habitat”), which often has higher species richness and diversity than unburned old forest. Plant and animal species in most types of North American forests evolved with fire, and many of these species depend on wildfires, and particularly high-severity fires, to reproduce and grow. Fire helps to return nutrients from plant matter back to soil, the heat from fire is necessary to the germination of certain types of seeds, and the snags (dead trees) and early successional forests created by high-severity fire create habitat conditions that are beneficial to wildlife. Early successional forests created by high-severity fire support some of the highest levels of native biodiversity found in temperate conifer forests. Post-fire logging has no ecological benefits and many negative impacts; the same is often true for post-fire seeding.
Although some ecosystems rely on naturally occurring fires to regulate growth, some ecosystems suffer from too much fire, such as the chaparral in southern California and lower elevation deserts in the American Southwest. The increased fire frequency in these ordinarily fire-dependent areas has upset natural cycles, damaged native plant communities, and encouraged the growth of non-native weeds. Invasive species, such as Lygodium microphyllum and Bromus tectorum, can grow rapidly in areas that were damaged by fires. Because they are highly flammable, they can increase the future risk of fire, creating a positive feedback loop that increases fire frequency and further alters native vegetation communities.
In the Amazon Rainforest, drought, logging, cattle ranching practices, and slash-and-burn agriculture damage fire-resistant forests and promote the growth of flammable brush, creating a cycle that encourages more burning. Fires in the rainforest threaten its collection of diverse species and produce large amounts of CO2. Also, fires in the rainforest, along with drought and human involvement, could damage or destroy more than half of the Amazon rainforest by the year 2030. Wildfires generate ash, destroy available organic nutrients, and cause an increase in water runoff, eroding away other nutrients and creating flash flood conditions. A 2003 wildfire in the North Yorkshire Moors destroyed 2.5 square kilometers (600 acres) of heather and the underlying peat layers. Afterwards, wind erosion stripped the ash and the exposed soil, revealing archaeological remains dating back to 10,000 BC. Wildfires can also have an effect on climate change, increasing the amount of carbon released into the atmosphere and inhibiting vegetation growth, which affects overall carbon uptake by plants.
In tundra there is a natural pattern of accumulation of fuel and wildfire which varies depending on the nature of vegetation and terrain. Research in Alaska has shown fire-event return intervals (FRIs) that typically vary from 150 to 200 years, with drier lowland areas burning more frequently than wetter upland areas.
Plants in wildfire-prone ecosystems often survive through adaptations to their local fire regime. Such adaptations include physical protection against heat, increased growth after a fire event, and flammable materials that encourage fire and may eliminate competition. For example, plants of the genus Eucalyptus contain flammable oils that encourage fire and hard sclerophyll leaves to resist heat and drought, ensuring their dominance over less fire-tolerant species. Dense bark, shedding lower branches, and high water content in external structures may also protect trees from rising temperatures. Fire-resistant seeds and reserve shoots that sprout after a fire encourage species preservation, as embodied by pioneer species. Smoke, charred wood, and heat can stimulate the germination of seeds in a process called serotiny. Exposure to smoke from burning plants promotes germination in other types of plants by inducing the production of the orange butenolide.
Grasslands in Western Sabah, Malaysian pine forests, and Indonesian Casuarina forests are believed to have resulted from previous periods of fire. Chamise deadwood litter is low in water content and flammable, and the shrub quickly sprouts after a fire. Cape lilies lie dormant until flames brush away the covering, then blossom almost overnight. Sequoia rely on periodic fires to reduce competition, release seeds from their cones, and clear the soil and canopy for new growth. Caribbean Pine in Bahamian pineyards have adapted to and rely on low-intensity, surface fires for survival and growth. An optimum fire frequency for growth is every 3 to 10 years. Too frequent fires favor herbaceous plants, and infrequent fires favor species typical of Bahamian dry forests.
Most of the Earth's weather and air pollution resides in the troposphere, the part of the atmosphere that extends from the surface of the planet to a height of about 10 kilometers (6 mi). The vertical lift of a severe thunderstorm or pyrocumulonimbus can be enhanced in the area of a large wildfire, which can propel smoke, soot, and other particulate matter as high as the lower stratosphere. Previously, prevailing scientific theory held that most particles in the stratosphere came from volcanoes, but smoke and other wildfire emissions have been detected in the lower stratosphere. Pyrocumulus clouds can reach 6,100 meters (20,000 ft) over wildfires. Satellite observation of smoke plumes from wildfires revealed that the plumes could be traced intact for distances exceeding 1,600 kilometers (1,000 mi). Computer-aided models such as CALPUFF may help predict the size and direction of wildfire-generated smoke plumes by using atmospheric dispersion modeling.
Wildfires can affect local atmospheric pollution and release carbon in the form of carbon dioxide. Wildfire emissions contain fine particulate matter which can cause cardiovascular and respiratory problems. Increased fire byproducts in the troposphere can increase ozone concentration beyond safe levels. Forest fires in Indonesia in 1997 were estimated to have released between 0.81 and 2.57 gigatonnes (0.89 and 2.83 billion short tons) of CO2 into the atmosphere, which is between 13% and 40% of the annual global carbon dioxide emissions from burning fossil fuels. Atmospheric models suggest that these concentrations of sooty particles could increase absorption of incoming solar radiation during winter months by as much as 15%.
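As a quick consistency check on the Indonesian fire figures, dividing each emission estimate by the corresponding share recovers the implied annual global total from fossil-fuel burning. This is back-of-the-envelope arithmetic on the numbers quoted above, not an independent estimate.

# 1997 Indonesian fire emission estimates (gigatonnes) and the shares of
# annual global fossil-fuel emissions the text says they correspond to.
low_estimate, low_share = 0.81, 0.13
high_estimate, high_share = 2.57, 0.40

implied_low = low_estimate / low_share     # about 6.2 Gt per year
implied_high = high_estimate / high_share  # about 6.4 Gt per year
print(f"Implied annual global emissions: {implied_low:.1f} to {implied_high:.1f} Gt")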
In the Welsh Borders, the first evidence of wildfire is rhyniophytoid plant fossils preserved as charcoal, dating to the Silurian period (about 420 million years ago). Smoldering surface fires started to occur sometime before the Early Devonian period. Low atmospheric oxygen during the Middle and Late Devonian was accompanied by a decrease in charcoal abundance. Additional charcoal evidence suggests that fires continued through the Carboniferous period. Later, the overall increase of atmospheric oxygen from 13% in the Late Devonian to 30-31% by the Late Permian was accompanied by a more widespread distribution of wildfires. Later, a decrease in wildfire-related charcoal deposits from the late Permian to the Triassic periods is explained by a decrease in oxygen levels.
Wildfires during the Paleozoic and Mesozoic periods followed patterns similar to fires that occur in modern times. Surface fires driven by dry seasons are evident in Devonian and Carboniferous progymnosperm forests. Lepidodendron forests dating to the Carboniferous period have charred peaks, evidence of crown fires. In Jurassic gymnosperm forests, there is evidence of high frequency, light surface fires. The increase of fire activity in the late Tertiary is possibly due to the increase of C4-type grasses. As these grasses shifted to more mesic habitats, their high flammability increased fire frequency, promoting grasslands over woodlands. However, fire-prone habitats may have contributed to the prominence of trees such as those of the genera Eucalyptus, Pinus and Sequoia, which have thick bark to withstand fires and employ serotiny.
The human use of fire for agricultural and hunting purposes during the Paleolithic and Mesolithic ages altered the preexisting landscapes and fire regimes. Woodlands were gradually replaced by smaller vegetation that facilitated travel, hunting, seed-gathering and planting. In recorded human history, minor allusions to wildfires were mentioned in the Bible and by classical writers such as Homer. However, while ancient Hebrew, Greek, and Roman writers were aware of fires, they were not very interested in the uncultivated lands where wildfires occurred. Wildfires were used in battles throughout human history as early thermal weapons. From the Middle Ages, accounts were written of occupational burning as well as customs and laws that governed the use of fire. In Germany, regular burning was documented in 1290 in the Odenwald and in 1344 in the Black Forest. In 14th-century Sardinia, firebreaks were used for wildfire protection. In Spain during the 1550s, sheep husbandry was discouraged in certain provinces by Philip II due to the harmful effects of fires used in transhumance. As early as the 17th century, Native Americans were observed using fire for many purposes including cultivation, signaling, and warfare. Scottish botanist David Douglas noted the native use of fire for tobacco cultivation, to encourage deer into smaller areas for hunting purposes, and to improve foraging for honey and grasshoppers. Charcoal found in sedimentary deposits off the Pacific coast of Central America suggests that more burning occurred in the 50 years before the Spanish colonization of the Americas than after the colonization. In the post-World War II Baltic region, socio-economic changes led to more stringent air quality standards and bans on fires that eliminated traditional burning practices. In the mid-19th century, explorers from the HMS Beagle observed Australian Aborigines using fire for ground clearing, hunting, and regeneration of plant food in a method later named fire-stick farming. Such careful use of fire has been employed for centuries in the lands protected by Kakadu National Park to encourage biodiversity.
Wildfires typically occurred during periods of increased temperature and drought. An increase in fire-related debris flow in alluvial fans of northeastern Yellowstone National Park was linked to the period between AD 1050 and 1200, coinciding with the Medieval Warm Period. However, human influence caused an increase in fire frequency. Dendrochronological fire scar data and charcoal layer data in Finland suggest that, while many fires occurred during severe drought conditions, an increase in the number of fires between 850 BC and 1660 AD can be attributed to human influence. Charcoal evidence from the Americas suggested a general decrease in wildfires between 1 AD and 1750 compared to previous years. However, a period of increased fire frequency between 1750 and 1870 was suggested by charcoal data from North America and Asia, attributed to human population growth and influences such as land clearing practices. This period was followed by an overall decrease in burning in the 20th century, linked to the expansion of agriculture, increased livestock grazing, and fire prevention efforts. A meta-analysis found that 17 times more land burned annually in California before 1800 compared to recent decades (1,800,000 hectares/year compared to 102,000 hectares/year).
Wildfire prevention refers to the preemptive methods aimed at reducing the risk of fires as well as lessening their severity and spread. Prevention techniques aim to manage air quality, maintain ecological balances, protect resources, and affect future fires. North American firefighting policies permit naturally caused fires to burn to maintain their ecological role, so long as the risks of escape into high-value areas are mitigated. However, prevention policies must consider the role that humans play in wildfires, since, for example, 95% of forest fires in Europe are related to human involvement. Sources of human-caused fire may include arson, accidental ignition, or the uncontrolled use of fire in land-clearing and agriculture such as the slash-and-burn farming in Southeast Asia.
In 1937, U.S. President Franklin D. Roosevelt initiated a nationwide fire prevention campaign, highlighting the role of human carelessness in forest fires. Later posters of the program featured Uncle Sam, leaders of the Axis powers of World War II, characters from the Disney movie Bambi, and the official mascot of the U.S. Forest Service, Smokey Bear. Reducing human-caused ignitions may be the most effective means of reducing unwanted wildfire. Alteration of fuels is commonly undertaken when attempting to affect future fire risk and behavior. Wildfire prevention programs around the world may employ techniques such as wildland fire use and prescribed or controlled burns. Wildland fire use refers to any fire of natural causes that is monitored but allowed to burn. Controlled burns are fires ignited by government agencies under less dangerous weather conditions.
Vegetation may be burned periodically to maintain high species diversity, and frequent burning of surface fuels limits fuel accumulation. Wildland fire use is the cheapest and most ecologically appropriate policy for many forests. Fuels may also be removed by logging, but fuel treatments and thinning have no effect on severe fire behavior. Wildfire models are often used to predict and compare the benefits of different fuel treatments on future wildfire spread, but their accuracy is low.
Wildfire itself is reportedly "the most effective treatment for reducing a fire's rate of spread, fireline intensity, flame length, and heat per unit of area" according to Jan Van Wagtendonk, a biologist at the Yellowstone Field Station.
Building codes in fire-prone areas typically require that structures be built of flame-resistant materials and a defensible space be maintained by clearing flammable materials within a prescribed distance from the structure. Communities in the Philippines also maintain fire lines 5 to 10 meters (16 to 33 ft) wide between the forest and their village, and patrol these lines during summer months or seasons of dry weather. Continued residential development in fire-prone areas and rebuilding structures destroyed by fires has been met with criticism. The ecological benefits of fire are often overridden by the economic and safety benefits of protecting structures and human life.
US wildfire policy
History of wildfire policy in the U.S.
Since the turn of the 20th century, various federal and state agencies have been involved in wildland fire management in one form or another. In the early 20th century, for example, the federal government, through the U.S. Army and the U.S. Forest Service, pursued fire suppression as a primary goal of managing the nation's forests. At this time in history, fire was viewed as a threat to timber, an economically important natural resource. As such, the decision was made to devote public funds to fire suppression and fire prevention efforts. For example, the Forest Fire Emergency Fund Act of 1908 permitted deficit spending in the case of emergency fire situations. As a result, the U.S. Forest Service ran a deficit of over $1 million in 1910 due to emergency fire suppression efforts. Following the same tone of timber resource protection, the U.S. Forest Service adopted the "10 AM Policy" in 1935. Through this policy, the agency advocated the control of all fires by 10 o'clock on the morning following the discovery of a wildfire. Fire prevention was also heavily advocated through public education campaigns such as Smokey Bear. Through these and similar public education campaigns the general public was, in a sense, trained to perceive all wildfire as a threat to civilized society and natural resources. The negative sentiment towards wildland fire prevailed and helped to shape wildland fire management objectives throughout most of the 20th century.
Beginning in the 1970s public perception of wildland fire management began to shift. Despite strong funding for fire suppression in the first half of the 20th century, massive wildfires continued to be prevalent across the landscape of North America. Ecologists were beginning to recognize the presence and ecological importance of natural, lightning-ignited wildfires across the United States. It was learned that suppression of fire in certain ecosystems may in fact increase the likelihood that a wildfire will occur and may increase the intensity of those wildfires. With the emergence of fire ecology as a science also came an effort to apply fire to ecosystems in a controlled manner; however, suppression is still the main tactic when a fire is set by a human or if it threatens life or property. By the 1980s, in light of this new understanding, funding efforts began to support prescribed burning in order to prevent wildfire events. In 2001, the United States implemented a National Fire Plan, increasing the budget for the reduction of hazardous fuels from $108 million in 2000 to $401 million.
In addition to using prescribed fire to reduce the chance of catastrophic wildfires, mechanical methods have recently been adopted as well. Mechanical methods include the use of chippers and other machinery to remove hazardous fuels and thereby reduce the risk of wildfire events. Today the United States maintains that, "fire, as a critical natural process, will be integrated into land and resource management plans and activities on a landscape scale, and across agency boundaries. Response to wildfire is based on ecological, social and legal consequences of fire. The circumstance under which a fire occurs, and the likely consequences and public safety and welfare, natural and cultural resources, and values to be protected dictate the appropriate management response to fire" (United States Department of Agriculture Guidance for Implementation of Federal Wildland Fire Management Policy, 13 February 2009). The five federal agencies managing forest fire response and planning for 676 million acres in the United States are the United States Department of Agriculture Forest Service and, within the Department of the Interior, the Bureau of Land Management, the Bureau of Indian Affairs, the National Park Service, and the United States Fish and Wildlife Service. Wildfire management on several hundred million additional U.S. acres is conducted by state, county, and local fire management organizations. In 2014, legislators proposed the Wildfire Disaster Funding Act to provide a $2.7 billion fund, appropriated by Congress, for the USDA and Department of the Interior to use in fire suppression. The bill was a reaction to United States Forest Service and Department of the Interior spending on Western wildfire suppression, which amounted to $3.5 billion in 2013.
Wildland-urban interface policy
An aspect of wildfire policy that is gaining attention is the wildland-urban interface (WUI). More and more people are living in "red zones," or areas at high risk of wildfires. FEMA and the NFPA develop specific policies to guide homeowners and builders in how to build and maintain structures at the WUI and how to protect against property losses. For example, NFPA-1141 is a standard for fire protection infrastructure for land development in wildland, rural, and suburban areas, and NFPA-1144 is a standard for reducing structure ignition hazards from wildland fire. For a full list of these policies and guidelines, see http://www.nfpa.org/categoryList.asp?categoryID=124&URL=Codes%20&%20Standards. Compensation for losses in the WUI is typically negotiated on an incident-by-incident basis. This has generated discussion about the burden of responsibility for funding and fighting fires in the WUI: if a resident chooses to live in a known red zone, should he or she bear a greater share of the responsibility for funding home protection against wildfires? One initiative aimed at helping U.S. WUI communities live more safely with fire is called fire-adapted communities.
Economics of fire management policy
Like military operations, fire management is often very expensive in the U.S. and the rest of the world. Today, it is not uncommon for suppression operations for a single wildfire to exceed $1 million in costs within just a few days. The United States Department of Agriculture allotted $2.2 billion for wildfire management in 2012. Although fire suppression purports to benefit society, other options for fire management exist. While these options cannot completely replace fire suppression as a fire management tool, they can play an important role in overall fire management and can therefore affect the costs of fire suppression.
It is commonly accepted that past fire suppression and climate change have resulted in the larger, more intense wildfire events seen today. In economic terms, expenditures on wildfire suppression in the early 20th century have contributed to the increased suppression costs being realized today.
Fast and effective detection is a key factor in wildfire fighting. Early detection efforts were focused on early response, accurate results in both daytime and nighttime, and the ability to prioritize fire danger. Fire lookout towers were used in the United States in the early 20th century and fires were reported using telephones, carrier pigeons, and heliographs. Aerial and land photography using instant cameras were used in the 1950s until infrared scanning was developed for fire detection in the 1960s. However, information analysis and delivery was often delayed by limitations in communication technology. Early satellite-derived fire analyses were hand-drawn on maps at a remote site and sent via overnight mail to the fire manager. During the Yellowstone fires of 1988, a data station was established in West Yellowstone, permitting the delivery of satellite-based fire information in approximately four hours.
Currently, public hotlines, fire lookouts in towers, and ground and aerial patrols can be used as a means of early detection of forest fires. However, accurate human observation may be limited by operator fatigue, time of day, time of year, and geographic location. Electronic systems have gained popularity in recent years as a possible resolution to human operator error. A government report on a recent trial of three automated camera fire detection systems in Australia did, however, conclude "...detection by the camera systems was slower and less reliable than by a trained human observer". These systems may be semi- or fully automated and employ systems based on the risk area and degree of human presence, as suggested by GIS data analyses. An integrated approach of multiple systems can be used to merge satellite data, aerial imagery, and personnel position via Global Positioning System (GPS) into a collective whole for near-realtime use by wireless Incident Command Centers.
A small, high-risk area with thick vegetation, a strong human presence, or proximity to a critical urban area can be monitored using a local sensor network. Detection systems may include wireless sensor networks that act as automated weather stations, detecting temperature, humidity, and smoke. These may be battery-powered, solar-powered, or tree-rechargeable: able to recharge their battery systems using the small electrical currents in plant material. Larger, medium-risk areas can be monitored by scanning towers that incorporate fixed cameras and sensors to detect smoke or additional factors such as the infrared signature of carbon dioxide produced by fires. Additional capabilities such as night vision, brightness detection, and color change detection may also be incorporated into sensor arrays.
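As a rough illustration of how such a sensor node might flag a possible fire, the sketch below applies simple thresholds to temperature, humidity, and smoke readings. The threshold values, field names, and function name are illustrative assumptions, not values taken from any deployed system.

```python
# Minimal sketch of a threshold-based alarm for a wildfire sensor node.
# Thresholds and the reading format are illustrative assumptions only.

def fire_risk(reading):
    """Return True if a sensor reading looks like a possible fire."""
    hot = reading["temp_c"] > 50          # unusually high air temperature
    dry = reading["humidity_pct"] < 20    # very low relative humidity
    smoky = reading["smoke_ppm"] > 300    # elevated smoke/particulate level
    # Require smoke plus at least one supporting weather signal.
    return smoky and (hot or dry)

sample = {"temp_c": 57.0, "humidity_pct": 12.0, "smoke_ppm": 410.0}
print(fire_risk(sample))  # True for this sample reading
```

A real network would of course aggregate readings from many nodes over time rather than alarming on a single sample.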
Satellite and aerial monitoring through the use of planes, helicopters, or UAVs can provide a wider view and may be sufficient to monitor very large, low-risk areas. These more sophisticated systems employ GPS and aircraft-mounted infrared or high-resolution visible cameras to identify and target wildfires. Satellite-mounted sensors such as Envisat's Advanced Along Track Scanning Radiometer and the European Remote-Sensing Satellite's Along-Track Scanning Radiometer can measure infrared radiation emitted by fires, identifying hot spots greater than 39 °C (102 °F). The National Oceanic and Atmospheric Administration's Hazard Mapping System combines remote-sensing data from satellite sources such as the Geostationary Operational Environmental Satellite (GOES), the Moderate-Resolution Imaging Spectroradiometer (MODIS), and the Advanced Very High Resolution Radiometer (AVHRR) to detect fire and smoke plume locations. However, satellite detection is prone to offset errors, anywhere from 2 to 3 kilometers (1 to 2 mi) for MODIS and AVHRR data and up to 12 kilometers (7.5 mi) for GOES data. Satellites in geostationary orbits may become disabled, and satellites in polar orbits are often limited by their short window of observation time. Cloud cover and image resolution may also limit the effectiveness of satellite imagery.
In 2015, a new fire detection tool came into operation at the U.S. Department of Agriculture (USDA) Forest Service (USFS), which uses data from the Suomi National Polar-orbiting Partnership (NPP) satellite to detect smaller fires in more detail than previous space-based products. The high-resolution data is used with a computer model to predict how a fire will change direction based on weather and land conditions. The active fire detection product using data from Suomi NPP's Visible Infrared Imaging Radiometer Suite (VIIRS) increases the resolution of fire observations to 1,230 feet (375 meters). Previous NASA satellite data products, available since the early 2000s, observed fires at 3,280-foot (1-kilometer) resolution. The data is one of the intelligence tools used by the USFS and Department of the Interior agencies across the United States to guide resource allocation and strategic fire management decisions. The enhanced VIIRS fire product enables detection every 12 hours or less of much smaller fires and provides more detailed and consistent tracking of fire lines during long-duration wildfires – capabilities critical for early warning systems and support of routine mapping of fire progression. Active fire locations are available to users within minutes of the satellite overpass through data processing facilities at the USFS Remote Sensing Applications Center, which uses technologies developed by the NASA Goddard Space Flight Center Direct Readout Laboratory in Greenbelt, Maryland. The model uses data on weather conditions and the land surrounding an active fire to predict 12–18 hours in advance whether a blaze will shift direction. The state of Colorado decided to incorporate the weather-fire model in its firefighting efforts beginning with the 2016 fire season.
In 2014, an international campaign was organized
- Cambridge Advanced Learner's Dictionary (Third ed.). Cambridge University Press. 2008. ISBN 978-0-521-85804-5.
- "BBC Earth - Forest fire videos - See how fire started on Earth". Retrieved 2016-02-13.
- Scott, Andrew C.; Glasspool, Ian J. (2006-07-18). "The diversification of Paleozoic fire systems and fluctuations in atmospheric oxygen concentration". Proceedings of the National Academy of Sciences. 103 (29): 10861–10865. doi:10.1073/pnas.0604090103. ISSN 0027-8424. PMID 16832054.
- Bowman, David M. J. S.; Balch, Jennifer K.; Artaxo, Paulo; Bond, William J.; Carlson, Jean M.; Cochrane, Mark A.; D’Antonio, Carla M.; DeFries, Ruth S.; Doyle, John C. (2009-04-24). "Fire in the Earth System". Science. 324 (5926): 481–484. doi:10.1126/science.1163886. ISSN 0036-8075. PMID 19390038.
- Flannigan, M.D.; B.D. Amiro; K.A. Logan; B.J. Stocks & B.M. Wotton (2005). "Forest Fires and Climate Change in the 21st century" (PDF). Mitigation and Adaptation Strategies for Global Change. 11 (4): 847–859. doi:10.1007/s11027-005-9020-7. Retrieved 26 June 2009.
- "The Ecological Importance of Mixed-Severity Fires - ScienceDirect". www.sciencedirect.com. Retrieved 2016-08-22.
- Hutto, Richard L. (2008-12-01). "The Ecological Importance of Severe Wildfires: Some Like It Hot". Ecological Applications. 18 (8): 1827–1834. doi:10.1890/08-0895.1. ISSN 1939-5582.
- Stephen J. Pyne. "How Plants Use Fire (And Are Used By It)". NOVA online. Retrieved 30 June 2009.
- Graham, et al., 12, 36
- National Wildfire Coordinating Group Communicator's Guide For Wildland Fire Management, 4-6.
- "National Wildfire Coordinating Group Fireline Handbook, Appendix B: Fire Behavior" (PDF). National Wildfire Coordinating Group. April 2006. Retrieved 11 December 2008.
- Westerling, A. L.; Hidalgo, H. G.; Cayan, D. R.; Swetnam, T. W. (2006-08-18). "Warming and Earlier Spring Increase Western U.S. Forest Wildfire Activity". Science. 313 (5789): 940–943. doi:10.1126/science.1128834. ISSN 0036-8075. PMID 16825536.
- "International Experts Study Ways to Fight Wildfires". Voice of America (VOA) News. 24 June 2009. Retrieved 9 July 2009.
- Interagency Strategy for the Implementation of the Federal Wildland Fire Policy, entire text
- National Wildfire Coordinating Group Communicator's Guide For Wildland Fire Management, entire text
- Fire. The Australian Experience, 5-6.
- Graham, et al., 15.
- Noss, Reed F.; Franklin, Jerry F.; Baker, William L.; Schoennagel, Tania; Moyle, Peter B. (2006-11-01). "Managing fire-prone forests in the western United States". Frontiers in Ecology and the Environment. 4 (9): 481–487. doi:10.1890/1540-9295(2006)4[481:MFFITW]2.0.CO;2. ISSN 1540-9309.
- Lydersen, Jamie M.; North, Malcolm P.; Collins, Brandon M. (2014-09-15). "Severity of an uncharacteristically large wildfire, the Rim Fire, in forests with relatively restored frequent fire regimes". Forest Ecology and Management. 328: 326–334. doi:10.1016/j.foreco.2014.06.005.
- van Wagtendonk (1996), 1164
- "California's Fire Hazard Severity Zone Update and Building Standards Revision" (PDF). CAL FIRE. May 2007. Retrieved 18 December 2008.
- "California Senate Bill No. 1595, Chapter 366" (PDF). State of California. 27 September 2008. Retrieved 18 December 2008.
- "Wildfire Prevention Strategies" (PDF). National Wildfire Coordinating Group. March 1998. p. 17. Retrieved 3 December 2008.
- Scott, A (2000). "The Pre-Quaternary history of fire". Palaeogeography Palaeoclimatology Palaeoecology. 164: 281–329. doi:10.1016/S0031-0182(00)00192-9.
- Pyne, Stephen J.; Andrews, Patricia L.; Laven, Richard D. (1996). Introduction to wildland fire (2nd ed.). John Wiley and Sons. p. 65. ISBN 978-0-471-54913-0. Retrieved 26 January 2010.
- "News 8 Investigation: SDG&E Could Be Liable For Power Line Wildfires". UCAN News. 5 November 2007. Retrieved 20 July 2009.
- Finney, Mark A.; Maynard, Trevor B.; McAllister, Sara S.; Grob, Ian J. (2013). A Study of Ignition by Rifle Bullets. Fort Collins, CO: United States Forest Service. Retrieved 15 June 2014.
- The Associated Press (16 November 2006). "Orangutans in losing battle with slash-and-burn Indonesian farmers". TheStar online. Retrieved 1 December 2008.
- Karki, 4.
- Krock, Lexi (June 2002). "The World on Fire". NOVA online - Public Broadcasting System (PBS). Retrieved 13 July 2009.
- Krajick, Kevin (May 2005). "Fire in the hole". Smithsonian Magazine. Retrieved 30 July 2009.
- Graham, et al., iv.
- Graham, et al., 9, 13
- Rincon, Paul (9 March 2005). "Asian peat fires add to warming". British Broadcasting Corporation (BBC) News. Retrieved 9 December 2008.
- Graham, et al ., iv, 10, 14
- "Global Fire Initiative: Fire and Invasives". The Nature Conservancy. Retrieved 3 December 2008.
- Graham, et al., iv, 8, 11, 15.
- Butler, Rhett (19 June 2008). "Global Commodities Boom Fuels New Assault on Amazon". Yale School of Forestry & Environmental Studies. Retrieved 9 July 2009.
- "The Science of Wildland fire". National Interagency Fire Center. Retrieved 21 November 2008.
- Graham, et al., 12.
- National Wildfire Coordinating Group Communicator's Guide For Wildland Fire Management, 3.
- "Ashes cover areas hit by Southern Calif. fires". MSNBC. Associated Press. 15 November 2008. Retrieved 4 December 2008.
- "Influence of Forest Structure on Wildfire Behavior and the Severity of Its Effects" (PDF). US Forest Service. November 2003. Retrieved 19 November 2008.
- "Prepare for a Wildfire". Federal Emergency Management Agency (FEMA). Retrieved 1 December 2008.
- Glossary of Wildland Fire Terminology, 74.
- de Sousa Costa and Sandberg, 229-230.
- "Archimedes Death Ray: Idea Feasibility Testing". Massachusetts Institute of Technology (MIT). October 2005. Retrieved 1 February 2009.
- "Satellites are tracing Europe's forest fire scars". European Space Agency. 27 July 2004. Retrieved 12 January 2009.
- Graham, et al., 10-11.
- "Protecting Your Home From Wildfire Damage" (PDF). Florida Alliance for Safe Homes (FLASH). p. 5. Retrieved 3 March 2010.
- Billing, 5-6
- Graham, et al., 12
- Shea, Neil (July 2008). "Under Fire". National Geographic. Retrieved 8 December 2008.
- Graham, et al., 16.
- Graham, et al., 9, 16.
- Volume 1: The Kilmore East Fire. 2009 Victorian Bushfires Royal Commission. Victorian Bushfires Royal Commission, Australia. July 2010. ISBN 978-0-9807408-2-0. Retrieved 26 October 2013.
- National Wildfire Coordinating Group Communicator's Guide For Wildland Fire Management, 4.
- Graham, et al., 16-17.
- Olson, et al., 2
- "The New Generation Fire Shelter" (PDF). National Wildfire Coordinating Group. March 2003. p. 19. Retrieved 16 January 2009.
- Glossary of Wildland Fire Terminology, 69.
- "Chronological List of U.S. Billion Dollar Events". National Oceanic and Atmospheric Administration (NOAA) Satellite and Information Service. Retrieved 4 February 2009.
- McKenzie, et al., 893
- Graham, et al., 2
- Westerling, Al; Hidalgo, Hg; Cayan, Dr; Swetnam, Tw (August 2006). "Warming and earlier spring increase western U.S. Forest wildfire activity". Science. 313 (5789): 940–3. Bibcode:2006Sci...313..940W. doi:10.1126/science.1128834. ISSN 0036-8075. PMID 16825536.
- Bill Gabbert (November 9, 2015). "Was the 2014 wildfire season in California affected by climate change?". Wildfire Today. Retrieved May 17, 2016.
- Yoon; et al. (2015). "Extreme Fire Season in California: A Glimpse Into the Future?". 96 (11). Bibcode:2015BAMS...96S...5Y. doi:10.1175/BAMS-D-15-00114.1.
- Pierce, Jennifer L.; Meyer, Grant A.; Timothy Jull, A. J. (2004-11-04). "Fire-induced erosion and millennial-scale climate change in northern ponderosa pine forests". Nature. 432 (7013): 87–90. doi:10.1038/nature03058. ISSN 0028-0836.
- de Souza Costa and Sandberg, 228
- National Wildfire Coordinating Group Communicator's Guide For Wildland Fire Management, 5.
- San-Miguel-Ayanz, et al., 364.
- Glossary of Wildland Fire Terminology, 73.
- Donato, Daniel C.; Fontaine, Joseph B.; Robinson, W. Douglas; Kauffman, J. Boone; Law, Beverly E. (2009-01-01). "Vegetation response to a short interval between high-severity wildfires in a mixed-evergreen forest". Journal of Ecology. 97 (1): 142–154. doi:10.1111/j.1365-2745.2008.01456.x. ISSN 1365-2745.
- Interagency Strategy for the Implementation of the Federal Wildland Fire Policy, 3, 37.
- Graham, et al., 3.
- Keeley, J.E. (1995). "Future of California floristics and systematics: wildfire threats to the California flora" (PDF). Madrono. US Geological Survey. 42: 175–179. Retrieved 26 June 2009.
- Zedler, P.H. (1995). "Fire frequency in southern California shrublands: biological effects and management options". In Keeley, J.E.; Scott, T. Brushfires in California wildlands: ecology and resource management. Fairfield, WA: International Association of Wildland Fire. pp. 101–112.
- van Wagtendonk (2007), 14.
- Nepstad, 4, 8-11
- Lindsey, Rebecca (5 March 2008). "Amazon fires on the rise". Earth Observatory (NASA). Retrieved 9 July 2009.
- Nepstad, 4
- "Bushfire and Catchments: Effects of Fire on Soils and Erosion". eWater Cooperative Research Center's. Retrieved 8 January 2009.
- Refern, Neil; Vyner, Blaise. "Fylingdales Moor a lost landscape rises from the ashes". Current Archaeology. Current Publishing. XIX (226): 20–27. ISSN 0011-3212.
- Running, S.W. (2008). "Ecosystem Disturbance, Carbon and Climate". Science. 321 (5889): 652–653. doi:10.1126/science.1159607. PMID 18669853.
- Higuera, Philip E.; Chipman, Melissa L.; Barnes, Jennifer L.; Urban, Michael A.; Hu, Feng Sheng (2011). "Variability of tundra fire regimes in Arctic Alaska: Millennial-scale patterns and ecological implications". Ecological Applications. 21 (8): 3211–3226. doi:10.1890/11-0387.1.
- Santos, Robert L. (1997). "Section Three: Problems, Cares, Economics, and Species". The Eucalyptus of California. California State University. Retrieved 26 June 2009.
- Fire. The Australian Experience, 5.
- Keeley, J.E. & C.J. Fotheringham (1997). "Trace gas emission in smoke-induced germination" (PDF). Science. 276: 1248–1250. doi:10.1126/science.276.5316.1248. Retrieved 26 June 2009.
- Flematti GR; Ghisalberti EL; Dixon KW; Trengove RD (2004). "A compound from smoke that promotes seed germination". Science. 305 (5686): 977. doi:10.1126/science.1099944. PMID 15247439.
- Karki, 3.
- Pyne, Stephen. "How Plants Use Fire (And How They Are Used By It)". Nova. Retrieved 26 September 2013.
- "Giant Sequoias and Fire". US National Park Service. Retrieved 30 June 2009.
- "Fire Management Assessment of the Caribbean Pine (Pinus caribea) Forest Ecosystems on Andros and Abaco Islands, Bahamas" (PDF). TNC Global Fire Initiative. The Nature Conservancy. September 2004. Retrieved 27 August 2009.
- Wang, P.K. (2003). The physical mechanism of injecting biomass burning materials into the stratosphere during fire-induced thunderstorms. San Francisco, California: American Geophysical Union fall meeting.
- Fromm, M.; Stocks, B.; Servranckx, R.; Lindsey, D. Smoke in the Stratosphere: What Wildfires have Taught Us About Nuclear Winter; abstract #U14A-04. American Geophysical Union, Fall Meeting 2006. Bibcode:2006AGUFM.U14A..04F. Retrieved 4 February 2009.
- Graham, et al., 17
- John R. Scala; et al. "Meteorological Conditions Associated with the Rapid Transport of Canadian Wildfire Products into the Northeast during 5–8 July 2002" (PDF). American Meteorological Society. Retrieved 4 February 2009.
- Breyfogle, Steve; Sue A., Ferguson (December 1996). "User Assessment of Smoke-Dispersion Models for Wildland Biomass Burning" (PDF). US Forest Service. Retrieved 6 February 2009.
- Bravo, A.H.; E. R. Sosa; A. P. Sánchez; P. M. Jaimes & R. M. I. Saavedra (2002). "Impact of wildfires on the air quality of Mexico City, 1992-1999". Environmental Pollution. 117 (2): 243–253. doi:10.1016/S0269-7491(01)00277-9. PMID 11924549.
- Dore, S.; Kolb, T. E.; Montes-Helu, M.; Eckert, S. E.; Sullivan, B. W.; Hungate, B. A.; Kaye, J. P.; Hart, S. C.; Koch, G. W. (2010-04-01). "Carbon and water fluxes from ponderosa pine forests disturbed by wildfire and thinning". Ecological Applications. 20 (3): 663–683. doi:10.1890/09-0934.1. ISSN 1939-5582.
- Douglass, R. (2008). "Quantification of the health impacts associated with fine particulate matter due to wildfires. MS Thesis" (PDF). Nicholas School of the Environment and Earth Sciences of Duke University.
- National Center for Atmospheric Research (13 October 2008). "Wildfires Cause Ozone Pollution to Violate Health Standards". Geophysical Research Letters. Retrieved 4 February 2009.
- Page, Susan E.; Florian Siegert; John O. Rieley; Hans-Dieter V. Boehm; Adi Jaya & Suwido Limin (11 July 2002). "The amount of carbon released from peat and forest fires in Indonesia during 1997". Nature. 420 (6911): 61–65. Bibcode:2002Natur.420...61P. doi:10.1038/nature01131. PMID 12422213.
- Tacconi, Luca (February 2003). "Fires in Indonesia: Causes, Costs, and Policy Implications (CIFOR Occasional Paper No. 38)" (PDF). Bogor, Indonesia: Center for International Forestry Research. ISSN 0854-9818. Retrieved 6 February 2009.
- Baumgardner, D.; et al. (2003). "Warming of the Arctic lower stratosphere by light absorbing particles". American Geophysical Union fall meeting. San Francisco, California.
- Glasspool Ij, E. D.; Edwards, D.; Axe, L. (2004). "Charcoal in the Silurian as evidence for the earliest wildfire". Geology. 32 (5): 381–383. Bibcode:2004Geo....32..381G. doi:10.1130/G20363.1.
- Edwards, D.; Axe, L. (April 2004). "Anatomical Evidence in the Detection of the Earliest Wildfires". PALAIOS. 19 (2): 113–128. doi:10.1669/0883-1351(2004)019<0113:AEITDO>2.0.CO;2. ISSN 0883-1351.
- Scott, C.; Glasspool, J. (Jul 2006). "The diversification of Paleozoic fire systems and fluctuations in atmospheric oxygen concentration" (Free full text). Proceedings of the National Academy of Sciences of the United States of America. 103 (29): 10861–10865. Bibcode:2006PNAS..10310861S. doi:10.1073/pnas.0604090103. ISSN 0027-8424. PMID 16832054.
- Pausas and Keeley, 594
- Historically, the Cenozoic has been divided up into the Quaternary and Tertiary sub-eras, as well as the Neogene and Paleogene periods. The 2009 version of the ICS time chart recognizes a slightly extended Quaternary as well as the Paleogene and a truncated Neogene, the Tertiary having been demoted to informal status.
- Pausas and Keeley, 595
- Pausas and Keeley, 596
- "Redwood Trees".
- Pausas and Keeley, 597
- Rackham, Oliver (November–December 2003). "Fire in the European Mediterranean: History". AridLands Newsletter. University of Arizona College of Agriculture & Life Sciences. 54. Retrieved 17 July 2009.
- Rackham, 229-230
- Goldammer, Johann G. (5–9 May 1998). "History of Fire in Land-Use Systems of the Baltic Region: Implications on the Use of Prescribed Fire in Forestry, Nature Conservation and Landscape Management". First Baltic Conference on Forest Fires. Radom-Katowice, Poland: Global Fire Monitoring Center (GFMC).
- * "Wildland fire - An American legacy|" (PDF). Fire Management Today. USDA Forest Service. 60 (3): 4, 5, 9, 11. Summer 2000. Retrieved 31 July 2009.
- Fire. The Australian Experience, 7.
- Karki, 27.
- Meyer, G.A.; Wells, S.G.; Jull, A.J.T. (1995). "Fire and alluvial chronology in Yellowstone National Park: Climatic and intrinsic controls on Holocene geomorphic processes". GSA Bulletin. 107 (10): 1211–1230. Bibcode:1995GSAB..107.1211M. doi:10.1130/0016-7606(1995)107<1211:FAACIY>2.3.CO;2.
- Pitkänen, et al., 15-16 and 27-30
- J. R. Marlon; P. J. Bartlein; C. Carcaillet; D. G. Gavin; S. P. Harrison; P. E. Higuera; F. Joos; M. J. Power; I. C. Prentice (2008). "Climate and human influences on global biomass burning over the past two millennia". Nature Geoscience. 1 (10): 697–702. Bibcode:2008NatGe...1..697M. doi:10.1038/ngeo313. University of Oregon Summary, accessed 2 February 2010
- Stephens, Scott L.; Martin, Robert E.; Clinton, Nicholas E. (2007). "Prehistoric fire area and emissions from California's forests, woodlands, shrublands, and grasslands". Forest Ecology and Management. 251: 205–216. doi:10.1016/j.foreco.2007.06.005. Retrieved 4 May 2015.
- Karki, 6.
- van Wagtendonk (1996), 1156.
- Interagency Strategy for the Implementation of the Federal Wildland Fire Policy, 42.
- San-Miguel-Ayanz, et al., 361.
- Karki, 7, 11-19.
- "Smokey's Journey". Smokeybear.com. Retrieved 26 January 2010.
- "Backburn". MSN Encarta. Retrieved 9 July 2009.
- "UK: The Role of Fire in the Ecology of Heathland in Southern Britain". International Forest Fire News. 18: 80–81. January 1998.
- "Prescribed Fires". SmokeyBear.com. Retrieved 21 November 2008.
- Karki, 14.
- Manning, Richard (1 December 2007). "Our Trial by Fire". onearth.org. Retrieved 7 January 2009.
- "Extreme Events: Wild & Forest Fire". National Oceanic and Atmospheric Administration (NOAA). Retrieved 7 January 2009.
- Pyne, S.J. 1984. Introduction to wildland fire: Fire management in the United States. New York, NY: John Wiley & Sons, Inc.
- Stephens, Scott L.; Ruth (2005). "Federal Forest Fire Policy in US.". Ecological Application. 15 (2): 532–542. doi:10.1890/04-0545.
- "National Interagency Fire Center". Nifc.gov. Retrieved 19 January 2014.
- Graff, Trevor. "Congressional wildfire bill would adjust state wildland fire funding". Retrieved 19 September 2014.
- "NFPA 1141: Standard for Fire Protection Infrastructure for Land Development in Wildland, Rural, and Suburban Areas". Nfpa.org. 31 May 2011. Retrieved 19 January 2014.
- "NFPA 1144: Standard for Reducing Structure Ignition Hazards from Wildland Fire". Nfpa.org. Retrieved 19 January 2014.
- United States Department of Agriculture . FY 2012 Budget Summary
- Western Forestry Leadership Coalition (2010). "The true cost of wildfire in the western U.S." (PDF). Retrieved 17 April 2011.
- San-Miguel-Ayanz, et al., 362.
- "An Integration of Remote Sensing, GIS, and Information Distribution for Wildfire Detection and Management" (PDF). Photogrammetric Engineering and Remote Sensing. Western Disaster Center. 64 (10): 977–985. October 1998. Retrieved 26 June 2009.
- "Radio communication keeps rangers in touch". Canadian Broadcasting Corporation (CBC) Digital Archives. 21 August 1957. Retrieved 6 February 2009.
- "Wildfire Detection and Control". Alabama Forestry Commission. Retrieved 12 January 2009.
- "Evaluation of three wildfire smoke detection systems", 4
- Fok, Chien-Liang; Roman, Gruia-Catalin & Lu, Chenyang (29 November 2004). "Mobile Agent Middleware for Sensor Networks: An Application Case Study". Washington University in St. Louis. Archived from the original (PDF) on 3 January 2007. Retrieved 15 January 2009.
- Chaczko, Z.; Ahmad, F. (July 2005). "Wireless Sensor Network Based System for Fire Endangered Areas". Third International Conference on Information Technology and Applications. 2 (4–7): 203–207. doi:10.1109/ICITA.2005.313. ISBN 0-7695-2316-1. Retrieved 15 January 2009.
- "Wireless Weather Sensor Networks for Fire Management". University of Montana - Missoula. Retrieved 19 January 2009.
- Solobera, Javier (9 April 2010). "Detecting Forest Fires using Wireless Sensor Networks with Waspmote". Libelium Comunicaciones Distribuidas S.L.
- Thomson, Elizabeth A. (23 September 2008). "Preventing forest fires with tree power". Massachusetts Institute of Technology (MIT) News. Retrieved 15 January 2009.
- "Evaluation of three wildfire smoke detection systems", 6
- "SDSU Tests New Wildfire-Detection Technology". San Diego, CA: San Diego State University. 23 June 2005. Archived from the original on 1 September 2006. Retrieved 12 January 2009.
- San-Miguel-Ayanz, et al., 366-369, 373-375.
- Rochester Institute of Technology (4 October 2003). "New Wildfire-detection Research Will Pinpoint Small Fires From 10,000 feet". ScienceDaily. Retrieved 12 January 2009.
- "Airborne campaign tests new instrumentation for wildfire detection". European Space Agency. 11 October 2006. Retrieved 12 January 2009.
- "World fire maps now available online in near-real time". European Space Agency. 24 May 2006. Retrieved 12 January 2009.
- "Earth from Space: California's 'Esperanza' fire". European Space Agency. 11 March 2006. Retrieved 12 January 2009.
- "Hazard Mapping System Fire and Smoke Product". National Oceanic and Atmospheric Administration (NOAA) Satellite and Information Service. Retrieved 15 January 2009.
- Ramachandran, Chandrasekar; Misra, Sudip & Obaidat, Mohammad S. (9 June 2008). "A probabilistic zonal approach for swarm-inspired wildfire detection using sensor networks". Int. J. Commun. Syst. 21 (10): 1047–1073. doi:10.1002/dac.937.
- Miller, Jerry; Borne, Kirk; Thomas, Brian; Huang Zhenping & Chi, Yuechen. "Automated Wildfire Detection Through Artificial Neural Networks" (PDF). NASA. Retrieved 15 January 2009.
- Zhang, Junguo; Li, Wenbin; Han, Ning & Kan, Jiangming (September 2008). "Forest fire detection system based on a ZigBee wireless sensor network" (PDF). Frontiers of Forestry in China. Higher Education Press, co-published with Springer-Verlag GmbH. 3 (3): 369–374. doi:10.1007/s11461-008-0054-3. Retrieved 26 June 2009.
About This Chapter
ASVAB: Numbers & Operations - Chapter Summary
This chapter will go over the order of steps to take when simplifying and solving various expressions, and it also discusses the PEMDAS shortcut that can be used. The lessons in this Numbers and Operations chapter can help to ensure that you're also prepared for any ASVAB questions requiring familiarity with:
- Factoring in algebra
- Using exponential notation
- Arithmetic calculations with signed numbers
- Finding prime factorization of a number
- Finding square roots and understanding the order of operations
- Identifying the greatest common factor and least common multiple
- Simplifying square roots when not a perfect square
The helpful examples in this chapter, presented both in video and in text, will reinforce your understanding of how to solve a variety of problems. You can conveniently demonstrate your grasp of the material by taking the practice quiz included with each lesson.
Objectives of the ASVAB: Numbers & Operations Chapter
The ASVAB is used as an indicator of how well someone will perform in the military, based on the competencies measured. The multiple-choice practice questions that you can take across this chapter encompass subject matter that you could be asked about on the ASVAB's Mathematics Knowledge subtest. All questions belonging to this subtest make up around 11% of the ASVAB's questions overall.
1. What is Factoring in Algebra? - Definition & Example
Factoring with ordinary numbers involves knowing that 6 is the product of 2 and 3. But what about factoring in algebra? In this lesson, we'll learn the essential elements of algebra factoring.
2. How to Find the Greatest Common Factor
If the factors of a number are the different numbers that you can multiply together to get that original number, then the greatest common factor of two numbers is just the biggest one that both have in common. See some examples of what I'm talking about here!
3. How to Find the Prime Factorization of a Number
The prime factorization of a number involves breaking that number down to its smallest parts. This lesson will show you two different ways to discover the prime factorization of any number.
4. Using Prime Factorizations to Find the Least Common Multiples
Finding the least common multiple can seem like a lot of work. But we can use prime factorization as a shortcut. Find out how and practice finding least common multiples in this lesson.
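If you like to see these ideas spelled out step by step, the short sketch below factors a number into primes and then uses those factorizations to build the least common multiple, as the lessons above describe. It is only an illustration of the method; the function names are our own and nothing here appears on the ASVAB itself.

```python
from collections import Counter

def prime_factorization(n):
    """Return the prime factorization of n as a Counter, e.g. 12 -> {2: 2, 3: 1}."""
    factors = Counter()
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] += 1
            n //= d
        d += 1
    if n > 1:
        factors[n] += 1
    return factors

def lcm(a, b):
    """Least common multiple via prime factorizations: take the higher power of each prime."""
    fa, fb = prime_factorization(a), prime_factorization(b)
    result = 1
    for prime in set(fa) | set(fb):
        result *= prime ** max(fa[prime], fb[prime])
    return result

print(prime_factorization(60))  # Counter({2: 2, 3: 1, 5: 1}), i.e. 60 = 2^2 * 3 * 5
print(lcm(12, 18))              # 36
```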
5. How to Use Exponential Notation
Exponential notation is a mathematical method for writing longer multiplication problems in a simplified manner. This lesson will define how to work with exponential notation and give some examples of how it is used.
6. Evaluating Square Roots of Perfect Squares
Squares and square roots are inverse, or opposite, operations involving radicals. Learn how to determine the square root of perfect squares in this lesson.
7. Simplifying Square Roots When not a Perfect Square
Numbers that are imperfect squares are those that, when evaluated, do not give solutions that are integers. The proper mathematical way to simplify these imperfect squares is discussed in this lesson.
8. What Is The Order of Operations in Math? - Definition & Examples
The order of operations is the steps used to simplify any mathematical expression. In this video, learn how to solve problems using these steps and easy tricks to remember them.
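As a quick worked example of the PEMDAS order (parentheses, exponents, multiplication and division, addition and subtraction), the snippet below evaluates one expression step by step. The expression itself is just an illustration.

```python
# Evaluate 3 + 4 * (2 + 1) ** 2 / 6 following PEMDAS.
step1 = 2 + 1            # Parentheses first: 3
step2 = step1 ** 2       # Exponents next: 9
step3 = 4 * step2 / 6    # Multiplication and division, left to right: 6.0
result = 3 + step3       # Addition last: 9.0
print(result)                    # 9.0
print(3 + 4 * (2 + 1) ** 2 / 6)  # 9.0 -- Python follows the same order of operations
```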
9. Arithmetic Calculations with Signed Numbers
Signed numbers are often referred to as integers. Integers include both positive and negative numbers. In this lesson, you will learn how to add, subtract, multiply, and divide integers.
Earning College Credit
Did you know… We have over 200 college courses that prepare you to earn credit by exam that is accepted by over 1,500 colleges and universities. You can test out of the first two years of college and save thousands off your degree. Anyone can earn credit-by-exam regardless of age or education level.
To learn more, visit our Earning Credit Page
Transferring credit to the school of your choice
Not sure what college you want to attend yet? Study.com has thousands of articles about every imaginable degree, area of study and career path that can help you find the school that's right for you.
Other chapters within the ASVAB Armed Services Vocational Aptitude Battery: Practice & Study Guide course
- ASVAB: Cell Biology
- ASVAB: Respiration & Photosynthesis
- ASVAB: Cell Division
- ASVAB: Genetics
- ASVAB: Evolution
- ASVAB: Classification
- ASVAB: Plant Biology
- ASVAB: Animal Biology
- ASVAB: Nutrition
- ASVAB: The Human Digestive System
- ASVAB: The Human Respiratory & Circulatory Systems
- ASVAB: The Human Excretory System
- ASVAB: The Human Endocrine System
- ASVAB: The Human Nervous System
- ASVAB: Human Reproduction
- ASVAB: Ecology
- ASVAB: Matter & Atomic Structure
- ASVAB: The Periodic Table
- ASVAB: Chemical Reactions & Bonding
- ASVAB: Measurement
- ASVAB: Nuclear Chemistry
- ASVAB: Motion
- ASVAB: Force
- ASVAB: Energy & Work
- ASVAB: Fluids
- ASVAB: Waves & Sound
- ASVAB: Geology
- ASVAB: Meteorology
- ASVAB: Oceanography
- ASVAB: Astronomy
- ASVAB: Understanding Words & Context
- ASVAB: Paragraph Comprehension
- ASVAB: Numbers
- ASVAB: Mathematical Operations
- ASVAB: Math Word Problems
- ASVAB: Fractions
- ASVAB: Basic Expressions
- ASVAB: Solving Equations
- ASVAB: Exponents
- ASVAB: Polynomials
- ASVAB: Geometry & Algebra
- ASVAB: Electric Force & Charge
- ASVAB: Magnetism
- ASVAB: Electric Circuits
- ASVAB: Power, Devices & Computers
- ASVAB: Mechanical Comprehension
- ASVAB: Assembling Objects
- ASVAB Armed Services Vocational Aptitude Battery Flashcards
Inflation is one of the most important economic concepts. At its most basic level, inflation is simply a rise in prices. Over time, as the cost of goods and services increases, the value of, for example, a dollar goes down, because you won't be able to purchase as much with that dollar as you could have last month or last year.
When the purchasing power of a currency starts to decline steadily and dramatically, the result is inflation.
There are three main types of inflation, based on the rate at which prices rise:
* When prices rise by up to 3% per annum (year), it is called creeping inflation. It is the mildest form of inflation, also known as mild inflation or low inflation.
* If prices rise at double- or triple-digit rates such as 30%, 400%, or 999% per annum, the situation can be termed galloping inflation.
* When prices rise above 1000% per annum (a four-digit inflation rate), it is termed hyperinflation.
Hyperinflation is a stage of very high inflation. While economies seem to survive under galloping inflation, nothing good can be said about this type. Hyperinflation occurs when prices go out of control and the monetary authorities are unable to impose any check on them. The two worst examples of hyperinflation recorded in world history are those experienced by Germany in 1923 and Hungary in 1946.
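A minimal sketch of this classification in code is shown below. The category boundaries follow the percentages given above; the bands between those figures are not spelled out in the text, so the exact cutoffs here are simplifying assumptions, and the function name is illustrative.

```python
def classify_inflation(annual_rate_pct):
    """Label an annual inflation rate using the rough bands described above."""
    if annual_rate_pct <= 3:
        return "creeping (mild/low) inflation"
    elif annual_rate_pct < 1000:
        return "galloping inflation"
    else:
        return "hyperinflation"

for rate in [2, 30, 400, 1500]:
    print(rate, "% per annum ->", classify_inflation(rate))
```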
Apart from that, economists have formulated several definitions and theories. There is no single cause of inflation.
According to Keynesian economics there are two basic types of inflation: demand-pull inflation and cost-push inflation. The concept of demand-pull inflation deals with the idea that the demand for goods and services in an economy is greater than the supply. For example, fuel prices can rise when there is no change in oil production but more people want to drive cars on the roads, so they will have to share
Area of a disk
The area of a disk, more commonly called the area of a circle, of radius r is equal to πr². Here the symbol π (Greek letter pi) denotes the constant ratio of the circumference of a circle to its diameter, or of the area of a circle to the square of its radius. Since the area of a regular polygon is half its perimeter times its apothem, and a regular polygon becomes a circle as the number of sides increases, the area of a disk is half its circumference times its radius (i.e. 1⁄2 × 2πr × r).
- 1 History
- 2 Using polygons
- 3 Archimedes's proof
- 4 Rearrangement proof
- 5 Onion proof
- 6 Triangle proof
- 7 Fast approximation
- 8 Dart approximation
- 9 Finite rearrangement
- 10 Generalizations
- 11 Triangle method
- 12 Bibliography
- 13 References
- 14 External links
Modern mathematics can obtain the area using the methods of integral calculus or its more sophisticated offspring, real analysis. However, the area of circles was studied by the Ancient Greeks. Eudoxus of Cnidus in the fifth century B.C. had found that the areas of circles are proportional to the squares of their radii. In his book Measurement of a Circle, the great mathematician Archimedes used the tools of Euclidean geometry to show that the area inside a circle is equal to that of a right triangle whose base has the length of the circle's circumference and whose height equals the circle's radius. The circumference is 2πr, and the area of a triangle is half the base times the height, yielding the area πr² for the disk. Prior to Archimedes, Hippocrates of Chios was the first to show that the area of a disk is proportional to the square of its diameter, as part of his quadrature of the lune of Hippocrates, but he did not identify the constant of proportionality.
The area of a regular polygon is half its perimeter times the apothem. As the number of sides of the regular polygon increases, the polygon tends to a circle, and the apothem tends to the radius. This suggests that the area of a circle is half its circumference times the radius.
Following Archimedes (c. 260 BCE), compare a circle to a right triangle whose base has the length of the circle's circumference and whose height equals the circle's radius. If the area of the circle is not equal to that of the triangle, then it must be either greater or less. We eliminate each of these by contradiction, leaving equality as the only possibility. We use regular polygons in the same way.
Suppose the circle area, C, may be greater than the triangle area, T = 1⁄2cr. Let E denote the excess amount. Inscribe a square in the circle, so that its four corners lie on the circle. Between the square and the circle are four segments. If the total area of those gaps, G4, is greater than E, split each arc in half. This makes the inscribed square into an inscribed octagon, and produces eight segments with a smaller total gap, G8. Continue splitting until the total gap area, Gn, is less than E. Now the area of the inscribed polygon, Pn = C − Gn, must be greater than that of the triangle.
But this forces a contradiction, as follows. Draw a perpendicular from the center to the midpoint of a side of the polygon; its length, h, is less than the circle radius. Also, let each side of the polygon have length s; then the sum of the sides, ns, is less than the circle circumference. The polygon area consists of n equal triangles with height h and base s, thus equals 1⁄2nhs. But since h < r and ns < c, the polygon area must be less than the triangle area, 1⁄2cr, a contradiction. Therefore our supposition that C might be greater than T must be wrong.
Suppose the circle area may be less than the triangle area. Let D denote the deficit amount. Circumscribe a square, so that the midpoint of each edge lies on the circle. If the total area gap between the square and the circle, G4, is greater than D, slice off the corners with circle tangents to make a circumscribed octagon, and continue slicing until the gap area is less than D. The area of the polygon, Pn, must be less than T.
This, too, forces a contradiction. For, a perpendicular to the midpoint of each polygon side is a radius, of length r. And since the total side length is greater than the circumference, the polygon consists of n identical triangles with total area greater than T. Again we have a contradiction, so our supposition that C might be less than T must be wrong as well.
Therefore it must be the case that the area of the circle is precisely the same as the area of the triangle. This concludes the proof.
Following Satō Moshun (Smith & Mikami 1914, pp. 130–132) and Leonardo da Vinci (Beckmann 1976, p. 19), we can use inscribed regular polygons in a different way. Suppose we inscribe a hexagon. Cut the hexagon into six triangles by splitting it from the center. Two opposite triangles both touch two common diameters; slide them along one so the radial edges are adjacent. They now form a parallelogram, with the hexagon sides making two opposite edges, one of which is the base, s. Two radial edges form slanted sides, and the height is h (as in the Archimedes proof). In fact, we can assemble all the triangles into one big parallelogram by putting successive pairs next to each other. The same is true if we increase to eight sides and so on. For a polygon with 2n sides, the parallelogram will have a base of length ns, and a height h. As the number of sides increases, the length of the parallelogram base approaches half the circle circumference, and its height approaches the circle radius. In the limit, the parallelogram becomes a rectangle with width πr and height r.
Unit disk area by rearranging n polygons (side refers to the inscribed polygon; base, height, and area to the rearranged parallelogram):

 n     side        base        height      area
 4     1.4142136   2.8284271   0.7071068   2.0000000
 6     1.0000000   3.0000000   0.8660254   2.5980762
 8     0.7653669   3.0614675   0.9238795   2.8284271
10     0.6180340   3.0901699   0.9510565   2.9389263
12     0.5176381   3.1058285   0.9659258   3.0000000
14     0.4450419   3.1152931   0.9749279   3.0371862
16     0.3901806   3.1214452   0.9807853   3.0614675
96     0.0654382   3.1410320   0.9994646   3.1393502
 ∞     1/∞         π           1           π
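The table can be checked numerically: for a regular n-gon inscribed in the unit circle, each side has length 2 sin(π/n), the height (apothem) is cos(π/n), and the rearranged parallelogram has base equal to half the polygon's perimeter. The short sketch below is just that arithmetic written out, and it reproduces the rows above.

```python
import math

def rearranged_parallelogram(n):
    """Side, base, height, and area of the parallelogram built from an
    inscribed regular n-gon in the unit circle (as in the table above)."""
    side = 2 * math.sin(math.pi / n)    # length of one polygon side
    height = math.cos(math.pi / n)      # apothem of the polygon
    base = n * side / 2                 # half the polygon perimeter
    return side, base, height, base * height

for n in (4, 6, 8, 10, 12, 14, 16, 96):
    side, base, height, area = rearranged_parallelogram(n)
    print(f"{n:3d} {side:.7f} {base:.7f} {height:.7f} {area:.7f}")
# As n grows, base -> pi, height -> 1, and the area tends to pi.
```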
Using calculus, we can sum the area incrementally, partitioning the disk into thin concentric rings like the layers of an onion. This is the method of shell integration in two dimensions. For an infinitesimally thin ring of the "onion" of radius t, the accumulated area is 2πt dt, the circumferential length of the ring times its infinitesimal width (you can approach this ring by a rectangle with width=2πt and height=dt). This gives an elementary integral for a disk of radius r.
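In symbols, summing the ring areas from t = 0 out to t = r gives A = ∫₀ʳ 2πt dt = πr², recovering the familiar formula.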
Similar to the onion proof outlined above, we could exploit calculus in a different way in order to arrive at the formula for the area of a circle. In this case, we imagine dividing up a circle into triangles, each with a base of length equal to the circle's radius and a height that is infinitesimally small. The area of each of these triangles is equal to 1/2 * r * dt. By summing up (integrating) all of the areas of these triangles, we arrive at the formula for the circle's area:
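Summing (integrating) over the full circumference, with t running from 0 to 2πr, gives A = ∫ 1⁄2 r dt = 1⁄2 × r × 2πr = πr².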
The calculations Archimedes used to approximate the area numerically were laborious, and he stopped with a polygon of 96 sides. A faster method uses ideas of Willebrord Snell (Cyclometricus, 1621), further developed by Christiaan Huygens (De Circuli Magnitudine Inventa, 1654), described in Gerretsen & Verdenduin (1983, pp. 243–250).
Archimedes' doubling method
Given a circle, let un be the perimeter of an inscribed regular n-gon, and let Un be the perimeter of a circumscribed regular n-gon. Then un and Un are lower and upper bounds for the circumference of the circle that become sharper and sharper as n increases, and their average (un + Un)/2 is an especially good approximation to the circumference. To compute un and Un for large n, Archimedes derived the following doubling formulae:
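In modern notation, and consistent with the geometric-mean and harmonic-mean relations derived below (and with the values in the table that follows), these doubling formulae can be written as U2n = 2·un·Un/(un + Un) (a harmonic mean) and u2n = √(un·U2n) (a geometric mean).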
Starting from a hexagon, Archimedes doubled n four times to get a 96-gon, which gave him a good approximation to the circumference of the circle.
In modern notation, we can reproduce his computation (and go farther) as follows. For a unit circle, an inscribed hexagon has u6 = 6, and a circumscribed hexagon has U6 = 4√3. Doubling seven times yields
Archimedes doubling seven times; n = 6×2^k.

k    n     un          Un          (un + Un)/4
0      6   6.0000000   6.9282032   3.2320508
1     12   6.2116571   6.4307806   3.1606094
2     24   6.2652572   6.3193199   3.1461443
3     48   6.2787004   6.2921724   3.1427182
4     96   6.2820639   6.2854292   3.1418733
5    192   6.2829049   6.2837461   3.1416628
6    384   6.2831152   6.2833255   3.1416102
7    768   6.2831678   6.2832204   3.1415970
(Here (un + Un)/2 approximates the circumference of the unit circle, which is 2π, so (un + Un)/4 approximates π.)
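The doubling computation is easy to reproduce numerically. The sketch below iterates the harmonic-mean and geometric-mean recurrences from the hexagon values u6 = 6 and U6 = 4√3 and prints the same columns as the table; it is a plain re-derivation of those numbers, not code from any particular source.

```python
import math

# Reproduce Archimedes' doubling table for the unit circle.
u, U = 6.0, 4 * math.sqrt(3)   # inscribed and circumscribed hexagon perimeters
n = 6
print(f"{n:4d} {u:.7f} {U:.7f} {(u + U) / 4:.7f}")
for _ in range(7):             # double the number of sides seven times
    U = 2 * u * U / (u + U)    # harmonic mean: new circumscribed perimeter
    u = math.sqrt(u * U)       # geometric mean: new inscribed perimeter
    n *= 2
    print(f"{n:4d} {u:.7f} {U:.7f} {(u + U) / 4:.7f}")
# The last line prints (u + U)/4 ≈ 3.1415970, the table's value for n = 768.
```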
The last entry of the table has 355⁄113 as one of its best rational approximations; i.e., there is no better approximation among rational numbers with denominator up to 113. The number 355⁄113 is also an excellent approximation to π, better than any other rational number with denominator less than 16604.
The Snell–Huygens refinement
Snell proposed (and Huygens proved) a tighter bound than Archimedes':
This for n = 48 gives a better approximation (about 3.14159292) than Archimedes' method for n = 768.
Derivation of Archimedes' doubling formulae
Let one side of an inscribed regular n-gon have length sn and touch the circle at points A and B. Let A′ be the point opposite A on the circle, so that A′A is a diameter, and A′AB is an inscribed triangle on a diameter. By Thales' theorem, this is a right triangle with right angle at B. Let the length of A′B be cn, which we call the complement of sn; thus cn² + sn² = (2r)². Let C bisect the arc from A to B, and let C′ be the point opposite C on the circle. Thus the length of CA is s2n, the length of C′A is c2n, and C′CA is itself a right triangle on diameter C′C. Because C bisects the arc from A to B, C′C perpendicularly bisects the chord from A to B, say at P. Triangle C′AP is thus a right triangle, and is similar to C′CA since they share the angle at C′. Thus all three corresponding sides are in the same proportion; in particular, we have C′A : C′C = C′P : C′A and AP : C′A = CA : C′C. The center of the circle, O, bisects A′A, so we also have triangle OAP similar to A′AB, with OP half the length of A′B. In terms of side lengths, this gives us
In the first equation C′P is C′O+OP, length r+1⁄2cn, and C′C is the diameter, 2r. For a unit circle we have the famous doubling equation of Ludolph van Ceulen,
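c2n = √(2 + cn), which follows from the first proportion: c2n² = C′C · C′P = 2r(r + 1⁄2cn), and this equals 2 + cn when r = 1.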
If we now circumscribe a regular n-gon, with side A″B″ parallel to AB, then OAB and OA″B″ are similar triangles, with A″B″ : AB = OC : OP. Call the circumscribed side Sn; then this is Sn : sn = 1 : 1⁄2cn. (We have again used that OP is half the length of A′B.) Thus we obtain
Call the inscribed perimeter un = nsn, and the circumscribed perimeter Un = nSn. Then combining equations, we have
This gives a geometric mean equation.
We can also deduce
This gives a harmonic mean equation.
When more efficient methods of finding areas are not available, we can resort to “throwing darts”. This Monte Carlo method uses the fact that if random samples are taken uniformly scattered across the surface of a square in which a disk resides, the proportion of samples that hit the disk approximates the ratio of the area of the disk to the area of the square. This should be considered a method of last resort for computing the area of a disk (or any shape), as it requires an enormous number of samples to get useful accuracy; an estimate good to 10−n requires about 100n random samples (Thijsse 2006, p. 273).
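A minimal sketch of the dart-throwing estimate: sample points uniformly in the square [−1, 1] × [−1, 1], count the fraction that land inside the unit disk, and multiply by the square's area of 4. The sample size below is only illustrative, and, as noted above, convergence is slow.

```python
import random

def monte_carlo_disk_area(samples=1_000_000):
    """Estimate the area of the unit disk by uniform sampling in [-1, 1] x [-1, 1]."""
    hits = 0
    for _ in range(samples):
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        if x * x + y * y <= 1:     # the point landed inside the disk
            hits += 1
    return 4 * hits / samples      # square area (4) times the hit fraction

print(monte_carlo_disk_area())     # roughly 3.14; accuracy improves only slowly
```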
We have seen that by partitioning the disk into an infinite number of pieces we can reassemble the pieces into a rectangle. A remarkable fact discovered relatively recently (Laczkovich 1990) is that we can dissect the disk into a large but finite number of pieces and then reassemble the pieces into a square of equal area. This is called Tarski's circle-squaring problem. The nature of Laczkovich's proof is such that it proves the existence of such a partition (in fact, of many such partitions) but does not exhibit any particular partition.
We can stretch a disk to form an ellipse. Because this stretch is a linear transformation of the plane, it has a distortion factor which will change the area but preserve ratios of areas. This observation can be used to compute the area of an arbitrary ellipse from the area of a unit circle.
Consider the unit circle circumscribed by a square of side length 2. The transformation sends the circle to an ellipse by stretching or shrinking the horizontal and vertical diameters to the major and minor axes of the ellipse. The square gets sent to a rectangle circumscribing the ellipse. The ratio of the area of the circle to the square is π/4, which means the ratio of the ellipse to the rectangle is also π/4. Suppose a and b are the lengths of the major and minor axes of the ellipse. Since the area of the rectangle is ab, the area of the ellipse is πab/4.
We can also consider analogous measurements in higher dimensions. For example, we may wish to find the volume inside a sphere. When we have a formula for the surface area, we can use the same kind of “onion” approach we used for the disk.
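For example, for the ball of radius r, summing thin spherical shells of surface area 4πt² gives V = ∫₀ʳ 4πt² dt = 4⁄3 πr³.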
This approach is a slight modification of the onion proof. Consider unwrapping the concentric circles into straight strips. This forms a right-angled triangle with r as its height and 2πr (the unwrapped outermost ring) as its base.
Finding the area of this triangle then gives the area of the circle: 1⁄2 × 2πr × r = πr².
- Archimedes (c. 260 BCE), "Measurement of a circle", in T. L. Heath (trans.), The Works of Archimedes, Dover (published 2002), pp. 91–93, ISBN 978-0-486-42084-4
(Originally published by Cambridge University Press, 1897, based on J. L. Heiberg's Greek version.)
- Beckmann, Petr (1976), A History of Pi, St. Martin's Griffin, ISBN 978-0-312-38185-1
- Gerretsen, J.; Verdenduin, P. (1983), "Chapter 8: Polygons and Polyhedra", in H. Behnke, F. Bachmann, K. Fladt, H. Kunle (eds.), S. H. Gould (trans.), Fundamentals of Mathematics, Volume II: Geometry, MIT Press, pp. 243–250, ISBN 978-0-262-52094-2
(Originally Grundzüge der Mathematik, Vandenhoeck & Ruprecht, Göttingen, 1971.)
- Laczkovich, Miklós (1990), Equidecomposability and discrepancy: A solution to Tarski's circle squaring problem, Journal für die reine und angewandte Mathematik 404: 77–117, MR 1037431
- Lange, Serge (1985), "The length of the circle", Math! : Encounters with High School Students, Springer-Verlag, ISBN 978-0-387-96129-3
- Smith, David Eugene; Mikami, Yoshio (1914), A history of Japanese mathematics, Chicago: Open Court Publishing, pp. 130–132, ISBN 978-0-87548-170-8
- Thijsse, J. M. (2006), Computational Physics, Cambridge University Press, p. 273, ISBN 978-0-521-57588-1
- Stewart, James (2003). Single variable calculus early transcendentals. (5th. ed.). Toronto ON: Brook/Cole. p. 3. ISBN 0-534-39330-6. "However, by indirect reasoning, Eudoxus (fifth century B.C.) used exhaustion to prove the familiar formula for the area of a circle: "
- Heath, Thomas L. (2003), A Manual of Greek Mathematics, Courier Dover Publications, pp. 121–132, ISBN 0-486-43231-9.
- Hill, George. Lessons in Geometry: For the Use of Beginners, page 124 (1894).
- Not all best rational approximations are the convergents of the continued fraction!
When a ray OA starting from its original position, rotates about its fixed point O, to a final position OB, then an angle AOB is formed. Point O is called the Vertex of the angle.
9.2.1 Measure of the angle: It is defined as the amount of rotation from initial side to final side.
9.2.1.1 Degree measure (Sexagesimal System): The number of degrees on the circumference of a circle between the initial and final sides of the angle is called its degree measure. Each degree is divided into 60 equal parts called minutes, and each minute is divided into another 60 equal parts called seconds.
Degree is denoted by the symbol (o), minute is denoted by the symbol (‘) and second is denoted by the symbol (“).
Example: 35o 34’ 23’’ means 35 degrees 34 minutes and 23 seconds.
Types of angles: For an angle θ
Acute angle: 0° < θ < 90°
Obtuse angle: 90° < θ < 180°
Reflex angle: 180° < θ < 360°
Right angle: θ = 90°
Straight angle: θ = 180°
9.2.1.2 Radian measure: It is the measure of an angle subtended at the centre of a circle of radius r by an arc of length r. To convert from degrees to radians, multiply by (π/180°). To convert from radians to degrees, multiply by (180°/π).
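The two conversion rules amount to a single scale factor; a small sketch (function names are our own):

```python
import math

def degrees_to_radians(deg):
    """Multiply by pi/180 to convert degrees to radians."""
    return deg * math.pi / 180

def radians_to_degrees(rad):
    """Multiply by 180/pi to convert radians to degrees."""
    return rad * 180 / math.pi

print(degrees_to_radians(180))            # 3.141592653589793 (pi radians)
print(radians_to_degrees(math.pi / 2))    # 90.0 degrees
```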
9.2.1.3 Grade measure: Each right angle is divided into 100 equal parts known as grades. Each grade is subdivided into 100 equal parts called minutes. Each minute is subdivided into 100 equal parts known as seconds.
The Hubbert curve is an approximation of the production rate of a resource over time. It is a symmetric logistic distribution curve, often confused with the "normal" gaussian function. It first appeared in "Nuclear Energy and the Fossil Fuels," geologist M. King Hubbert's 1956 presentation to the American Petroleum Institute, as an idealized symmetric curve, during his tenure at the Shell Oil Company. It has gained a high degree of popularity in the scientific community for predicting the depletion of various natural resources. The curve is the main component of Hubbert peak theory, which has led to the rise of peak oil concerns. Basing his calculations on the peak of oil well discovery in 1948, Hubbert used his model in 1956 to create a curve which predicted that oil production in the contiguous United States would peak around 1970.
The prototypical Hubbert curve is a probability density function of a logistic distribution curve. It is not a gaussian function (which is used to plot normal distributions), but the two have a similar appearance. The density of a Hubbert curve approaches zero more slowly than a gaussian function:
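Up to shifting and scaling the time axis, the curve is the density of the logistic distribution, x(t) = e^(−t) / (1 + e^(−t))². Its tails fall off roughly like e^(−|t|), whereas a gaussian's fall off like e^(−t²), which is why the Hubbert curve approaches zero more slowly.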
The graph of a Hubbert curve consists of three key elements:
- a gradual rise from zero resource production that then increases quickly
- a "Hubbert peak", representing the maximum production level
- a drop from the peak that then follows a steep production decline.
The actual shape of a graph of real world production trends is determined by various factors, such as development of enhanced production techniques, availability of competing resources, and government regulations on production or consumption. Because of such factors, real world Hubbert curves are often not symmetrical.
Using the curve, Hubbert modeled the rate of petroleum production for several regions, determined by the rate of new oil well discovery, and extrapolated a world production curve. The relative steepness of decline in this projection is the main concern in peak oil discussions. This is because a steep drop in the production implies that global oil production will decline so rapidly that the world will not have enough time to develop sources of energy to replace the energy now used from oil, possibly leading to drastic social and economic impacts.
Hubbert models have been used to predict the production trends of various resources, such as natural gas (Hubbert's attempt in the late 1970s resulted in an inaccurate prediction that natural gas production would fall dramatically in the 1980s), coal, fissionable materials, helium, transition metals (such as copper), and water. At least one researcher has attempted to create a Hubbert curve for the whaling industry and caviar, while another applied it to cod.
After the predicted early-1970s peak of oil production in the U.S., production declined over the following 35 years in a pattern closely matching the Hubbert curve. However, new extraction methods began reversing this trend beginning in the mid-2000s decade, with production reaching 10.07 million b/d in November 2017 – the highest monthly level of crude oil production in U.S. history. As such, the Hubbert curve has to be calculated separately for different oil provinces, whose exploration has started at a different time, and oil extracted by new techniques, sometimes called unconventional oil, resulting in individual Hubbert cycles. The Hubbert Curve for US oil production is generally measured in years.
- Bioeconomics (biophysical)
- Energy accounting
- Gaussian function, a "bell curve" shape
- M. King Hubbert. "Nuclear Energy and the Fossil Fuels" (PDF). Drilling and Production Practice (1956) American Petroleum Institute & Shell Development Co. Publication No. 95, See Pp 9-11, 21-22. Archived from the original (PDF) on 2008-05-27.
- Ugo Bardi and Leigh Yaxley. Proceedings of the 4th ASPO Workshop, Lisbon 2005
- Jean Laherrere. Multi-Hubbert Modeling. July, 1997.
- Patzek, Tad (2008-05-17). "Exponential growth, energetic Hubbert cycles, and the advancement of technology". Archives of Mining Sciences. 53 (2): 131–159. Retrieved 2018-11-17.
DNA sequencing is the process of determining the nucleic acid sequence – the order of nucleotides in DNA. It includes any method or technology that is used to determine the order of the four bases: adenine, guanine, cytosine, and thymine. The advent of rapid DNA sequencing methods has greatly accelerated biological and medical research and discovery.
Knowledge of DNA sequences has become indispensable for basic biological research, DNA Genographic Projects and in numerous applied fields such as medical diagnosis, biotechnology, forensic biology, virology and biological systematics. Comparing healthy and mutated DNA sequences can diagnose different diseases including various cancers, characterize antibody repertoire, and can be used to guide patient treatment. Having a quick way to sequence DNA allows for faster and more individualized medical care to be administered, and for more organisms to be identified and cataloged.
The rapid speed of sequencing attained with modern DNA sequencing technology has been instrumental in the sequencing of complete DNA sequences, or genomes, of numerous types and species of life, including the human genome and other complete DNA sequences of many animal, plant, and microbial species.
The first DNA sequences were obtained in the early 1970s by academic researchers using laborious methods based on two-dimensional chromatography. Following the development of fluorescence-based sequencing methods with a DNA sequencer, DNA sequencing has become easier and orders of magnitude faster.
DNA sequencing may be used to determine the sequence of individual genes, larger genetic regions (i.e. clusters of genes or operons), full chromosomes, or entire genomes of any organism. DNA sequencing is also the most efficient way to indirectly sequence RNA or proteins (via their open reading frames). In fact, DNA sequencing has become a key technology in many areas of biology and other sciences such as medicine, forensics, and anthropology.
Sequencing is used in molecular biology to study genomes and the proteins they encode. Information obtained using sequencing allows researchers to identify changes in genes and noncoding DNA (including regulatory sequences), associations with diseases and phenotypes, and identify potential drug targets.
Since DNA is an informative macromolecule in terms of transmission from one generation to another, DNA sequencing is used in evolutionary biology to study how different organisms are related and how they evolved. In February 2021, scientists reported, for the first time, the sequencing of DNA from animal remains, a mammoth in this instance, over a million years old, the oldest DNA sequenced to date.
The field of metagenomics involves identification of organisms present in a body of water, sewage, dirt, debris filtered from the air, or swab samples from organisms. Knowing which organisms are present in a particular environment is critical to research in ecology, epidemiology, microbiology, and other fields. Sequencing enables researchers to determine which types of microbes may be present in a microbiome, for example.
As most viruses are too small to be seen by a light microscope, sequencing is one of the main tools in virology to identify and study the virus. Viral genomes can be composed of DNA or RNA. RNA viruses are more time-sensitive for genome sequencing, as they degrade faster in clinical samples. Traditional Sanger sequencing and next-generation sequencing are used to sequence viruses in basic and clinical research, as well as for the diagnosis of emerging viral infections, molecular epidemiology of viral pathogens, and drug-resistance testing. There are more than 2.3 million unique viral sequences in GenBank. Recently, NGS has surpassed traditional Sanger sequencing as the most popular approach for generating viral genomes.
During the avian influenza outbreak in Hong Kong in the late 1990s, viral sequencing determined that the influenza sub-type had originated through reassortment between quail and poultry. This led to legislation in Hong Kong that prohibited selling live quail and poultry together at market. Viral sequencing can also be used to estimate when a viral outbreak began, using a molecular clock technique.
Medical technicians may sequence genes (or, theoretically, full genomes) from patients to determine if there is risk of genetic diseases. This is a form of genetic testing, though some genetic tests may not involve DNA sequencing.
DNA sequencing is also being used increasingly to diagnose and treat rare diseases. As more and more genes that cause rare genetic diseases are identified, molecular diagnosis for patients becomes more mainstream. DNA sequencing allows clinicians to identify genetic diseases, improve disease management, provide reproductive counseling, and select more effective therapies.
DNA sequencing may also be useful for determining the specific bacterial species causing an infection, allowing more precise antibiotic treatment and thereby reducing the risk of creating antimicrobial resistance in bacterial populations.
DNA sequencing may be used along with DNA profiling methods for forensic identification and paternity testing. DNA testing has evolved tremendously in the last few decades, to the point where a DNA profile can be linked to the sample under investigation. DNA recovered from fingerprints, saliva, hair follicles and other sources uniquely distinguishes each living organism from another. DNA testing analyzes specific regions of the genome in a DNA strand to produce a unique, individualized pattern.
The four canonical bases
The canonical structure of DNA has four bases: thymine (T), adenine (A), cytosine (C), and guanine (G). DNA sequencing is the determination of the physical order of these bases in a molecule of DNA. However, other bases may also be present in a molecule. In some viruses (specifically, bacteriophages), cytosine may be replaced by hydroxymethylcytosine or glucosylated hydroxymethylcytosine. In mammalian DNA, variant bases carrying methyl groups or phosphosulfate may be found. Depending on the sequencing technique, a particular modification, e.g. 5mC (5-methylcytosine), which is common in humans, may or may not be detected.
Discovery of DNA structure and function
Deoxyribonucleic acid (DNA) was first discovered and isolated by Friedrich Miescher in 1869, but it remained under-studied for many decades because proteins, rather than DNA, were thought to hold the genetic blueprint to life. This situation changed after 1944 as a result of some experiments by Oswald Avery, Colin MacLeod, and Maclyn McCarty demonstrating that purified DNA could change one strain of bacteria into another. This was the first time that DNA was shown capable of transforming the properties of cells.
In 1953, James Watson and Francis Crick put forward their double-helix model of DNA, based on X-ray diffraction data obtained by Rosalind Franklin. According to the model, DNA is composed of two strands of nucleotides coiled around each other, linked together by hydrogen bonds and running in opposite directions. Each strand is composed of four types of nucleotides – adenine (A), cytosine (C), guanine (G) and thymine (T) – with an A on one strand always paired with T on the other, and C always paired with G. They proposed that such a structure allowed each strand to be used to reconstruct the other, an idea central to the passing on of hereditary information between generations.
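Because the two strands are complementary, either one fully determines the other. A minimal sketch of that reconstruction in Python (the example sequence and function names are illustrative, not from any sequencing toolkit):

```python
# Minimal sketch: reconstruct the complementary strand from one strand using
# Watson-Crick pairing (A<->T, C<->G). Sequence and names are illustrative.
PAIRING = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand: str) -> str:
    """Return the base-paired strand, read in the same direction."""
    return "".join(PAIRING[base] for base in strand.upper())

def reverse_complement(strand: str) -> str:
    """Return the complementary strand in its own 5'->3' orientation,
    since the two strands of the double helix run antiparallel."""
    return complement(strand)[::-1]

if __name__ == "__main__":
    seq = "ATGGCA"
    print(complement(seq))          # TACCGT
    print(reverse_complement(seq))  # TGCCAT
```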
The foundation for sequencing proteins was first laid by the work of Frederick Sanger who by 1955 had completed the sequence of all the amino acids in insulin, a small protein secreted by the pancreas. This provided the first conclusive evidence that proteins were chemical entities with a specific molecular pattern rather than a random mixture of material suspended in fluid. Sanger's success in sequencing insulin spurred on x-ray crystallographers, including Watson and Crick, who by now were trying to understand how DNA directed the formation of proteins within a cell. Soon after attending a series of lectures given by Frederick Sanger in October 1954, Crick began developing a theory which argued that the arrangement of nucleotides in DNA determined the sequence of amino acids in proteins, which in turn helped determine the function of a protein. He published this theory in 1958.
RNA sequencing was one of the earliest forms of nucleotide sequencing. The major landmarks of RNA sequencing are the sequence of the first complete gene and then the complete genome of bacteriophage MS2, identified and published by Walter Fiers and his coworkers at the University of Ghent (Ghent, Belgium) in 1972 and 1976, respectively. Traditional RNA sequencing methods require the creation of a cDNA molecule which must then be sequenced.
Early DNA sequencing methods
The first method for determining DNA sequences involved a location-specific primer extension strategy established by Ray Wu at Cornell University in 1970. DNA polymerase catalysis and specific nucleotide labeling, both of which figure prominently in current sequencing schemes, were used to sequence the cohesive ends of lambda phage DNA. Between 1970 and 1973, Wu, R Padmanabhan and colleagues demonstrated that this method can be employed to determine any DNA sequence using synthetic location-specific primers. Frederick Sanger then adopted this primer-extension strategy to develop more rapid DNA sequencing methods at the MRC Centre, Cambridge, UK and published a method for "DNA sequencing with chain-terminating inhibitors" in 1977. Walter Gilbert and Allan Maxam at Harvard also developed sequencing methods, including one for "DNA sequencing by chemical degradation". In 1973, Gilbert and Maxam reported the sequence of 24 basepairs using a method known as wandering-spot analysis. Advancements in sequencing were aided by the concurrent development of recombinant DNA technology, allowing DNA samples to be isolated from sources other than viruses.
Sequencing of full genomes
The first full DNA genome to be sequenced was that of bacteriophage φX174 in 1977. Medical Research Council scientists deciphered the complete DNA sequence of the Epstein-Barr virus in 1984, finding it contained 172,282 nucleotides. Completion of the sequence marked a significant turning point in DNA sequencing because it was achieved with no prior genetic profile knowledge of the virus.
A non-radioactive method for transferring the DNA molecules of sequencing reaction mixtures onto an immobilizing matrix during electrophoresis was developed by Herbert Pohl and co-workers in the early 1980s. This was followed by the commercialization of the DNA sequencer "Direct-Blotting-Electrophoresis-System GATC 1500" by GATC Biotech, which was used intensively in the framework of the EU genome-sequencing programme to determine the complete DNA sequence of yeast Saccharomyces cerevisiae chromosome II. Leroy E. Hood's laboratory at the California Institute of Technology announced the first semi-automated DNA sequencing machine in 1986. This was followed by Applied Biosystems' marketing of the first fully automated sequencing machine, the ABI 370, in 1987 and by Dupont's Genesis 2000, which used a novel fluorescent labeling technique enabling all four dideoxynucleotides to be identified in a single lane. By 1990, the U.S. National Institutes of Health (NIH) had begun large-scale sequencing trials on Mycoplasma capricolum, Escherichia coli, Caenorhabditis elegans, and Saccharomyces cerevisiae at a cost of US$0.75 per base. Meanwhile, sequencing of human cDNA sequences called expressed sequence tags began in Craig Venter's lab, an attempt to capture the coding fraction of the human genome. In 1995, Venter, Hamilton Smith, and colleagues at The Institute for Genomic Research (TIGR) published the first complete genome of a free-living organism, the bacterium Haemophilus influenzae. The circular chromosome contains 1,830,137 bases and its publication in the journal Science marked the first published use of whole-genome shotgun sequencing, eliminating the need for initial mapping efforts.
High-throughput sequencing (HTS) methods
Several new methods for DNA sequencing were developed in the mid to late 1990s and were implemented in commercial DNA sequencers by 2000. Together these were called the "next-generation" or "second-generation" sequencing (NGS) methods, in order to distinguish them from the earlier methods, including Sanger sequencing. In contrast to the first generation of sequencing, NGS technology is typically characterized by being highly scalable, allowing the entire genome to be sequenced at once. Usually, this is accomplished by fragmenting the genome into small pieces, randomly sampling for a fragment, and sequencing it using one of a variety of technologies, such as those described below. An entire genome is possible because multiple fragments are sequenced at once (giving it the name "massively parallel" sequencing) in an automated process.
NGS technology has tremendously empowered researchers to gain insights into health, has enabled anthropologists to investigate human origins, and is catalyzing the "personalized medicine" movement. However, it has also opened the door to more room for error. There are many software tools to carry out the computational analysis of NGS data, often compiled at online platforms such as CSI NGS Portal, each with its own algorithm. Even the parameters within one software package can change the outcome of the analysis. In addition, the large quantities of data produced by DNA sequencing have required the development of new methods and programs for sequence analysis. Several efforts to develop standards in the NGS field have attempted to address these challenges, most of which have been small-scale efforts arising from individual labs. Most recently, a large, organized, FDA-funded effort has culminated in the BioCompute standard.
On 26 October 1990, Roger Tsien, Pepi Ross, Margaret Fahnestock and Allan J Johnston filed a patent describing stepwise ("base-by-base") sequencing with removable 3' blockers on DNA arrays (blots and single DNA molecules). In 1996, Pål Nyrén and his student Mostafa Ronaghi at the Royal Institute of Technology in Stockholm published their method of pyrosequencing.
On 1 April 1997, Pascal Mayer and Laurent Farinelli submitted patents to the World Intellectual Property Organization describing DNA colony sequencing. The DNA sample preparation and random surface-polymerase chain reaction (PCR) arraying methods described in this patent, coupled to Roger Tsien et al.'s "base-by-base" sequencing method, are now implemented in Illumina's HiSeq genome sequencers.
In 1998, Phil Green and Brent Ewing of the University of Washington described their phred quality score for sequencer data analysis, a landmark analysis technique that gained widespread adoption, and which is still the most common metric for assessing the accuracy of a sequencing platform.
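The phred score Q reported for each base call encodes the estimated probability p that the call is wrong via Q = -10 log10(p), so Q20 corresponds to a 1-in-100 error and Q30 to a 1-in-1,000 error. A short illustrative conversion (a sketch of the relation, not the original phred implementation):

```python
import math

def phred_to_error_prob(q: float) -> float:
    """Estimated probability that a base call with phred score q is wrong."""
    return 10 ** (-q / 10)

def error_prob_to_phred(p: float) -> float:
    """Inverse relation: error probability to phred score."""
    return -10 * math.log10(p)

if __name__ == "__main__":
    for q in (10, 20, 30, 40):
        # Q10 -> 1 error in 10 calls, Q20 -> 1 in 100, Q30 -> 1 in 1000, ...
        print(q, phred_to_error_prob(q))
```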
Lynx Therapeutics published and marketed massively parallel signature sequencing (MPSS), in 2000. This method incorporated a parallelized, adapter/ligation-mediated, bead-based sequencing technology and served as the first commercially available "next-generation" sequencing method, though no DNA sequencers were sold to independent laboratories.
Allan Maxam and Walter Gilbert published a DNA sequencing method in 1977 based on chemical modification of DNA and subsequent cleavage at specific bases. Also known as chemical sequencing, this method allowed purified samples of double-stranded DNA to be used without further cloning. This method's use of radioactive labeling and its technical complexity discouraged extensive use after refinements in the Sanger methods had been made.
Maxam-Gilbert sequencing requires radioactive labeling at one 5' end of the DNA and purification of the DNA fragment to be sequenced. Chemical treatment then generates breaks at a small proportion of one or two of the four nucleotide bases in each of four reactions (G, A+G, C, C+T). The concentration of the modifying chemicals is controlled to introduce on average one modification per DNA molecule. Thus a series of labeled fragments is generated, from the radiolabeled end to the first "cut" site in each molecule. The fragments in the four reactions are electrophoresed side by side in denaturing acrylamide gels for size separation. To visualize the fragments, the gel is exposed to X-ray film for autoradiography, yielding a series of dark bands each corresponding to a radiolabeled DNA fragment, from which the sequence may be inferred.
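Reading the gel amounts to ordering the bands by fragment length and noting which of the four reaction lanes each band appears in, resolving A versus G against the G lane and T versus C against the C lane. A toy sketch of that readout logic (band positions and the resulting sequence are invented for illustration):

```python
# Toy readout of a Maxam-Gilbert gel. Each lane lists the lengths (in bases)
# of labelled fragments produced by cleavage in that reaction; band positions
# and the resulting sequence are invented for illustration.
LANES = {
    "G":   {3, 7},          # cleaved at G only
    "A+G": {1, 3, 5, 7},    # cleaved at A or G
    "C":   {2, 6},          # cleaved at C only
    "C+T": {2, 4, 6, 8},    # cleaved at C or T
}

def read_gel(lanes: dict) -> str:
    """Walk the bands from shortest to longest fragment and assign one base per
    position: G vs A resolved with the G lane, C vs T with the C lane."""
    positions = sorted(set().union(*lanes.values()))
    calls = []
    for pos in positions:
        if pos in lanes["A+G"]:
            calls.append("G" if pos in lanes["G"] else "A")
        elif pos in lanes["C+T"]:
            calls.append("C" if pos in lanes["C"] else "T")
        else:
            calls.append("N")  # band not explained by any reaction
    return "".join(calls)

print(read_gel(LANES))  # prints "ACGTACGT"
```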
The chain-termination method developed by Frederick Sanger and coworkers in 1977 soon became the method of choice, owing to its relative ease and reliability. When invented, the chain-terminator method used fewer toxic chemicals and lower amounts of radioactivity than the Maxam and Gilbert method. Because of its comparative ease, the Sanger method was soon automated and was the method used in the first generation of DNA sequencers.
Sanger sequencing is the method which prevailed from the 1980s until the mid-2000s. Over that period, great advances were made in the technique, such as fluorescent labelling, capillary electrophoresis, and general automation. These developments allowed much more efficient sequencing, leading to lower costs. The Sanger method, in mass production form, is the technology which produced the first human genome in 2001, ushering in the age of genomics. However, later in the decade, radically different approaches reached the market, bringing the cost per genome down from $100 million in 2001 to $10,000 in 2011.
Large-scale sequencing and de novo sequencing
Large-scale sequencing often aims at sequencing very long DNA pieces, such as whole chromosomes, although large-scale sequencing can also be used to generate very large numbers of short sequences, such as found in phage display. For longer targets such as chromosomes, common approaches consist of cutting (with restriction enzymes) or shearing (with mechanical forces) large DNA fragments into shorter DNA fragments. The fragmented DNA may then be cloned into a DNA vector and amplified in a bacterial host such as Escherichia coli. Short DNA fragments purified from individual bacterial colonies are individually sequenced and assembled electronically into one long, contiguous sequence. Studies have shown that adding a size selection step to collect DNA fragments of uniform size can improve sequencing efficiency and accuracy of the genome assembly. In these studies, automated sizing has proven to be more reproducible and precise than manual gel sizing.
The term "de novo sequencing" specifically refers to methods used to determine the sequence of DNA with no previously known sequence. De novo translates from Latin as "from the beginning". Gaps in the assembled sequence may be filled by primer walking. The different strategies have different tradeoffs in speed and accuracy; shotgun methods are often used for sequencing large genomes, but its assembly is complex and difficult, particularly with sequence repeats often causing gaps in genome assembly.
Most sequencing approaches use an in vitro cloning step to amplify individual DNA molecules, because their molecular detection methods are not sensitive enough for single molecule sequencing. Emulsion PCR isolates individual DNA molecules along with primer-coated beads in aqueous droplets within an oil phase. A polymerase chain reaction (PCR) then coats each bead with clonal copies of the DNA molecule followed by immobilization for later sequencing. Emulsion PCR is used in the methods developed by Margulies et al. (commercialized by 454 Life Sciences), Shendure and Porreca et al. (also known as "polony sequencing") and SOLiD sequencing (developed by Agencourt, later Applied Biosystems, now Life Technologies). Emulsion PCR is also used in the GemCode and Chromium platforms developed by 10x Genomics.
Shotgun sequencing is a sequencing method designed for analysis of DNA sequences longer than 1000 base pairs, up to and including entire chromosomes. This method requires the target DNA to be broken into random fragments. After sequencing individual fragments using the chain termination method, the sequences can be reassembled on the basis of their overlapping regions.
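A toy sketch of the reassembly idea, greedily merging the pair of fragments with the longest suffix-prefix overlap (real assemblers are far more sophisticated; the fragments here are invented):

```python
# Toy greedy reassembly of overlapping shotgun fragments. Real assemblers use
# graph-based algorithms; the fragments below are invented for illustration.
def overlap(a: str, b: str, min_len: int = 3) -> int:
    """Length of the longest suffix of a that equals a prefix of b."""
    for length in range(min(len(a), len(b)), min_len - 1, -1):
        if a.endswith(b[:length]):
            return length
    return 0

def greedy_assemble(fragments: list) -> str:
    """Repeatedly merge the two fragments with the largest overlap."""
    frags = list(fragments)
    while len(frags) > 1:
        best_len, best_i, best_j = 0, None, None
        for i, a in enumerate(frags):
            for j, b in enumerate(frags):
                if i != j and overlap(a, b) > best_len:
                    best_len, best_i, best_j = overlap(a, b), i, j
        if best_len == 0:
            break  # no overlaps left; remaining fragments cannot be joined
        merged = frags[best_i] + frags[best_j][best_len:]
        frags = [f for k, f in enumerate(frags) if k not in (best_i, best_j)]
        frags.append(merged)
    return " / ".join(frags)

reads = ["ATGCGT", "GCGTAC", "GTACGA"]   # overlapping random fragments
print(greedy_assemble(reads))            # ATGCGTACGA
```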
High-throughput sequencing, which includes next-generation "short-read" and third-generation "long-read" sequencing methods,[nt 1] applies to exome sequencing, genome sequencing, genome resequencing, transcriptome profiling (RNA-Seq), DNA-protein interactions (ChIP-sequencing), and epigenome characterization.
The high demand for low-cost sequencing has driven the development of high-throughput sequencing technologies that parallelize the sequencing process, producing thousands or millions of sequences concurrently. High-throughput sequencing technologies are intended to lower the cost of DNA sequencing beyond what is possible with standard dye-terminator methods. In ultra-high-throughput sequencing as many as 500,000 sequencing-by-synthesis operations may be run in parallel. Such technologies led to the ability to sequence an entire human genome in as little as one day. As of 2019, corporate leaders in the development of high-throughput sequencing products included Illumina, Qiagen and ThermoFisher Scientific.
|Method|Read length|Accuracy (single read, not consensus)|Reads per run|Time per run|Cost per 1 billion bases (US$)|Advantages|Disadvantages|
|---|---|---|---|---|---|---|---|
|Single-molecule real-time sequencing (Pacific Biosciences)|30,000 bp (N50)|87% raw-read accuracy|4,000,000 per Sequel 2 SMRT cell, 100–200 gigabases|30 minutes to 20 hours|$7.2–$43.3|Fast. Detects 4mC, 5mC, 6mA.|Moderate throughput. Equipment can be very expensive.|
|Ion semiconductor (Ion Torrent sequencing)|up to 600 bp|99.6%|up to 80 million|2 hours|$66.8–$950|Less expensive equipment. Fast.|Homopolymer errors.|
|Pyrosequencing (454)|700 bp|99.9%|1 million|24 hours|$10,000|Long read size. Fast.|Runs are expensive. Homopolymer errors.|
|Sequencing by synthesis (Illumina)|MiniSeq, NextSeq: 75–300 bp; MiSeq: 50–600 bp; HiSeq 2500: 50–500 bp; HiSeq 3/4000: 50–300 bp; HiSeq X: 300 bp|99.9% (Phred30)|MiniSeq/MiSeq: 1–25 million; NextSeq: 130–400 million; HiSeq 2500: 300 million–2 billion; HiSeq 3/4000: 2.5 billion; HiSeq X: 3 billion|1 to 11 days, depending upon sequencer and specified read length|$5 to $150|Potential for high sequence yield, depending upon sequencer model and desired application.|Equipment can be very expensive. Requires high concentrations of DNA.|
|Combinatorial probe anchor synthesis (cPAS; BGI/MGI)|BGISEQ-50: 35–50 bp; MGISEQ 200: 50–200 bp; BGISEQ-500, MGISEQ-2000: 50–300 bp|99.9% (Phred30)|BGISEQ-50: 160M; MGISEQ 200: 300M; BGISEQ-500: 1,300M per flow cell; MGISEQ-2000: 375M (FCS flow cell) or 1,500M (FCL flow cell) per flow cell|1 to 9 days, depending on instrument, read length and number of flow cells run at a time|$5–$120| | |
|Sequencing by ligation (SOLiD sequencing)|50+35 or 50+50 bp|99.9%|1.2 to 1.4 billion|1 to 2 weeks|$60–$130|Low cost per base.|Slower than other methods. Has issues sequencing palindromic sequences.|
|Nanopore sequencing|Dependent on library preparation, not the device, so the user chooses the read length (up to 2,272,580 bp reported)|~92–97% single read|Dependent on read length selected by user|Data streamed in real time; user chooses 1 minute to 48 hours|$7–$100|Longest individual reads. Accessible user community. Portable (palm-sized).|Lower throughput than other machines; single-read accuracy in the 90% range.|
|GenapSys sequencing|Around 150 bp single-end|99.9% (Phred30)|1 to 16 million|Around 24 hours|$667|Low-cost instrument ($10,000).| |
|Chain termination (Sanger sequencing)|400 to 900 bp|99.9%|N/A|20 minutes to 3 hours|$2,400,000|Useful for many applications.|More expensive and impractical for larger sequencing projects. Also requires the time-consuming step of plasmid cloning or PCR.|
Long-read sequencing methods
Single molecule real time (SMRT) sequencing
SMRT sequencing is based on the sequencing by synthesis approach. The DNA is synthesized in zero-mode wave-guides (ZMWs) – small well-like containers with the capturing tools located at the bottom of the well. The sequencing is performed with use of unmodified polymerase (attached to the ZMW bottom) and fluorescently labelled nucleotides flowing freely in the solution. The wells are constructed in a way that only the fluorescence occurring by the bottom of the well is detected. The fluorescent label is detached from the nucleotide upon its incorporation into the DNA strand, leaving an unmodified DNA strand. According to Pacific Biosciences (PacBio), the SMRT technology developer, this methodology allows detection of nucleotide modifications (such as cytosine methylation). This happens through the observation of polymerase kinetics. This approach allows reads of 20,000 nucleotides or more, with average read lengths of 5 kilobases. In 2015, Pacific Biosciences announced the launch of a new sequencing instrument called the Sequel System, with 1 million ZMWs compared to 150,000 ZMWs in the PacBio RS II instrument. SMRT sequencing is referred to as "third-generation" or "long-read" sequencing.
Nanopore DNA sequencing
DNA passing through the nanopore changes the ion current through the pore. This change depends on the shape, size and length of the DNA sequence. Each type of nucleotide blocks the ion flow through the pore for a different period of time. The method does not require modified nucleotides and is performed in real time. Nanopore sequencing is referred to as "third-generation" or "long-read" sequencing, along with SMRT sequencing.
Early industrial research into this method was based on a technique called 'exonuclease sequencing', where the readout of electrical signals occurred as nucleotides passed by alpha(α)-hemolysin pores covalently bound with cyclodextrin. However the subsequent commercial method, 'strand sequencing', sequenced DNA bases in an intact strand.
Two main areas of nanopore sequencing in development are solid-state nanopore sequencing and protein-based nanopore sequencing. Protein nanopore sequencing utilizes membrane protein complexes such as α-hemolysin, MspA (Mycobacterium smegmatis porin A) or CsgG, which show great promise given their ability to distinguish between individual and groups of nucleotides. In contrast, solid-state nanopore sequencing utilizes synthetic materials such as silicon nitride and aluminum oxide, and is preferred for its superior mechanical ability and thermal and chemical stability. The fabrication method is essential for this type of sequencing, given that the nanopore array can contain hundreds of pores with diameters smaller than eight nanometers.
The concept originated from the idea that single-stranded DNA or RNA molecules can be electrophoretically driven in a strict linear sequence through a biological pore that can be less than eight nanometers wide, and can be detected because the molecules modulate an ionic current while moving through the pore. The pore contains a detection region capable of recognizing different bases, with each base generating a distinct, time-specific signal as it crosses the pore; these signals are then evaluated to infer the sequence. Precise control over the DNA transport through the pore is crucial for success. Various enzymes such as exonucleases and polymerases have been used to moderate this process by positioning them near the pore's entrance.
Short-read sequencing methods
Massively parallel signature sequencing (MPSS)
The first of the high-throughput sequencing technologies, massively parallel signature sequencing (or MPSS), was developed in the 1990s at Lynx Therapeutics, a company founded in 1992 by Sydney Brenner and Sam Eletr. MPSS was a bead-based method that used a complex approach of adapter ligation followed by adapter decoding, reading the sequence in increments of four nucleotides. This method made it susceptible to sequence-specific bias or loss of specific sequences. Because the technology was so complex, MPSS was only performed 'in-house' by Lynx Therapeutics and no DNA sequencing machines were sold to independent laboratories. Lynx Therapeutics merged with Solexa (later acquired by Illumina) in 2004, leading to the development of sequencing-by-synthesis, a simpler approach acquired from Manteia Predictive Medicine, which rendered MPSS obsolete. However, the essential properties of the MPSS output were typical of later high-throughput data types, including hundreds of thousands of short DNA sequences. In the case of MPSS, these were typically used for sequencing cDNA for measurements of gene expression levels.
The polony sequencing method, developed in the laboratory of George M. Church at Harvard, was among the first high-throughput sequencing systems and was used to sequence a full E. coli genome in 2005. It combined an in vitro paired-tag library with emulsion PCR, an automated microscope, and ligation-based sequencing chemistry to sequence an E. coli genome at an accuracy of >99.9999% and a cost approximately 1/9 that of Sanger sequencing. The technology was licensed to Agencourt Biosciences, subsequently spun out into Agencourt Personal Genomics, and eventually incorporated into the Applied Biosystems SOLiD platform. Applied Biosystems was later acquired by Life Technologies, now part of Thermo Fisher Scientific.
A parallelized version of pyrosequencing was developed by 454 Life Sciences, which has since been acquired by Roche Diagnostics. The method amplifies DNA inside water droplets in an oil solution (emulsion PCR), with each droplet containing a single DNA template attached to a single primer-coated bead that then forms a clonal colony. The sequencing machine contains many picoliter-volume wells each containing a single bead and sequencing enzymes. Pyrosequencing uses luciferase to generate light for detection of the individual nucleotides added to the nascent DNA, and the combined data are used to generate sequence reads. This technology provides intermediate read length and price per base compared to Sanger sequencing on one end and Solexa and SOLiD on the other.
Illumina (Solexa) sequencing
Solexa, now part of Illumina, was founded by Shankar Balasubramanian and David Klenerman in 1998, and developed a sequencing method based on reversible dye-terminator technology and engineered polymerases. The reversible terminator chemistry concept was invented by Bruno Canard and Simon Sarfati at the Pasteur Institute in Paris. It was developed internally at Solexa by those named on the relevant patents. In 2004, Solexa acquired the company Manteia Predictive Medicine in order to gain a massively parallel sequencing technology invented in 1997 by Pascal Mayer and Laurent Farinelli. It is based on "DNA clusters" or "DNA colonies", which involves the clonal amplification of DNA on a surface. The cluster technology was co-acquired with Lynx Therapeutics of California. Solexa Ltd. later merged with Lynx to form Solexa Inc.
In this method, DNA molecules and primers are first attached on a slide or flow cell and amplified with polymerase so that local clonal DNA colonies, later coined "DNA clusters", are formed. To determine the sequence, four types of reversible terminator bases (RT-bases) are added and non-incorporated nucleotides are washed away. A camera takes images of the fluorescently labeled nucleotides. Then the dye, along with the terminal 3' blocker, is chemically removed from the DNA, allowing for the next cycle to begin. Unlike pyrosequencing, the DNA chains are extended one nucleotide at a time and image acquisition can be performed at a delayed moment, allowing for very large arrays of DNA colonies to be captured by sequential images taken from a single camera.
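Each cycle therefore yields, for every DNA cluster, one intensity per dye channel, and the base call for that cycle is essentially the brightest channel. A toy sketch under that simplification (intensity values are invented; production base callers also correct for channel cross-talk and phasing):

```python
# Toy base calling for cyclic reversible-terminator sequencing: one image set
# per cycle, four intensities per cluster (one per dye channel), and the called
# base is the brightest channel. Values invented; real base callers also correct
# for channel cross-talk and phasing.
CHANNELS = ("A", "C", "G", "T")

def call_read(cycle_intensities: list) -> str:
    """cycle_intensities: one 4-tuple of channel intensities per sequencing cycle."""
    read = []
    for intensities in cycle_intensities:
        brightest = max(range(4), key=lambda ch: intensities[ch])
        read.append(CHANNELS[brightest])
    return "".join(read)

cycles = [
    (0.9, 0.1, 0.1, 0.1),   # cycle 1 -> A
    (0.1, 0.2, 0.1, 0.8),   # cycle 2 -> T
    (0.1, 0.7, 0.2, 0.1),   # cycle 3 -> C
    (0.2, 0.1, 0.9, 0.1),   # cycle 4 -> G
]
print(call_read(cycles))  # ATCG
```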
Decoupling the enzymatic reaction and the image capture allows for optimal throughput and theoretically unlimited sequencing capacity. With an optimal configuration, the ultimately reachable instrument throughput is thus dictated solely by the analog-to-digital conversion rate of the camera, multiplied by the number of cameras and divided by the number of pixels per DNA colony required for visualizing them optimally (approximately 10 pixels/colony). In 2012, with cameras operating at more than 10 MHz A/D conversion rates and available optics, fluidics and enzymatics, throughput can be multiples of 1 million nucleotides/second, corresponding roughly to 1 human genome equivalent at 1x coverage per hour per instrument, and 1 human genome re-sequenced (at approx. 30x) per day per instrument (equipped with a single camera).
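As a worked example of this back-of-the-envelope estimate, using the figures quoted above (all values illustrative):

```python
# Illustrative throughput estimate for image-based sequencing-by-synthesis, using
# the relation described above: throughput ~= camera A/D rate * number of cameras
# / pixels needed per DNA colony. Numbers are taken from the text or invented.
ad_rate_pixels_per_s = 10_000_000   # 10 MHz A/D conversion rate
cameras = 1
pixels_per_colony = 10              # ~10 pixels to image one colony

bases_per_second = ad_rate_pixels_per_s * cameras / pixels_per_colony
print(bases_per_second)             # 1,000,000 nucleotides per second

genome_size = 3.2e9                 # approximate human genome size in bases
hours_per_1x_genome = genome_size / bases_per_second / 3600
print(round(hours_per_1x_genome, 2))  # ~0.89 hours per 1x human genome
```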
Combinatorial probe anchor synthesis (cPAS)
This method is an upgraded modification of the combinatorial probe anchor ligation (cPAL) technology described by Complete Genomics, which became part of the Chinese genomics company BGI in 2013. The two companies have refined the technology to allow for longer read lengths, reduced reaction times and faster time to results. In addition, data are now generated as contiguous full-length reads in the standard FASTQ file format and can be used as-is in most short-read-based bioinformatics analysis pipelines.
The two technologies that form the basis for this high-throughput sequencing technology are DNA nanoballs (DNB) and patterned arrays for nanoball attachment to a solid surface. DNA nanoballs are simply formed by denaturing double stranded, adapter ligated libraries and ligating the forward strand only to a splint oligonucleotide to form a ssDNA circle. Faithful copies of the circles containing the DNA insert are produced utilizing Rolling Circle Amplification that generates approximately 300–500 copies. The long strand of ssDNA folds upon itself to produce a three-dimensional nanoball structure that is approximately 220 nm in diameter. Making DNBs replaces the need to generate PCR copies of the library on the flow cell and as such can remove large proportions of duplicate reads, adapter-adapter ligations and PCR induced errors.
The patterned array of positively charged spots is fabricated through photolithography and etching techniques followed by chemical modification to generate a sequencing flow cell. Each spot on the flow cell is approximately 250 nm in diameter, the spots are separated by 700 nm (centre to centre), and this layout allows easy attachment of a single negatively charged DNB to the flow cell, thus reducing under- or over-clustering on the flow cell.
Sequencing is then performed by addition of an oligonucleotide probe that attaches in combination to specific sites within the DNB. The probe acts as an anchor that then allows one of four single, reversibly inactivated, labelled nucleotides to bind after flowing across the flow cell. Unbound nucleotides are washed away before laser excitation of the attached labels; the emitted fluorescence is captured by cameras and converted to a digital output for base calling. The attached base has its terminator and label chemically cleaved at completion of the cycle. The cycle is repeated with another flow of free, labelled nucleotides across the flow cell to allow the next nucleotide to bind and have its signal captured. This process is completed a number of times (usually 50 to 300 times) to determine the sequence of the inserted piece of DNA, at a rate of approximately 40 million nucleotides per second as of 2018.
Applied Biosystems' (now a Life Technologies brand) SOLiD technology employs sequencing by ligation. Here, a pool of all possible oligonucleotides of a fixed length is labeled according to the sequenced position. Oligonucleotides are annealed and ligated; the preferential ligation by DNA ligase for matching sequences results in a signal informative of the nucleotide at that position. Each base in the template is sequenced twice, and the resulting data are decoded according to the 2-base encoding scheme used in this method. Before sequencing, the DNA is amplified by emulsion PCR. The resulting beads, each containing single copies of the same DNA molecule, are deposited on a glass slide. The result is sequences of quantities and lengths comparable to Illumina sequencing. This sequencing-by-ligation method has been reported to have some issues sequencing palindromic sequences.
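In the two-base encoding, each colour reports a property of two adjacent bases, so a read can only be decoded once one base (normally the last primer base) is known. A hedged sketch of one common formulation, in which the colour equals the XOR of two-bit base codes (an assumed labelling, equivalent up to renaming, not taken from the text):

```python
# Sketch of two-base ("colour space") decoding used in sequencing by ligation.
# Assumption for illustration: bases are encoded as 2-bit values and each colour
# equals the XOR of adjacent base codes; the real SOLiD colour table is an
# equivalent labelling of the same di-base classes.
BASE_TO_BITS = {"A": 0, "C": 1, "G": 2, "T": 3}
BITS_TO_BASE = {v: k for k, v in BASE_TO_BITS.items()}

def encode_colors(seq: str) -> list:
    """One colour call for each pair of adjacent bases."""
    bits = [BASE_TO_BITS[b] for b in seq]
    return [bits[i] ^ bits[i + 1] for i in range(len(bits) - 1)]

def decode_colors(first_base: str, colors: list) -> str:
    """Recover the sequence from the known first base plus the colour calls."""
    bits = [BASE_TO_BITS[first_base]]
    for c in colors:
        bits.append(bits[-1] ^ c)
    return "".join(BITS_TO_BASE[b] for b in bits)

seq = "TACGGT"
colors = encode_colors(seq)
print(colors)                          # [3, 1, 3, 0, 1]
print(decode_colors(seq[0], colors))   # TACGGT
```

Note that, as in the real scheme, a single wrong colour call corrupts every downstream base when decoding in colour space, which is why such data are usually analysed in colour space directly.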
Ion Torrent semiconductor sequencing
Ion Torrent Systems Inc. (now owned by Life Technologies) developed a system based on using standard sequencing chemistry, but with a novel, semiconductor-based detection system. This method of sequencing is based on the detection of hydrogen ions that are released during the polymerisation of DNA, as opposed to the optical methods used in other sequencing systems. A microwell containing a template DNA strand to be sequenced is flooded with a single type of nucleotide. If the introduced nucleotide is complementary to the leading template nucleotide it is incorporated into the growing complementary strand. This causes the release of a hydrogen ion that triggers a hypersensitive ion sensor, which indicates that a reaction has occurred. If homopolymer repeats are present in the template sequence, multiple nucleotides will be incorporated in a single cycle. This leads to a corresponding number of released hydrogens and a proportionally higher electronic signal.
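Base calling can therefore be pictured as rounding each flow's signal to an integer homopolymer length and emitting that many copies of the flowed nucleotide. A toy sketch (the flow order and signal values are invented; this simplification is also why long homopolymers are error-prone on such platforms):

```python
# Toy base calling for semiconductor (pH-based) sequencing. Nucleotides are
# flowed over the chip in a fixed cyclic order and each flow's signal is roughly
# proportional to the number of identical bases incorporated. Values invented.
FLOW_ORDER = "TACG"  # repeated cyclically

def call_bases(flow_signals: list) -> str:
    """Round each flow signal to an integer homopolymer length and emit that
    many copies of the flowed nucleotide (zero signal = no incorporation)."""
    seq = []
    for i, signal in enumerate(flow_signals):
        n = round(signal)
        seq.append(FLOW_ORDER[i % len(FLOW_ORDER)] * n)
    return "".join(seq)

# Signals for eight flows (T, A, C, G, T, A, C, G); the 3.05 illustrates a
# three-base homopolymer, where rounding errors become more likely.
signals = [1.05, 0.02, 2.1, 0.9, 0.1, 1.0, 0.0, 3.05]
print(call_bases(signals))  # TCCGAGGG
```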
DNA nanoball sequencing
DNA nanoball sequencing is a type of high throughput sequencing technology used to determine the entire genomic sequence of an organism. The company Complete Genomics uses this technology to sequence samples submitted by independent researchers. The method uses rolling circle replication to amplify small fragments of genomic DNA into DNA nanoballs. Unchained sequencing by ligation is then used to determine the nucleotide sequence. This method of DNA sequencing allows large numbers of DNA nanoballs to be sequenced per run and at low reagent costs compared to other high-throughput sequencing platforms. However, only short sequences of DNA are determined from each DNA nanoball which makes mapping the short reads to a reference genome difficult. This technology has been used for multiple genome sequencing projects and is scheduled to be used for more.
Heliscope single molecule sequencing
Heliscope sequencing is a method of single-molecule sequencing developed by Helicos Biosciences. It uses DNA fragments with added poly-A tail adapters which are attached to the flow cell surface. The next steps involve extension-based sequencing with cyclic washes of the flow cell with fluorescently labeled nucleotides (one nucleotide type at a time, as with the Sanger method). The reads are performed by the Heliscope sequencer. The reads are short, averaging 35 bp. What made this technology especially novel was that it was the first of its class to sequence non-amplified DNA, thus preventing any read errors associated with amplification steps. In 2009 a human genome was sequenced using the Heliscope; however, in 2012 the company went bankrupt.
There are two main microfluidic systems used to sequence DNA: droplet-based microfluidics and digital microfluidics. Microfluidic devices address many of the limitations of current sequencing arrays.
Abate et al. studied the use of droplet-based microfluidic devices for DNA sequencing. These devices have the ability to form and process picoliter-sized droplets at the rate of thousands per second. The devices were created from polydimethylsiloxane (PDMS) and used Förster resonance energy transfer (FRET) assays to read the sequences of DNA encompassed in the droplets. Each position on the array tested for a specific 15-base sequence.
Fair et al. used digital microfluidic devices to study DNA pyrosequencing. Significant advantages include the portability of the device, small reagent volumes, speed of analysis, mass-manufacturing abilities, and high throughput. This study provided a proof of concept showing that digital devices can be used for pyrosequencing; the study used sequencing by synthesis, in which enzymes extend the growing strand as labeled nucleotides are added.
Boles et al. also studied pyrosequencing on digital microfluidic devices. They used an electro-wetting device to create, mix, and split droplets. The sequencing uses a three-enzyme protocol and DNA templates anchored with magnetic beads. The device was tested using two protocols and resulted in 100% accuracy based on raw pyrogram levels. The advantages of these digital microfluidic devices include size, cost, and achievable levels of functional integration.
DNA sequencing research using microfluidics can also be applied to the sequencing of RNA, using similar droplet microfluidic techniques such as the inDrops method. This shows that many of these DNA sequencing techniques can be applied further and used to understand more about genomes and transcriptomes.
Methods in development
DNA sequencing methods currently under development include reading the sequence as a DNA strand transits through nanopores (a method that is now commercial, though subsequent generations such as solid-state nanopores are still in development), and microscopy-based techniques, such as atomic force microscopy or transmission electron microscopy, that are used to identify the positions of individual nucleotides within long DNA fragments (>5,000 bp) by nucleotide labeling with heavier elements (e.g., halogens) for visual detection and recording. Third-generation technologies aim to increase throughput and decrease the time to result and cost by eliminating the need for excessive reagents and harnessing the processivity of DNA polymerase.
Tunnelling currents DNA sequencing
Another approach uses measurements of the electrical tunnelling currents across single-strand DNA as it moves through a channel. Depending on its electronic structure, each base affects the tunnelling current differently, allowing differentiation between different bases.
The use of tunnelling currents has the potential to sequence orders of magnitude faster than ionic current methods and the sequencing of several DNA oligomers and micro-RNA has already been achieved.
Sequencing by hybridization
Sequencing by hybridization is a non-enzymatic method that uses a DNA microarray. A single pool of DNA whose sequence is to be determined is fluorescently labeled and hybridized to an array containing known sequences. Strong hybridization signals from a given spot on the array identify its sequence in the DNA being sequenced.
This method of sequencing utilizes the binding characteristics of a library of short single-stranded DNA molecules (oligonucleotides), also called DNA probes, to reconstruct a target DNA sequence. Non-specific hybrids are removed by washing and the target DNA is eluted. Hybrids are re-arranged such that the DNA sequence can be reconstructed. The benefit of this sequencing type is its ability to capture a large number of targets with homogeneous coverage. Large amounts of chemicals and starting DNA are usually required. However, with the advent of solution-based hybridization, much less equipment and fewer chemicals are necessary.
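Conceptually, reconstruction from hybridization data amounts to rebuilding the target from the set of probe-length subwords (k-mers) that gave a signal. A toy sketch of a greedy reconstruction from an idealized, error-free k-mer spectrum (probe length, target and starting word are invented; real data require graph-based methods and handling of repeats):

```python
# Toy reconstruction of a target sequence from an idealized, error-free spectrum
# of the probe-length subwords (k-mers) it contains. Probe length, target and
# starting word are invented; real hybridization data are noisy, and the problem
# is usually framed as a path problem over a k-mer graph.
def kmer_spectrum(seq: str, k: int) -> set:
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def reconstruct(spectrum: set, start: str, length: int) -> str:
    """Greedily extend a known starting k-mer one base at a time, keeping only
    extensions whose k-mer is present in the spectrum."""
    seq = start
    k = len(start)
    while len(seq) < length:
        suffix = seq[-(k - 1):]
        options = [b for b in "ACGT" if suffix + b in spectrum]
        if len(options) != 1:
            break  # ambiguous (a repeat) or dead end: more information needed
        seq += options[0]
    return seq

target = "ATGCGTACTT"
spectrum = kmer_spectrum(target, 4)
print(reconstruct(spectrum, target[:4], len(target)))  # ATGCGTACTT
```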
Sequencing with mass spectrometry
Mass spectrometry may be used to determine DNA sequences. Matrix-assisted laser desorption ionization time-of-flight mass spectrometry, or MALDI-TOF MS, has specifically been investigated as an alternative method to gel electrophoresis for visualizing DNA fragments. With this method, DNA fragments generated by chain-termination sequencing reactions are compared by mass rather than by size. The mass of each nucleotide is different from the others and this difference is detectable by mass spectrometry. Single-nucleotide mutations in a fragment can be more easily detected with MS than by gel electrophoresis alone. MALDI-TOF MS can more easily detect differences between RNA fragments, so researchers may indirectly sequence DNA with MS-based methods by converting it to RNA first.
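As an illustration of comparing fragments by mass, two chain-termination fragments differing at a single base differ by a characteristic mass shift. The residue masses below are approximate average values commonly used for oligonucleotide molecular-weight estimates (an assumption for illustration; constant end-group terms are omitted because they cancel in the comparison):

```python
# Illustrative comparison of two chain-termination fragments by mass instead of
# electrophoretic size. Residue masses are approximate average values commonly
# used for oligonucleotide molecular-weight estimates; constant end-group terms
# are omitted because they cancel when two fragments are compared.
RESIDUE_MASS_DA = {"A": 313.21, "C": 289.18, "G": 329.21, "T": 304.20}

def fragment_mass(seq: str) -> float:
    """Approximate mass contribution of the nucleotide residues in a fragment."""
    return sum(RESIDUE_MASS_DA[b] for b in seq)

frag_ref = "ATGCCGTA"   # reference fragment
frag_snp = "ATGCTGTA"   # same fragment carrying a single C->T substitution
delta = fragment_mass(frag_snp) - fragment_mass(frag_ref)
print(round(delta, 2))  # ~15.02 Da mass shift between the two fragments
```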
The higher resolution of DNA fragments permitted by MS-based methods is of special interest to researchers in forensic science, as they may wish to find single-nucleotide polymorphisms in human DNA samples to identify individuals. These samples may be highly degraded so forensic researchers often prefer mitochondrial DNA for its higher stability and applications for lineage studies. MS-based sequencing methods have been used to compare the sequences of human mitochondrial DNA from samples in a Federal Bureau of Investigation database and from bones found in mass graves of World War I soldiers.
Early chain-termination and TOF MS methods demonstrated read lengths of up to 100 base pairs. Researchers have been unable to exceed this average read size; like chain-termination sequencing alone, MS-based DNA sequencing may not be suitable for large de novo sequencing projects. Even so, a recent study did use the short sequence reads and mass spectrometry to compare single-nucleotide polymorphisms in pathogenic Streptococcus strains.
Microfluidic Sanger sequencing
In microfluidic Sanger sequencing the entire thermocycling amplification of DNA fragments as well as their separation by electrophoresis is done on a single glass wafer (approximately 10 cm in diameter) thus reducing the reagent usage as well as cost. In some instances researchers have shown that they can increase the throughput of conventional sequencing through the use of microchips. Research will still need to be done in order to make this use of technology effective.
This approach directly visualizes the sequence of DNA molecules using electron microscopy. The first identification of DNA base pairs within intact DNA molecules was demonstrated by enzymatically incorporating modified bases containing atoms of increased atomic number, followed by direct visualization and identification of individually labeled bases within a synthetic 3,272 base-pair DNA molecule and a 7,249 base-pair viral genome.
This method is based on the use of RNA polymerase (RNAP), which is attached to a polystyrene bead. One end of the DNA to be sequenced is attached to another bead, with both beads held in optical traps. RNAP motion during transcription brings the beads closer together, and the change in their relative distance can be recorded at single-nucleotide resolution. The sequence is deduced from four readouts obtained with lowered concentrations of each of the four nucleotide types, similarly to the Sanger method. A comparison is made between regions, and sequence information is deduced by comparing known sequence regions to the unknown sequence regions.
In vitro virus high-throughput sequencing
A method has been developed to analyze full sets of protein interactions using a combination of 454 pyrosequencing and an in vitro virus mRNA display method. Specifically, this method covalently links proteins of interest to the mRNAs encoding them, then detects the mRNA pieces using reverse transcription PCRs. The mRNA may then be amplified and sequenced. The combined method was titled IVV-HiTSeq and can be performed under cell-free conditions, though its results may not be representative of in vivo conditions.
The success of any DNA sequencing protocol relies upon the DNA or RNA sample extraction and preparation from the biological material of interest.
- A successful DNA extraction will yield a DNA sample with long, non-degraded strands.
- A successful RNA extraction will yield an RNA sample that should be converted to complementary DNA (cDNA) using reverse transcriptase, a DNA polymerase that synthesizes a complementary DNA based on existing strands of RNA in a PCR-like manner. Complementary DNA can then be processed the same way as genomic DNA.
After DNA or RNA extraction, samples may require further preparation depending on the sequencing method. For Sanger sequencing, either cloning procedures or PCR are required prior to sequencing. In the case of next-generation sequencing methods, library preparation is required before processing. Assessing the quality and quantity of nucleic acids both after extraction and after library preparation identifies degraded, fragmented, and low-purity samples and yields high-quality sequencing data.
The high-throughput nature of current DNA/RNA sequencing technologies has posed a challenge for sample preparation methods to scale up. Several liquid handling instruments are being used for the preparation of higher numbers of samples with a lower total hands-on time:
|Company|Liquid handler / automation|Approx. price, lower bound (US$)|Approx. price, upper bound (US$)|Product page|
|---|---|---|---|---|
|Hudson Robotics|Hudson Robotics SOLO|$40,000|$50,000|https://hudsonrobotics.com/products/applications/automated-solutions-next-generation-sequencing-ngs/|
|Hamilton|Hamilton Microlab NIMBUS|$40,000|$80,000|https://www.hamiltoncompany.com/automated-liquid-handling/platforms/microlab-nimbus#specifications|
|TTP Labtech|TTP Labtech Mosquito HV Genomics|$45,000|$80,000|https://www.sptlabtech.com/products/liquid-handling/mosquito-hv-genomics/|
|Beckman Coulter|Biomek 4000|$50,000|$65,000|https://www.mybeckman.uk/liquid-handlers/biomek-4000/b22640|
|Hamilton|Hamilton Genomic STARlet|$50,000|$100,000|https://www.hamiltoncompany.com/automated-liquid-handling/assay-ready-workstations/genomic-starlet|
|Eppendorf|Eppendorf epMotion 5075t|$95,000|$110,000|https://www.eppendorf.com/epmotion/|
|Beckman Coulter|Beckman Coulter Biomek i5|$100,000|$150,000|https://www.beckman.com/liquid-handlers/biomek-i5|
|Hamilton|Hamilton NGS STAR|$100,000|$200,000|http://www.hamiltonrobotics.com/|
|PerkinElmer|PerkinElmer Sciclone G3 NGS and NGSx Workstation|$150,000|$220,000|https://www.perkinelmer.com/uk/product/sciclone-g3-ngs-workstation-cls145321|
|Agilent|Agilent Bravo NGS|$170,000|$290,000|https://www.agilent.com/en/products/automated-liquid-handling/automated-liquid-handling-applications/bravo-ngs|
|Beckman Coulter|Beckman Coulter Biomek i7|$200,000|$250,000|https://www.beckman.com/liquid-handlers/biomek-i7|
|Beckman Coulter|Labcyte Echo 525|$260,000|$300,000|https://www.labcyte.com/products/liquid-handling/echo-525-liquid-handler|
In October 2006, the X Prize Foundation established an initiative to promote the development of full genome sequencing technologies, called the Archon X Prize, intending to award $10 million to "the first Team that can build a device and use it to sequence 100 human genomes within 10 days or less, with an accuracy of no more than one error in every 100,000 bases sequenced, with sequences accurately covering at least 98% of the genome, and at a recurring cost of no more than $10,000 (US) per genome."
Each year the National Human Genome Research Institute, or NHGRI, promotes grants for new research and developments in genomics. 2010 grants and 2011 candidates include continuing work in microfluidic, polony and base-heavy sequencing methodologies.
The sequencing technologies described here produce raw data that needs to be assembled into longer sequences such as complete genomes (sequence assembly). There are many computational challenges to achieving this, such as the evaluation of the raw sequence data, which is done by programs and algorithms such as Phred and Phrap. Other challenges involve repetitive sequences that often prevent complete genome assemblies because they occur in many places in the genome. As a consequence, many sequences may not be assigned to particular chromosomes. The production of raw sequence data is only the beginning of its detailed bioinformatic analysis, and new methods for sequencing and for correcting sequencing errors continue to be developed.
Sometimes, the raw reads produced by the sequencer are correct and precise only in a fraction of their length. Using the entire read may introduce artifacts in downstream analyses such as genome assembly, SNP calling, or gene expression estimation. Two classes of trimming programs have been introduced, based on window-based or running-sum algorithms. This is a partial list of the trimming algorithms currently available, specifying the algorithm class they belong to (a minimal sketch of the window-based idea follows the table):
|Name of algorithm|Type of algorithm|
|---|---|
|FASTX quality trimmer|Window based|
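A minimal sketch of the window-based idea mentioned above: slide a fixed-size window along the read and cut at the first window whose mean quality drops below a threshold (window size, threshold and quality values are illustrative, not taken from any particular tool):

```python
# Minimal sketch of window-based quality trimming: slide a fixed-size window
# along the read and cut at the first window whose mean phred quality falls
# below a threshold. Window size, threshold and qualities are illustrative.
def window_trim(qualities: list, window: int = 4, min_mean_q: float = 20.0) -> int:
    """Return the index at which to cut the read."""
    for i in range(len(qualities) - window + 1):
        if sum(qualities[i:i + window]) / window < min_mean_q:
            return i
    return len(qualities)

# A read whose quality decays towards the 3' end (values invented).
quals = [38, 37, 36, 35, 34, 30, 28, 22, 15, 10, 8, 5]
cut = window_trim(quals)
print(cut, quals[:cut])  # 6 [38, 37, 36, 35, 34, 30] -> keep the 5' portion
```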
Human genetics has been included within the field of bioethics since the early 1970s, and the growth in the use of DNA sequencing (particularly high-throughput sequencing) has introduced a number of ethical issues. One key issue is the ownership of an individual's DNA and the data produced when that DNA is sequenced. Regarding the DNA molecule itself, the leading legal case on this topic, Moore v. Regents of the University of California (1990), ruled that individuals have no property rights to discarded cells or any profits made using these cells (for instance, as a patented cell line). However, individuals have a right to informed consent regarding removal and use of cells. Regarding the data produced through DNA sequencing, Moore gives the individual no rights to the information derived from their DNA.
As DNA sequencing becomes more widespread, the storage, security and sharing of genomic data has also become more important. For instance, one concern is that insurers may use an individual's genomic data to modify their quote, depending on the perceived future health of the individual based on their DNA. In May 2008, the Genetic Information Nondiscrimination Act (GINA) was signed in the United States, prohibiting discrimination on the basis of genetic information with respect to health insurance and employment. In 2012, the US Presidential Commission for the Study of Bioethical Issues reported that existing privacy legislation for DNA sequencing data such as GINA and the Health Insurance Portability and Accountability Act were insufficient, noting that whole-genome sequencing data was particularly sensitive, as it could be used to identify not only the individual from which the data was created, but also their relatives.
In most of the United States, DNA that is "abandoned", such as that found on a licked stamp or envelope, coffee cup, cigarette, chewing gum, household trash, or hair that has fallen on a public sidewalk, may legally be collected and sequenced by anyone, including the police, private investigators, political opponents, or people involved in paternity disputes. As of 2013, eleven states have laws that can be interpreted to prohibit "DNA theft".
Ethical issues have also been raised by the increasing use of genetic variation screening, both in newborns, and in adults by companies such as 23andMe. It has been asserted that screening for genetic variations can be harmful, increasing anxiety in individuals who have been found to have an increased risk of disease. For example, in one case noted in Time, doctors screening an ill baby for genetic variants chose not to inform the parents of an unrelated variant linked to dementia due to the harm it would cause to the parents. However, a 2011 study in The New England Journal of Medicine has shown that individuals undergoing disease risk profiling did not show increased levels of anxiety.
- Bioinformatics – computational analysis of large, complex sets of biological data
- Cancer genome sequencing
- DNA computing – computing using molecular biology hardware
- DNA field-effect transistor – transistor which uses the field-effect due to the partial charges of DNA
- DNA sequencing theory – biological theory
- DNA sequencer – a scientific instrument used to automate the DNA sequencing process
- Genographic Project – citizen science project
- Genome project
- Genome sequencing of endangered species
- Genome skimming – method of genome sequencing
- IsoBase – functionally related proteins across PPI networks
- Jumping library
- Nucleic acid sequence – succession of nucleotides in a nucleic acid
- Multiplex ligation-dependent probe amplification
- Personalized medicine – medical model that tailors medical practices to the individual patient
- Protein sequencing – sequencing of amino acid arrangement in a protein
- Sequence mining
- Sequence profiling tool
- Sequencing by hybridization – method for determining the constituent nucleotides of a fixed size in a strand of DNA
- Sequencing by ligation
- TIARA (database) – database of personal genomics information
Magna Carta Libertatum (Medieval Latin for "the Great Charter of the Liberties"), commonly called Magna Carta (also Magna Charta; "Great Charter"), is a charter of rights agreed to by King John of England at Runnymede, near Windsor, on 15 June 1215. First drafted by the Archbishop of Canterbury to make peace between the unpopular King and a group of rebel barons, it promised the protection of church rights, protection for the barons from illegal imprisonment, access to swift justice, and limitations on feudal payments to the Crown, to be implemented through a council of 25 barons. Neither side stood behind their commitments, and the charter was annulled by Pope Innocent III, leading to the First Barons' War.
After John's death, the regency government of his young son, Henry III, reissued the document in 1216, stripped of some of its more radical content, in an unsuccessful bid to build political support for their cause. At the end of the war in 1217, it formed part of the peace treaty agreed at Lambeth, where the document acquired the name Magna Carta, to distinguish it from the smaller Charter of the Forest which was issued at the same time. Short of funds, Henry reissued the charter again in 1225 in exchange for a grant of new taxes. His son, Edward I, repeated the exercise in 1297, this time confirming it as part of England's statute law. The charter became part of English political life and was typically renewed by each monarch in turn, although as time went by and the fledgling Parliament of England passed new laws, it lost some of its practical significance.
At the end of the 16th century there was an upsurge in interest in Magna Carta. Lawyers and historians at the time believed that there was an ancient English constitution, going back to the days of the Anglo-Saxons, that protected individual English freedoms. They argued that the Norman invasion of 1066 had overthrown these rights, and that Magna Carta had been a popular attempt to restore them, making the charter an essential foundation for the contemporary powers of Parliament and legal principles such as habeas corpus. Although this historical account was badly flawed, jurists such as Sir Edward Coke used Magna Carta extensively in the early 17th century, arguing against the divine right of kings propounded by the Stuart monarchs. Both James I and his son Charles I attempted to suppress the discussion of Magna Carta, until the issue was curtailed by the English Civil War of the 1640s and the execution of Charles. The political myth of Magna Carta and its protection of ancient personal liberties persisted after the Glorious Revolution of 1688 until well into the 19th century. It influenced the early American colonists in the Thirteen Colonies and the formation of the American Constitution in 1787, which became the supreme law of the land in the new republic of the United States. Research by Victorian historians showed that the original 1215 charter had concerned the medieval relationship between the monarch and the barons, rather than the rights of ordinary people, but the charter remained a powerful, iconic document, even after almost all of its content was repealed from the statute books in the 19th and 20th centuries.
Magna Carta still forms an important symbol of liberty today, often cited by politicians and campaigners, and is held in great respect by the British and American legal communities, with Lord Denning describing it as "the greatest constitutional document of all times – the foundation of the freedom of the individual against the arbitrary authority of the despot". In the 21st century, four exemplifications of the original 1215 charter remain in existence, two at the British Library, one at Lincoln Cathedral and one at Salisbury Cathedral. There are also a handful of the subsequent charters in public and private ownership, including copies of the 1297 charter in both the United States and Australia. The original charters were written on parchment sheets using quill pens, in heavily abbreviated medieval Latin, which was the convention for legal documents at that time. Each was sealed with the royal great seal (made of beeswax and resin sealing wax): very few of the seals have survived. Although scholars refer to the 63 numbered "clauses" of Magna Carta, this is a modern system of numbering, introduced by Sir William Blackstone in 1759; the original charter formed a single, long unbroken text. The four original 1215 charters were displayed together at the British Library for one day, 3 February 2015, to mark the 800th anniversary of Magna Carta.
See main article: John, King of England. Magna Carta originated as an unsuccessful attempt to achieve peace between royalist and rebel factions in 1215, as part of the events leading to the outbreak of the First Barons' War. England was ruled by King John, the third of the Angevin kings. Although the kingdom had a robust administrative system, the nature of government under the Angevin monarchs was ill-defined and uncertain. John and his predecessors had ruled using the principle of vis et voluntas, or "force and will", taking executive and sometimes arbitrary decisions, often justified on the basis that a king was above the law. Many contemporary writers believed that monarchs should rule in accordance with the custom and the law, with the counsel of the leading members of the realm, but there was no model for what should happen if a king refused to do so.
John had lost most of his ancestral lands in France to King Philip II in 1204 and had struggled to regain them for many years, raising extensive taxes on the barons to accumulate money to fight a war which ended in expensive failure in 1214. Following the defeat of his allies at the Battle of Bouvines, John had to sue for peace and pay compensation. John was already personally unpopular with many of the barons, many of whom owed money to the Crown, and little trust existed between the two sides. A triumph would have strengthened his position, but in the face of his defeat, within a few months after his return from France John found that rebel barons in the north and east of England were organising resistance to his rule.
The rebels took an oath that they would "stand fast for the liberty of the church and the realm", and demanded that the King confirm the Charter of Liberties that had been declared by King Henry I in the previous century, and which was perceived by the barons to protect their rights. The rebel leadership was unimpressive by the standards of the time, even disreputable, but united by their hatred of John; Robert FitzWalter, later elected leader of the rebel barons, claimed publicly that John had attempted to rape his daughter, and was implicated in a plot to assassinate John in 1212.
John held a council in London in January 1215 to discuss potential reforms, and sponsored discussions in Oxford between his agents and the rebels during the spring. Both sides appealed to Pope Innocent III for assistance in the dispute. During the negotiations, the rebellious barons produced an initial document, which historians have termed "the Unknown Charter of Liberties", which drew on Henry I's Charter of Liberties for much of its language; seven articles from that document later appeared in the "Articles of the Barons" and the subsequent charter.
It was John's hope that the Pope would give him valuable legal and moral support, and accordingly John played for time; the King had declared himself to be a papal vassal in 1213 and correctly believed he could count on the Pope for help. John also began recruiting mercenary forces from France, although some were later sent back to avoid giving the impression that the King was escalating the conflict. In a further move to shore up his support, John took an oath to become a crusader, a move which gave him additional political protection under church law, even though many felt the promise was insincere.
Letters backing John arrived from the Pope in April, but by then the rebel barons had organised into a military faction. They congregated at Northampton in May and renounced their feudal ties to John, marching on London, Lincoln, and Exeter. John's efforts to appear moderate and conciliatory had been largely successful, but once the rebels held London, they attracted a fresh wave of defectors from the royalists. The King offered to submit the problem to a committee of arbitration with the Pope as the supreme arbiter, but this was not attractive to the rebels. Stephen Langton, the Archbishop of Canterbury, had been working with the rebel barons on their demands, and after the suggestion of papal arbitration failed, John instructed Langton to organise peace talks.
John met the rebel leaders at Runnymede, a water-meadow on the south bank of the River Thames, on 10 June 1215. Runnymede was a traditional place for assemblies, but it was also located on neutral ground between the royal fortress of Windsor Castle and the rebel base at Staines, and offered both sides the security of a rendezvous where they were unlikely to find themselves at a military disadvantage. Here the rebels presented John with their draft demands for reform, the 'Articles of the Barons'. Stephen Langton's pragmatic efforts at mediation over the next ten days turned these incomplete demands into a charter capturing the proposed peace agreement; a few years later, this agreement was renamed Magna Carta, meaning "Great Charter". By 15 June, general agreement had been made on a text, and on 19 June, the rebels renewed their oaths of loyalty to John and copies of the charter were formally issued.
Although, as the historian David Carpenter has noted, the charter "wasted no time on political theory", it went beyond simply addressing individual baronial complaints, and formed a wider proposal for political reform. It promised the protection of church rights, protection from illegal imprisonment, access to swift justice, and, most importantly, limitations on taxation and other feudal payments to the Crown, with certain forms of feudal taxation requiring baronial consent. It focused on the rights of free men, in particular the barons; however, the rights of serfs were included in articles 16, 20, and 28. Its style and content reflected Henry I's Charter of Liberties, as well as a wider body of legal traditions, including the royal charters issued to towns, the operations of the Church and baronial courts, and European charters such as the Statute of Pamiers.
Under what historians later labelled "clause 61", or the "security clause", a council of 25 barons would be created to monitor and ensure John's future adherence to the charter. If John did not conform to the charter within 40 days of being notified of a transgression by the council, the 25 barons were empowered by clause 61 to seize John's castles and lands until, in their judgement, amends had been made. Men were to be compelled to swear an oath to assist the council in controlling the King, but once redress had been made for any breaches, the King would continue to rule as before. In one sense this was not unprecedented; other kings had previously conceded the right of individual resistance to their subjects if the King did not uphold his obligations. Magna Carta was, however, novel in that it set up a formally recognised means of collectively coercing the King. The historian Wilfred Warren argues that it was almost inevitable that the clause would result in civil war, as it "was crude in its methods and disturbing in its implications". The barons were trying to force John to keep to the charter, but clause 61 was so heavily weighted against the King that this version of the charter could not survive.
John and the rebel barons did not trust each other, and neither side seriously attempted to implement the peace accord. The 25 barons selected for the new council were all rebels, chosen by the more extremist barons, and many among the rebels found excuses to keep their forces mobilised. Disputes began to emerge between the royalist faction and those rebels who had expected the charter to return lands that had been confiscated.
Clause 61 of Magna Carta contained a commitment from John that he would "seek to obtain nothing from anyone, in our own person or through someone else, whereby any of these grants or liberties may be revoked or diminished". Despite this, the King appealed to Pope Innocent for help in July, arguing that the charter compromised the Pope's rights as John's feudal lord. As part of the June peace deal, the barons were supposed to surrender London by 15 August, but this they refused to do. Meanwhile, instructions from the Pope arrived in August, written before the peace accord, with the result that papal commissioners excommunicated the rebel barons and suspended Langton from office in early September. Once aware of the charter, the Pope responded in detail: in a letter dated 24 August and arriving in late September, he declared the charter to be "not only shameful and demeaning but also illegal and unjust" since John had been "forced to accept" it, and accordingly the charter was "null, and void of all validity for ever"; under threat of excommunication, the King was not to observe the charter, nor the barons try to enforce it.
By then, violence had broken out between the two sides; less than three months after it had been agreed, John and the loyalist barons firmly repudiated the failed charter: the First Barons' War erupted. The rebel barons concluded that peace with John was impossible, and turned to Philip II's son, the future Louis VIII, for help, offering him the English throne. The war soon settled into a stalemate. The King became ill and died on the night of 18 October 1216, leaving the nine-year-old Henry III as his heir.
The preamble to Magna Carta includes the names of the following 27 ecclesiastical and secular magnates who had counselled John to accept its terms. The names include some of the moderate reformers, notably Archbishop Stephen Langton, and some of John's loyal supporters, such as William Marshal, Earl of Pembroke. They are listed here in the order in which they appear in the charter itself:
The names of the Twenty-Five Barons appointed under clause 61 to monitor John's future conduct are not given in the charter itself, but do appear in four early sources, all seemingly based on a contemporary listing: a late 13th-century collection of law tracts and statutes, a Reading Abbey manuscript now in Lambeth Palace Library, and the Chronica Majora and Liber Additamentorum of Matthew Paris. The process of appointment is not known, but the names were drawn almost exclusively from among John's more active opponents. They are listed here in the order in which they appear in the original sources:
In September 1215, the papal commissioners in England – Subdeacon Pandulf, Peter des Roches, Bishop of Winchester, and Simon, Abbot of Reading – excommunicated the rebels, acting on instructions earlier received from Rome. A letter sent by the commissioners from Dover on 5 September to Archbishop Langton explicitly names nine senior rebel barons (all members of the Council of Twenty-Five), and six clerics numbered among the rebel ranks:
Although the Charter of 1215 was a failure as a peace treaty, it was resurrected under the new government of the young Henry III as a way of drawing support away from the rebel faction. On his deathbed, King John appointed a council of thirteen executors to help Henry reclaim the kingdom, and requested that his son be placed into the guardianship of William Marshal, one of the most famous knights in England. William knighted the boy, and Cardinal Guala Bicchieri, the papal legate to England, then oversaw his coronation at Gloucester Cathedral on 28 October.
The young King inherited a difficult situation, with over half of England occupied by the rebels. He had substantial support though from Guala, who intended to win the civil war for Henry and punish the rebels. Guala set about strengthening the ties between England and the Papacy, starting with the coronation itself, during which Henry gave homage to the Papacy, recognising the Pope as his feudal lord. Pope Honorius III declared that Henry was the Pope's vassal and ward, and that the legate had complete authority to protect Henry and his kingdom. As an additional measure, Henry took the cross, declaring himself a crusader and thereby entitled to special protection from Rome.
The war was not going well for the loyalists, but Prince Louis and the rebel barons were also finding it difficult to make further progress. John's death had defused some of the rebel concerns, and the royal castles were still holding out in the occupied parts of the country. Henry's government encouraged the rebel barons to come back to his cause in exchange for the return of their lands, and reissued a version of the 1215 Charter, albeit having first removed some of the clauses, including those unfavourable to the Papacy and clause 61, which had set up the council of barons. The move was not successful, and opposition to Henry's new government hardened.
See also: First Barons' War, Charter of the Forest and English land law. In February 1217, Louis set sail for France to gather reinforcements. In his absence, arguments broke out between Louis' French and English followers, and Cardinal Guala declared that Henry's war against the rebels was the equivalent of a religious crusade. This declaration resulted in a series of defections from the rebel movement, and the tide of the conflict swung in Henry's favour. Louis returned at the end of April, but his northern forces were defeated by William Marshal at the Battle of Lincoln in May.
Meanwhile, support for Louis' campaign was diminishing in France, and he concluded that the war in England was lost. He negotiated terms with Cardinal Guala, under which Louis would renounce his claim to the English throne; in return, his followers would be given back their lands, any sentences of excommunication would be lifted, and Henry's government would promise to enforce the charter of the previous year. The proposed agreement soon began to unravel amid claims from some loyalists that it was too generous towards the rebels, particularly the clergy who had joined the rebellion.
In the absence of a settlement, Louis remained in London with his remaining forces, hoping for the arrival of reinforcements from France. When the expected fleet did arrive in August, it was intercepted and defeated by loyalists at the Battle of Sandwich. Louis entered into fresh peace negotiations, and the factions came to agreement on the final Treaty of Lambeth, also known as the Treaty of Kingston, on 12 and 13 September 1217. The treaty was similar to the first peace offer, but excluded the rebel clergy, whose lands and appointments remained forfeit; it included a promise, however, that Louis' followers would be allowed to enjoy their traditional liberties and customs, referring back to the Charter of 1216. Louis left England as agreed and joined the Albigensian Crusade in the south of France, bringing the war to an end.
A great council was called in October and November to take stock of the post-war situation; this council is thought to have formulated and issued the Charter of 1217. The charter resembled that of 1216, although some additional clauses were added to protect the rights of the barons over their feudal subjects, and the restrictions on the Crown's ability to levy taxation were watered down. There remained a range of disagreements about the management of the royal forests, which involved a special legal system that had become a source of considerable royal revenue; complaints existed over both the implementation of these courts and the geographic boundaries of the royal forests. A complementary charter, the Charter of the Forest, was created, pardoning existing forest offences, imposing new controls over the forest courts, and establishing a review of the forest boundaries. To distinguish the two charters, the term magna carta libertatum, "the great charter of liberties", was used by the scribes to refer to the larger document, which in time became known simply as Magna Carta.
Magna Carta became increasingly embedded into English political life during Henry III's minority. As the King grew older, his government slowly began to recover from the civil war, regaining control of the counties and beginning to raise revenue once again, taking care not to overstep the terms of the charters. Henry remained a minor and his government's legal ability to make permanently binding decisions on his behalf was limited. In 1223, the tensions over the status of the charters became clear in the royal court, when Henry's government attempted to reassert its rights over its properties and revenues in the counties, facing resistance from many communities that argued—if sometimes incorrectly—that the charters protected the new arrangements. This resistance resulted in an argument between Archbishop Langton and William Brewer over whether the King had any duty to fulfil the terms of the charters, given that he had been forced to agree to them. On this occasion, Henry gave oral assurances that he considered himself bound by the charters, enabling a royal inquiry into the situation in the counties to progress.
Two years later, the question of Henry's commitment to the charters re-emerged, when Louis VIII of France invaded Henry's remaining provinces in France, Poitou and Gascony. Henry's army in Poitou was under-resourced, and the province quickly fell. It became clear that Gascony would also fall unless reinforcements were sent from England. In early 1225, a great council approved a tax of £40,000 to dispatch an army, which quickly retook Gascony. In exchange for agreeing to support Henry, the barons demanded that the King reissue Magna Carta and the Charter of the Forest. The content was almost identical to the 1217 versions, but in the new versions, the King declared that the charters were issued of his own "spontaneous and free will" and confirmed them with the royal seal, giving the new Great Charter and the Charter of the Forest of 1225 much more authority than the previous versions.
The barons anticipated that the King would act in accordance with these charters, subject to the law and moderated by the advice of the nobility. Uncertainty continued, and in 1227, when he was declared of age and able to rule independently, Henry announced that future charters had to be issued under his own seal. This brought into question the validity of the previous charters issued during his minority, and Henry actively threatened to overturn the Charter of the Forest unless the taxes promised in return for it were actually paid. In 1253, Henry confirmed the charters once again in exchange for taxation.
Henry placed a symbolic emphasis on rebuilding royal authority, but his rule was relatively circumscribed by Magna Carta. He generally acted within the terms of the charters, which prevented the Crown from taking extrajudicial action against the barons, including the fines and expropriations that had been common under his father, John. The charters did not address the sensitive issues of the appointment of royal advisers and the distribution of patronage, and they lacked any means of enforcement if the King chose to ignore them. The inconsistency with which he applied the charters over the course of his rule alienated many barons, even those within his own faction.
Despite the various charters, the provision of royal justice was inconsistent and driven by the needs of immediate politics: sometimes action would be taken to address a legitimate baronial complaint, while on other occasions the problem would simply be ignored. The royal courts, which toured the country to provide justice at the local level, typically for lesser barons and the gentry claiming grievances against major lords, had little power, allowing the major barons to dominate the local justice system. Henry's rule became lax and careless, resulting in a reduction in royal authority in the provinces and, ultimately, the collapse of his authority at court.
In 1258, a group of barons seized power from Henry in a coup d'état, citing the need to strictly enforce Magna Carta and the Charter of the Forest, creating a new baronial-led government to advance reform through the Provisions of Oxford. The barons were not militarily powerful enough to win a decisive victory, and instead appealed to Louis IX of France in 1263–1264 to arbitrate on their proposed reforms. The reformist barons argued their case based on Magna Carta, suggesting that it was inviolable under English law and that the King had broken its terms.
Louis came down firmly in favour of Henry, but the French arbitration failed to achieve peace as the rebellious barons refused to accept the verdict. England slipped back into the Second Barons' War, which was won by Henry's son, the Lord Edward. Edward also invoked Magna Carta in advancing his cause, arguing that the reformers had taken matters too far and were themselves acting against Magna Carta. In a conciliatory gesture after the barons had been defeated, in 1267 Henry issued the Statute of Marlborough, which included a fresh commitment to observe the terms of Magna Carta.
Sixty-five individuals witnessed the 1225 issue of Magna Carta and are named, in order, in the charter itself.
The Confirmatio Cartarum (Confirmation of Charters) was issued in Norman French by Edward I in 1297. Edward, needing money, had taxed the nobility, and they had armed themselves against him, forcing Edward to issue his confirmation of Magna Carta and the Forest Charter to avoid civil war. The nobles had sought to add another document, the De Tallagio, to Magna Carta. Edward I's government was not prepared to concede this, but it agreed to the issuing of the Confirmatio, confirming the previous charters and the principle that taxation should be by consent, although the precise manner of that consent was not laid down.
A passage mandates that copies shall be distributed in "cathedral churches throughout our realm, there to remain, and shall be read before the people two times by the year", hence the permanent installation of a copy in Salisbury Cathedral. With the reconfirmation of the Charters in 1300, an additional document was granted, the Articuli super Cartas (The Articles upon the Charters). It was composed of 17 articles and sought in part to deal with the problem of enforcing the Charters. Magna Carta and the Forest Charter were to be issued to the sheriff of each county and read four times a year at the meetings of the county courts, and each county was to have a committee of three men who could hear complaints about violations of the Charters.
Pope Clement V continued the papal policy of supporting monarchs (who ruled by divine grace) against any claims in Magna Carta which challenged the King's rights, and annulled the Confirmatio Cartarum in 1305. Edward I interpreted Clement V's papal bull annulling the Confirmatio Cartarum as effectively applying to the Articuli super Cartas, although the latter was not specifically mentioned. In 1306 Edward I took the opportunity given by the Pope's backing to reassert forest law over large areas which had been "disafforested". Both Edward and the Pope were accused by some contemporary chroniclers of "perjury", and it was suggested by Robert McNair Scott that Robert the Bruce refused to make peace with Edward I's son, Edward II, in 1312 with the justification: "How shall the king of England keep faith with me, since he does not observe the sworn promises made to his liege men..."
The Great Charter was referred to in legal cases throughout the medieval period. For example, in 1226, the knights of Lincolnshire argued that their local sheriff was changing customary practice regarding the local courts, "contrary to their liberty which they ought to have by the charter of the lord king". In practice, cases were not brought against the King for breach of Magna Carta and the Forest Charter, but it was possible to bring a case against the King's officers, such as his sheriffs, using the argument that the King's officers were acting contrary to liberties granted by the King in the charters.
In addition, medieval cases referred to the clauses in Magna Carta which dealt with specific issues such as wardship and dower, debt collection, and keeping rivers free for navigation. Even in the 13th century, some clauses of Magna Carta rarely appeared in legal cases, either because the issues concerned were no longer relevant, or because Magna Carta had been superseded by more relevant legislation. By 1350 half the clauses of Magna Carta were no longer actively used.
During the reign of King Edward III six measures, later known as the Six Statutes, were passed between 1331 and 1369. They sought to clarify certain parts of the Charters. In particular the third statute, in 1354, redefined clause 29, with "free man" becoming "no man, of whatever estate or condition he may be", and introduced the phrase "due process of law" for "lawful judgement of his peers or the law of the land".
Between the 13th and 15th centuries Magna Carta was reconfirmed 32 times according to Sir Edward Coke, and possibly as many as 45 times. Often the first item of parliamentary business was a public reading and reaffirmation of the Charter, and, as in the previous century, parliaments often exacted confirmation of it from the monarch. The Charter was confirmed in 1423 by King Henry VI.
By the mid-15th century, Magna Carta ceased to occupy a central role in English political life, as monarchs reasserted authority and powers which had been challenged in the 100 years after Edward I's reign. The Great Charter remained a text for lawyers, particularly as a protector of property rights, and became more widely read than ever as printed versions circulated and levels of literacy increased.
During the 16th century, the interpretation of Magna Carta and the First Barons' War shifted. Henry VII took power at the end of the turbulent Wars of the Roses, followed by Henry VIII, and extensive propaganda under both rulers promoted the legitimacy of the regime, the illegitimacy of any sort of rebellion against royal power, and the priority of supporting the Crown in its arguments with the Papacy.
Tudor historians rediscovered the Barnwell chronicler, who was more favourable to King John than other 13th-century texts, and, as historian Ralph Turner describes, they "viewed King John in a positive light as a hero struggling against the papacy", showing "little sympathy for the Great Charter or the rebel barons". Pro-Catholic demonstrations during the 1536 uprising cited Magna Carta, accusing the King of not giving it sufficient respect.
The first mechanically printed edition of Magna Carta was probably the Magna Carta cum aliis Antiquis Statutis of 1508 by Richard Pynson, although the early printed versions of the 16th century incorrectly attributed the origins of Magna Carta to Henry III and 1225, rather than to John and 1215, and accordingly worked from the later text. An abridged English-language edition was published by John Rastell in 1527. Thomas Berthelet, Pynson's successor as the royal printer during 1530–1547, printed editions of the text along with other "ancient statutes" in 1531 and 1540. In 1534, George Ferrers published the first unabridged English-language edition of Magna Carta, dividing the Charter into 37 numbered clauses.
At the end of the 16th century, there was an upsurge in antiquarian interest in England. These studies concluded that there was a set of ancient English customs and laws, temporarily overthrown by the Norman invasion of 1066, which had then been recovered in 1215 and recorded in Magna Carta, and which in turn gave authority to important 16th-century legal principles. Modern historians note that although this narrative was fundamentally incorrect—many refer to it as a "myth"—it took on great importance among the legal historians of the time.
The antiquarian William Lambarde, for example, published what he believed were the Anglo-Saxon and Norman law codes, tracing the origins of the 16th-century English Parliament back to this period, albeit misinterpreting the dates of many documents concerned. Francis Bacon argued that clause 39 of Magna Carta was the basis of the 16th-century jury system and judicial processes. Antiquarians Robert Beale, James Morice, and Richard Cosin argued that Magna Carta was a statement of liberty and a fundamental, supreme law empowering English government. Those who questioned these conclusions, including the Member of Parliament Arthur Hall, faced sanctions.
In the early 17th century, Magna Carta became increasingly important as a political document in arguments over the authority of the English monarchy. James I and Charles I both propounded greater authority for the Crown, justified by the doctrine of the divine right of kings, and Magna Carta was cited extensively by their opponents to challenge the monarchy.
Magna Carta, it was argued, recognised and protected the liberty of individual Englishmen, made the King subject to the common law of the land, formed the origin of the trial by jury system, and acknowledged the ancient origins of Parliament: because of Magna Carta and this ancient constitution, an English monarch was unable to alter these long-standing English customs. Although the arguments based on Magna Carta were historically inaccurate, they nonetheless carried symbolic power, as the charter had immense significance during this period; antiquarians such as Sir Henry Spelman described it as "the most majestic and a sacrosanct anchor to English Liberties".
Sir Edward Coke was a leader in using Magna Carta as a political tool during this period. Still working from the 1225 version of the text—the first printed copy of the 1215 charter only emerged in 1610—Coke spoke and wrote about Magna Carta repeatedly. His work was challenged at the time by Lord Ellesmere, and modern historians such as Ralph Turner and Claire Breay have critiqued Coke as "misconstruing" the original charter "anachronistically and uncritically", and taking a "very selective" approach to his analysis. More sympathetically, J. C. Holt noted that the history of the charters had already become "distorted" by the time Coke was carrying out his work.
In 1621, a bill was presented to Parliament to renew Magna Carta; although this bill failed, lawyer John Selden argued during Darnell's Case in 1627 that the right of habeas corpus was backed by Magna Carta. Coke supported the Petition of Right in 1628, which cited Magna Carta in its preamble, attempting to extend the provisions, and to make them binding on the judiciary. The monarchy responded by arguing that the historical legal situation was much less clear-cut than was being claimed, restricted the activities of antiquarians, arrested Coke for treason, and suppressed his proposed book on Magna Carta. Charles initially did not agree to the Petition of Right, and refused to confirm Magna Carta in any way that would reduce his independence as King.
England descended into civil war in the 1640s, resulting in Charles I's execution in 1649. Under the republic that followed, some questioned whether Magna Carta, an agreement with a monarch, was still relevant. An anti-Cromwellian pamphlet published in 1660, The English devil, said that the nation had been "compelled to submit to this Tyrant Nol or be cut off by him; nothing but a word and a blow, his Will was his Law; tell him of Magna Carta, he would lay his hand on his sword and cry Magna Farta". In a 2005 speech the Lord Chief Justice of England and Wales, Lord Woolf, repeated the claim that Cromwell had referred to Magna Carta as "Magna Farta".
The radical groups that flourished during this period held differing opinions of Magna Carta. The Levellers rejected history and law as presented by their contemporaries, holding instead to an "anti-Normanism" viewpoint. John Lilburne, for example, argued that Magna Carta contained only some of the freedoms that had supposedly existed under the Anglo-Saxons before being crushed by the Norman yoke. The Leveller Richard Overton described the charter as "a beggarly thing containing many marks of intolerable bondage". Both saw Magna Carta as a useful declaration of liberties that could be used against governments they disagreed with. Gerrard Winstanley, the leader of the more extreme Diggers, stated "the best lawes that England hath, [viz., Magna Carta] were got by our Forefathers importunate petitioning unto the kings that still were their Task-masters; and yet these best laws are yoaks and manicles, tying one sort of people to be slaves to another; Clergy and Gentry have got their freedom, but the common people still are, and have been left servants to work for them."
The first attempt at a proper historiography was undertaken by Robert Brady, who refuted the supposed antiquity of Parliament and belief in the immutable continuity of the law. Brady realised that the liberties of the Charter were limited and argued that the liberties were the grant of the King. By putting Magna Carta in historical context, he cast doubt on its contemporary political relevance; his historical understanding did not survive the Glorious Revolution, which, according to the historian J. G. A. Pocock, "marked a setback for the course of English historiography."
According to the Whig interpretation of history, the Glorious Revolution was an example of the reclaiming of ancient liberties. Reinforced with Lockean concepts, the Whigs believed England's constitution to be a social contract, based on documents such as Magna Carta, the Petition of Right, and the Bill of Rights. The English Liberties (1680, in later versions often British Liberties) by the Whig propagandist Henry Care (d. 1688) was a cheap polemical book that was influential and much-reprinted, in the American colonies as well as Britain, and made Magna Carta central to the history and the contemporary legitimacy of its subject.
Ideas about the nature of law in general were beginning to change. In 1716, the Septennial Act was passed, which had a number of consequences. First, it showed that Parliament no longer considered its previous statutes unassailable, as it provided for a maximum parliamentary term of seven years, whereas the Triennial Act (1694) (enacted less than a quarter of a century previously) had provided for a maximum term of three years.
It also greatly extended the powers of Parliament. Under this new constitution, monarchical absolutism was replaced by parliamentary supremacy. It was quickly realised that Magna Carta stood in the same relation to the King-in-Parliament as it had to the King without Parliament. This supremacy would be challenged by the likes of Granville Sharp. Sharp regarded Magna Carta as a fundamental part of the constitution, and maintained that it would be treason to repeal any part of it. He also held that the Charter prohibited slavery.
Sir William Blackstone published a critical edition of the 1215 Charter in 1759, and gave it the numbering system still used today. In 1763, Member of Parliament John Wilkes was arrested for writing an inflammatory pamphlet, No. 45, 23 April 1763; he cited Magna Carta continually. Lord Camden denounced the treatment of Wilkes as a contravention of Magna Carta. Thomas Paine, in his Rights of Man, would disregard Magna Carta and the Bill of Rights on the grounds that they were not a written constitution devised by elected representatives.
When English colonists left for the New World, they brought royal charters that established the colonies. The Massachusetts Bay Company charter, for example, stated that the colonists would "have and enjoy all liberties and immunities of free and natural subjects." The Virginia Charter of 1606, which was largely drafted by Sir Edward Coke, stated that the colonists would have the same "liberties, franchises and immunities" as people born in England. The Massachusetts Body of Liberties contained similarities to clause 29 of Magna Carta; when drafting it, the Massachusetts General Court viewed Magna Carta as the chief embodiment of English common law. The other colonies would follow their example. In 1638, Maryland sought to recognise Magna Carta as part of the law of the province, but the request was denied by Charles I.
In 1687, William Penn published The Excellent Privilege of Liberty and Property: being the birth-right of the Free-Born Subjects of England, which contained the first copy of Magna Carta printed on American soil. Penn's comments reflected Coke's, indicating a belief that Magna Carta was a fundamental law. The colonists drew on English law books, leading them to an anachronistic interpretation of Magna Carta, believing that it guaranteed trial by jury and habeas corpus.
The development of parliamentary supremacy in the British Isles did not constitutionally affect the Thirteen Colonies, which retained an adherence to English common law, but it directly affected the relationship between Britain and the colonies. When American colonists fought against Britain, they were fighting not so much for new freedom, but to preserve liberties and rights that they believed to be enshrined in Magna Carta.
In the late 18th century, the United States Constitution became the supreme law of the land, recalling the manner in which Magna Carta had come to be regarded as fundamental law. The Constitution's Fifth Amendment guarantees that "no person shall be deprived of life, liberty, or property, without due process of law", a phrase that was derived from Magna Carta. In addition, the Constitution included a similar writ in the Suspension Clause, Article 1, Section 9: "The privilege of the writ of habeas corpus shall not be suspended, unless when in cases of rebellion or invasion, the public safety may require it."
Each of these proclaims that no person may be imprisoned or detained without evidence that he or she committed a crime. The Ninth Amendment states that "The enumeration in the Constitution, of certain rights, shall not be construed to deny or disparage others retained by the people." The writers of the U.S. Constitution wished to ensure that the rights they already held, such as those that they believed were provided by Magna Carta, would be preserved unless explicitly curtailed.
Initially, the Whig interpretation of Magna Carta and its role in constitutional history remained dominant during the 19th century. The historian William Stubbs's Constitutional History of England, published in the 1870s, formed the high-water mark of this view. Stubbs argued that Magna Carta had been a major step in the shaping of the English nation, and he believed that the barons at Runnymede in 1215 were not just representing the nobility, but the people of England as a whole, standing up to a tyrannical ruler in the form of King John.
This view of Magna Carta began to recede. The late-Victorian jurist and historian Frederic William Maitland provided an alternative academic history in 1899, which began to return Magna Carta to its historical roots. In 1904, Edward Jenks published an article entitled "The Myth of Magna Carta", which undermined the previously accepted view of Magna Carta. Historians such as Albert Pollard agreed with Jenks in concluding that Edward Coke had largely "invented" the myth of Magna Carta in the 17th century; these historians argued that the 1215 charter had not referred to liberty for the people at large, but rather to the protection of baronial rights.
This view also became popular in wider circles, and in 1930 Sellar and Yeatman published their parody on English history, 1066 and All That, in which they mocked the supposed importance of Magna Carta and its promises of universal liberty: "Magna Charter was therefore the chief cause of Democracy in England, and thus a Good Thing for everyone (except the Common People)".
In many literary representations of the medieval past, however, Magna Carta remained a foundation of English national identity. Some authors used the medieval roots of the document as an argument to preserve the social status quo, while others pointed to Magna Carta to challenge perceived economic injustices. The Baronial Order of Magna Charta was formed in 1898 to promote the ancient principles and values felt to be displayed in Magna Carta. The legal profession in England and the United States continued to hold Magna Carta in high esteem; they were instrumental in forming the Magna Carta Society in 1922 to protect the meadows at Runnymede from development in the 1920s, and in 1957, the American Bar Association erected the Magna Carta Memorial at Runnymede. The prominent lawyer Lord Denning described Magna Carta in 1956 as "the greatest constitutional document of all times – the foundation of the freedom of the individual against the arbitrary authority of the despot".
Radicals such as Sir Francis Burdett believed that Magna Carta could not be repealed, but in the 19th century clauses which were obsolete or had been superseded began to be repealed. The repeal of clause 36 in 1829, by the Offences against the Person Act 1828 (9 Geo. 4 c. 31 s. 1), was the first time a clause of Magna Carta was repealed. Over the next 140 years, nearly the whole of Magna Carta (1297) as statute was repealed, leaving just clauses 1, 9, and 29 still in force (in England and Wales) after 1969. Most of the clauses were repealed in England and Wales by the Statute Law Revision Act 1863, and in modern Northern Ireland and also in the modern Republic of Ireland by the Statute Law (Ireland) Revision Act 1872.
Many later attempts to draft constitutional forms of government trace their lineage back to Magna Carta. The British dominions, Australia and New Zealand, Canada (except Quebec), and formerly the Union of South Africa and Southern Rhodesia, reflected the influence of Magna Carta in their laws, and the Charter's effects can be seen in the laws of other states that evolved from the British Empire.
Magna Carta continues to have a powerful iconic status in British society, being cited by politicians and lawyers in support of constitutional positions. Its perceived guarantee of trial by jury and other civil liberties, for example, led to Tony Benn's reference to the debate in 2008 over whether to increase the maximum time terrorism suspects could be held without charge from 28 to 42 days as "the day Magna Carta was repealed". Although rarely invoked in court in the modern era, in 2012 the Occupy London protestors attempted to use Magna Carta in resisting their eviction from St. Paul's Churchyard by the City of London. In his judgment the Master of the Rolls gave this short shrift, noting somewhat drily that although clause 29 was considered by many the foundation of the rule of law in England, he did not consider it directly relevant to the case, and the two other surviving clauses actually concerned the rights of the Church and the City of London.
Magna Carta carries little legal weight in modern Britain, as most of its clauses have been repealed and relevant rights ensured by other statutes, but the historian James Holt remarks that the survival of the 1215 charter in national life is a "reflexion of the continuous development of English law and administration" and symbolic of the many struggles between authority and the law over the centuries. The historian W. L. Warren has observed that "many who knew little and cared less about the content of the Charter have, in nearly all ages, invoked its name, and with good cause, for it meant more than it said".
It also remains a topic of great interest to historians; Natalie Fryde characterised the charter as "one of the holiest of cows in English medieval history", with the debates over its interpretation and meaning unlikely to end. In many ways still a "sacred text", Magna Carta is generally considered part of the uncodified constitution of the United Kingdom; in a 2005 speech, the Lord Chief Justice of England and Wales, Lord Woolf, described it as the "first of a series of instruments that now are recognised as having a special constitutional status".
The document also continues to be honoured in the United States as an antecedent of the United States Constitution and Bill of Rights. In 1976, the UK lent one of four surviving originals of the 1215 Magna Carta to the United States for their bicentennial celebrations and also donated an ornate display case for it. The original was returned after one year, but a replica and the case are still on display in the United States Capitol Crypt in Washington, D.C.
The 800th anniversary of the original charter occurred on 15 June 2015, and organisations and institutions planned celebratory events. The British Library brought together the four existing copies of the 1215 manuscript in February 2015 for a special exhibition. British artist Cornelia Parker was commissioned to create a new artwork, Magna Carta (An Embroidery), which was shown at the British Library between May and July 2015. The artwork is a copy of the Wikipedia article about Magna Carta (as it appeared on the document's 799th anniversary, 15 June 2014), hand-embroidered by over 200 people.
On 15 June 2015, a commemoration ceremony was conducted in Runnymede at the National Trust park, attended by British and American dignitaries.
The copy held by Lincoln Cathedral was exhibited in the Library of Congress in Washington, D.C., from November 2014 until January 2015. A new visitor centre at Lincoln Castle was opened for the anniversary. The Royal Mint released two commemorative two-pound coins.
Numerous copies, known as exemplifications, were made of the various charters, and many of them still survive. The documents were written in heavily abbreviated medieval Latin in clear handwriting, using quill pens on sheets of parchment made from sheep skin. They were sealed with the royal great seal by an official called the spigurnel, equipped with a special seal press, using beeswax and resin. There were no signatures on the charter of 1215, and the barons present did not attach their own seals to it. The text was not divided into paragraphs or numbered clauses: the numbering system used today was introduced by the jurist Sir William Blackstone in 1759.
At least thirteen original copies of the charter of 1215 were issued by the royal chancery during that year, seven in the first tranche distributed on 24 June and another six later; they were sent to county sheriffs and bishops, who were probably charged for the privilege. Slight variations exist between the surviving copies, and there was probably no single "master copy". Of these documents, only four survive, all held in England: two now at the British Library, one at Lincoln Cathedral, and one at Salisbury Cathedral. Each of these versions is slightly different in size and text, and each is considered by historians to be equally authoritative.
The two 1215 charters held by the British Library, known as Cotton MS. Augustus II.106 and Cotton Charter XIII.31a, were acquired by the antiquarian Sir Robert Cotton in the 17th century. The first had been found by Humphrey Wyems, a London lawyer, who may have discovered it in a tailor's shop, and who gave it to Cotton in January 1629. The second was found in Dover Castle in 1630 by Sir Edward Dering. The Dering charter was traditionally thought to be the copy sent in 1215 to the Cinque Ports; but in 2015 the historian David Carpenter argued that it was more probably that sent to Canterbury Cathedral, as its text was identical to a transcription made from the Cathedral's copy of the 1215 charter in the 1290s. This copy was damaged in the Cotton library fire of 1731, when its seal was badly melted. The parchment was somewhat shrivelled but otherwise relatively unscathed, and an engraved facsimile of the charter was made by John Pine in 1733. In the 1830s, however, an ill-judged and bungled attempt at cleaning and conservation rendered the manuscript largely illegible to the naked eye. This is, nonetheless, the only surviving 1215 copy still to have its great seal attached.
Lincoln Cathedral's copy has been held by the county since 1215. It was displayed in the Common Chamber in the cathedral, before being moved to another building in 1846. Between 1939 and 1940 it was displayed in the British Pavilion at the 1939 World's Fair in New York City, and at the Library of Congress. When the Second World War broke out, Winston Churchill wanted to give the charter to the American people, hoping that this would encourage the United States, then neutral, to enter the war against the Axis powers, but the cathedral was unwilling, and the plans were dropped. After December 1941, the copy was stored in Fort Knox, Kentucky, for safety, before being put on display again in 1944 and returned to Lincoln Cathedral in early 1946. It was put on display in 1976 in the cathedral's medieval library. It was subsequently displayed in San Francisco, and was taken off display for a time to undergo conservation in preparation for another visit to the United States, where it was exhibited in 2007 at the Contemporary Art Center of Virginia and the National Constitution Center in Philadelphia. In 2009 it returned to New York to be displayed at the Fraunces Tavern Museum. It is currently on permanent loan to the David P. J. Ross Vault at Lincoln Castle, along with an original copy of the 1217 Charter of the Forest.
The fourth copy, held by Salisbury Cathedral, was first given in 1215 to its predecessor, Old Sarum Cathedral. Rediscovered by the cathedral in 1812, it has remained in Salisbury throughout its history, except when being taken off-site for restoration work. It is possibly the best preserved of the four, although small pin holes can be seen in the parchment from where it was once pinned up. The handwriting on this version is different from that of the other three, suggesting that it was not written by a royal scribe but rather by a member of the cathedral staff, who then had it exemplified by the royal court.
Other early versions of the charters survive today. Only one exemplification of the 1216 charter survives, held in Durham Cathedral. Four copies of the 1217 charter exist; three of these are held by the Bodleian Library in Oxford and one by Hereford Cathedral. Hereford's copy is occasionally displayed alongside the Mappa Mundi in the cathedral's chained library and has survived along with a small document called the Articuli super Cartas that was sent along with the charter, telling the sheriff of the county how to observe the conditions outlined in the document. One of the Bodleian's copies was displayed at San Francisco's California Palace of the Legion of Honor in 2011.
Four exemplifications of the 1225 charter survive: the British Library holds one, which was preserved at Lacock Abbey until 1945; Durham Cathedral also holds a copy, with the Bodleian Library holding a third. The fourth copy of the 1225 exemplification was held by the museum of the Public Record Office and is now held by The National Archives. The Society of Antiquaries also holds a draft of the 1215 charter (discovered in 2013 in a late 13th-century register from Peterborough Abbey), a copy of the 1225 third re-issue (within an early 14th-century collection of statutes) and a roll copy of the 1225 reissue.
Only two exemplifications of Magna Carta are held outside England, both from 1297. One of these was purchased in 1952 by the Australian Government for £12,500 from King's School, Bruton, England. This copy is now on display in the Members' Hall of Parliament House, Canberra. The second was originally held by the Brudenell family, earls of Cardigan, before they sold it in 1984 to the Perot Foundation in the United States, which in 2007 sold it to U.S. businessman David Rubenstein for US$21.3 million. Rubenstein commented "I have always believed that this was an important document to our country, even though it wasn't drafted in our country. I think it was the basis for the Declaration of Independence and the basis for the Constitution". This exemplification is now on permanent loan to the National Archives in Washington, D.C. Only two other 1297 exemplifications survive, one of which is held in the UK's National Archives, the other in the Guildhall, London.
Seven copies of the 1300 exemplification by Edward I survive, in Faversham, Oriel College, Oxford, the Bodleian Library, Durham Cathedral, Westminster Abbey, the City of London (held in the archives at the London Guildhall) and Sandwich (held in the Kent County Council archives). The Sandwich copy was rediscovered in early 2015 in a Victorian scrapbook in the town archives of Sandwich, Kent, one of the Cinque Ports. In the case of the Sandwich and Oriel College exemplifications, the copies of the Charter of the Forest originally issued with them also survive.
Most of the 1215 charter and later versions sought to govern the feudal rights of the Crown over the barons. Under the Angevin kings, and in particular during John's reign, the rights of the King had frequently been used inconsistently, often in an attempt to maximise the royal income from the barons. Feudal relief was one way that a king could demand money, and clauses 2 and 3 fixed the fees payable when an heir inherited an estate or when a minor came of age and took possession of his lands. Scutage was a form of medieval taxation; all knights and nobles owed military service to the Crown in return for their lands, which theoretically belonged to the King, but many preferred to avoid this service and offer money instead; the Crown often used the cash to pay for mercenaries. The rate of scutage that should be payable, and the circumstances under which it was appropriate for the King to demand it, was uncertain and controversial; clauses 12 and 14 addressed the management of the process.
The English judicial system had altered considerably over the previous century, with the royal judges playing a larger role in delivering justice across the country. John had used his royal discretion to extort large sums of money from the barons, effectively taking payment to offer justice in particular cases, and the role of the Crown in delivering justice had become politically sensitive among the barons. Clauses 39 and 40 demanded due process be applied in the royal justice system, while clause 45 required that the King appoint knowledgeable royal officials to the relevant roles. Although these clauses did not have any special significance in the original charter, this part of Magna Carta became singled out as particularly important in later centuries. In the United States, for example, the Supreme Court of California interpreted clause 45 in 1974 as establishing a requirement in common law that a defendant faced with the potential of incarceration be entitled to a trial overseen by a legally trained judge.
Royal forests were economically important in medieval England and were both protected and exploited by the Crown, supplying the King with hunting grounds, raw materials, and money. They were subject to special royal jurisdiction and the resulting forest law was, according to the historian Richard Huscroft, "harsh and arbitrary, a matter purely for the King's will". The size of the forests had expanded under the Angevin kings, an unpopular development.
The 1215 charter had several clauses relating to the royal forests; clauses 47 and 48 promised to deforest the lands added to the forests under John and investigate the use of royal rights in this area, but notably did not address the forestation of the previous kings, while clause 53 promised some form of redress for those affected by the recent changes, and clause 44 promised some relief from the operation of the forest courts. Neither Magna Carta nor the subsequent Charter of the Forest proved entirely satisfactory as a way of managing the political tensions arising in the operation of the royal forests.
Some of the clauses addressed wider economic issues. The concerns of the barons over the treatment of their debts to Jewish moneylenders, who occupied a special position in medieval England and were by tradition under the King's protection, were addressed by clauses 10 and 11. The charter concluded this section with the phrase "debts owing to other than Jews shall be dealt with likewise", so it is debatable to what extent the Jews were being singled out by these clauses. Some issues were relatively specific, such as clause 33 which ordered the removal of all fishing weirs—an important and growing source of revenue at the time—from England's rivers.
The role of the English Church had been a matter for great debate in the years prior to the 1215 charter. The Norman and Angevin kings had traditionally exercised a great deal of power over the church within their territories. From the 1040s onwards successive popes had emphasised the importance of the church being governed more effectively from Rome, and had established an independent judicial system and hierarchical chain of authority. After the 1140s, these principles had been largely accepted within the English church, even if accompanied by an element of concern about centralising authority in Rome.
These changes brought the customary rights of lay rulers such as John over ecclesiastical appointments into question. As described above, John had come to a compromise with Pope Innocent III in exchange for his political support for the King, and clause 1 of Magna Carta prominently displayed this arrangement, promising the freedoms and liberties of the church. The importance of this clause may also reflect the role of Archbishop Langton in the negotiations: Langton had taken a strong line on this issue during his career.
| Clause | Description | Included in later charters | Status |
|---|---|---|---|
| 1 | Guaranteed the freedom of the English Church. | Y | Still in UK (England and Wales) law as clause 1 in the 1297 statute. |
| 2 | Regulated the operation of feudal relief upon the death of a baron. | Y | Repealed by Statute Law Revision Act 1863 and Statute Law (Ireland) Revision Act 1872. |
| 3 | Regulated the operation of feudal relief and minors' coming of age. | Y | Repealed by Statute Law Revision Act 1863 and Statute Law (Ireland) Revision Act 1872. |
| 4 | Regulated the process of wardship, and the role of the guardian. | Y | Repealed by Statute Law Revision Act 1863 and Statute Law (Ireland) Revision Act 1872. |
| 5 | Forbade the exploitation of a ward's property by his guardian. | Y | Repealed by Statute Law Revision Act 1863 and Statute Law (Ireland) Revision Act 1872. |
| 6 | Forbade guardians from marrying a ward to a partner of lower social standing. | Y | Repealed by Statute Law Revision Act 1863 and Statute Law (Ireland) Revision Act 1872. |
| 7 | Referred to the rights of a widow to receive promptly her dowry and inheritance. | Y | Repealed by Statute Law Revision Act 1863 and Statute Law (Ireland) Revision Act 1872. |
| 8 | Forbade the forcible remarrying of widows and confirmed the royal veto over baronial marriages. | Y | Repealed by Administration of Estates Act 1925, Administration of Estates Act (Northern Ireland) 1955 and Statute Law (Repeals) Act 1969. |
| 9 | Established protection for debtors, confirming that a debtor should not have his lands seized as long as he had other means to pay the debt. | Y | Repealed by Statute Law (Repeals) Act 1969. |
| 10 | Regulated Jewish money lending, stating that children would not pay interest on a debt they had inherited while they were under age. | N | |
| 11 | Further addressed Jewish money lending, stating that a widow and children should be provided for before paying an inherited debt. | N | |
| 12 | Determined that scutage or aid, forms of medieval taxation, could be levied and assessed only by the common consent of the realm. | N | Some exceptions to this general rule were given, such as for the payment of ransoms. |
| 13 | Confirmed the liberties and customs of the City of London and other boroughs. | Y | Still in UK (England and Wales) law as clause 9 in the 1297 statute. |
| 14 | Described how senior churchmen and barons would be summoned to give consent for scutage and aid. | N | |
| 15 | Prohibited anyone from levying aid on their free men. | N | Some exceptions to this general rule were given, such as for the payment of ransoms. |
| 16 | Placed limits on the level of service required for a knight's fee. | Y | Repealed by Statute Law Revision Act 1948. |
| 17 | Established a fixed law court rather than one which followed the movements of the King. | Y | Repealed by Civil Procedure Acts Repeal Act 1879. |
| 18 | Defined the authority and frequency of county courts. | Y | Repealed by Civil Procedure Acts Repeal Act 1879. |
| 19 | Determined how excess business of a county court should be dealt with. | Y | |
| 20 | Stated that an amercement, a type of medieval fine, should be proportionate to the offence, but even for a serious offence the fine should not be so heavy as to deprive a man of his livelihood. Fines should be imposed only through local assessment. | Y | Repealed by Statute Law Revision Act 1863 and Statute Law (Ireland) Revision Act 1872. |
| 21 | Determined that earls and barons should be fined only by other earls and barons. | Y | Repealed by Statute Law Revision Act 1863 and Statute Law (Ireland) Revision Act 1872. |
| 22 | Determined that the size of a fine on a member of the clergy should be independent of the ecclesiastical wealth held by the individual churchman. | Y | Repealed by Statute Law Revision Act 1863 and Statute Law (Ireland) Revision Act 1872. |
| 23 | Limited the right of feudal lords to demand assistance in building bridges across rivers. | Y | Repealed by Statute Law (Repeals) Act 1969. |
| 24 | Prohibited royal officials, such as sheriffs, from trying a crime as an alternative to a royal judge. | Y | Repealed by Statute Law (Repeals) Act 1969. |
| 25 | Fixed the royal rents on lands, with the exception of royal demesne manors. | N | |
| 26 | Established a process for dealing with the death of those owing debts to the Crown. | Y | Repealed by Crown Proceedings Act 1947. |
| 27 | Laid out the process for dealing with intestacy. | N | |
| 28 | Determined that a royal officer requisitioning goods must offer immediate payment to their owner. | Y | Repealed by Statute Law Revision Act 1863 and Statute Law (Ireland) Revision Act 1872. |
| 29 | Regulated the exercise of castle-guard duty. | Y | Repealed by Statute Law Revision Act 1863 and Statute Law (Ireland) Revision Act 1872. |
| 30 | Prevented royal officials from requisitioning horses or carts without the owner's consent. | Y | Repealed by Statute Law Revision Act 1863 and Statute Law (Ireland) Revision Act 1872. |
| 31 | Prevented royal officials from requisitioning timber without the owner's consent. | Y | Repealed by Statute Law Revision Act 1863 and Statute Law (Ireland) Revision Act 1872. |
| 32 | Prevented the Crown from confiscating the lands of felons for longer than a year and a day, after which they were to be returned to the relevant feudal lord. | Y | Repealed by Statute Law Revision Act 1948. |
| 33 | Ordered the removal of all fish weirs from rivers. | Y | Repealed by Statute Law (Repeals) Act 1969. |
| 34 | Forbade the issuing of writs of praecipe if doing so would undermine the right of trial in a local feudal court. | Y | Repealed by Statute Law Revision Act 1863 and Statute Law (Ireland) Revision Act 1872. |
| 35 | Ordered the establishment of standard measures for wine, ale, corn, and cloth. | Y | Repealed by Statute Law Revision Act 1948. |
| 36 | Determined that writs for loss of life or limb were to be freely given without charge. | Y | Repealed by Offences against the Person Act 1828 and Offences against the Person (Ireland) Act 1829. |
| 37 | Regulated the inheritance of Crown lands held by "fee-farm". | Y | Repealed by Statute Law Revision Act 1863 and Statute Law (Ireland) Revision Act 1872. |
| 38 | Stated that no one should be put on trial based solely on the unsupported word of a royal official. | Y | Repealed by Statute Law Revision Act 1863 and Statute Law (Ireland) Revision Act 1872. |
| 39 | Stated that no free man could be imprisoned or stripped of his rights or possessions without due process being legally applied. | Y | Still in UK (England and Wales) law as part of clause 29 in the 1297 statute. |
| 40 | Forbade the selling of justice, or its denial or delay. | Y | Still in UK (England and Wales) law as part of clause 29 in the 1297 statute. |
| 41 | Guaranteed the safety and the right of entry and exit of foreign merchants. | Y | Repealed by Statute Law (Repeals) Act 1969. |
| 42 | Permitted men to leave England for short periods without prejudicing their allegiance to the King, with exceptions for outlaws and wartime. | N | |
| 43 | Established special provisions for taxes due on estates temporarily held by the Crown. | Y | Repealed by Statute Law Revision Act 1863 and Statute Law (Ireland) Revision Act 1872. |
| 44 | Limited the need for people to attend forest courts unless they were actually involved in the proceedings. | Y | |
| 45 | Stated that the King should appoint only justices, constables, sheriffs, or bailiffs who knew and would enforce the law. | N | |
| 46 | Permitted barons to take guardianship of monasteries in the absence of an abbot. | Y | Repealed by Statute Law Revision Act 1863 and Statute Law (Ireland) Revision Act 1872. |
| 47 | Abolished those royal forests newly created under King John's reign. | Y | |
| 48 | Established an investigation of "evil customs" associated with royal forests, with the intent of abolishing them. | N | |
| 49 | Ordered the return of hostages held by the King. | N | |
| 50 | Forbade any member of the d'Athée family from serving as a royal officer. | N | |
| 51 | Ordered that all foreign knights and mercenaries leave England once peace was restored. | N | |
| 52 | Established a process for giving restitution to those who had been unlawfully dispossessed of their property or rights. | N | |
| 53 | Established a process for giving restitution to those who had been mistreated by forest law. | N | |
| 54 | Prevented men from being arrested or imprisoned on the testimony of a woman, unless the case involved the death of her husband. | Y | Repealed by Statute Law Revision Act 1863 and Statute Law (Ireland) Revision Act 1872. |
| 55 | Established a process for remitting any unjust fines imposed by the King. | N | Repealed by Statute Law Revision Act 1863 and Statute Law (Ireland) Revision Act 1872. |
| 56 | Established a process for dealing with Welshmen who had been unlawfully dispossessed of their property or rights. | Y | |
| 57 | Established a process for returning the possessions of Welshmen who had been unlawfully dispossessed. | N | |
| 58 | Ordered the return of Welsh hostages, including Prince Llywelyn's son. | N | |
| 59 | Established a process for the return of Scottish hostages, including King Alexander's sisters. | N | |
| 60 | Encouraged others in England to deal with their own subjects as the King dealt with his. | Y | |
| 61 | Provided for the application and observation of the charter by twenty-five of the barons. | N | |
| 62 | Pardoned those who had rebelled against the King. | N | Sometimes considered a subclause, "Suffix A", of clause 61. |
| 63 | Stated that the charter was binding on King John and his heirs. | N | Sometimes considered a subclause, "Suffix B", of clause 61. |
Only three clauses of Magna Carta still remain on statute in England and Wales: clause 1, guaranteeing the freedom of the English Church; clause 9 of the 1297 statute (clause 13 in the 1215 charter), confirming the "ancient liberties" of the City of London; and clause 29 of the 1297 statute (clauses 39 and 40 in the 1215 charter), guaranteeing a right to due legal process.
Black, Charles. A New Birth of Freedom: Human Rights, Named and Unnamed. New Haven, CT: Yale University Press, 1999. ISBN 978-0300077346.
Breay, Claire. Magna Carta: Manuscripts and Myths. London: The British Library, 2010. ISBN 978-0-7123-5833-0.
Carpenter, David A. The Minority of Henry III. Berkeley and Los Angeles: University of California Press, 1990. ISBN 978-0413623607.
Carpenter, David A. Struggle for Mastery: The Penguin History of Britain 1066–1284. London: Penguin, 2004. ISBN 978-0-14-014824-4.
Clanchy, Michael T. Early Medieval England. The Folio Society, 1997.
Galef, David. Second Thoughts: Focus on Rereading. Detroit, MI: Wayne State University Press, 1998. ISBN 978-0814326473.
Hill, Christopher. Winstanley: 'The Law of Freedom' and Other Writings. Cambridge: Cambridge University Press, 2006. ISBN 978-0521031608.
Holt, James C. The Northerners: A Study in the Reign of King John. Oxford: Oxford University Press, 1992a. ISBN 978-0198203094.
Holt, James C. Magna Carta. Cambridge: Cambridge University Press, 1992b. ISBN 978-0521277785.
Linebaugh, Peter. The Magna Carta Manifesto: Liberties and Commons for All. Berkeley: University of California Press, 2009. ISBN 978-0520260009.
Mayr-Harting, Henry. Religion, Politics and Society in Britain, 1066–1272. Harlow, UK: Longman, 2011. ISBN 978-0-582-41413-6.
Pocock, J. G. A. The Ancient Constitution and the Feudal Law: A Study of English Historical Thought in the Seventeenth Century. Cambridge: Cambridge University Press, 1987. ISBN 978-0521316439.
Pollard, Albert Frederick. The History of England: A Study in Political Evolution. H. Holt, 1912.
Poole, Austin Lane. From Domesday Book to Magna Carta 1087–1216. 2nd ed. Oxford: Oxford University Press, 1993 [first published 1951].
Powicke, Frederick Maurice. The Thirteenth Century 1216–1307. Oxford: Oxford University Press, 1963. ISBN 978-0198217084.
Prestwich, Michael. Edward I. New Haven, CT: Yale University Press, 1997. ISBN 978-0300071573.
Russell, Conrad. Unrevolutionary England, 1603–1642. Continuum, 1990. ISBN 978-1852850258.
Stimson, Frederic Jesup. The Law of the Federal and State Constitutions of the United States. Lawbook Exchange Ltd, 2004. ISBN 978-1584773696.
Warren, W. Lewis. King John. London: Methuen, 1990. ISBN 978-0413455208.
McKechnie, William Sharp. Magna Carta: A Commentary on the Great Charter of King John with an Historical Introduction. Glasgow, UK: James Maclehose and Sons, 1914.
Individualism is the moral stance, political philosophy, ideology, or social outlook that emphasizes the moral worth of the individual. Individualists promote the exercise of one's goals and desires and so value independence and self-reliance and advocate that interests of the individual should achieve precedence over the state or a social group, while opposing external interference upon one's own interests by society or institutions such as the government. Individualism is often defined in contrast to totalitarianism, collectivism, and more corporate social forms.
Individualism makes the individual its focus and so starts "with the fundamental premise that the human individual is of primary importance in the struggle for liberation." Classical liberalism, existentialism, and anarchism are examples of movements that take the human individual as a central unit of analysis. Individualism thus involves "the right of the individual to freedom and self-realization".
It has also been used as a term denoting "The quality of being an individual; individuality" related to possessing "An individual characteristic; a quirk." Individualism is thus also associated with artistic and bohemian interests and lifestyles where there is a tendency towards self-creation and experimentation as opposed to tradition or popular mass opinions and behaviors, as with humanist philosophical positions and ethics.
In the English language, the word "individualism" was first introduced, as a pejorative, by the Owenites in the late 1830s, although it is unclear if they were influenced by Saint-Simonianism or came up with it independently. A more positive use of the term in Britain came with the writings of James Elishama Smith, who was a millenarian and a Christian Israelite. Although an early Owenite socialist, he eventually rejected its collective idea of property and found in individualism a "universalism" that allowed for the development of the "original genius". Without individualism, Smith argued, individuals cannot amass property to increase their happiness. William Maccall, another Unitarian preacher and probably an acquaintance of Smith, arrived somewhat later, influenced by John Stuart Mill, Thomas Carlyle, and German Romanticism, at the same positive conclusions in his 1847 work Elements of Individualism.
An individual is a person or any specific object in a collection. In the 15th century and earlier, and also today within the fields of statistics and metaphysics, individual means "indivisible", typically describing any numerically singular thing, but sometimes meaning "a person." (q.v. "The problem of proper names"). From the 17th century on, individual indicates separateness, as in individualism. Individuality is the state or quality of being an individual; a person separate from other persons and possessing his or her own needs, goals, and desires.
Individualism and society
Individualism holds that a person taking part in society attempts to learn and discover what his or her own interests are on a personal basis, without a presumed following of the interests of a societal structure (an individualist need not be an egoist). The individualist does not follow one particular philosophy, but rather creates an amalgamation of elements of many, based on personal interests in particular aspects that he or she finds of use. On a societal level, the individualist participates on a personally structured political and moral ground. Independent thinking and opinion is a common trait of an individualist. Jean-Jacques Rousseau claims that his concept of the "general will" in The Social Contract is not the simple collection of individual wills and that it furthers the interests of the individual (the constraint of law itself would be beneficial for the individual, as the lack of respect for the law necessarily entails, in Rousseau's eyes, a form of ignorance and submission to one's passions instead of the preferred autonomy of reason).
Societies and groups can differ in the extent to which they are based upon predominantly "self-regarding" (individualistic, and/or self-interested) behaviors, rather than "other-regarding" (group-oriented, and group, or society-minded) behaviors. Ruth Benedict made a distinction, relevant in this context, between "guilt" societies (e.g., medieval Europe) with an "internal reference standard", and "shame" societies (e.g., Japan, "bringing shame upon one's ancestors") with an "external reference standard", where people look to their peers for feedback on whether an action is "acceptable" or not (also known as "group-think").
Individualism is often contrasted either with totalitarianism or with collectivism, but in fact there is a spectrum of behaviors at the societal level, ranging from highly individualistic societies through mixed societies to collectivist ones.
The principle of individuation, or principium individuationis, describes the manner in which a thing is identified as distinguished from other things. For Carl Jung, individuation is a process of transformation, whereby the personal and collective unconscious is brought into consciousness (by means of dreams, active imagination or free association, to take some examples) to be assimilated into the whole personality. It is a completely natural process necessary for the integration of the psyche to take place. Jung considered individuation to be the central process of human development. In L'individuation psychique et collective, Gilbert Simondon developed a theory of individual and collective individuation in which the individual subject is considered as an effect of individuation rather than a cause. Thus, the individual atom is replaced by a never-ending ontological process of individuation. Individuation is an always incomplete process, always leaving a "pre-individual" left-over, itself making possible future individuations. The philosophy of Bernard Stiegler draws upon and modifies the work of Gilbert Simondon on individuation and also upon similar ideas in Friedrich Nietzsche and Sigmund Freud. For Stiegler "the I, as a psychic individual, can only be thought in relationship to we, which is a collective individual. The I is constituted in adopting a collective tradition, which it inherits and in which a plurality of I's acknowledge each other's existence."
Methodological individualism is the view that phenomena can only be understood by examining how they result from the motivations and actions of individual agents. In economics, people's behavior is explained in terms of rational choices, as constrained by prices and incomes. The economist accepts individuals' preferences as givens. Becker and Stigler provide a forceful statement of this view:
- On the traditional view, an explanation of economic phenomena that reaches a difference in tastes between people or times is the terminus of the argument: the problem is abandoned at this point to whoever studies and explains tastes (psychologists? anthropologists? phrenologists? sociobiologists?). On our preferred interpretation, one never reaches this impasse: the economist continues to search for differences in prices or incomes to explain any differences or changes in behavior.
Individualists are chiefly concerned with protecting individual autonomy against obligations imposed by social institutions (such as the state or religious morality). For L. Susan Brown "Liberalism and anarchism are two political philosophies that are fundamentally concerned with individual freedom yet differ from one another in very distinct ways. Anarchism shares with liberalism a radical commitment to individual freedom while rejecting liberalism's competitive property relations."
Civil libertarianism is a strain of political thought that supports civil liberties, or which emphasizes the supremacy of individual rights and personal freedoms over and against any kind of authority (such as a state, a corporation, social norms imposed through peer pressure, etc.). Civil libertarianism is not a complete ideology; rather, it is a collection of views on the specific issues of civil liberties and civil rights. Because of this, a civil libertarian outlook is compatible with many other political philosophies, and civil libertarianism is found on both the right and left in modern politics. For scholar Ellen Meiksins Wood, "there are doctrines of individualism that are opposed to Lockean individualism ... and non-Lockean individualism may encompass socialism".
British historians Emily Robinson, Camilla Schofield, Florence Sutcliffe-Braithwaite, and Natalie Thomlinson have argued that by the 1970s Britons were keen to define and claim their individual rights, identities, and perspectives. They demanded greater personal autonomy and self-determination and less outside control, and complained angrily that the 'establishment' was withholding these. They argue that this shift in concerns helped cause Thatcherism and was incorporated into Thatcherism's appeal.
Liberalism (from the Latin liberalis, "of freedom; worthy of a free man, gentlemanlike, courteous, generous") is the belief in the importance of individual freedom. This belief is widely accepted in the United States, Europe, Australia and other Western nations, and was recognized as an important value by many Western philosophers throughout history, in particular since the Enlightenment. It is often rejected by collectivist, Islamic, or Confucian societies in Asia or the Middle East (though Taoists were and are known to be individualists). The Roman Emperor Marcus Aurelius wrote praising "the idea of a polity administered with regard to equal rights and equal freedom of speech, and the idea of a kingly government which respects most of all the freedom of the governed".
For all intents and purposes, liberalism here refers to classical liberalism and should not be confused with modern liberalism in the United States.
Liberalism has its roots in the Age of Enlightenment and rejects many foundational assumptions that dominated most earlier theories of government, such as the Divine Right of Kings, hereditary status, and established religion. John Locke is often credited with the philosophical foundations of classical liberalism. He wrote "no one ought to harm another in his life, health, liberty, or possessions."
In the 17th century, liberal ideas began to influence governments in Europe, in nations such as the Netherlands, Switzerland, England and Poland, but they were strongly opposed, often by armed might, by those who favored absolute monarchy and established religion. In the 18th century, in America, the first modern liberal state was founded, without a monarch or a hereditary aristocracy. The American Declaration of Independence includes the words (which echo Locke) "all men are created equal; that they are endowed by their Creator with certain unalienable rights; that among these are life, liberty, and the pursuit of happiness; that to secure these rights, governments are instituted among men, deriving their just powers from the consent of the governed."
Liberalism comes in many forms. According to John N. Gray, the essence of liberalism is toleration of different beliefs and of different ideas as to what constitutes a good life.
Anarchism is a set of political philosophies that hold the state to be undesirable, unnecessary, or harmful, and often advocate stateless societies. While anti-statism is central, some argue that anarchism entails opposing authority or hierarchical organisation in the conduct of human relations, including, but not limited to, the state system.
For influential Italian anarchist Errico Malatesta "All anarchists, whatever tendency they belong to, are individualists in some way or other. But the opposite is not true; not by any means. The individualists are thus divided into two distinct categories: one which claims the right to full development for all human individuality, their own and that of others; the other which only thinks about its own individuality and has absolutely no hesitation in sacrificing the individuality of others. The Tsar of all the Russias belongs to the latter category of individualists. We belong to the former."
Individualist anarchism refers to several traditions of thought within the anarchist movement that emphasize the individual and their will over any kinds of external determinants such as groups, society, traditions, and ideological systems. Individualist anarchism is not a single philosophy but refers to a group of individualistic philosophies that sometimes are in conflict.
In 1793, William Godwin, who has often been cited as the first anarchist, wrote Political Justice, which some consider to be the first expression of anarchism. Godwin, a philosophical anarchist, opposed revolutionary action from a rationalist and utilitarian basis and saw a minimal state as a present "necessary evil" that would become increasingly irrelevant and powerless through the gradual spread of knowledge. Godwin advocated individualism, proposing that all cooperation in labour be eliminated on the premise that this would be most conducive to the general good.
An influential form of individualist anarchism, called "egoism," or egoist anarchism, was expounded by one of the earliest and best-known proponents of individualist anarchism, the German Max Stirner. Stirner's The Ego and Its Own, published in 1844, is a founding text of the philosophy. According to Stirner, the only limitation on the rights of the individual is their power to obtain what they desire, without regard for God, state, or morality. To Stirner, rights were spooks in the mind, and he held that society does not exist but "the individuals are its reality". Stirner advocated self-assertion and foresaw unions of egoists, non-systematic associations continually renewed by all parties' support through an act of will, which Stirner proposed as a form of organization in place of the state. Egoist anarchists claim that egoism will foster genuine and spontaneous union between individuals. "Egoism" has inspired many interpretations of Stirner's philosophy. It was re-discovered and promoted by German philosophical anarchist and LGBT activist John Henry Mackay.
Josiah Warren is widely regarded as the first American anarchist, and the four-page weekly paper he edited during 1833, The Peaceful Revolutionist, was the first anarchist periodical published. For American anarchist historian Eunice Minette Schuster, "It is apparent...that Proudhonian Anarchism was to be found in the United States at least as early as 1848 and that it was not conscious of its affinity to the Individualist Anarchism of Josiah Warren and Stephen Pearl Andrews...William B. Greene presented this Proudhonian Mutualism in its purest and most systematic form." Henry David Thoreau (1817–1862) was an important early influence in individualist anarchist thought in the United States and Europe. Thoreau was an American author, poet, naturalist, tax resister, development critic, surveyor, historian, philosopher, and leading transcendentalist. He is best known for his book Walden, a reflection upon simple living in natural surroundings, and his essay Civil Disobedience, an argument for individual resistance to civil government in moral opposition to an unjust state. Later, Benjamin Tucker fused Stirner's egoism with the economics of Warren and Proudhon in his eclectic and influential publication Liberty.
From these early influences, individualist anarchism in different countries attracted a small but diverse following of bohemian artists and intellectuals, free love and birth control advocates (see Anarchism and issues related to love and sex), individualist naturists and nudists (see anarcho-naturism), freethought and anti-clerical activists, as well as young anarchist outlaws in what came to be known as illegalism and individual reclamation (see European individualist anarchism and individualist anarchism in France). These authors and activists included Oscar Wilde, Émile Armand, Han Ryner, Henri Zisly, Renzo Novatore, Miguel Giménez Igualada, Adolf Brand and Lev Chernyi, among others. In his important 1891 essay The Soul of Man under Socialism, Oscar Wilde defended socialism as the way to guarantee individualism, writing that "With the abolition of private property, then, we shall have true, beautiful, healthy Individualism. Nobody will waste his life in accumulating things, and the symbols for things. One will live. To live is the rarest thing in the world. Most people exist, that is all." For anarchist historian George Woodcock, "Wilde's aim in The Soul of Man under Socialism is to seek the society most favorable to the artist ... for Wilde art is the supreme end, containing within itself enlightenment and regeneration, to which all else in society must be subordinated ... Wilde represents the anarchist as aesthete." Woodcock finds that "The most ambitious contribution to literary anarchism during the 1890s was undoubtedly Oscar Wilde's The Soul of Man under Socialism" and that it is influenced mainly by the thought of William Godwin.
Ethical egoism (also called simply egoism) is the normative ethical position that moral agents ought to do what is in their own self-interest. It differs from psychological egoism, which claims that people do, in fact, act only in their self-interest. Ethical egoism also differs from rational egoism, which holds merely that it is rational to act in one's self-interest. However, these doctrines may occasionally be combined with ethical egoism.
Ethical egoism contrasts with ethical altruism, which holds that moral agents have an obligation to help and serve others. Egoism and altruism both contrast with ethical utilitarianism, which holds that a moral agent should treat one's self (also known as the subject) with no higher regard than one has for others (as egoism does, by elevating self-interests and "the self" to a status not granted to others), but that one also should not (as altruism does) sacrifice one's own interests to help others' interests, so long as one's own interests (i.e. one's own desires or well-being) are substantially-equivalent to the others' interests and well-being. Egoism, utilitarianism, and altruism are all forms of consequentialism, but egoism and altruism contrast with utilitarianism, in that egoism and altruism are both agent-focused forms of consequentialism (i.e. subject-focused or subjective), but utilitarianism is called agent-neutral (i.e. objective and impartial) as it does not treat the subject's (i.e. the self's, i.e. the moral "agent's") own interests as being more or less important than if the same interests, desires, or well-being were anyone else's.
Ethical egoism does not, however, require moral agents to harm the interests and well-being of others in their moral deliberations; e.g. what is in an agent's self-interest may be incidentally detrimental, beneficial, or neutral in its effect on others. Individualism allows for others' interests and well-being to be disregarded or not, as long as what is chosen is efficacious in satisfying the self-interest of the agent. Nor does ethical egoism necessarily entail that, in pursuing self-interest, one ought always to do what one wants to do; e.g. in the long term, the fulfilment of short-term desires may prove detrimental to the self. Fleeting pleasure, then, takes a back seat to protracted eudaemonia. In the words of James Rachels, "Ethical egoism [...] endorses selfishness, but it doesn't endorse foolishness."
Ethical egoism is sometimes the philosophical basis for support of libertarianism or individualist anarchism as in Max Stirner, although these can also be based on altruistic motivations. These are political positions based partly on a belief that individuals should not coercively prevent others from exercising freedom of action.
Egoist anarchism is a school of anarchist thought that originated in the philosophy of Max Stirner, a nineteenth-century Hegelian philosopher whose "name appears with familiar regularity in historically orientated surveys of anarchist thought as one of the earliest and best-known exponents of individualist anarchism." According to Stirner, the only limitation on the rights of the individual is their power to obtain what they desire, without regard for God, state, or morality. Stirner advocated self-assertion and foresaw unions of egoists, non-systematic associations continually renewed by all parties' support through an act of will, which Stirner proposed as a form of organisation in place of the state. Egoist anarchists argue that egoism will foster genuine and spontaneous union between individuals. "Egoism" has inspired many interpretations of Stirner's philosophy but within anarchism it has also gone beyond Stirner. It was re-discovered and promoted by German philosophical anarchist and LGBT activist John Henry Mackay. John Beverley Robinson wrote an essay called "Egoism" in which he states that "Modern egoism, as propounded by Stirner and Nietzsche, and expounded by Ibsen, Shaw and others, is all these; but it is more. It is the realization by the individual that they are an individual; that, as far as they are concerned, they are the only individual." Nietzsche (see Anarchism and Friedrich Nietzsche) and Stirner were frequently compared by French "literary anarchists" and anarchist interpretations of Nietzschean ideas appear to have also been influential in the United States. Anarchists who adhered to egoism include Benjamin Tucker, Émile Armand, John Beverley Robinson, Adolf Brand, Steven T. Byington, Renzo Novatore, James L. Walker, Enrico Arrigoni, Biofilo Panclasta, Jun Tsuji, André Arru and contemporary ones such as Hakim Bey, Bob Black and Wolfi Landstreicher.
Existentialism is a term applied to the work of a number of 19th- and 20th-century philosophers who, despite profound doctrinal differences, generally held that the focus of philosophical thought should be to deal with the conditions of existence of the individual person and his or her emotions, actions, responsibilities, and thoughts. The early 19th-century philosopher Søren Kierkegaard, posthumously regarded as the father of existentialism, maintained that the individual is solely responsible for giving his or her own life meaning and for living that life passionately and sincerely, in spite of many existential obstacles and distractions including despair, angst, absurdity, alienation, and boredom.
Subsequent existential philosophers retain the emphasis on the individual, but differ, in varying degrees, on how one achieves and what constitutes a fulfilling life, what obstacles must be overcome, and what external and internal factors are involved, including the potential consequences of the existence or non-existence of God. Many existentialists have also regarded traditional systematic or academic philosophy, in both style and content, as too abstract and remote from concrete human experience. Existentialism became fashionable after World War II as a way to reassert the importance of human individuality and freedom.
Freethought holds that individuals should not accept ideas proposed as truth without recourse to knowledge and reason. Thus, freethinkers strive to build their opinions on the basis of facts, scientific inquiry, and logical principles, independent of any logical fallacies or intellectually limiting effects of authority, confirmation bias, cognitive bias, conventional wisdom, popular culture, prejudice, sectarianism, tradition, urban legend, and all other dogmas. Regarding religion, freethinkers hold that there is insufficient evidence to scientifically validate the existence of supernatural phenomena.
Humanism is a perspective common to a wide range of ethical stances that attaches importance to human dignity, concerns, and capabilities, particularly rationality. Although the word has many senses, its meaning comes into focus when contrasted to the supernatural or to appeals to authority. Since the 19th century, humanism has been associated with an anti-clericalism inherited from the 18th-century Enlightenment philosophes. 21st-century humanism tends to strongly endorse human rights, including reproductive rights, gender equality, social justice, and the separation of church and state. The term covers organized non-theistic religions, secular humanism, and a humanistic life stance.
Philosophical hedonism is a meta-ethical theory of value which argues that pleasure is the only intrinsic good and pain is the only intrinsic bad. The basic idea behind hedonistic thought is that pleasure (an umbrella term for all inherently likable emotions) is the only thing that is good in and of itself or by its very nature. This implies evaluating the moral worth of character or behavior according to the extent that the pleasure it produces exceeds the pain it entails.
A libertine is one devoid of most moral restraints, which are seen as unnecessary or undesirable, especially one who ignores or even spurns accepted morals and forms of behaviour sanctified by the larger society. Libertines place value on physical pleasures, meaning those experienced through the senses. As a philosophy, libertinism gained new-found adherents in the 17th, 18th, and 19th centuries, particularly in France and Great Britain. Notable among these were John Wilmot, 2nd Earl of Rochester, and the Marquis de Sade. During the Baroque era in France, there existed a freethinking circle of philosophers and intellectuals who were collectively known as libertinage érudit and which included Gabriel Naudé, Élie Diodati and François de La Mothe Le Vayer. The critic Vivian de Sola Pinto linked John Wilmot, 2nd Earl of Rochester's libertinism to Hobbesian materialism.
Objectivism is a system of philosophy created by philosopher and novelist Ayn Rand (1905–1982) that holds: reality exists independent of consciousness; human beings gain knowledge rationally from perception through the process of concept formation and inductive and deductive logic; the moral purpose of one's life is the pursuit of one's own happiness or rational self-interest. Rand thinks the only social system consistent with this morality is full respect for individual rights, embodied in pure laissez faire capitalism; and the role of art in human life is to transform man's widest metaphysical ideas, by selective reproduction of reality, into a physical form—a work of art—that he can comprehend and to which he can respond emotionally. Objectivism celebrates man as his own hero, "with his own happiness as the moral purpose of his life, with productive achievement as his noblest activity, and reason as his only absolute."
Philosophical anarchism is an anarchist school of thought which contends that the State lacks moral legitimacy and – in contrast to revolutionary anarchism – does not advocate violent revolution to eliminate it, but advocates peaceful evolution to supersede it. Though philosophical anarchism does not necessarily imply any action or desire for the elimination of the State, philosophical anarchists do not believe that they have an obligation or duty to obey the State or, conversely, that the State has a right to command.
Philosophical anarchism is a component especially of individualist anarchism. Philosophical anarchists of historical note include Mohandas Gandhi, William Godwin, Pierre-Joseph Proudhon, Max Stirner, Benjamin Tucker, and Henry David Thoreau. Contemporary philosophical anarchists include A. John Simmons and Robert Paul Wolff.
Subjectivism is a philosophical tenet that accords primacy to subjective experience as fundamental to all measure and law. In extreme forms, such as solipsism, it may hold that the nature and existence of every object depends solely on someone's subjective awareness of it. For example, Wittgenstein wrote in the Tractatus Logico-Philosophicus: "The subject doesn't belong to the world, but it is a limit of the world" (proposition 5.632). Metaphysical subjectivism is the theory that reality is what we perceive to be real, and that there is no underlying true reality that exists independently of perception. One can also hold that it is consciousness rather than perception that is reality (subjective idealism). In probability theory, subjectivism stands for the belief that probabilities are simply degrees of belief held by rational agents in a certain proposition, and have no objective reality in and of themselves.
Ethical subjectivism stands in opposition to moral realism, which claims that moral propositions refer to objective facts, independent of human opinion; to error theory, which denies that any moral propositions are true in any sense; and to non-cognitivism, which denies that moral sentences express propositions at all. The most common forms of ethical subjectivism are also forms of moral relativism, with moral standards held to be relative to each culture or society (c.f. cultural relativism), or even to every individual. The latter view, as put forward by Protagoras, holds that there are as many distinct scales of good and evil as there are subjects in the world. Moral subjectivism is that species of moral relativism that relativizes moral value to the individual subject.
Horst Matthai Quelle was a Spanish-language German anarchist philosopher influenced by Max Stirner. He argued that since the individual gives form to the world, he is those objects, the others, and the whole universe. One of his main views was a "theory of infinite worlds", which for him was developed by the pre-Socratic philosophers.
Solipsism is the philosophical idea that only one's own mind is sure to exist. The term comes from Latin solus (alone) and ipse (self). Solipsism as an epistemological position holds that knowledge of anything outside one's own mind is unsure. The external world and other minds cannot be known, and might not exist outside the mind. As a metaphysical position, solipsism goes further to the conclusion that the world and other minds do not exist. As such it is the only epistemological position that, by its own postulate, is both irrefutable and yet indefensible in the same manner. Although the number of individuals sincerely espousing solipsism has been small, it is not uncommon for one philosopher to accuse another's arguments of entailing solipsism as an unwanted consequence, in a kind of reductio ad absurdum. In the history of philosophy, solipsism has served as a skeptical hypothesis.
The doctrine of economic individualism holds that each individual should be allowed autonomy in making his or her own economic decisions, as opposed to having those decisions made by the state, the community, the corporation, etc., on his or her behalf.
Classical liberalism is a political ideology that developed in the 19th century in England, Western Europe, and the Americas. It followed earlier forms of liberalism in its commitment to personal freedom and popular government, but differed from earlier forms of liberalism in its commitment to free markets and classical economics. Notable classical liberals in the 19th century include Jean-Baptiste Say, Thomas Malthus, and David Ricardo. Classical liberalism was revived in the 20th century by Ludwig von Mises and Friedrich Hayek, and further developed by Milton Friedman, Robert Nozick, Loren Lomasky, and Jan Narveson. The phrase classical liberalism is also sometimes used to refer to all forms of liberalism before the 20th century.
Individualist anarchism and economics
With regard to economic questions within individualist anarchism, there are adherents to mutualism (Pierre-Joseph Proudhon, Émile Armand, early Benjamin Tucker); natural rights positions (early Benjamin Tucker, Lysander Spooner, Josiah Warren); and egoistic disrespect for "ghosts" such as private property and markets (Max Stirner, John Henry Mackay, Lev Chernyi, later Benjamin Tucker, Renzo Novatore, illegalism). Contemporary individualist anarchist Kevin Carson characterizes American individualist anarchism by saying that "Unlike the rest of the socialist movement, the individualist anarchists believed that the natural wage of labor in a free market was its product, and that economic exploitation could only take place when capitalists and landlords harnessed the power of the state in their interests. Thus, individualist anarchism was an alternative both to the increasing statism of the mainstream socialist movement, and to a classical liberal movement that was moving toward a mere apologetic for the power of big business."
Mutualism is an anarchist school of thought which can be traced to the writings of Pierre-Joseph Proudhon, who envisioned a society where each person might possess a means of production, either individually or collectively, with trade representing equivalent amounts of labor in the free market. Integral to the scheme was the establishment of a mutual-credit bank which would lend to producers at a minimal interest rate only high enough to cover the costs of administration. Mutualism is based on a labor theory of value which holds that when labor or its product is sold, in exchange, it ought to receive goods or services embodying "the amount of labor necessary to produce an article of exactly similar and equal utility". Receiving anything less would be considered exploitation, theft of labor, or usury.
Libertarian socialism (sometimes dubbed socialist libertarianism, or left-libertarianism) is a group of anti-authoritarian political philosophies inside the socialist movement that rejects socialism as centralized state ownership and control of the economy, as well as the state itself. It criticizes wage labour relationships within the workplace. Instead, it emphasizes workers' self-management of the workplace and decentralized structures of political organization. It asserts that a society based on freedom and justice can be achieved through abolishing authoritarian institutions that control certain means of production and subordinate the majority to an owning class or political and economic elite. Libertarian socialists advocate for decentralized structures based on direct democracy and federal or confederal associations such as libertarian municipalism, citizens' assemblies, trade unions, and workers' councils. All of this is generally done within a general call for libertarian and voluntary human relationships through the identification, criticism, and practical dismantling of illegitimate authority in all aspects of human life. As such libertarian socialism, within the larger socialist movement, seeks to distinguish itself both from Leninism/Bolshevism and from social democracy.
Past and present political philosophies and movements commonly described as libertarian socialist include anarchism (especially anarchist communism, anarchist collectivism, anarcho-syndicalism, and mutualism) as well as autonomism, communalism, participism, guild socialism, revolutionary syndicalism, and libertarian Marxist philosophies such as council communism and Luxemburgism; as well as some versions of "utopian socialism" and individualist anarchism.
Left-libertarianism (or left-wing libertarianism) names several related but distinct approaches to politics, society, culture, and political and social theory, which stress both individual freedom and social justice. Unlike right-libertarians, they believe that neither claiming nor mixing one's labor with natural resources is enough to generate full private property rights, and maintain that natural resources (land, oil, gold, trees) ought to be held in some egalitarian manner, either unowned or owned collectively. Those left-libertarians who support private property do so under the condition that recompense is offered to the local community.
Left-libertarianism can refer generally to these related and overlapping schools of thought:
- Anti-authoritarian varieties of left-wing politics and, in particular, the socialist movement, usually known as libertarian socialism.
- Geolibertarianism: a synthesis of libertarianism and geoism (or Georgism)
- The Steiner–Vallentyne school, whose proponents draw egalitarian conclusions from classical liberal or market liberal premises.
- Left-wing market anarchism, which stresses the socially transformative potential of non-aggression and anticapitalist, freed markets.
Right-libertarianism or right libertarianism is a phrase used by some to describe either non-collectivist forms of libertarianism or a variety of different libertarian views that some label as being to the "right" of mainstream libertarianism, including "libertarian conservatism".
The Stanford Encyclopedia of Philosophy calls it "right libertarianism" but states: "Libertarianism is often thought of as 'right-wing' doctrine. This, however, is mistaken for at least two reasons. First, on social—rather than economic—issues, libertarianism tends to be 'left-wing'. It opposes laws that restrict consensual and private sexual relationships between adults (e.g., gay sex, non-marital sex, and deviant sex), laws that restrict drug use, laws that impose religious views or practices on individuals, and compulsory military service. Second, in addition to the better-known version of libertarianism—right-libertarianism—there is also a version known as 'left-libertarianism'. Both endorse full self-ownership, but they differ with respect to the powers agents have to appropriate unappropriated natural resources (land, air, water, etc.)."
As creative independent lifestyle
The anarchist writer and bohemian Oscar Wilde wrote in his famous essay The Soul of Man under Socialism that "Art is individualism, and individualism is a disturbing and disintegrating force. There lies its immense value. For what it seeks is to disturb monotony of type, slavery of custom, tyranny of habit, and the reduction of man to the level of a machine." For anarchist historian George Woodcock "Wilde's aim in The Soul of Man under Socialism is to seek the society most favorable to the artist...for Wilde art is the supreme end, containing within itself enlightenment and regeneration, to which all else in society must be subordinated...Wilde represents the anarchist as aesthete." The word individualism in this way has been used to denote a personality with a strong tendency towards self-creation and experimentation as opposed to tradition or popular mass opinions and behaviors.
Anarchist writer Murray Bookchin describes a lot of individualist anarchists as people who "expressed their opposition in uniquely personal forms, especially in fiery tracts, outrageous behavior, and aberrant lifestyles in the cultural ghettos of fin de siècle New York, Paris, and London. As a credo, individualist anarchism remained largely a bohemian lifestyle, most conspicuous in its demands for sexual freedom ('free love') and enamored of innovations in art, behavior, and clothing."
In relation to this view of individuality, the French individualist anarchist Émile Armand advocates an egoistical denial of social conventions and dogmas in order to live in accord with one's own ways and desires in daily life, since he emphasized anarchism as a way of life and practice. In this way he opines: "So the anarchist individualist tends to reproduce himself, to perpetuate his spirit in other individuals who will share his views and who will make it possible for a state of affairs to be established from which authoritarianism has been banished. It is this desire, this will, not only to live, but also to reproduce oneself, which we shall call 'activity'."
In the book Imperfect Garden: The Legacy of Humanism, humanist philosopher Tzvetan Todorov identifies individualism as an important current of socio-political thought within modernity, and as examples of it he mentions Michel de Montaigne, François de La Rochefoucauld, the Marquis de Sade, and Charles Baudelaire. In La Rochefoucauld, he identifies a tendency similar to stoicism in which "the honest person works his being in the manner of a sculptor who searches the liberation of the forms which are inside a block of marble, to extract the truth of that matter." In Baudelaire, he finds the dandy trait in which one searches to cultivate "the idea of beauty within oneself, of satisfying one's passions of feeling and thinking."
The Russian-American poet Joseph Brodsky once wrote that "The surest defense against Evil is extreme individualism, originality of thinking, whimsicality, even—if you will—eccentricity. That is, something that can't be feigned, faked, imitated; something even a seasoned imposter couldn't be happy with." Ralph Waldo Emerson famously declared, "Whoso would be a man must be a nonconformist"—a point of view developed at length in both the life and work of Henry David Thoreau. Equally memorable and influential on Walt Whitman is Emerson's idea that "a foolish consistency is the hobgoblin of small minds, adored by little statesmen and philosophers and divines." Emerson opposes on principle the reliance on civil and religious social structures precisely because through them the individual approaches the divine second-hand, mediated by the once original experience of a genius from another age. "An institution," he explains, "is the lengthened shadow of one man." To achieve this original relation one must "Insist on one's self; never imitate" for if the relationship is secondary the connection is lost.
- "Individualism" on Encyclopædia Britannica Online
- Ellen Meiksins Wood. Mind and Politics: An Approach to the Meaning of Liberal and Socialist Individualism. University of California Press. 1972. ISBN 0-520-02029-4. p. 6
- "individualism" on The Free Dictionary
- Biddle, Craig. "Individualism vs. Collectivism: Our Future, Our Choice". The Objective Standard. 7 (1).
- Hayek, F.A. (1994). The Road to Serfdom. United States of America: The University of Chicago Press. pp. 17, 37–48. ISBN 0-226-32061-8.
- L. Susan Brown. The Politics of Individualism: Liberalism, Liberal Feminism, and Anarchism. Black Rose Books Ltd. 1993
- Ellen Meiksins Wood. Mind and Politics: An Approach to the Meaning of Liberal and Socialist Individualism. University of California Press. 1972. ISBN 0-520-02029-4 pp. 6–7
- Snyderman, George S.; Josephs, William (1939). "Bohemia: The Underworld of Art". Social Forces. 18 (2): 187–199. doi:10.2307/2570771. ISSN 0037-7732. JSTOR 2570771.
- "The leading intellectual trait of the era was the recovery, to a certain degree, of the secular and humane philosophy of Greece and Rome. Another humanist trend which cannot be ignored was the rebirth of individualism, which, developed by Greece and Rome to a remarkable degree, had been suppressed by the rise of a caste system in the later Roman Empire, by the Church and by feudalism in the Middle Ages."The history guide: Lectures on Modern European Intellectual History"
- "Anthropocentricity and individualism...Humanism and Italian art were similar in giving paramount attention to human experience, both in its everyday immediacy and in its positive or negative extremes...The human-centredness of Renaissance art, moreover, was not just a generalized endorsement of earthly experience. Like the humanists, Italian artists stressed the autonomy and dignity of the individual.""Humanism" on Encyclopædia Britannica
- Claeys, Gregory (1986). ""Individualism," "Socialism," and "Social Science": Further Notes on a Process of Conceptual Formation, 1800–1850". Journal of the History of Ideas. University of Pennsylvania Press. 47 (1): 81–93. doi:10.2307/2709596. JSTOR 2709596.
- Swart, Koenraad W. (1962). ""Individualism" in the Mid-Nineteenth Century (1826–1860)". Journal of the History of Ideas. University of Pennsylvania Press. 23 (1): 77–90. doi:10.2307/2708058. JSTOR 2708058.
- Abbs 1986, cited in Klein 2005, pp. 26–27
- "The Chrysanthemum and the Sword: Patterns of Japanese Culture." Rutland, VT and Tokyo, Japan: Charles E. Tuttle Co. 1954 orig. 1946.
- Reese, William L. (1980). Dictionary of Philosophy and Religion (1st ed.). Atlantic Highlands, New Jersey: Humanities Press. p. 251. ISBN 0-391-00688-6.
- Audi, Robert, ed. (1999). The Cambridge Dictionary of Philosophy (2nd ed.). Cambridge: Cambridge University Press. p. 424. ISBN 0-521-63136-X.
- Jung, C. G. (1962). Symbols of Transformation: An analysis of the prelude to a case of schizophrenia (Vol. 2, R. F. C. Hull, Trans.). New York: Harper & Brothers.
- Jung's individuation process. Retrieved 2009-02-20.
- Gilbert Simondon. L'individuation psychique et collective (Paris, Aubier, 1989; reprinted in 2007 with a preface by Bernard Stiegler)
- Bernard Stiegler: Culture and Technology, tate.org.uk, 13 May 2004
- Heath, Joseph (1 January 2015). Zalta, Edward N., ed. The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University – via Stanford Encyclopedia of Philosophy.
- Stigler, George; Gary Becker (Mar 1977). "De gustibus non est disputandum". American Economic Review. 67 (2): 76. JSTOR 1807222.
- "the definition of civil libertarian".
- Compass, The Political. "The Political Compass".
- Ellen Meiksins Wood. Mind and Politics: An Approach to the Meaning of Liberal and Socialist Individualism. University of California Press. 1972. ISBN 0-520-02029-4. p. 7
- Emily Robinson, et al. "Telling stories about post-war Britain: popular individualism and the ‘crisis’ of the 1970s." Twentieth Century British History 28.2 (2017): 268-304.
- "Latin Word Lookup".
- The Ancient Chinese Super State of Primary Societies: Taoist Philosophy for the 21st Century, You-Sheng Li, June 2010, p. 300
- Marcus Aurelius, Meditations, Oxford University Press, 2008, ISBN 978-0-19-954059-4.
- John C. Goodman, "Classical Liberalism vs Modern Liberalism and Modern Conservatism".
- Locke, John (1690). Two Treatises of Government (10th edition). Project Gutenberg. Retrieved January 21, 2009.
- Paul E. Sigmund, editor, The Selected Political Writings of John Locke, Norton, 2003, ISBN 0-393-96451-5 p. iv "(Locke's thoughts) underlie many of the fundamental political ideas of American liberal constitutional democracy...", "At the time Locke wrote, his principles were accepted in theory by a few and in practice by none."
- Thomas Jefferson, Declaration of Independence, July 4, 1776.
- John Gray, Two Faces of Liberalism, The New Press, 2008, ISBN 978-1-56584-678-4
- Malatesta, Errico. "Towards Anarchism". MAN!. Los Angeles: International Group of San Francisco. OCLC 3930443. Archived from the original on 7 November 2012.
- Agrell, Siri (14 May 2007). "Working for The Man". The Globe and Mail. Archived from the original on 16 May 2007. Retrieved 14 April 2008.
- "Anarchism". Encyclopædia Britannica. Encyclopædia Britannica Premium Service. 2006. Archived from the original on 14 December 2006. Retrieved 29 August 2006.
- "Anarchism". The Shorter Routledge Encyclopedia of Philosophy: 14. 2005. "Anarchism is the view that a society without the state, or government, is both possible and desirable."
- The following sources cite anarchism as a political philosophy: Mclaughlin, Paul (2007). Anarchism and Authority. Aldershot: Ashgate. p. 59. ISBN 0-7546-6196-2. Johnston, R. (2000). The Dictionary of Human Geography. Cambridge: Blackwell Publishers. p. 24. ISBN 0-631-20561-6.
- Slevin, Carl. "Anarchism." The Concise Oxford Dictionary of Politics. Ed. Iain McLean and Alistair McMillan. Oxford University Press, 2003.
- "ANARCHISM, a social philosophy that rejects authoritarian government and maintains that voluntary institutions are best suited to express man's natural social tendencies." George Woodcock. "Anarchism" at The Encyclopedia of Philosophy
- "In a society developed on these lines, the voluntary associations which already now begin to cover all the fields of human activity would take a still greater extension so as to substitute themselves for the state in all its functions." Peter Kropotkin. "Anarchism" from the Encyclopædia Britannica
- "Anarchism." The Shorter Routledge Encyclopedia of Philosophy. 2005. p. 14 "Anarchism is the view that a society without the state, or government, is both possible and desirable."
- Sheehan, Sean. Anarchism, London: Reaktion Books Ltd., 2004. p. 85
- "Anarchists do reject the state, as we will see. But to claim that this central aspect of anarchism is definitive is to sell anarchism short."Anarchism and Authority: A Philosophical Introduction to Classical Anarchism by Paul McLaughlin. AshGate. 2007. p. 28
- "The IAF - IFA fights for : the abolition of all forms of authority whether economical, political, social, religious, cultural or sexual.""Principles of The International of Anarchist Federations" Archived January 5, 2012, at the Wayback Machine.
- "Authority is defined in terms of the right to exercise social control (as explored in the "sociology of power") and the correlative duty to obey (as explored in the "philosophy of practical reason"). Anarchism is distinguished, philosophically, by its scepticism towards such moral relations – by its questioning of the claims made for such normative power – and, practically, by its challenge to those "authoritative" powers which cannot justify their claims and which are therefore deemed illegitimate or without moral foundation."Anarchism and Authority: A Philosophical Introduction to Classical Anarchism by Paul McLaughlin. AshGate. 2007. p. 1
- "Anarchism, then, really stands for the liberation of the human mind from the dominion of religion; the liberation of the human body from the dominion of property; liberation from the shackles and restraint of government. Anarchism stands for a social order based on the free grouping of individuals for the purpose of producing real social wealth; an order that will guarantee to every human being free access to the earth and full enjoyment of the necessities of life, according to individual desires, tastes, and inclinations." Emma Goldman. "What it Really Stands for Anarchy" in Anarchism and Other Essays.
- Individualist anarchist Benjamin Tucker defined anarchism as opposition to authority as follows "They found that they must turn either to the right or to the left, – follow either the path of Authority or the path of Liberty. Marx went one way; Warren and Proudhon the other. Thus were born State Socialism and Anarchism ... Authority, takes many shapes, but, broadly speaking, her enemies divide themselves into three classes: first, those who abhor her both as a means and as an end of progress, opposing her openly, avowedly, sincerely, consistently, universally; second, those who profess to believe in her as a means of progress, but who accept her only so far as they think she will subserve their own selfish interests, denying her and her blessings to the rest of the world; third, those who distrust her as a means of progress, believing in her only as an end to be obtained by first trampling upon, violating, and outraging her. These three phases of opposition to Liberty are met in almost every sphere of thought and human activity. Good representatives of the first are seen in the Catholic Church and the Russian autocracy; of the second, in the Protestant Church and the Manchester school of politics and political economy; of the third, in the atheism of Gambetta and the socialism of Karl Marx." Benjamin Tucker. Individual Liberty.
- Ward, Colin (1966). "Anarchism as a Theory of Organization". Archived from the original on 25 March 2010. Retrieved 1 March 2010.
- Anarchist historian George Woodcock's account of Mikhail Bakunin's anti-authoritarianism shows opposition to both state and non-state forms of authority as follows: "All anarchists deny authority; many of them fight against it." (p. 9) ... Bakunin did not convert the League's central committee to his full program, but he did persuade them to accept a remarkably radical recommendation to the Berne Congress of September 1868, demanding economic equality and implicitly attacking authority in both Church and State."
- Brown, L. Susan (2002). "Anarchism as a Political Philosophy of Existential Individualism: Implications for Feminism". The Politics of Individualism: Liberalism, Liberal Feminism and Anarchism. Black Rose Books Ltd. Publishing. p. 106.
- "Anarchy and Organization: The Debate at the 1907 International Anarchist Congress - The Anarchist Library".
- "What do I mean by individualism? I mean by individualism the moral doctrine which, relying on no dogma, no tradition, no external determination, appeals only to the individual conscience."Mini-Manual of Individualism by Han Ryner
- "I do not admit anything except the existence of the individual, as a condition of his sovereignty. To say that the sovereignty of the individual is conditioned by Liberty is simply another way of saying that it is conditioned by itself." "Anarchism and the State" in Individual Liberty
- Everhart, Robert B. The Public School Monopoly: A Critical Analysis of Education and the State in American Society. Pacific Institute for Public Policy Research, 1982. p. 115.
- Philip, Mark (2006-05-20). "William Godwin". In Zalta, Edward N. Stanford Encyclopedia of Philosophy.
- Adams, Ian. Political Ideology Today. Manchester University Press, 2001. p. 116.
- Godwin, William (1796) . Enquiry Concerning Political Justice and its Influence on Modern Morals and Manners. G.G. and J. Robinson. OCLC 2340417.
- Britannica Concise Encyclopedia. Retrieved 7 December 2006, from Encyclopædia Britannica Online.
- Paul McLaughlin. Anarchism and Authority: A Philosophical Introduction to Classical Anarchism. Ashgate Publishing, Ltd., 2007. p. 119.
- Goodway, David. Anarchist Seeds Beneath the Snow. Liverpool University Press, 2006, p. 99.
- Leopold, David (2006-08-04). "Max Stirner". In Zalta, Edward N. Stanford Encyclopedia of Philosophy.
- The Encyclopedia Americana: A Library of Universal Knowledge. Encyclopedia Corporation. p. 176.
- Miller, David. "Anarchism." 1987. The Blackwell Encyclopaedia of Political Thought. Blackwell Publishing. p. 11.
- "What my might reaches is my property; and let me claim as property everything I feel myself strong enough to attain, and let me extend my actual property as fas as I entitle, that is, empower myself to take..." In Ossar, Michael. 1980. Anarchism in the Dramas of Ernst Toller. SUNY Press. p. 27.
- Nyberg, Svein Olav. "The union of egoists" (PDF). Non Serviam. Oslo, Norway: Svein Olav Nyberg. 1: 13–14. OCLC 47758413. Archived from the original (PDF) on 7 December 2012. Retrieved 1 September 2012
- Thomas, Paul (1985). Karl Marx and the Anarchists. London: Routledge/Kegan Paul. p. 142. ISBN 0-7102-0685-2.
- Carlson, Andrew (1972). "Philosophical Egoism: German Antecedents". Anarchism in Germany. Metuchen: Scarecrow Press. ISBN 0-8108-0484-0. Archived from the original on 2008-12-10. Retrieved 2008-12-04.
- Palmer, Brian (2010-12-29) What do anarchists want from us?, Slate.com
- William Bailie, Josiah Warren: The First American Anarchist — A Sociological Study, Boston: Small, Maynard & Co., 1906, p. 20. Archived from the original (PDF) on February 4, 2012. Retrieved June 17, 2013.
- "Paralelamente, al otro lado del atlántico, en el diferente contexto de una nación a medio hacer, los Estados Unidos, otros filósofos elaboraron un pensamiento individualista similar, aunque con sus propias especificidades. Henry David Thoreau (1817–1862), uno de los escritores próximos al movimiento de la filosofía trascendentalista, es uno de los más conocidos. Su obra más representativa es Walden, aparecida en 1854, aunque redactada entre 1845 y 1847, cuando Thoreau decide instalarse en el aislamiento de una cabaña en el bosque, y vivir en íntimo contacto con la naturaleza, en una vida de soledad y sobriedad. De esta experiencia, su filosofía trata de transmitirnos la idea que resulta necesario un retorno respetuoso a la naturaleza, y que la felicidad es sobre todo fruto de la riqueza interior y de la armonía de los individuos con el entorno natural. Muchos han visto en Thoreau a uno de los precursores del ecologismo y del anarquismo primitivista representado en la actualidad por Jonh Zerzan. Para George Woodcock, esta actitud puede estar también motivada por una cierta idea de resistencia al progreso y de rechazo al materialismo creciente que caracteriza la sociedad norteamericana de mediados de siglo XIX.""Voluntary non-submission. Spanish individualist anarchism during dictatorship and the second republic (1923–1938)" Archived July 23, 2011, at the Wayback Machine.
- "2. Individualist Anarchism and Reaction".
- "The Free Love Movement and Radical Individualism, By Wendy McElroy".
- "La insumisión voluntaria: El anarquismo individualista español durante la Dictadura y la Segunda República (1923–1938)" by Xavier Díez Archived July 23, 2011, at the Wayback Machine.
- "Los anarco-individualistas, G.I.A...Una escisión de la FAI producida en el IX Congreso (Carrara, 1965) se pr odujo cuando un sector de anarquistas de tendencia humanista rechazan la interpretación que ellos juzgan disciplinaria del pacto asociativo" clásico, y crean los GIA (Gruppi di Iniziativa Anarchica) . Esta pequeña federación de grupos, hoy nutrida sobre todo de veteranos anarco-individualistas de orientación pacifista, naturista, etcétera defiende la autonomía personal y rechaza a rajatabla toda forma de intervención en los procesos del sistema, como sería por ejemplo el sindicalismo. Su portavoz es L'Internazionale con sede en Ancona. La escisión de los GIA prefiguraba, en sentido contrario, el gran debate que pronto había de comenzar en el seno del movimiento""El movimiento libertario en Italia" by Bicicleta. REVISTA DE COMUNICACIONES LIBERTARIAS Year 1 No. Noviembre, 1 1977 Archived October 12, 2013, at the Wayback Machine.
- "Proliferarán así diversos grupos que practicarán el excursionismo, el naturismo, el nudismo, la emancipación sexual o el esperantismo, alrededor de asociaciones informales vinculadas de una manera o de otra al anarquismo. Precisamente las limitaciones a las asociaciones obreras impuestas desde la legislación especial de la Dictadura potenciarán indirectamente esta especie de asociacionismo informal en que confluirá el movimiento anarquista con esta heterogeneidad de prácticas y tendencias. Uno de los grupos más destacados, que será el impulsor de la revista individualista Ética será el Ateneo Naturista Ecléctico, con sede en Barcelona, con sus diferentes secciones la más destacada de las cuales será el grupo excursionista Sol y Vida.""La insumisión voluntaria: El anarquismo individualista español durante la Dictadura y la Segunda República (1923–1938)" by Xavier Díez Archived July 23, 2011, at the Wayback Machine.
- "Les anarchistes individualistes du début du siècle l'avaient bien compris, et intégraient le naturisme dans leurs préoccupations. Il est vraiment dommage que ce discours se soit peu à peu effacé, d'antan plus que nous assistons, en ce moment, à un retour en force du puritanisme (conservateur par essence).""Anarchisme et naturisme, aujourd'hui." by Cathy Ytak Archived February 25, 2009, at the Wayback Machine.
- anne (30 July 2014). "Culture of Individualist Anarchism in Late 19th Century America" (PDF).
- The "Illegalists" Archived September 8, 2015, at the Wayback Machine., by Doug Imrie (published by Anarchy: A Journal of Desire Armed)
- Parry, Richard. The Bonnot Gang. Rebel Press, 1987. p. 15
- "Oscar Wilde essay The soul of man under Socialism". Archived from the original on 2013-09-14.
- George Woodcock. Anarchism: A History of Libertarian Ideas and Movements. 1962. (p. 447)
The standard Newtonian and Einsteinian gravity models, when based on baryonic matter alone, give galaxy rotation predictions very different from those actually observed. This is why the hypothesis of dark matter was introduced. As early as the 1880s, Lord Kelvin was describing dark bodies in relation to the Milky Way; Henri Poincaré picked up the theme in 1906, actually using the term dark matter in his comments on Kelvin's work. By the 1920s and '30s, the term was gaining interest and a number of astronomers and astrophysicists were exploring its potential. The debate continues today: Does dark matter exist, or is it simply a fudge factor that enables an incomplete model to fit observations? Modified Newtonian Dynamics (MOND), introduced by Milgrom in 1983, suggests a minimum acceleration that is calibrated to the observational data; the model then fits the data very well using baryonic matter only. However, MOND is more of a curve-fitting model, since it does not provide a good explanation for why there should be such a minimum acceleration. Here we will also introduce a minimum acceleration, though not by modifying the standard gravity model, but rather by building on its assumptions regarding the mass and the radius of the observable universe. It is worth noting that a solution to the dark matter problem can, in principle, also be achieved within the approach of extended gravity (see for example ); such extended-gravity models can partly be seen as a relativistic extension of MOND. Other minimum acceleration models have also been suggested recently. One of these is so-called quantized inertia, which according to its inventor, McCulloch, has the advantage of having an explanatory model behind the minimum acceleration, something that seems to be missing in the standard MOND model. The hypothesis suggested here falls into the category of non-relativistic minimum acceleration theories and, unlike the original MOND theory, it also explains the minimum acceleration.
The radius of the observable universe, as suggested by standard physics, is approximately 4.4 × 10^26 meters (93 billion light years), see . The age of the universe is considered to be about 13.77 billion years. In this time period, light can travel a distance of c × t_0 ≈ 1.3 × 10^26 meters, which is approximately equal to c/H_0, where H_0 is the Hubble constant. The reason the radius of the universe is assumed to be considerably larger than this is the assumption of expanding space (inflation). In this paper, we will take that for granted, although that too is a subject of considerable debate. Further, the mass of the observable universe is assumed to be approximately 1.5 × 10^53 kg. The mass of the observable universe can be calculated from the Hubble constant and the gravitational constant, as shown for example by , so there is considerable uncertainty in the exact value, since there is considerable uncertainty in the Hubble constant and also in G. Based on the assumed radius R_u and mass M_u of the observable universe, the minimum gravitational acceleration¹ of the universe must then be

g_min = G M_u / R_u^2 ≈ 5.2 × 10^-11 m/s^2
This is considerably smaller than the MOND-optimized minimum acceleration of approximately 1.2 × 10^-10 m/s^2. However, the mathematical form of the MOND theory is different from what we are suggesting here; observational data is needed to make them directly comparable. First of all, our minimum acceleration applies at the very edge of the observable universe. If the observations concern objects, such as galaxies, that are not at the edge of the universe, then the minimum acceleration could be higher. We will suggest that the acceleration in the galaxy arms should be

g = G M / r^2 + G M_u / R_u^2
where M is the baryonic mass of the galaxy, r is the distance from the galaxy centre, and M_u is the mass of the observable universe, as before. In the next section, we will compare the predictions of this model with the observed data.
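To make the quantities above concrete, here is a minimal sketch, assuming the quoted values for G, M_u and R_u, of how one might evaluate the minimum acceleration and the suggested galaxy-arm acceleration. The example galaxy mass and radius are hypothetical inputs chosen purely for illustration and are not taken from the paper; Python is used only as a convenient calculator.

```python
# Minimal sketch (not from the paper): evaluate the minimum acceleration
# g_min = G * M_u / R_u**2 and the suggested galaxy-arm acceleration
# g = G*M/r**2 + g_min for an illustrative, hypothetical galaxy.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_u = 1.5e53         # assumed mass of the observable universe, kg
R_u = 4.4e26         # assumed radius of the observable universe, m
A0_MOND = 1.2e-10    # MOND's fitted minimum acceleration, m/s^2

g_min = G * M_u / R_u**2
print(f"g_min = {g_min:.2e} m/s^2  (MOND a0 = {A0_MOND:.1e} m/s^2)")

# Hypothetical example galaxy: baryonic mass and a radius out in the arms.
M_galaxy = 1.0e41    # kg, roughly 5e10 solar masses -- illustrative only
r = 6.0e20           # m, roughly 20 kpc -- illustrative only

g_baryonic = G * M_galaxy / r**2     # baryonic-only (Newtonian) prediction
g_model = g_baryonic + g_min         # suggested model: add the universe term
print(f"baryonic only: {g_baryonic:.2e} m/s^2, with universe term: {g_model:.2e} m/s^2")
```

With these particular illustrative inputs, the universe term exceeds the baryonic term in the outer arms, which is the low-acceleration regime where the baryonic-only (green) prediction deviates most from the data in Figure 1.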
2. Comparison of Our Model with Observational Data
To test the model, we have used 2793 individual data points from 153 galaxies in the Spitzer Photometry and Accurate Rotation Curves (SPARC) database (see also ). Figure 1 shows the observations as black dots. The green line is the predicted galaxy rotation from only the baryonic matter in the galaxy. As we see, the green line gives predictions far from the observed data, and this is why, as noted previously, the idea of dark matter was originally introduced in order to make this model work. The MOND best-fit model is represented by the yellow line. The light blue line just below the yellow line shows predictions from the quantized inertia model. The red line is our model when using the radius 4.4 × 10^26 m. As we can see, this gives a strong improvement over the standard model, i.e., the green line. This would, at least, dramatically reduce the amount of dark matter required to push the model to fit observations. However, even under the standard model, it is not certain what the radius and the mass of the universe are, or what the distribution of matter is. Hence, we could suggest a slightly different radius, which would give an even better fit. In the case of the orange line, we have inputted a radius of 1.3 times the commonly assumed radius of 4.4 × 10^26 m. The blue line shows the results when using only 1.3 × 10^26 m as the radius. That is the radius one obtains by taking the assumed age of the universe times the speed of light; in other words, by ignoring the assumed expansion. This last value of R, as we see, gives predictions that diverge greatly from observations.

Figure 1. Galactic accelerations from 2793 individual data points for 153 SPARC galaxies are shown as black dots. Predictions by standard physics are shown in green. The yellow line is MOND, which fits the observations very well. The light blue line just below the yellow line indicates predictions made by the quantized inertia model. The red line includes the minimum acceleration from the mass in the observable universe with the standard assumed radius of approximately 4.4 × 10^26 meters. The orange line shows the results when we have multiplied this radius by 1.3. The white line with green dots shows the results when we have multiplied the universe radius by 1.3 and adjusted the universe mass accordingly, compared to the fixed mass used for the red, orange and blue lines. As we can see, this simple model can also give predictions close to the observed data. The blue line depicts the predictions when we use a radius equal to the assumed time since the Big Bang multiplied by the speed of light. (Note: Log stands for logarithm with base 10.)
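As a rough numerical cross-check of the radius choices discussed above, the following sketch evaluates the universe term G·M_u/R^2 for the three radii mentioned (the standard radius, 1.3 times the standard radius, and the light-travel radius). For simplicity it keeps M_u fixed at 1.5 × 10^53 kg in all three cases, which is an assumption of this sketch and differs from the white-line case in Figure 1, where the mass is adjusted.

```python
# Sketch: universe-term acceleration G*M_u/R^2 for the three radius choices
# discussed in the text, keeping M_u fixed at 1.5e53 kg (a simplification).

G = 6.674e-11        # m^3 kg^-1 s^-2
M_u = 1.5e53         # kg

radii = {
    "standard radius (red line)": 4.4e26,
    "1.3 x standard (orange line)": 1.3 * 4.4e26,
    "light-travel radius (blue line)": 1.3e26,
}

for label, R in radii.items():
    print(f"{label}: G*M_u/R^2 = {G * M_u / R**2:.2e} m/s^2")
```

With the mass held fixed, the light-travel radius yields a floor acceleration several times larger than the MOND scale of 1.2 × 10^-10 m/s^2, which is at least qualitatively consistent with the blue line diverging from the observations.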
As we can clearly see, taking the mass of the observable universe into account, in addition to that of the galaxy, provides much better predictions than can be produced without doing so. However, there are several issues with this method. For example, if a galaxy lies at the edge of the observable universe, then the gravitational acceleration field of the observable universe should not only increase the acceleration in the galaxy arms that are turned away from the centre of the observable universe, but should perhaps also slow the acceleration in the galaxy arms on the opposite side. This should lead to different redshifts on different sides of the galaxy. We do not believe this has been observed (at least not yet), but it could be even more complicated than this. Naturally, different galaxies will lie at different distances from the centre of the observable universe, so if we took this into account, we would possibly obtain a much better fit than what we have shown here. Or, counterintuitively, the fit could be worse; this can only be determined by further studies.
We have looked at galaxy rotation predictions when taking the gravity acceleration field from the observable universe into account. This seems to produce predictions quite close to observations. However, there may be several issues with this method and the approach requires additional rigorous study. Still, we think the idea is interesting and merits further investigation by the physics community.
¹We first suggested this in a working paper that we posted on vixra.org on March 29, 2020; this is a strongly improved version of that working paper.
- Corda, C. (2009) Interferometric Detection of Gravitational Waves: The Definitive Test for General Relativity. International Journal of Modern Physics D, 18, 2275-2282.
- McGaugh, S.S., Lelli, F. and Schombert, J.M. (2016) The Radial Acceleration Relation in Rotationally Supported Galaxies. Physical Review Letters, 117, Article ID 201101.
Gross Domestic Product, known by its acronym GDP, is a primary metric used to measure the health of an economy. The United States Bureau of Economic Analysis (BEA) – a division of the Department of Commerce – releases GDP information on the US economy every quarter.
The four quarters of the year are:
Quarter 1: January – March
Quarter 2: April – June
Quarter 3: July – September
Quarter 4: October – December
A month after each quarter ends, the BEA releases its Advance GDP estimate. This estimate of the previous quarter’s GDP is based on incomplete information, though it provides the first indication of the last quarter’s GDP. In the second and third months following the quarter’s end, “second” and “third” GDP estimates are released as more information is received and revisions are made to the GDP estimate.
What is GDP
Gross domestic product is the “total monetary or market value of all the finished goods and services produced within a country’s borders in a specific time period.”
Two types of GDP are often referenced: Real GDP and Nominal GDP.
Nominal GDP is GDP measured using current market prices, while real GDP is GDP calculated using fixed prices from a chosen base year.
For example, the nominal GDP of a country could be five trillion dollars in 2015. In 2016, it could be six trillion dollars, and in 2017 it could be seven trillion dollars. According to this measurement, GDP is increasing by a net one trillion dollars per year. This GDP is calculated based on the prices of goods and inflation within each year. The problem with this calculation is that it doesn't measure growth accurately, because each year's GDP is calculated based on the value of the currency and the prices of that specific year.
This is the problem that Real GDP comes to address.
Real GDP would take the three years mentioned above and calculate the GDP based on the prices and currency value of 2015 (or another baseline year). Therefore, if GDP rose by one trillion dollars nominally between 2015 and 2016, the real GDP calculation would estimate how much GDP actually grew based on the prices and currency value of 2015. In this case, the calculation might find that GDP grew by only 600 billion dollars once prices and inflation throughout 2016 are factored in.
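The adjustment described above can be illustrated with a small sketch. The GDP figures mirror the hypothetical five-, six- and seven-trillion-dollar example in the text, and the price index (deflator) values are invented for illustration only.

```python
# Hypothetical illustration: converting nominal GDP to real GDP with a
# price index (deflator), using 2015 as the base year.

nominal_gdp = {2015: 5.0e12, 2016: 6.0e12, 2017: 7.0e12}  # dollars, hypothetical
price_index = {2015: 100.0, 2016: 107.0, 2017: 113.0}     # 2015 = 100, hypothetical

base_year = 2015
for year in sorted(nominal_gdp):
    # Real GDP restates each year's output in base-year prices.
    real_gdp = nominal_gdp[year] * price_index[base_year] / price_index[year]
    print(f"{year}: nominal = {nominal_gdp[year] / 1e12:.2f}T, "
          f"real ({base_year} prices) = {real_gdp / 1e12:.2f}T")
```

With a deflator of about 107 for 2016, real GDP comes out near 5.6 trillion dollars, i.e. real growth of roughly 0.6 trillion rather than the nominal 1 trillion, matching the order of magnitude of the example above.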
One of the most common ways of calculating GDP is based on money spent by different groups in the economy.
The GDP formula is:
C + G + I + NX = GDP
C = consumption
G = government spending
I = Investment
NX = Net Exports
This formula groups together “consumption” which refers to all the private consumer spending within a country’s economy; “government spending” which is the government’s budget; “investments”, which reflects private domestic investment projects such as business investments in their various activities to maintain and expand the business; and “net exports” which is calculated as exports minus imports.
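As a worked illustration of the expenditure approach, the sketch below plugs hypothetical component values (not actual BEA figures) into C + G + I + NX.

```python
# Hypothetical expenditure-approach calculation: GDP = C + G + I + NX.
# All component values are made-up placeholders, not actual BEA figures.

consumption = 14.0e12          # C: private consumer spending
government_spending = 3.5e12   # G: government spending
investment = 3.8e12            # I: private domestic investment
exports = 2.5e12
imports = 3.1e12
net_exports = exports - imports  # NX: exports minus imports (negative here)

gdp = consumption + government_spending + investment + net_exports
print(f"GDP = {gdp / 1e12:.2f} trillion dollars")
```

Note that net exports can be negative when imports exceed exports, which reduces the measured GDP.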
Advance GDP’s impact on the financial market
The Advance GDP numbers can affect different sectors of society in different ways.
Investors can view the GDP estimates as they relate to companies and their earnings. Strong economic numbers mean companies are prospering, which supports stock prices; weak economic numbers may signal that the market is heading down.
Built into stock prices is a certain assumption of GDP growth. Therefore, when GDP comes in more or less as expected, stock prices may not be affected. However, when GDP numbers exceed or fall below expectations, there is potential for shocks to the market. That said, the Advance US GDP estimate is incomplete, and as information flows in over the following months, the final GDP figure can differ significantly from the first numbers published in the advance estimate.
Advance GDP’s impact on the US government
The strength of the economy can also help or hinder a government from advancing their agenda. For example, a strong economy could be used by a government to legitimize additional government programs or increased spending on an array of issues. A weak economy could generate pushback from the public if it sees that the government is engaged in perceived wasteful spending. Weak GDP numbers can also put pressure on a government to act to improve the economy.
For politicians, especially presidential candidates, the advance GDP numbers can affect the outcome of an election. Elections for the US President are held every four years in early November, and advance GDP numbers for the third quarter of the year are usually published at the end of October, about a week before the election.
For the incumbent candidate, positive advance GDP numbers can provide a tailwind to their campaign, whereas poor economic numbers can provide a boost to the opposing candidate looking to unseat the President.
Congressional elections are held every two years at the beginning of November. Incumbent members of Congress looking to shore up their bona fides can point to good economic numbers as proof that they are doing their job well.
CENTRE-STATE RELATIONS
- The Indian Constitution envisages Federal Polity with a Union government and several state governments.
- The federal system not only ensures the efficient governance of the country but also reconciles national unity with regional autonomy.
- The term ‘federation’ has nowhere been mentioned in the Constitution. Instead, Article 1 of the Constitution describes India as a ‘Union of States’.
- Dr B R Ambedkar preferred the phrase ‘Union of States’ rather than ‘Federation of States’ to indicate two things:
- The Indian federation is not the result of an agreement among the states (like the American federation);
- The states have no right to secede from the federation.
- The Montagu-Chelmsford Reforms (1919) and the GOI Act, 1935 prepared the ground for the three Lists and paved the way for a federal polity.
- All in all, the Indian Constitution gives some extraordinary powers to the Centre, yet India is described not as a ‘Nation State’ but as a ‘State Nation’.
- The States Reorganisation Commission report concluded that the “union of India is the basis of our nationality and states are the limbs of the Union, and the limbs must be healthy and strong”.
- The Constitution of India, given its federal structure, divides all powers (legislative, executive and financial) between the Centre and the states.
- Though the Centre and the states are supreme in their respective fields, the maximum harmony and coordination between them is essential for the effective operation of the federal system.
The Centre-state relations can be studied under following domains:
LEGISLATIVE RELATIONS (Art. 245 – 255)
- Articles 245 to 255 in Part XI of the Constitution deal with the legislative relations between the Centre and the states.
- Given federal nature of the Indian Constitution, it divides the legislative powers between the Centre and the states with respect to both the territory and the subjects of legislation.
- There are four following aspects in the Centre-states legislative relations –
|TERRITORIAL EXTENT OF CENTRAL AND STATE LEGISLATION|
- The Constitution defines the territorial limits of the legislative powers vested in the Centre and the states in the following way:
- The Parliament can make laws for the whole or any part of the territory of India (the states, the UTs, and any other area included in the territory of India)
- A State Legislature can make laws for the whole or any part of the state. The laws made by a state legislature are not applicable outside the state, except when there is a sufficient nexus between the state and the object.
- The Parliament alone can make ‘extra-territorial legislation’ – Laws of the Parliament are applicable to the Indian citizens and their property in any part of the world.
- The Constitution places certain restrictions on the plenary territorial jurisdiction of the Parliament. The laws of Parliament are not applicable in the following areas:
- The President can make regulations for the peace, progress and good governance of the four UTs – Andaman and Nicobar Islands, Lakshadweep, Dadra and Nagar Haveli and Daman and Diu. A regulation so made has the same force and effect as an act of Parliament. It may also repeal or amend any act of Parliament in relation to these union territories.
- The Governor is empowered to direct that an act of Parliament does not apply to a Scheduled Area in the state or apply with specified modifications and exceptions.
- The Governor of Assam may likewise direct that an act of Parliament does not apply to a Tribal Area (autonomous district) in the state or apply with specified modifications and exceptions. President enjoys the same power with respect to Meghalaya, Tripura and Mizoram.
THEORY OF TERRITORIAL NEXUS
- The SC in H. Wadia vs ITC, Bombay has held that “the legality of any extra-territorial law can only be decided in India’s domestic courts”
- India also follows “theory of territorial nexus”, it means that “if a state law or union law has an extra-territorial effect, then it would be valid if there is sufficient nexus between object and interest of states.
- Example– If a newspaper is published in Bangalore but has a very wide circulation in Bombay then Bombay can tax that newspaper.
|DISTRIBUTION OF LEGISLATIVE SUBJECTS (ART. 246)|
- Indian Constitution provides for a division of the subjects between the Centre and the states through three lists – List-I (Union), List-II (State) and List-III (Concurrent) in the Seventh Schedule:
- The Parliament has exclusive powers to make laws with respect to the matters enumerated in the Union List. This list has at present 100 subjects (originally 97 subjects), like defence, banking, foreign affairs, currency, atomic energy, insurance, communication, inter-state trade and commerce, census, audit and so on. Matters of national importance and matters which require uniformity of legislation nationwide are included in the Union List.
- The State Legislature has, in normal circumstances, exclusive powers to make laws with respect to any of the matters enumerated in the State List. This list has at present 61 subjects (originally 66 subjects), like public order, police, public health and sanitation, agriculture, prisons, local government, fisheries, markets, theatres, gambling and so on. Matters of regional and local importance and matters which permit diversity of interest are specified in the State List.
- Both the Parliament and the State Legislature can make laws with respect to any of the matters enumerated in the Concurrent List. This list has at present 52 subjects (originally 47 subjects), like criminal law and procedure, civil procedure, marriage and divorce, population control and family planning, electricity, labour welfare, economic and social planning, drugs, newspapers, books and printing press, and others. Matters on which uniformity of legislation throughout the country is desirable but not essential are enumerated in the Concurrent List.
- The power to make laws with respect to Residuary Subjects is vested in the Parliament (Art. 248). This residuary power of legislation includes the power to levy residuary taxes.
- The 101st Amendment Act of 2016 (Goods and Services Tax) – Parliament and the state legislature have power to make laws with respect to GST imposed by the Union or by the State. Further, the Parliament has exclusive power to make laws with respect to GST where the supply of goods or services or both takes place in the course of inter-state trade or commerce.
|The 42nd Amendment Act of 1976 transferred five subjects to the Concurrent List from the State List –|
a) Education,
b) Forests,
c) Weights and Measures,
d) Protection Of Wild Animals and Birds,
e) Administration Of Justice; constitution and organization of all courts except the Supreme Court and the high courts.
US → Only the powers of the Federal Government are enumerated in the Constitution, and the residuary powers are left to the states. The Australian Constitution followed the same pattern. In Canada → there is a double enumeration of federal and provincial subjects, and the residuary powers are vested in the Centre. India follows the Canadian precedent.
- The GoI Act of 1935 provided for a three-fold enumeration of the subjects – federal, provincial and concurrent. The Indian Constitution follows the scheme of this act but with one difference, that is, under the GoI Act of 1935, the residuary powers were given neither to the federal legislature nor to the provincial legislature but to the Governor-General of India.
- The Constitution expressly secures the predominance of the Union List over the State List and the Concurrent List and that of the Concurrent List over the State List.
- In case of overlapping between the Union List and the State List, the former should prevail. In case of overlapping between the Union List and the Concurrent List, it is again the former which should prevail. Where there is a conflict between the Concurrent List and the State List, it is the former that should prevail.
- In case of a conflict between the Central law and the state law on a subject enumerated in the Concurrent List, the Central law prevails over the state law. But, there is an exception – If the state law has been reserved for the consideration of the President and has received his assent, then the state law prevails in that state. However, it would still be competent for the Parliament to override such a law by subsequently making a law on the same matter.
|PARLIAMENTARY LEGISLATION IN THE STATE FIELD (ART. 249)|
- The above scheme of distribution of legislative powers between the Centre and the states is to be maintained in normal times.
- But, in abnormal times, the scheme of distribution is either modified or suspended.
|Art. 246A||Parliament and the state legislatures have the power to make laws for union or state GST respectively.|
|Art. 249||Parliament (Rajya Sabha) can legislate on the State List in the national interest.|
|Art. 250||Parliament can legislate if there is Emergency.|
|Art. 252||Parliament can legislate for 2 or more states by their consent.|
|Art. 253||Parliament can make law to give effect to International Agreements.|
|Art. 256||The state executive should comply with all laws made by Parliament.|
|Art. 200||Assent to bills and reservation of money/finance bills for President’s consideration.|
|Art. 356||State emergency (President’s rule)|
|Art. 360||Financial emergency|
- The Constitution empowers the Parliament to make laws on any matter enumerated in the State List under the following five extraordinary circumstances:
1. When Rajya Sabha Passes a Resolution
- If the Rajya Sabha states that it is necessary in the national interest that Parliament should make laws on a matter in the State List, then the Parliament becomes competent to make laws on that matter.
- Such a resolution must be supported by two-thirds of the members present and voting.
- The resolution remains in force for one year; it can be renewed any number of times but not exceeding one year at a time.
- The laws cease to have effect on the expiration of six months after the resolution has ceased to be in force.
- This provision does not restrict the power of a state legislature to make laws on the same matter.
- But, in case of inconsistency between a State Law and a Parliamentary Law, the latter is to prevail.
2. During a National Emergency (Art. 352)
- The Parliament acquires the power to legislate with respect to matters in the State List, while a proclamation of national emergency is in operation.
- The laws become inoperative on the expiration of six months after the emergency has ceased to operate.
- In above case, the power of a State Legislature to make laws on the same matter is not restricted.
- In case of repugnancy between a state law and a parliamentary law, the latter is to prevail.
3. When States Make a Request
- When the legislatures of two or more states pass resolutions requesting the Parliament to enact laws on a matter in the State List, then the Parliament can make laws for regulating that matter.
- A law so enacted applies only to those states which have passed the resolutions.
- Any other state may adopt it afterwards by passing a resolution to that effect in its legislature.
- Such a law can be amended or repealed only by the Parliament and not by the legislatures of the concerned states.
- In case of request by states, the State Legislature ceases to have the power to make a law with respect to that matter.
- Some examples of laws passed under the above provision are Wild Life (Protection) Act, 1972; Water (Prevention and Control of Pollution) Act, 1974; Urban Land (Ceiling and Regulation) Act, 1976; and Transplantation of Human Organs Act, 1994.
- The Clinical Establishments (Registration and Regulation) Act, 2010 has been made for 4 states (Sikkim, Mizoram, Arunachal Pradesh and Himachal Pradesh)
4. During President’s Rule (Art. 356)
- When the President‘s rule is imposed in a state, the Parliament becomes empowered to make laws with respect to any matter in the State List in relation to that state.
- A law so made by the Parliament continues to be operative even after the President‘s rule ends, i.e., its duration is not co-terminous with the duration of the President‘s rule.
- Such a law can be repealed or altered or re-enacted by the state legislature.
5. To Implement International Agreements
- The Parliament can make laws on any matter in the State List for implementing the international treaties, agreements or conventions.
- This provision enables the Central government to fulfill its international obligations and commitments.
- Punchhi Commission has said that treaties which impinge on state list must be negotiated with increasing involvement of states
- Examples – United Nations (Privileges and Immunities) Act, 1947; Geneva Convention Act, 1960; Anti-Hijacking Act, 1982 and legislations relating to environment and TRIPS.
|CENTRE’S CONTROL OVER STATE LEGISLATION|
- Besides the Parliament’s power to legislate directly on the state subjects under the exceptional situations, the Constitution empowers the Centre to exercise control over the state’s legislative matters in the following ways:
- The governor can reserve certain types of bills passed by the state legislature for the consideration of the President. The President enjoys absolute veto over state bills.
- Bills on certain matters enumerated in the State List can be introduced in the state legislature only with the previous sanction of the President – E.g. bills imposing restrictions on the freedom of trade and commerce
- The Centre can direct the states to reserve money bills and other financial bills passed by the state legislature for the President’s consideration during a financial emergency.
- The Indian Constitution has assigned a position of superiority to the Centre in the legislative sphere.
|ARTICLES PERTAINING TO LEGISLATIVE RELATIONS|
|Art. 245||Extent of laws made by Parliament and by the Legislatures of States|
|Art. 246||Subject-matter of laws made by Parliament and by the Legislatures of States|
|Art. 247||Power of Parliament to provide for the establishment of certain additional courts|
|Art. 248||Residuary powers of legislation|
|Art. 249||Power of Parliament (Rajya Sabha) to legislate with respect to a matter in the State List in the national interest.|
|Art. 250||Power of Parliament to legislate with respect to any matter in the State List if a Proclamation of Emergency is in operation|
|Art. 251||Inconsistency between laws made by Parliament under Articles 249 and 250 and laws made by the Legislatures of States|
|Art. 252||Power of Parliament to legislate for two or more States by consent and adoption of such legislation by any other State|
|Art. 253||Legislation for giving effect to international agreements|
|Art. 254||Inconsistency between laws made by Parliament and laws made by the Legislatures of States|
|Art. 255||Requirements as to recommendations and previous sanctions to be regarded as matters of procedure only|
|INTERPRETATION OF THE STATE LIST|
- Supreme Court in Calcutta Gas ltd vs State of West Bengal has held that “widest possible and most liberal” interpretation should be given to the language of each entry of the constitution.
- In International Tourism Corporation vs State of Haryana, the SC said that the “residuary power given to the Union should not affect and jeopardise the federal polity”.
- In D. Joshi vs Ajit Mills, the SC has said that “Entries in concurrent lists must be given wide meaning implying all ancillary and incidental powers”.
|DOCTRINE OF PITH AND SUBSTANCE|
- Pith and substance means “true object of the legislation and competence” of the legislature which enacted it.
- General Rule → The Union and the States are supreme in their respective spheres (Lists) and should not encroach upon each other’s sphere.
- In some instances, a law passed by one encroaches upon the field assigned to another, in such case the Court will apply this doctrine.
- Example – A law can be held intra vires even though it incidentally trenches upon the other’s field.
- In S. Krishna v. State of Madras (1957), SC has held that, “In order to ascertain the true character of the legislation, one must have regard to”-
- Whole enactment
- Underlying objective
- Scope and effect of its provision
|DOCTRINE OF COLOURABLE LEGISLATION|
- This doctrine is also known as fraud on the constitution.
- Doctrine of Colorable Legislation is built upon the founding stones of the “Doctrine of Separation of Power”.
- Separation of powers mandates that a balance of power be struck between the different components of the State, i.e., between the Legislature, the Executive and the Judiciary.
- The primary function of the legislature is to make laws. Whenever the legislature tries to shift this balance of power towards itself, the Doctrine of Colourable Legislation is invoked to enforce legislative accountability.
- In the C.G. Narayan Dev vs State of Orissa (1953) judgement, the SC explained the meaning and scope of this doctrine: “when anything is prohibited directly, it is also prohibited indirectly”. In common parlance, it is understood as “whatever the legislature can’t do directly, it can’t do indirectly”.
- The SC in different judicial pronouncements has laid down certain tests in order to determine the true nature of the legislation impeached as colourable:
- The court must look to the substance of the impugned law, as distinguished from its form or the label which the legislature has given it. For the purpose of determining the substance of an enactment, the court will examine two things: 1. the effect of the legislation, and 2. the object and purpose of the act.
- The doctrine of colourable legislation has nothing to do with the motive of the legislation, it is in the essence a question of vires or power of the legislature to enact the law in question.
- The doctrine is also not applicable to Subordinate Legislation.
|TESTS TO CHECK REPUGNANCY BETWEEN LAWS:|
1.Direct conflict test
- There is direct conflict between state law and the central law.
- Example – The Central law says “do” while the state act says “don’t”; here the two laws are directly and clearly in conflict.
- In such a case, the court will resolve the conflict as per the situation (ordinarily the Central law prevails, as noted above).
2. Doctrine of occupied field
- GENERAL RULE → Even where there is no repugnancy between a Union law and a State law, the Union law will not allow the State law to co-exist if Parliament intended to occupy the whole field relating to the subject.
- E.g. – An Assam Act provided that a person may be appointed as a member of an Industrial Tribunal only in consultation with the High Court. Later, Parliament made a law which stated only the qualifications and did not mention consultation with the High Court. It was held that the Central legislation was intended to be an exhaustive code, and no consultation was required.
- In the Deep Chand case, the SC held that the intention of Parliament while enacting the Motor Vehicles Amending Act, 1956 was to occupy the whole field of nationalization of motor transport. Hence, the U.P. Act providing for nationalization of transport services could not co-exist.
3. Intended occupation test
- Here the test turns on the intention of the legislature.
- E.g. – Where Parliament has laid down an exhaustive code on a particular matter, replacing the state law, the intention to occupy the whole field is inferred.
|DOCTRINE OF READING DOWN|
- When a legislature has used wide or vague words which may extend the operation of an act to a subject outside the relevant entry, the court then interprets the wide terms by giving them a restricted meaning. This is called reading down.
- The Act with the meaning assigned remains intra vires. The courts avoid striking down an Act.
|LIMITATIONS ON LEGISLATIVE POWERS|
- Art. 245 – Extent of laws made by Parliament and by the Legislatures of States
- Requirements of prior sanctions on specific subjects
- Legislation must not be fraud on the constitution
- Doctrine of colourable legislation
- Legislature cannot delegate matter of policy to the executive, only the powers to fill in the details can be delegated.
|CRITICAL ANALYSIS OF LEGISLATIVE RELATIONS|
- Essential legislative functions cannot be delegated by the legislature.
- The Centre has practically monopolised the Central lists. Several states want a reduction of the Central lists, with subjects transferred to the State List. West Bengal and Punjab have demanded that the Centre confine itself to just four subjects – defence, foreign affairs, communication and currency.
- States fear the abuse of Art. 200 (it is suggested that only 4–6 months should be given to the Centre to decide on reserved bills) and Art. 249 (national interest).
- Residuary powers should come to Concurrent list.
- Taxation should remain with center.
- There should be equal representation for states in Rajya Sabha
- Restore domicile qualification in Rajya Sabha.
- There should be broad agreement between center and state before introducing concurrent subjects bills.
- There should be liberal use of Art. 254 (inconsistency between state and parliament law)
- Sometimes Parliament can make skeletal laws, leaving room for the states to fill in the details.
|PROBLEMS IN LEGISLATIVE RELATIONS OF CENTRE-STATE|
- States like Punjab and West Bengal demanded that centre should confine itself to only four subjects – Currency, General Communication, External affairs and Defense.
- Abuse of Art. 200 – assent to bills and reservation of money/finance bills for the President’s consideration.
- Strong centre is desirable but issue is with over-centralisation.
- Several states sought abolition or reduction of subjects enumerated in concurrent list and transferring some of them to state list.
- Residuary powers must rest with the states, or they should be treated as concurrent items.
- Some states feared of abuse of Art. 249 – Power of Parliament (Rajya Sabha) to legislate with respect to a matter in the State List in the national interest.
|SARKARIA COMMISSION RECOMMENDATIONS|
- Residuary powers w.r.t. taxation should remain with the Union.
- Concurrent list should be retained,
- Centre should occupy only those subjects of concurrent list which demands nationwide uniformity.
- Fears of the states w.r.t. Art. 249 are baseless.
- As far as Art. 200 is concerned, the centre should dispose of these bills within four months.
|PUNCHHI COMMISSION RECOMMENDATIONS|
- Equal representation of the states in Rajya Sabha.
- Domicile qualification for the Rajya Sabha should be restored.
- Broad agreement should be reached between the centre and state before introducing a bill on subjects in concurrent list.
- Greater flexibility must be shown to the states w.r.t. the five subjects transferred by the 42nd Amendment – education, forests, weights and measures, protection of wild animals and birds, and administration of justice.
- The Centre should examine whether the administration of these five subjects under central laws has served its intended purpose; if not, the subjects must be restored to the State List.
|ADMINISTRATIVE RELATIONS (Art. 256-263)|
- Articles spanning from 256 to 263 in Part XI of the Constitution deal with the administrative relations between the Centre and the states. In addition, there are various other articles pertaining to the same matter.
- The scheme of allocating administrative responsibilities is drawn for the purpose of:
- The administration of law
- Achieving coordination between the centre and state
- The settlement of disputes between the centre and state
|WHY PROVISION FOR ADMINISTRATIVE RELATIONS?|
- The SC in Atiabari Tea Co. v. State of Assam (1961) observed that the “success and strength of federal policy depends on the maximum of cooperation and coordination between all the governments”.
- To ensure smooth and proper functioning of the administrative machinery at union and state levels, constitution embodies provisions to deal with all sorts of eventualities emerging as a result of the operation of federal system.
- According to Kautilya, the foundation of government lies in framing policy and administering its day-to-day business.
- Indian citizens live in a territory where the laws of both governments apply to them, and all executive, legislative and financial actions of both governments affect them.
- So the purpose of the country's administration is duly served only when there is a viable relationship between the Centre and the states.
|IMPORTANT ARTICLES PERTAINING TO ADMINISTRATIVE RELATIONS|
| Article | Subject matter |
|---|---|
| Art. 256 | Obligation of States and the Union |
| Art. 257 | Control of the Union over States in certain cases |
| Art. 258 | Power of the Union to confer powers, etc., on States in certain cases |
| Art. 258A | Power of the States to entrust functions to the Union |
| Art. 260 | Jurisdiction of the Union in relation to territories outside India |
| Art. 261 | Public acts, records and judicial proceedings |
| Art. 262 | Adjudication of disputes relating to waters of inter-State rivers or river valleys |
| Art. 263 | Provisions with respect to an inter-State Council |
|DISTRIBUTION OF EXECUTIVE POWERS|
- The executive power has been divided between the Centre and the states on the lines of the distribution of legislative powers, except in a few cases.
- Thus, the executive power of the Centre extends to the whole of India:
- to the matters on which the Parliament has exclusive power of legislation (Union List); and
- to the exercise of rights, authority and jurisdiction conferred on it by any treaty or agreement.
- Similarly, the executive power of a state extends to its territory in respect of matters on which the state legislature has exclusive power of legislation (State List).
- A law on a matter in the Concurrent List, though enacted by the Parliament, is to be executed by the states except when the Constitution or the Parliament has directed otherwise.
|OBLIGATION OF STATE AND CENTRE (ART. 256)|
- 256 – The executive power of a state should be exercised so as to ensure compliance with the laws made by the Parliament, and the GOI can also give directions for that purpose.
- The Constitution has placed two restrictions on the executive power of the states in order to give ample scope to the Centre for exercising its executive power in an unrestricted manner –
- To ensure compliance with the laws made by the Parliament and any existing law which applies in the state (a general obligation upon the state); and
- Not to impede or prejudice the exercise of the executive power of the Centre in the state (a specific obligation on the state).
- 365 – Where any state has failed to comply with any directions given by the Centre, it will be lawful for the President to hold that a situation has arisen in which the government of the state cannot be carried on in accordance with the provisions of the Constitution.
|CENTRE’S DIRECTION TO STATE IN CERTAIN CASES (ART.257)|
- As per 257 the Centre is empowered to give directions to the states with regard to the exercise of their executive power in the following matters:
- the construction and maintenance of means of communication (declared to be of national or military importance) by the state;
- the measures to be taken for the protection of the railways within the state;
- the provision of adequate facilities for instruction in the mother- tongue at the primary stage of education to children belonging to linguistic minority groups in the state; and
- the drawing up and execution of the specified schemes for the welfare of the Scheduled Tribes in the state.
- 257 also provides that the state should not prejudice or impede the exercise of the Union's executive power.
- The coercive sanction behind the Central directions under 365 is also applicable in these cases.
|MUTUAL DELEGATION OF FUNCTIONS (ART. 258 AND 258A)|
- The distribution of legislative powers between the Centre and the states is rigid.
- The Centre cannot delegate its legislative powers to the states and a single state cannot request the Parliament to make a law on a state subject.
- The distribution of executive power in general follows the distribution of legislative powers.
- The rigid division in the executive sphere may lead to occasional conflicts between the two.
- To promote harmony, the Constitution provides for inter-government delegation of executive functions in order to mitigate rigidity and avoid a situation of deadlock.
- 258 – Union can confer powers to states in some cases.
- Accordingly, the President may, with the consent of the state government, entrust to that government any of the executive functions of the Centre.
- Conversely, the Governor of a state may, with the consent of the Central government, entrust to that government any of the executive functions of the state. This mutual delegation of administrative functions may be conditional or unconditional.
- The Parliament can delegate and entrust the executive functions of the Centre to a state without the consent of that state.
|In 1947, the Indian Civil Service (ICS) was replaced by the IAS and the Indian Police (IP) by the IPS; both were recognised by the Constitution as All-India Services. In 1966, the Indian Forest Service (IFS) was created as the third All-India Service.|
- Hence, a law made by the Parliament on a subject of the Union List can confer powers and impose duties on a state, or authorise the conferring of powers and imposition of duties by the Centre upon a state (irrespective of the consent of the state concerned). Notably, the same thing cannot be done by the state legislature.
- The mutual delegation of functions between the Centre and the state can take place either under an agreement (the only method available to a state) or by legislation.
|COOPERATION BETWEEN CENTRE AND STATE|
- The Constitution contains the following provisions to secure cooperation and coordination between the Centre and the states:
- Full faith and credit is to be given throughout the territory of India to public acts, records and judicial proceedings of the Centre and every state. (Art. 261)
- The Parliament can provide for the adjudication of any dispute or complaint with respect to the use, distribution and control of waters of any inter-state river and river valley. (Art. 262)
- The President can establish an Inter-State Council to investigate and discuss subjects of common interest between the Centre and the states. Such a council was set up in 1990. (Art. 263)
- The Parliament can appoint an appropriate authority to carry out the purposes of the constitutional provisions relating to the inter-state freedom of trade, commerce and intercourse (Art. 301). But no such authority has been appointed so far.
a. All India Service
- 312 – Authorizes the Parliament to create new All-India Services on the basis of a Rajya Sabha resolution to that effect.
- All India services are controlled jointly by the Centre and the states. The ultimate control lies with the Central government while the immediate control vests with the state governments.
- Each of these three All-India Services forms a single service with common rights and status and uniform scales of pay throughout the country, irrespective of their division among different states.
- Though the all-India services violate the principle of federalism under the Constitution by restricting the autonomy and patronage of the states, they are supported on the ground that –
- They help in maintaining high standard of administration in the Centre as well as in the states;
- They help to ensure uniformity of the administrative system throughout the country;
- They facilitate liaison, cooperation, coordination and joint action on the issues of common interest between the Centre and the states.
b. Public service commission
In the field of public service commissions, the Centre-state relations are as follows:
- The Chairman and members of a state public service commission, though appointed by the governor of the state, can be removed only by the President.
- The Parliament can establish a Joint State Public Service Commission (JSPSC) for two or more states on the request of the state legislatures concerned. The chairman and members of the JSPSC are appointed by the President.
- The Union Public Service Commission (UPSC) can serve the needs of a state on the request of the state governor and with the approval of the President.
- The UPSC assists the states (when requested by two or more states) in framing and operating schemes of joint recruitment for any services for which candidates possessing special qualifications are required.
c. Integrated judicial system
- Although India has a federal setup, there is no dual system of administration of justice. The Constitution established an integrated judicial system with the Supreme Court at the top and the state high courts below it. This single system of courts enforces both the Central laws as well as the state laws. This is done to eliminate diversities in the remedial procedure.
- The Parliament can establish a common high court for two or more states. Example- Maharashtra and Goa or Punjab and Haryana have a common high court.
|RELATIONS DURING EMERGENCIES|
- National Emergency (Art. 352) – the Centre becomes entitled to give executive directions to a state on ‘any’ matter. Thus, the state governments are brought under the complete control of the Centre, though they are not suspended.
- President’s Rule (Art. 356) – the President can assume to himself the functions of the state government and powers vested in the Governor or any other executive authority in the state.
- Financial Emergency (Art. 360) – the Centre can direct the states to observe canons of financial propriety and can give other necessary directions including the reduction of salaries of persons serving in the state.
- The Constitution contains the following other provisions which enable the Centre to exercise control over the state administration:
- 355 imposes two duties on the Centre:
(a) to protect every state against external aggression and internal disturbance;
(b) to ensure that the government of every state is carried on in accordance with the provisions of the Constitution.
- Appointment of Governor of a state by the president. He holds office during the pleasure of the President. The governor acts as an agent of the Centre in the state. He submits periodical reports to the Centre about the administrative affairs of the state.
- The State Election Commissioner, though appointed by the governor of the state, can be removed only by the President.
|EXTRA-CONSTITUTIONAL DEVICES TO FOSTER CENTRE-STATE RELATIONS|
- There are extra-constitutional devices to promote cooperation and coordination between the Centre and the States. These consist of a number of advisory bodies and conferences held at the Central level.
- Example – NITI Aayog, the Zonal Councils, the North Eastern Council, University Grants Commission etc.
- The important conferences held either annually or otherwise to facilitate Centre-state consultation on a wide range of matters are as follows:
- The governors’ conference (presided over by the President).
- The chief ministers’ conference (presided over by the prime minister).
- The chief justices’ conference (presided over by the chief justice of India).
- The conference of vice-chancellors.
- The home ministers’ conference (presided over by the Central home minister)
|MAJOR ISSUE OF ADMINISTRATIVE RELATIONS|
- There is heavy abuse of Art. 356 and it should be curtailed
- Management of All India Service.
- Position, appointment and role of governor.
- Central administrative directions to the states.
- Binding nature of the schemes on the states.
|PUNCHHI COMMISSION RECOMMENDATIONS|
- The Constitution has provided limited institutional mechanisms for inter-state and Centre-state coordination, and even these are underutilised.
- Set up new All-India Services in other domains as well – judiciary, education, health.
- The Inter-State Council should be strengthened for dispute settlement.
- Zonal councils must meet at least twice a year.
- There should be a rejuvenation of the NIC.
|FINANCIAL RELATIONS (Art. 268-293)|
- Articles spanning from 268 to 293 in Part XII of the Constitution deal with Centre – state financial relations.
- All the levels of the government must have adequate finance at their disposal.
- If the legislative and administrative authority of the Centre and the states is to be maintained, they must be financially autonomous.
- In Canada and Australia, central grants to the states are a must for the states to survive.
- The Swiss Constitution makes the centre subservient to the states.
- The American Constitution envisages financial independence between the states and the centre, but there too the states rely on the centre's grants-in-aid.
- The Indian Constitution does not give a watertight division of financial resources but seeks to secure an equitable distribution.
|EVOLUTION OF FINANCIAL RELATIONS|
- In 1870, Lord Mayo introduced the “devolution scheme”, which for the first time initiated financial relations between the GOI and the governments of the constituent units.
- Income tax was levied much before the GOI Act, 1919 and was shared between the central and provincial governments.
- The GOI Act, 1919 did not make a rigid division between the revenues of the governments but introduced separate revenue heads for them (under dyarchy).
- The Meston Award of the 1920s stated that “administration and finance need not be with the same authority”.
- The GOI Act, 1935 recognised that certain taxes other than income tax may also be collected by the central government and shared with the provincial governments.
|PRINCIPLES OF FINANCIAL RELATIONS|
- There should be resource-responsibility parity.
- Lower levels of federal units should be able to raise resources independently.
- Elasticity of expenditure and income.
- Equalisation of transfers, both horizontal and vertical, between states.
- Efficiency should be ensured in resource utilization.
|TAXATION ONLY BY AUTHORITY OF LAW (ART.265)|
- 265 – Taxes not to be imposed save by authority of law – “No tax shall be levied or collected except by authority of law”.
- No tax can be imposed by an executive order.
- The law providing for imposition of tax must be a valid law (Chotabhai vs. Union of India 1962).
- Example– A tax law would be void if it violates fundamental right to equality.
- The legislature can also impose a tax twice on the same thing; this is double taxation (Avinder Singh vs. State of Punjab, 1979).
|CONSOLIDATED FUND (ART.266)|
- 266 – There shall be a Consolidated Fund of India and a Consolidated Fund for each State.
- Consolidated Fund of India is related to all revenues received by the government and expenses made by it, excluding the exceptional items.
- All revenues received by the government by way of direct taxes and indirect taxes, money borrowed and receipts from loans given by the government flow into the Consolidated Fund of India.
- All government expenditure is made from this fund, except exceptional items (which are met from the Contingency Fund or the Public Account)
- Importantly, no money can be withdrawn from this fund without the Parliament’s approval.
|CONTINGENCY FUND (ART.267)|
- 267(1) – Established Contingency Fund of India.
- It is in the nature of an imprest (money maintained for a specific purpose). Accordingly, Parliament enacted the Contingency fund of India Act 1950.
- The fund is held by the Finance Secretary (Department of Economic Affairs) on behalf of the President of India and it can be operated by executive action.
- The Contingency Fund of India exists for disasters and related unforeseen expenditures.
- The Contingency Fund of each state government is established under 267(2) of the Constitution. It is held by the Governor, and its corpus varies from state to state (fixed by the state legislature).
|SOURCES OF REVENUES|
| Union Govt. | State Govt. |
|---|---|
| Custom and excise duty | Grants under Art. 275 |
| Income tax | Devolution on recommendations of the Finance Commission |
| Corporation tax | Provisions of CAG |
| Estate duty (except agriculture) | Toll tax, vehicle tax |
| Excise duty on tobacco and other intoxicants | Tax on minerals |
| Succession duty (except agriculture) | Entertainment tax |
| Inter-state trade tax | Housing taxes |
| Article | Subject matter |
|---|---|
| Art. 268 | Duties levied by the Union but collected and appropriated by the states |
| Art. 269 | Taxes levied and collected by the Union but assigned to the states |
| Art. 269A | Levy and collection of goods and services tax in course of inter-state trade or commerce |
| Art. 270 | Taxes levied and distributed between the Union and the states |
| Art. 271 | Surcharge on certain duties and taxes for purposes of the Union |
| Art. 274 | Prior recommendation of President required to bills affecting taxation in which states are interested |
| Art. 275 | Grants from the Union to certain states |
| Art. 279A | Goods and Services Tax Council |
| Art. 280 | Finance Commission |
| Art. 285 | Exemption of property of the Union from state taxation |
| Art. 289 | Exemption of property and income of a state from Union taxation |
| Art. 292 | Borrowing by the Government of India |
| Art. 293 | Borrowing by states |
|ALLOCATION OF TAXING POWERS|
| List | Description |
|---|---|
| Union List | It contains 97 subjects of national importance such as defence, railways, currency, foreign affairs, post, among others. Only the Parliament can make laws on this list. |
| State List | It comprises 66 subjects of local importance such as police, agriculture, health, among others. State legislatures make laws on these subjects. |
| Concurrent List | It has 47 subjects of common concern to both the centre and the state governments, like marriage, social security, etc. Both the Parliament and the state legislatures can make laws on these subjects; if a conflict arises, the central legislation prevails. |
- Centre list – The Parliament has exclusive power to levy taxes on subjects enumerated in the Union List (which are 13 in number).
- State list – The state legislature has exclusive power to levy taxes on subjects enumerated in the State List (which are 18 in number).
- Concurrent list – There are no tax entries in the Concurrent List. In other words, the concurrent jurisdiction is not available with respect to tax legislation.
|The residuary power of taxation is vested in the Parliament. Under this provision, the Parliament has imposed gift tax, wealth tax and expenditure tax.|
- But, the 101st Amendment Act of 2016 has made an exception by making a special provision with respect to GST. This Amendment has conferred concurrent power upon Parliament and State Legislatures to make laws governing GST.
- The Constitution also draws a distinction between the power to levy and collect a tax and the power to appropriate the proceeds of the tax so levied and collected.
|RESTRICTIONS ON THE TAXING POWERS OF THE STATE|
- A state legislature can impose taxes on professions, trades, callings and employments. But, the total amount of such taxes payable by any person should not exceed ₹2,500 per annum.
- A state legislature can impose taxes on the sale or purchase of goods (other than newspapers). But this power of the states to impose sales tax is subject to the following restrictions –
- 287 – A state legislature can impose a tax on the consumption or sale of electricity. But no tax can be imposed on the consumption or sale of electricity which is (a) consumed by the Centre or sold to the Centre; or (b) consumed in the construction, maintenance or operation of any railway by the Centre or by the concerned railway company, or sold to the Centre or the railway company for the same purpose.
- 288 – A state legislature can impose a tax in respect of any water or electricity stored, generated, consumed, distributed or sold by any authority established by Parliament for regulating or developing any inter-state river or river valley. However, such a law, to be effective, should be reserved for the President's consideration and receive his assent.
|DISTRIBUTION OF TAX REVENUE|
- The 80th Amendment Act of 2000 and the 101st Amendment Act of 2016 have introduced major changes in the scheme of the distribution of tax revenues between the centre and the states.
- 80th Amendment – Enacted to give effect to the recommendations of the 10th Finance Commission. The Commission recommended ‘Alternative Scheme of Devolution’ which states that out of the total income obtained from certain central taxes and duties, 29% should go to the states.
- 101st Amendment – paved the way for the introduction of a new indirect tax regime – GST. Accordingly, the Amendment conferred concurrent taxing powers upon the Parliament and the State Legislatures to make laws for levying GST on every transaction of supply of goods or services or both. The Amendment provided for subsuming of various central indirect taxes and levies such as – Central Excise Duty, Additional Excise Duties, Excise Duty levied under the Medicinal and Toilet Preparations (Excise Duties) Act, 1955, Service Tax, Additional Customs Duty commonly known as Countervailing Duty, Central Surcharges and Cesses so far as they related to the supply of goods and services.
- Further, the 101st Amendment removed Art. 268-A as well as Entry 92-C in the Union List, both of which dealt with service tax (added earlier by the 88th Amendment Act of 2003).
- After the 80th and 101st Amendment, the present position with respect to the distribution of tax revenues between the centre and the states is as follows:
A. Taxes Levied by the Centre but Collected and Appropriated by the States (Art. 268):
- This category includes the stamp duties on bills of exchange, cheques, promissory notes, policies of insurance, transfer of shares and others.
- The proceeds of these duties levied within any state do not form a part of the Consolidated Fund of India, but are assigned to that state.
B. Taxes Levied and Collected by the Centre but Assigned to the States (Art 269):
- Taxes on the sale or purchase of goods (other than newspapers) in the course of inter-state trade or commerce.
- Taxes on the consignment of goods in the course of inter-state trade or commerce.
- The net proceeds of these taxes do not form a part of the Consolidated Fund of India. They are assigned to the concerned states in accordance with the principles laid down by the Parliament.
C. Levy and Collection of GST in Course of Inter-State Trade or Commerce (Art 269A):
- The Goods and Services Tax (GST) on supplies in the course of inter-state trade or commerce is levied and collected by the Centre. However, this tax is divided between the Centre and the States in the manner provided by Parliament on the recommendations of the GST Council.
- The Parliament is also authorized to formulate the principles for determining the place of supply, and when a supply of goods or services or both takes place in the course of inter-state trade or commerce.
D. Taxes Levied and Collected by the Centre but Distributed between the Centre and the States (Art 270): This category includes all taxes and duties referred to in the Union List except the following:
- Duties and taxes referred to in Art. 268, 269 and 269-A;
- Surcharge on taxes and duties referred to in Art 271;
- Any cess levied for specific purposes.
- The manner of distribution of the net proceeds of these taxes and duties is prescribed by the President on the recommendation of the Finance Commission.
E. Surcharge on Certain Taxes and Duties for Purposes of the Centre (Art 271):
- The Parliament can at any time levy the surcharges on taxes and duties referred to in Art. 269 and 270.
- The proceeds of such surcharges go to the Centre exclusively (it should be noted that states have no share in these surcharges)
- However, GST is exempted from this surcharge (surcharge cannot be imposed on the GST)
F. Taxes Levied and Collected and Retained by the States:
- These are the taxes belonging to the states exclusively. They are enumerated in the State List and are 18 in total. Some important ones are:
- land revenue;
- taxes on agricultural income, succession and estate duties in respect of agricultural land;
- taxes on lands and buildings, on mineral rights, on animals and boats, on road vehicles, on luxuries, on entertainments, and on gambling;
- excise duties on alcoholic liquors for human consumption and narcotics;
- taxes on the entry of goods into a local area, on advertisements (except newspapers), on consumption or sale of electricity, and on goods and passengers carried by road or on inland waterways;
- taxes on professions, trades, callings and employments, not exceeding ₹2,500 per annum;
- capitation taxes;
- stamp duty on documents (except those specified in the Union List);
- sales tax (other than on newspapers); and
- Fees on the matters enumerated in the state list (except court fees).
|DISTRIBUTION OF NON-TAX REVENUE|
| The Centre | The States |
|---|---|
| Posts and telegraphs | Irrigation |
| Broadcasting | State public sector enterprises |
| Coinage and currency | Escheat and lapse |
| Central public sector enterprises | Others |
| Escheat and lapse | |
|GRANTS IN AID|
The Constitution provides for grants-in-aid to the states from the Central resources. There are two types of grants-in-aid –
a. Statutory grants (Art. 275)
- Art. 275 empowers the Parliament to make grants to the states which are in need of financial assistance and not to every state. These sums are charged on the Consolidated Fund of India every year.
- The Constitution also provides for specific grants for promoting the welfare of the scheduled tribes in a state or for raising the level of administration of the scheduled areas in a state (including the State of Assam).
- The statutory grants under Art. 275 are given to the states on the recommendation of the Finance Commission.
b. Discretionary grants (Art. 282)
- Art. 282 empowers both the Centre and the states to make any grants for any public purpose, even if it is not within their respective legislative competence.
- These grants are known as discretionary grants, the reason being that the Centre is under no obligation to give these grants and the matter lies within its discretion.
- These grants help the states financially to fulfil plan targets and give the Centre some leverage to influence and coordinate state action to effectuate the national plan.
- The Constitution also provided for a third type of grants-in-aid, but for a temporary period.
- A provision was made for grants in lieu of export duties on jute and jute products to the States of Assam, Bihar, Orissa and West Bengal.
- These grants were to be given for a period of ten years from the commencement of the Constitution.
- These sums were charged on the Consolidated Fund of India and were made to the states on the recommendation of the Finance Commission.
|GST COUNCIL (ART. 279-A)|
- The effective and efficient administration of GST requires cooperation and coordination between the Centre and the States.
- The 101st Amendment Act of 2016 provided for the establishment of a GST Council to serve as a forum for this consultation.
- 279-A empowered the President to constitute a GST Council (joint forum of the Centre and the States). It is required to make recommendations to the Centre and the States on the following matters:
- The taxes, cesses and surcharges levied by the Centre, the States and the local bodies that would get merged in GST.
- The goods and services that may be subjected to GST or exempted from GST.
- Model GST Laws, principles of levy, apportionment of GST levied on supplies in the course of inter-state trade or commerce and the principles that govern the place of supply.
- The threshold limit of turnover below which goods and services may be exempted from GST.
- The rates including floor rates with bands of GST.
- Any special rate or rates for a specified period to raise additional resources during any natural calamity or disaster.
|FINANCE COMMISSION (ART. 280)|
- 280 provides for a Finance Commission as a quasi-judicial body. It is constituted by the President every fifth year or even earlier.
- It is required to make recommendations to the President on the following matters:
- The distribution of the net proceeds of taxes to be shared between the Centre and the states, and the allocation between the states of the respective shares of such proceeds.
- The principles which should govern the grants-in-aid to the states by the Centre (i.e., out of the Consolidated Fund of India).
- The measures needed to augment the Consolidated fund of a state to supplement the resources of the panchayats and the municipalities in the state on the basis of the recommendations made by the State Finance Commission.
- Any other matter referred to it by the President in the interests of sound finance.
- The Constitution envisages the Finance Commission as the “balancing wheel of fiscal federalism in India”.
|PROTECTION OF INTEREST OF THE STATES|
- To protect the interest of states in the financial matters, the Constitution lays down that the following bills can be introduced in the Parliament only on the recommendation of the President:
- A bill which imposes or varies any tax or duty in which states are interested;
- A bill which varies the meaning of the expression “agricultural income” as defined for the purposes of the enactments relating to Indian income tax;
- A bill which affects the principles on which moneys are or may be distributable to states; and
- A bill which imposes any surcharge on any specified tax or duty for the purpose of the Centre.
|BORROWINGS BY THE CENTRE AND STATE|
Net proceeds means the proceeds of a tax or a duty minus the cost of collection. The net proceeds of a tax or a duty in any area are to be ascertained and certified by the Comptroller and Auditor-General of India. His certificate is final.
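- Illustration (hypothetical figures, not drawn from the Constitution or any Finance Commission report): if a duty yields ₹1,000 crore in a year and ₹50 crore is spent on collecting it, the net proceeds are ₹1,000 crore - ₹50 crore = ₹950 crore; only this net amount is assigned or distributed.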
- The Constitution makes the following provisions with regard to the borrowing powers of the Centre and the states:
- The Central government can borrow (within the limits fixed by the Parliament) either within India or outside upon the security of the Consolidated Fund of India or can give guarantees. However, no such law has been enacted by the Parliament till date.
- The state government can borrow within India only (and not abroad) (within the limits fixed by the legislature of that state) upon the security of the Consolidated Fund of the State or can give guarantees.
- The Central government can make loans to any state or give guarantees in respect of loans raised by any state. Any sums required for the purpose of making such loans are to be charged on the Consolidated Fund of India.
- A state cannot raise any loan without the consent of the Centre, if there is still outstanding any part of a loan made to the state by the Centre or in respect of which a guarantee has been given by the Centre.
|INTER-GOVERNMENT TAX IMMUNITIES|
1. Exemption of Union property from state taxation (Art. 285)
- Centre’s property is exempted from all taxes imposed by a state or any authority within a state like municipalities, district boards, panchayats and so on. But, the Parliament is empowered to remove this ban.
- Property includes lands, buildings, chattels, shares, debts, and everything that has a money value, whether movable or immovable, tangible or intangible.
- The property may be used for sovereign (like armed forces) or commercial purposes.
- The corporations or companies created by the Central government are not immune from state taxation or local taxation (as they are separate legal entities).
2. Exemption of State property from central taxation (Art. 289)
- The property and income of a state are exempted from Central taxation. Such income may be derived from sovereign functions or commercial functions.
- But the Centre can tax the commercial operations of a state if Parliament provides so.
- However, the Parliament can declare any particular trade or business as incidental to the ordinary functions of the government and it would then not be taxable.
- It should be noted that, the property and income of local authorities situated within a state are not exempted from the Central taxation.
- Likewise, the property or income of corporations and companies owned by a state can be taxed by the Centre.
- The Centre can impose customs duty on goods imported or exported by a state, or an excise duty on goods produced or manufactured by a state – advisory opinion of the Supreme Court, 1963.
|EFFECTS OF EMERGENCIES|
1. National emergency (Art. 352)
- The President can modify the constitutional distribution of revenues between the Centre and the states while a national emergency is in operation.
- The president can either reduce or cancel the transfer of finances (both tax sharing and grants-in-aid) from the Centre to the states.
- Such modification continues till the end of the financial year in which the emergency ceases to operate.
2. Financial emergency (Art. 360)
- In case of a financial emergency, the Centre can give directions to the states: (i) to observe the specified canons of financial propriety; (ii) to reduce the salaries and allowances of all classes of persons serving in the state; and (iii) to reserve all money bills and other financial bills for the consideration of the President.
|FACTORS RESPONSIBLE FOR POOR STATE FINANCES|
- Populist policies to win the elections
- Less elastic nature of state taxes
- Corruption in the state tax administration.
- States have not tapped their fullest taxation potential – agriculture is largely out of the tax net.
- State-level public sector enterprises are largely inefficient and fail to yield fiscal benefits.
- Limited avenues of taxation available with the states.
- Between 1986 and 1997, state governments accumulated high-cost debts, and the recommendations of the Fifth Pay Commission came as the final hammer blow to state finances.
- The outbreak of the COVID-19 pandemic drained the financial resources of the states on an unprecedented scale.
|PUNCHHI COMMISSION ON CENTRE-STATE FINANCIAL RELATIONS|
- All future central laws involving the states should provide for cost sharing, as in the RTE Act.
- Do away with the ceiling on profession tax under Art. 276.
- Adopt a state-specific approach towards fiscal consolidation as opposed to a uniform FRBM Act.
- A part of the proceeds from spectrum should be shared with the states for infrastructure projects.
- Synchronisation of the awards of the Union Finance Commission and the State Finance Commissions.
- Setting up an Inter-State Commerce Commission under Art. 301.
- Existing central laws where the states are entrusted with implementation should be suitably modified to provide for cost sharing.
|TRENDS IN CENTRE-STATE RELATIONS|
- Till 1967, the centre–state relations by and large were smooth due to one- party rule at the Centre and in most of the states.
- In 1967 elections, the Congress party was defeated in nine states and its position at the Centre became weak. This changed political scenario heralded a new era in the Centre–state relations.
- The non-Congress governments in the states opposed the increasing centralisation and intervention of the Central government.
- They raised the issue of state autonomy and demanded more powers and financial resources to the states. This caused tensions and conflicts in Centre–state relations.
|FRICTIONAL AREAS IN CENTRE-STATE RELATIONS|
- Mode of appointment and dismissal of governor;
- Discriminatory and partisan role of governors;
- Imposition of President’s Rule for partisan interests;
- Deployment of Central forces in the states to maintain law and order;
- Reservation of state bills for the consideration of the President;
- Discrimination in financial allocations to the states;
- Role of Planning Commission in approving state projects;
- Management of All-India Services (IAS, IPS, and IFS);
- Use of electronic media for political purposes;
- Appointment of enquiry commissions against the chief ministers;
- Sharing of finances (between Centre and states);
- Encroachment by the Centre on the State List.
|COMMITTEES ON CENTRE-STATE RELATIONS|
a. Administrative Reforms Commission (ARC)
- The Central government appointed a six-member ARC in 1966 under the chairmanship of Morarji Desai
- In its final report of 1969, it made 22 recommendations for improving Centre-state relations. The important recommendations are:
- Establishment of an Inter-State Council under 263 of the Constitution.
- Appointment of persons having long experience in public life and administration and non-partisan attitude as governors.
- Delegation of powers to the maximum extent to the states.
- Transferring of more financial resources to the states to reduce their dependency upon the Centre.
- Deployment of Central armed forces in the states either on their request or otherwise.
- No action was taken by the Central government on the recommendations of the ARC.
b. Rajamannar committee
- In 1969, the DMK government in Tamil Nadu appointed a three-member committee under the chairmanship of Dr. P. V. Rajamannar to examine the entire question of Centre-state relations.
- The committee submitted its report to the Tamil Nadu Government in 1971.
- The Committee identified the following reasons for the prevailing unitary (centralisation) trends:
- Certain provisions in the Constitution which confer special powers on the Centre;
- One-party rule both at the Centre and in the states;
- Inadequacy of states’ fiscal resources and consequent dependence on the Centre for financial assistance;
- The institution of Central planning and the role of the Planning Commission.
- The important recommendations of the committee are as follows:
- An Inter-State Council should be set up immediately;
- Finance Commission should be made a permanent body;
- Planning Commission should be disbanded and its place should be taken by a statutory body;
- 356, 357 and 365 (dealing with President’s Rule) should be totally omitted;
- The provision that the state ministry holds office during the pleasure of the governor should be omitted;
- Certain subjects of the Union List and the Concurrent List should be transferred to the State List;
- the residuary powers should be allocated to the states;
- All-India services should be abolished.
c. Anandpur Sahib resolution
- In 1973, the Akali Dal (Punjab) adopted a resolution containing both political and religious demands in a meeting held at Anandpur Sahib in Punjab.
- The resolution demanded that the Centre’s jurisdiction should be restricted only to defence, foreign affairs, communications, and currency
- The entire residuary powers should be vested in the states. It stated that the Constitution should be made federal in the real sense and should ensure equal authority and representation to all the states at the Centre.
d. West Bengal memorandum
- In 1977, the West Bengal Government (led by the Communists) published a memorandum on Centre-state relations and sent it to the Central government.
- The memorandum, among other things, suggested the following:
- The word ‘union’ in the Constitution should be replaced by the word ‘federal’;
- The jurisdiction of the Centre should be confined to defence, foreign affairs, currency, communications and economic co-ordination;
- All other subjects including the residuary should be vested in the states;
- 356 and 357 and 360 should be repealed;
- State’s consent should be made obligatory for formation of new states or reorganisation of existing states;
- Of the total revenue raised by the Centre from all sources, 75 per cent should be allocated to the states;
- Rajya Sabha should have equal powers with that of the Lok Sabha;
- There should be only Central and state services and the all India services should be abolished.
- The Central government did not accept the demands made in the memorandum.
e. Sarkaria commission
- In 1983, the Central government appointed a three-member Commission on Centre-state relations under the chairmanship of R. S. Sarkaria.
- The commission was asked to examine and review the working of existing arrangements between the Centre and states in all spheres and recommend appropriate changes and measures.
- The Commission made 247 recommendations to improve Centre– state relations. The important recommendations are mentioned below:
- A permanent Inter-State Council called the Inter-Governmental Council should be set up under 263.
- 356 (President‘s Rule) should be used very sparingly, in extreme cases as a last resort when all the available alternatives fail.
- The institution of All-India Services should be further strengthened and some more such services should be created.
- The residuary powers of taxation should continue to remain with the Parliament, while the other residuary powers should be placed in the Concurrent List.
- When the president withholds his assent to the state bills, the reasons should be communicated to the state government.
- The zonal councils should be constituted afresh and reactivated to promote the spirit of federalism.
- The Centre should have powers to deploy its armed forces, even without the consent of states. However, it is desirable that the states should be consulted.
- The Centre should consult the states before making a law on a subject of the Concurrent List.
- The procedure of consulting the chief minister in the appointment of the state governor should be prescribed in the Constitution itself.
- The net proceeds of the corporation tax may be made permissibly shareable with the states.
- The governor cannot dismiss the council of ministers so long as it commands a majority in the assembly.
- The governor‘s term of five years in a state should not be disturbed except for some extremely compelling reasons.
- No commission of enquiry should be set up against a state minister unless a demand is made by the Parliament.
- The surcharge on income tax should not be levied by the Centre except for a specific purpose and for a strictly limited period.
- Steps should be taken to uniformly implement the three language formula in its true spirit.
- No change in the role of Rajya Sabha and Centre‘s power to reorganise the states.
- The commissioner for linguistic minorities should be activated.
- Till December 2011, the Central government had implemented 180 (out of 247) recommendations of the Sarkaria Commission.
- The most important is the establishment of the Inter-State Council in 1990.
f. Punchhi commission
- The second Commission on Centre-State Relations was set up by the GoI in April 2007 under the chairmanship of M. M. Punchhi. It submitted its report in April 2010.
- In finalising the report, the Commission took extensive help from the Sarkaria Commission report, the National Commission to Review the Working of the Constitution (NCRWC) report and the Second ARC report.
- The Planning Commission has a crucial role in the current situation, but its role should be that of coordination rather than that of micro-managing the sectoral plans of the Central ministries and the states.
- Steps should be taken for the setting up of an Inter-State Trade and Commerce Commission under 307 read with Entry 42 of List-I.
- This Commission should be vested with both advisory and executive roles with decision making powers.
- As a Constitutional Body, the decisions of the Commission should be final and binding on all states as well as the Union of India.
Any party aggrieved with the decision of the Commission may prefer an appeal to the Supreme Court.
- What are the benefits of a price system?
- What decisions do prices help consumers and producers make quizlet?
- In what ways do prices help us allocate goods and services quizlet?
- What are the 5 benefits of the price system?
- How do prices connect markets in an economy?
- How is it difficult to distribute goods and services without a price system?
- How do prices help allocate resources between markets?
- What are the advantages of competitive pricing?
- What are the 3 functions of prices?
- How do prices act as signals to allocate goods and services?
- What is an economic model and how is it useful to business and others?
- What is an economic signal?
- What factors affect prices?
- What is the importance of price?
- What are 3 problems with rationing?
- How does the government allocate scarce resources?
- What are the effects of price controls?
- What is the difference between demand and quantity demanded?
What are the benefits of a price system?
– The price system is flexible and free, and it allows for a wide diversity of goods and services.
Prices can act as a signal to both producers and consumers: – A high price tells producers that a product is in demand and they should make more.
– A low price indicates to producers that a good is being overproduced.
What decisions do prices help consumers and producers make quizlet?
When prices are high, producers produce more, and consumers buy less. When prices are low, producers produce less, and consumers demand more.
In what ways do prices help us allocate goods and services quizlet?
The price system is the most efficient way to allocate resources. Prices do more than help individuals make decisions; they also help allocate resources both within and between markets. Rationing is a system of allocating goods and services without prices. The price system uses price whereas rationing does not.
What are the 5 benefits of the price system?
The price system: (i) tells producers how much their product will cost to make; (ii) encourages producers to supply more when prices are high; (iii) more competitors mean more choices available on the market; (iv) encourages wise use of resources; and (v) directs production toward the products that consumers want.
How do prices connect markets in an economy?
Prices connect markets because changes in one market create a ripple effect that is felt through prices in another market. … The price of the product at the equilibrium quantity is the equilibrium price.
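As a purely illustrative sketch with assumed numbers (the figures here are not from the article): suppose demand is Qd = 100 - 2P and supply is Qs = 20 + 2P. Setting Qd = Qs gives 100 - 2P = 20 + 2P, so P = 20 and Q = 60; $20 is then the equilibrium price at the equilibrium quantity of 60 units, and a shift in either curve in one market would change this price and ripple into related markets.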
How is it difficult to distribute goods and services without a price system?
Allocating resources without price, or rationing, is difficult because first, almost everyone feels his or her share is too small. … The government sets the price and it can’t change, therefore equilibrium can’t be reached.
How do prices help allocate resources between markets?
Markets use prices as signals to allocate resources to their highest valued uses. Consumers will pay higher prices for goods and services that they value more highly. … The interaction of demand and supply in product and resource markets generates prices that serve to allocate items to their highest valued alternatives.
What are the advantages of competitive pricing?
The advantages of competitive pricing strategy: Low price – the products or services you offer are priced lower than your competitors'. … High price – the prices of the products or services you offer are higher in comparison to your competitors'. … Matched price – the prices of the products or services match the prices offered by your competitors.
What are the 3 functions of prices?
Prices have three separate functions: rationing, signalling and incentive functions. These ensure collectively that resources are allocated correctly by coordinating the buying and selling decisions in the market. A supply and demand diagram is typically used to illustrate how the price mechanism works.
How do prices act as signals to allocate goods and services?
Prices act as signals by telling producers and consumers how to adjust: a high price signals producers to supply more and consumers to buy less, while a low price signals the opposite, so goods and services flow towards their highest valued uses.
What is an economic model and how is it useful to business and others?
In economics, a model is a theoretical construct representing economic processes by a set of variables and a set of logical and/or quantitative relationships between them. The economic model is a simplified, often mathematical, framework designed to illustrate complex processes.
What is an economic signal?
Economic signal – any piece of information that helps people make better economic decisions. Inefficient – a market or economy is described as this if there are missed opportunities, i.e. some people could be made better off without making other people worse off.
What factors affect prices?
Pricing – factors to consider when setting price: Competitors – a huge impact on pricing decisions. … Costs – a business cannot ignore the cost of production or buying a product when it comes to setting a selling price. … The state of the market for the product – if there is high demand for the product but a shortage of supply, then the business can put prices up.
What is the importance of price?
Price is important to marketers because it represents marketers’ assessment of the value customers see in the product or service and are willing to pay for a product or service.
What are 3 problems with rationing?
The first problem with rationing is that almost everyone feels his or her share is too small. The second problem is the administrative cost of rationing: someone must pay the salaries and the printing and distribution costs of the coupons. The third is the negative impact on the incentive to produce.
How does the government allocate scarce resources?
The price mechanism acts as an allocative mechanism for allocating scarce resources in a free market. … The non-market sector (government) intervenes in the allocation of scarce resources through the planning mechanism. It uses subsidies and taxes to determine the relative price to be charged in the market.
What are the effects of price controls?
Over the long term, price controls inevitably lead to problems such as shortages, rationing, deterioration of product quality, and black markets that arise to supply the price-controlled goods through unofficial channels.
What is the difference between demand and quantity demanded?
In economics, demand refers to the demand schedule, i.e. the demand curve, while the quantity demanded is a point on a single demand curve which corresponds to a specific price.
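To illustrate with made-up numbers: at a price of $5 the quantity demanded might be 100 units, and at $4 it might rise to 120 units; both points lie on the same demand curve (a change in quantity demanded), whereas a change in income or tastes would shift the entire curve (a change in demand).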
Many political events characterized the crisis that precipitated the Civil War (1861-1865). In 1854, the Kansas-Nebraska Act was enacted, giving the territory sovereign power to decide whether to abolish or allow African slavery (Kelly 91). This single event turned Kansas into a battlefield pitting the proponents of slavery against abolitionists mainly from the North. Northerners wanted the territory to become a free Kansas state while the Southerners wanted it to retain slavery, bringing to the fore the ideological differences between the Union and the Confederacy. Besides slavery, the North and South differed in their economic models.
The North was industrializing fast while the South maintained the social order. The stark differences and disagreements over slavery, tariffs, the subjugation of individual rights, and federal/state powers precipitated the Civil War (Guelzo 55). This paper examines the issues central to the Civil War, its causes, and its effects on the Union.
Although ending African slavery was the primary goal of the Civil War, some Northern soldiers fought on moral grounds while others battled to preserve the Union. This paper examines the research question: was the Civil War fought to end slavery, preserve the Union, and entrench individual rights, or was it a struggle for equity? There is no doubt that the conflict was a defining moment in American history, coming after the Revolutionary War of the 1770s and the Constitution of the Republic. Its critical milestone was that it ensured Americans not only remained united but also enjoyed the civil liberties enshrined in the Constitution and an inclusive economic system. Though particular challenges, such as racism, threaten this unity and these civil rights, the War shaped American society during the Reconstruction era and continues to shape the nation's unity and values in the modern era.
The Incompatible Economic Interests
The discussion of what caused the Civil War to break out primarily focuses on African slavery, individual freedoms, and state sovereignty. However, it can be argued that deeper issues related to socioeconomic disparities between the two regions precipitated the conflict. The rift between the North and the South has its roots in the colonial era where the slave trade thrived as a source of cheap labor to the plantations owned by white settlers. The antipathy between the North and South grew after the Republic was formed because the two regions pursued economic models that were largely antithetical.
According to Fleming, the North and South differed in industrialization levels and demands from the government (73). The North was attracting thousands of working immigrants and women seeking to work in new industries in main urban centers. The availability of cheap labor fueled industrialization in the North forcing entrepreneurs to lobby the government to protect them from cheap European imports (Fleming 77). In this view, the demands of the industrialized North were strikingly different from the agricultural South.
Fleming further contends that after the Revolution, the South became a cotton empire with huge tracts of arable land supporting plantations of tobacco, cotton, and rice for export (96). Therefore, the South's reliance on slave labor was founded on the economic prosperity argument. The prospect of the North dominating Congress and passing laws to raise tariffs caused fear among Southerners, which prompted John Calhoun, the Vice President, to insist that the region would annul any federal law considered unfair by the South (Fleming 97). It is evident that the Southerners believed the region's economic prosperity and African slavery were inextricably linked, and that they would go to great lengths to protect their interests.
A standard argument was that a universal abolition of slavery across the Republic could ruin the Southern economy (Fleming 77). In fact, one of the justifications adduced by the South for the expansion of slavery was economic prosperity (Fleming 78).
The defenders of the slave system cited other slavery systems that drove the economies of Greece, Egypt, and Rome forward (Fleming 78). They further stated that the slavery system offered a civilized environment to African slaves making them better people than the savages in Africa. Therefore, unlike the North, the South could not do away with slavery sooner for fear of unknown economic repercussions. Even after the Proclamation of Amnesty, the Southern states retained African Americans as the serving class, which was a temporary economic measure (Guelzo 59). It could be argued that the economic interests of the South and the belief that abolishing African slavery would ruin the economy were at the center of the war between the factions.
The Constitution and Slave System
The U.S. Constitution in many ways accommodated the interests of the slave owners in the South. Slavery, as an institution, contributed immensely to the country's economy, especially in the Southern states. McPherson writes that while black suffrage was not allowed, the Constitution counted each slave as three-fifths of a person for apportionment, boosting the Southerners' numerical strength in Congress (114). This provision enabled the South to exert more influence on federal matters than it otherwise would have had.
The Reconstruction sought to expand the political freedoms of the freed slaves. The Southern states' ratification of the Fourteenth Amendment to the Constitution and the introduction of black suffrage epitomize the Reconstruction era (Foner 44). Radical Republicans were the proponents of the bill passed by Congress to enfranchise the freed slaves. Thus, it could be argued that it is through this legislation that their political rights were formally entrenched in the Constitution.
The Constitution was also not expressly clear on the status of slavery in the Republic. Article IV allowed fugitive African Americans to be integrated into society while Article I held that the complete abolition of the slave system would happen after two decades (Foner 64). Thus, the place of slavery in the Southern states was not clear in the Constitution upon the expiry of the 20 years in 1808. One can argue that the failure of the Constitution to indicate the place of slavery was meant to cushion the South from possible economic repercussions of an abrupt end of slavery. The vagueness of the Constitution on the issue of slavery was one of the factors fueling the war because it was prone to different interpretations by the factions.
Although the Reconstruction Acts and the Fourteenth Amendment conferred equal rights on freed slaves, the exclusionary practices in the South, such as excluding African Americans from the justice system and imposing severe penalties on black offenders, were an attempt to maintain the socioeconomic order (Foner 82). Other measures, both legislative and extralegal, such as the poll tax and the Ku Klux Klan's violent attacks, aimed at reversing black suffrage and individual freedoms (Foner 71). One can contend that one of the successes of Reconstruction was the direct constitutional measures taken to abolish slavery in the new states. In this view, the legislators and policymakers resorted to exclusionary practices and violence in a bid to preserve the pre-Reconstruction political and socioeconomic status.
The States’ Rights
The issue of the rights of the states to permit or prohibit slavery was meant to protect the economic system of the slaveholding territories from the possible repercussions of abolitionism. During the Westward expansion, some newly acquired territories became slave states while others joined as free states. Compromises such as the Missouri Compromise of 1820 admitted free states (Maine) and slave states (Missouri) (Catton 56).
Congress also disallowed slavery in territories north of Missouri's southern border (Catton 61). It would appear that the decision to keep some states free and others slave might have been driven by the need to pacify tensions and promote unity. However, a more plausible explanation could be that the compromises, including the Fugitive Slave Act, were enacted to give the slaveholding states time to transition to an industrial economy.
The Wilmot Proviso, which sought to prohibit slaveholding in the territories acquired from Mexico, arose out of fear that slavery would disadvantage the average worker against a slaveholder (Catton 72). Therefore, the dominant view was that abolition was necessary to give non-slaveholders in the North a fair chance in the country's economy. Proponents like James K. Polk believed that the act would quell the hard-line positions on the slavery issue during the western expansion (Catton 67). It would appear that the economy was the dominant issue shaping the debate on permitting or prohibiting slavery in the new states. Though slaveholding was a divisive issue between the North and the South, some compromises had to be made to protect the economy of the states formed from the western territories.
The idea of popular sovereignty further strengthened the states’ power for self-determination. The Kansas-Nebraska Act of 1854 gave the citizens of new states the right to allow or disallow slaveholding (Fleming 157). It can be argued that the reasoning behind this act was to preserve the unity, but most importantly to appease Southern settlers who favored slave labor for economic reasons. As a result, the pro-slavery groups fought to retain slave labor in Kansas.
Northerners led by the likes of John Brown had to battle with the pro-slavery faction to free the states (Fleming 158). Therefore, the antipathy between the two factions was fueled by differences in philosophical inclinations on the slavery issue and economic models. While the Northerners raised moral questions over slaveholding, the South fought to protect their way of life. Northern abolitionists like William Garrison considered Southerners as uncivilized, fueling the conflict (Catton 92). The 1863 proposal for gradual emancipation by Abraham Lincoln underscores his desire to protect the Southern economy from potential shocks.
The Self-preservation Position
One of the positions held by the pro-slavery sympathizers was self-preservation, which explains why they fought to retain slavery. They feared a race war if the slaves were freed. Sympathizers like Thomas Jefferson, who openly denounced slavery as an immoral institution, could not avoid pointing out its necessity (Fleming 118). Thus, though freedom-loving slaveholders might have wanted to free the slaves, the fear of reprisals forced them to oppose abolition. They were aware that the whole master-slave relationship was degrading to the black population but could not let go of enslaved men, especially after the Nat Turner revolt.
The enslaved African American Nat Turner, inspired by the Bible, mobilized about 70 rebels to launch an attack on their masters, killing 60 whites in 1831 (Kelly 29). In retaliation, local white militiamen caught and killed 56 slaves (Kelly 29). One can argue that this revolt entrenched fear and the idea of self-preservation in the South, which bred the modern gun culture in America.
Slaveholding became the necessary evil the South needed to advance its economic welfare despite the moral and security issues involved. Therefore, at the center of the idea of self-preservation was the economics of the region. As Catton writes, the typical position in the South was that the economy would fail if slave labor were to be abolished (76). The Reconstruction Plan introduced industrialization and new technologies after the termination of slavery when the states ratified the 13th Amendment (Catton 85). Thus, the complete abolishment of slavery was achieved through legislative and economic measures.
The compromises made also underscored the self-preservation position of the Southern states. The Missouri Compromise during the westward expansion was meant to maintain the balance between free and slaveholding states (Kelly 53). Missouri, which applied to join the Union, wanted to keep its slaveholding practices (Kelly 53). Thus, the issue of slave and free states was both political and moral. From a political standpoint, the parity between slave and free territories could be seen as a strategy to preserve local interests, which were primarily economic. Other states, such as Maine, joined as free territories. It could be argued that Southerners supported the idea of self-preservation even as the nation expanded westwards.
The spread of the abolitionist movement also forced the Southerners to develop a more robust defense for slaveholding. Initially, the justification for slavery focused on state rights and economic considerations. However, the abolitionists cited scriptural verses on conscience to attack the integrity of slaveholders. The Southerners justified their actions by saying that the Bible approves natural slavery and requires servants to obey their masters (Guelzo 55). However, the justification could be construed as serving the interests of the slaveholders, which explains why they were afraid of a revolt.
How the War Shaped America
Although slaveholding had not been completely eliminated during the Reconstruction era, the period saw America undergo significant social, political, and economic transformations. Southern figures such as Hansel Beckworth battled to preserve the existing social order (Guelzo 112). Ultimately, the war changed the social system not just for the South but also for the North. One of the successes of the abolitionists of the North was the freeing of the enslaved blacks.
By eradicating slavery, the war changed the social and economic systems of the South and reshaped the United States. Arguably, the war unified the states into one indivisible nation by bringing together the two opposing factions. It also eradicated the clamor for secession by Southern states, creating an America founded on national values. The federal government became the ultimate authority, and the states of the South had to be vetted afresh to rejoin the Union.
The primary reason for going to war was the emancipation of the enslaved Africans. Besides preserving the Union, as President Lincoln wanted, the war introduced political rights and expanded the freedoms of African Americans in the South. During the war, the country enacted the Emancipation Proclamation and the Reconstruction Plan, with implications for the social and economic systems.
Initially, the goal of the Reconstruction plan was to bring the conflict to an end (Foner 79). The subsequent Proclamation of Amnesty and Reconstruction restored individual rights to the former slaveholders who committed to ending slavery (Foner 79). It also introduced black suffrage and equality in American society. Thus, the war precipitated legislation that restored individual rights, which we still enjoy in contemporary America.
Freedom meant that African Americans were no longer in bondage. For the blacks, freedom meant that they had the same rights as those enjoyed by the white population (Catton 65). It encompassed individual and institutional independence, whereby whites no longer supervised activities such as learning and worshiping. Thus, freedom extended to the right to worship and to academic freedom.
Although the South experienced a dramatic transformation during the Reconstruction, the North too underwent significant changes. The North went through industrial expansion and socioeconomic changes that favored the growth of capitalism (Foner 156). In fact, the capitalist economy of today has its roots in the Reconstruction era. Sectors such as manufacturing, mining, and lumbering grew tremendously in the North during this period. Major infrastructural and railway projects were established to drive this growth. There were also changes in socioeconomic structure with white-collar employees replacing wage earners in industries (Foner 157). However, the Reconstruction plan failed to stimulate the same economic transformation in the South, which perpetuated the inequalities witnessed even today.
The Reconstruction also entailed the compensation of slave owners to release the enslaved men. The settlement cost the country up to $2 billion at current rates (Guelzo 49). Further, by allowing a serving class in the South, the Union enhanced economic disparities between the races. It can also be argued that the Reconstruction plan did not forthrightly deal with the issue of racial equality. In modern America, racial tensions flare up because the Reconstruction failed to address the slavery issue definitively, which led to racial segregation. It also did little to improve the economic condition of the freed African Americans in the South.
Slavery also bred racism against blacks. Although the white masters showed paternalism towards the slaves under their care, it could be argued that they considered them an inferior race. As Guelzo points out, the slave owners showed concern, especially to house slaves, and held the belief that they were not responsible for the slaves’ suffering (121). The white slave owners developed the master-slave mentality, which made it hard for them to reconcile with the idea of slave resistance (Guelzo 125). They considered slaves lazy and intellectually inferior to whites, which arguably bred the racist attitudes and social classes we see today.
The Civil War indeed defined the history of modern America. It entrenched unity and the identity of being American and created the rights and freedoms that everyone enjoys today. The Civil War and the Reconstruction era brought dramatic political and socioeconomic changes that persist today. However, the war also inadvertently entrenched racism and economic disparities in American society. In addition, the remaining obstacles to achieving racial equity present a challenge to the current generation.
Catton, Bruce. The Civil War. New York: Houghton Mifflin Harcourt, 2004. Print.
Fleming, Thomas. A Disease in the Public Mind: A New Understanding of Why We Fought the Civil War. Boston, MA: Da Capo Press, 2013. Print.
Foner, Eric. A Short History of Reconstruction. New York: Harper Collins, 2010. Print.
Guelzo, Allen. Fateful Lightning: A New History of the Civil War & Reconstruction. New York: Oxford University Press, 2012. Print.
Kelly, Brian. Best Little Stories From The Civil War. Charlottesville, Virginia: Montpelier Publishing, 1995. Print.
McPherson, James. The War that Forged a Nation: Why the Civil War Still Matters. New York: Oxford University Press, 2015. Print. |
What shall we do with a neutron microscope?
Neutrons have a set of unique properties that make them better suited than light, electrons, or x-rays for looking at the physics and chemistry going on inside an object. Scientists working out of MIT's Nuclear Reactor Laboratory have now invented and built a high-resolution neutron microscope, a feat that required developing new approaches to neutron optics.
Why would anyone want to use neutron imaging to study materials? Optical microscopes tell you what the reflectivity of the surface of a material is, but little else. X-ray microscopes tell you what the mass density of the insides of an object is, but again, little of any structure that isn't mirrored in the density of the material.
In contrast, neutrons are heavy compared to the other particles (photons and electrons) used in forming images, and have no electric charge, properties that make it possible to look deeply inside an object while gaining information about the structure that is not accessible through the other forms of microscopy. Unfortunately, these same properties make it difficult to focus a beam of neutrons – a prerequisite for forming an image.
Neutrons do interact with atomic nuclei via the strong force. This interaction can cause the neutrons to scatter from their original path, and can also remove neutrons through absorption. Either way, a neutron beam that is penetrating a material becomes progressively less intense. In this way, neutrons are analogous to x-rays for studying the invisible interiors of objects.
However, while the darker regions of an x-ray image indicate how much matter the x-rays have passed through, the density of a neutron image provides information on the neutron absorption of the material. This absorption can vary by many orders of magnitude among the chemical elements.
As a result, a neutron image provides different information about the composition and structure of the interior of an object than do x-ray images. In particular, neutron imaging has great potential for studying so-called soft materials, as small changes in the location of hydrogen within a material can produce highly visible changes in a neutron image.
Neutrons also offer unique capabilities for research in magnetic materials. Neutrons may be uncharged, but they do have spin, and hence also a magnetic moment. It can help to think of a tiny bar magnet within the neutron that can interact with other magnetic fields. The neutron's lack of electric charge means there is no need to correct magnetic measurements for errors caused by stray electric fields and charges, another argument for using neutrons to study magnetism.
The most informative approach to using neutrons to study magnetic materials is likely the use of polarized neutron beams, beams in which the neutron spins are oriented in the same direction. This allows measurement of the strength and characteristics of magnetism within a material. Such information is extraordinarily difficult to determine in any other way, and cuts to the essence of the magnetic properties of a material.
Neutron images, such as those used in nondestructive testing, have been based mainly on shadowgraphs – images produced by casting a shadow on a surface, usually taken with a pinhole camera. Such methods, however, always involve an awkward balance between low illumination levels (and hence long exposure times) and poor spatial resolution – both being the natural result of using only pinhole optics.
Similar problems are associated with the pinhole optics of the camera obscura, a camera that forms an image of a scene by projecting light from the scene through a pinhole. A rule of thumb states that a good balance between illumination and resolution is obtained when the diameter of the pinhole is about 100 times smaller than the distance between the pinhole and the image screen, effectively making the pinhole an f/100 lens.
Optimum, however, is not necessarily good. The level of illumination on the image screen projected from an f/100 pinhole would be more than 1,000 times dimmer than that from a standard f/2.8 camera lens. Perhaps worse, the resolution of the pinhole lens cannot be smaller than the diameter of the hole. The resolution of an f/100 pinhole is about half a degree, making the camera obscura barely able to notice that the Moon looks like a disk rather than a point of light. However, an f/100 glass lens with a diameter of an inch can see lunar craters smaller than 10 miles (16 km) across.
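The arithmetic behind these two comparisons can be checked in a few lines. This is a minimal illustrative sketch, not code from the original article; the numbers simply restate the f/100 versus f/2.8 comparison and the half-degree resolution claim made above.

```python
import math

# Illumination scales roughly with the square of the f-number ratio,
# and a pinhole's angular resolution is roughly (hole diameter)/(distance),
# which for an f/100 pinhole is 1/100 of a radian.
illumination_ratio = (100 / 2.8) ** 2          # f/100 pinhole vs. f/2.8 lens
resolution_deg = math.degrees(1 / 100)         # angular resolution of the pinhole

print(round(illumination_ratio))   # ~1276 -> "more than 1,000 times dimmer"
print(round(resolution_deg, 2))    # ~0.57 degrees -> "about half a degree"
```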
The potential for dramatically improving the performance of pinhole-based neutron optics led the MIT Nuclear Reactor Laboratory group to develop an imaging neutron microscope. Their goals were to increase both the resolution of the image and the level of illumination, so that the neutron microscope can quickly produce higher-quality images. Unlike the case of an optical microscope, however, there is no equivalent of optical glass from which lenses for neutrons can be made. Conventional mirrors also tend not to work, as the neutrons simply go through them.
The key to the design of the neutron microscope is the Wolter mirror, similar in principle to grazing incidence mirrors used for x-ray and gamma-ray telescopes.
When a neutron grazes the surface of a metal at a sufficiently small angle, it is reflected away from the metal surface at the same angle. When this occurs with light, the effect is called total internal reflection. However, owing to the way neutrons interact with the atomic nuclei in a metal, it would be better to call this total external reflection – the neutrons refuse to enter the material. Fortunately, the critical angle for grazing reflection is large enough (a few tenths of a degree for thermal neutrons) that a curved mirror can be constructed. Given curved mirrors, an optical system that creates an image can be made. The figure below shows a cartoon of a four-power neutron microscope after the MIT design.
Having formed a neutron image, it is necessary to find a way to visualize it. In the MIT microscope, the neutron flux at the imaging focal plane was measured by a CCD imaging array with a neutron scintillation screen placed in front of it. The scintillation screen is made of zinc sulfide (a traditional fluorescent compound) laced with lithium. When a thermal neutron is absorbed by a lithium-6 nucleus, it causes a fission reaction that produces helium, tritium, and a lot of excess energy. These fission products cause the ZnS phosphor to light up like a Christmas tree, producing an image in light that can be captured with the CCD array.
MIT’s new neutron microscope is a proof-of-principle, attaining only a four-fold magnification and 10-20 times better illumination than earlier pinhole neutron cameras. However, it points the way toward new approaches to study properties of whole classes of fascinating and potentially useful materials.
Source: MIT Nuclear Reactor Laboratory |
See similar resources:
A Five Day Approach to Using Technology and Manipulatives to Explore Area and Perimeter
Young mathematicians build an understanding of area and perimeter with their own two hands in a series of interactive geometry lessons. Through the use of different math manipulatives, children investigate the properties of rectangles,...
3rd - 6th Math CCSS: Adaptable
Unit of Measures used in Measuring the Area of a Triangle/Parallelogram
Third, fourth, and fifth graders explore area measurement. First, they define and identify parallelograms. They then construct parallelograms on a geoboard and determine the area by covering it with paper square centimeters. They also...
3rd - 5th Math
Find the Perimeter of a Rectangle Using an Area Model
A real-world problem is posed at the beginning of the fourth of six lessons on applying formulas for area and perimeter. A review of the definition of squares and rectangles begins the discussion, followed by an example of solving a...
5 mins 3rd - 5th Math CCSS: Designed
Area Model of Multiplication Using Base 10 Manipulatives
Explore two-digit multiplication with your class as they work in groups to build models of two-digit multiplication using base 10 manipulatives. They construct rectangles replacing standard numbers with equivalent place values using the...
3rd - 5th Math
Relating Area and Perimeter in 2-D Objects
Students explore the area and perimeter of two-dimensional shapes. In this measurement lesson, students compare the lengths, widths, perimeter, and area of two-dimensional objects. Students view an interactive presentation and complete a...
4th - 6th Math CCSS: Adaptable
Comparing Volume and Surface Area in 3-D Shapes
Students explore 3-dimensional shapes. In this volume and surface area comparison lesson, students discover the lengths, widths, and height of rectangular prisms. Students solve problems and record their measurements and data on a...
5th - 7th Math CCSS: Adaptable
Measurement: Finding Areas of Rectangles
Gardening geometers construct the formula for the area of a rectangle by viewing a video revolving around two young people, their lawn mowing business, and their need to charge by the square meter. A relevant real-life application lesson!
3rd - 5th Math CCSS: Adaptable |
Central angle
In a given circle, a central angle is an angle that has its vertex on the center of the circle. The sides of any such angle each intersect the circle in exactly one point, so the angle subtends an arc of the circle. The measure of the arc that the central angle subtends is by definition equal to the measure of the central angle, and is known as the arc segment's angular distance. |
Another useful summary measure for a collection of data is the median. As you learned in Session 2, the median is the middle data value in an ordered list. Here's one way to find the median of our ordered noodles.
We'll begin with the 11 noodles arranged in order from shortest to longest. We'll remove two noodles at a time, one from each end, and put them to the side. We'll continue this process until only one noodle remains. This noodle is the median, which we'll label "Med."
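The same remove-from-both-ends procedure is easy to express in code. This is a minimal sketch with made-up noodle lengths, since the activity's actual lengths are not given here.

```python
# Eleven hypothetical noodle lengths in centimeters (the activity's actual
# lengths are not reproduced here), arranged from shortest to longest.
noodles = sorted([12, 7, 15, 9, 20, 11, 5, 18, 13, 8, 16])

# Remove one noodle from each end until a single noodle remains.
while len(noodles) > 1:
    noodles = noodles[1:-1]

print(noodles[0])  # 12 -- the median ("Med") of the 11 ordered lengths
```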
If you could see only the median noodle, what would you know about the other noodles?
What would knowing the median tell you about each of the first five (the shortest five) noodles? What would it tell you about each of the last five (the longest five) noodles?
If you could see only the median noodle, describe some information you would not know about the other noodles. |
Data: Data are simply values or sets of values. A data item refers to a single unit of value. A data item divided into subitems is called a group item. Items that are not divided into subitems are called elementary items. For example, an employee's name is a data item, but you may divide it into three subitems: first name, middle name, and last name. Difference between data and information:
- Data is something like raw material.
- When we process data and it starts giving some meaning, it becomes information.
For example, raw figures on their own are just data; once they are processed and given context, they become information.
Data Structures: Storing and organizing data in such a way that it becomes easier to access and modify when needed is called a data structure.
The way to store and organize data as discussed above is to arrange it in some logical or mathematical manner. So, according to the above explanation, we can say that a data structure is a logical or mathematical model that allows us to organize data into some particular organization. Classification of Data Structures:
- Linear data structures
- Non-linear data structures
Linear data structures: data structures in which all items are stored, or let's say arranged, in a linear order, one after another.
Non-linear data structures: there is no single sequence in the arrangement of these data structures. The items are arranged through connections with other items; one item may be connected to two or more other items, as in the sketch below.
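As a minimal illustration (not from the original notes), the contrast can be seen with two tiny Python structures: a list, where items simply follow one another, and a small tree, where each item is connected to several others.

```python
# Linear: items are arranged one after another in a single sequence.
linear = [10, 20, 30, 40]
print(linear[2])        # 30 -- reached by its position in the sequence

# Non-linear: each item may be connected to two or more other items.
tree = {
    "root": ["left", "right"],      # "root" is connected to two children
    "left": ["left.1", "left.2"],
    "right": [],
}
print(tree["root"])     # ['left', 'right'] -- reached by following connections
```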
Non-Linear Data Structures |
Types of Validity
Validity is the extent to which a concept, measurement, or conclusion is well-founded and accurately corresponds to the real world. There are three types of validity: internal validity, external validity, and social validity. These types of validity often figure in the evaluation of a research study and its procedures.
Internal validity is defined as the extent to which a piece of research supports a cause-and-effect relationship between the independent and dependent variables. Internal validity can be improved by using standardized instructions and eliminating demand characteristics and investigator effects. External validity refers to the extent to which a research study can be generalized or applied to other settings, over a given period of time, and to other people (Rothwell, 2009). It can be improved by using random selection procedures and undertaking research studies or experiments in many natural settings. On the other hand, social validity refers to the social acceptability of, and satisfaction with, the intervention procedures of a research study by the people who receive and implement the procedures (Schwartz & Baer, 1991). These types of validity play a crucial role in behavioral research as they help to determine the research procedures, ensure that researchers effectively follow the procedures, and evaluate the significance the research has for a given setting or population (Steckler & McLeroy, 2008).
One can evaluate a particular study's external, internal, and social validity through testing. A research study can be designed to pre-test certain subjects so that the researcher can establish how they compare with other subjects. Instrumentation can also be used to evaluate a particular study's validity; thus, one can change measurement or administration methods when undertaking a given research study. Moreover, one can examine selection interactions to establish how the subjects of a given study were selected and treated.
Rothwell, P. M. (2009). Commentary: External Validity of Results of Randomized Trials: Disentangling A Complex Concept. International Journal of Epidemiology, 39(1), 94-96. Retrieved from https://academic.oup.com/ije/article/39/1/94/713944
Schwartz, I. S., & Baer, D. M. (1991). Social Validity Assessments: Is Current Practice State of The Art?. Journal of Applied Behavior Analysis, 24(2), 189-204. Retrieved from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1279564/pdf/jaba00020-0015.pdf
Steckler, A., & McLeroy, K. R. (2008). The Importance of External Validity. Retrieved from https://ajph.aphapublications.org/doi/pdfplus/10.2105/AJPH.2007.126847 |
Notice that node D happens to be an endpoint of three different arcs. That property can be seen instantly from the diagram, but it takes careful checking to verify it from the set of pairs. For people, diagrams are the most convenient way of thinking about graphs.
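To see why checking the pair list takes more work than glancing at a diagram, here is a minimal sketch; the arc set below is hypothetical, since the actual diagram is not reproduced here.

```python
# A graph written as a set of unordered pairs (arcs); these pairs are made up.
arcs = [("A", "B"), ("B", "D"), ("D", "C"), ("D", "E"), ("A", "C")]

# Verifying that D is an endpoint of three arcs requires scanning every pair.
degree_of_D = sum(1 for arc in arcs if "D" in arc)
print(degree_of_D)  # 3
```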
Equations of Planes. In the first section of this chapter we saw a couple of equations of planes. However, none of those equations had three variables in them; they were really extensions of graphs that we could look at in two dimensions.
We would like a more general equation for planes. To get one, we suppose the plane contains a given point and that there is a vector orthogonal to the plane; this vector is called the normal vector. Here is a sketch of all these vectors.
Also notice that we put the normal vector on the plane, but there is actually no reason to expect this to be the case. We put it here to illustrate the point. It is completely possible that the normal vector does not touch the plane in any way.
Recall from the Dot Product section that two orthogonal vectors will have a dot product of zero. A slightly more useful form of the equations is as follows.
Start with the first form of the vector equation and write down a vector for the difference. This second form is often how we are given equations of planes.
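The source does not reproduce the equations referred to above. Assuming the usual notation, with normal vector $\vec{n} = \langle a, b, c \rangle$, a point $(x_0, y_0, z_0)$ in the plane, and position vectors $\vec{r}$ and $\vec{r}_0$, the standard forms are:

$$\vec{n} \cdot (\vec{r} - \vec{r}_0) = 0 \qquad \text{and, equivalently,} \qquad a(x - x_0) + b(y - y_0) + c(z - z_0) = 0,$$

which, after expanding and collecting the constants on one side, is often written in the general form $ax + by + cz = d$.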
Notice that if we are given the equation of a plane in this form we can quickly get a normal vector for the plane. We need to find a normal vector.
Recall however, that we saw how to do this in the Cross Product section.
We can form the following two vectors from the given points. Notice as well that there are many possible vectors to use here, we just chose two of the possibilities. Now, we know that the cross product of two vectors will be orthogonal to both of these vectors.
Since both of these are in the plane any vector that is orthogonal to both of these will also be orthogonal to the plane. Therefore, we can use the cross product as the normal vector.
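A quick numerical sketch of this step (the points below are hypothetical, since the example's actual points are not shown here):

```python
import numpy as np

# Three hypothetical, non-collinear points assumed to lie in the plane.
P = np.array([1.0, 0.0, 0.0])
Q = np.array([0.0, 2.0, 0.0])
R = np.array([0.0, 0.0, 3.0])

# Two vectors lying in the plane, and their cross product as a normal vector.
PQ, PR = Q - P, R - P
n = np.cross(PQ, PR)

print(n)                             # [6. 3. 2.]
print(np.dot(n, PQ), np.dot(n, PR))  # 0.0 0.0 -- orthogonal to both, as expected
```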
This is not as difficult a problem as it may at first appear to be. We can pick off a vector that is normal to the plane, and we can also get a vector that is parallel to the line. Now, if these two vectors are parallel, then the line and the plane will be orthogonal.
Rational Absolute Value Problem.
Notes. Let’s do a simple one first, where we can handle the absolute value just like a factor, but when we do the checking, we’ll take into account that it is an absolute value.
I am going to break one of my unspoken cardinal rules: Only write about real problems and measurement that is actually possible in the real world. I am going to break the second part of the rule.
I am going to define a way for you to think about measuring social media, even though you can't actually measure it easily.
Reduce a given linear equation in two variables to the standard form y = mx + c; calculate gradients and intercepts of the graphs and then plot them to check.
Graphing Slope. Accurately graphing slope is the key to graphing linear equations. In the previous lesson, Calculating Slope, you learned how to calculate the slope of a line. In this lesson, you are going to graph a line, given the slope.
kcc1 Count to 100 by ones and by tens. kcc2 Count forward beginning from a given number within the known sequence (instead of having to begin at 1). kcc3 Write numbers from 0 to 20. Represent a number of objects with a written numeral 0-20 (with 0 representing a count of no objects).
kcc4a When counting objects, say the number names in the standard order, pairing each object with one and only one number name and each number name with one and only one object.
When looking at the equation of the moved function, however, we have to be careful.
When functions are transformed on the outside of the \(f(x)\) part, you move the function up and down and do the "regular" math, as we'll see in the examples below. These are vertical transformations or translations, and they affect the \(y\) part of the function.
When transformations are made on the inside of the \(f(x)\) part, you move the function left and right, and the transformation affects the \(x\) part of the function. |
12. Arc Length of Curve: Parametric, Polar Coordinates
by M. Bourne
Arc Length of a Curve which is in Parametric Coordinates
We'll first look at an example then develop the formula for the general case.
Example 1 - Race Track
In the Curvilinear Motion section, we had an example where a race car was travelling around a curve described in parametric equations as:
`x(t) = 20 + 0.2t^3`,
`y(t) = 20t − 2t^2`
where x and y are in meters and t is time in seconds.
What is the distance travelled by the car in the first 8 seconds?
The graph of this case is given below.
It is based on plotting the x- and y-points at times between `t = 0` and `t = 8`.
So for example, at `t = 0`,
`x(0) = 20`, and `y(0) = 0`, so the car starts at `(20, 0)`.
At `t = 3`,
`x(3) = 25.4`, and `y(3) = 42`, so the car is at `(25.4, 42)`.
Finally, at `t = 8`, the car is at
`x(8) = 122.4`, and `y(8) = 32`, that is `(122.4, 32)`.
The parametric curve (x(t), y(t)), showing assorted points.
Estimate: An inspection of the graph shows our final answer should be around 150 m.
We extend the concept from Arc Length of a Curve to the parametric case.
We start with the expression that we met in the earlier section. Differentiating with respect to t, squaring, taking the positive square root of each side, and then integrating with respect to t from t = t1 to t = t2 gives us the formula for the length of a curve in parametric equations form:
`text[length]=int_(t_1)^(t_2) sqrt[((dx)/(dt))^2+((dy)/(dt))^2]\ dt`
Back to Example 1
Find the required length travelled by the race car using the given formula.
We now use the formula to find the distance travelled by the car.
In this case, we have:
`x(t) = 20+0.2t^3`, so `(dx)/(dt)=0.6t^2`
`y(t) = 20t − 2t^2` giving `(dy)/(dt)=20-4t`
Our lower and upper limits for this example are `t = 0` to `t = 8`.
Substituting these into the distance formula gives:
`text[length]=int_0^8 sqrt[(0.6t^2)^2+(20-4t)^2]\ dt`
Using a computer algebra system (see the answer in Wolfram|Alpha) gives us the length `144.7\ "m"`.
Our answer is reasonable and is consistent with our earlier estimate.
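A quick numerical check of that integral (a sketch using numpy rather than a computer algebra system; not part of the original article):

```python
import numpy as np

# L = integral from 0 to 8 of sqrt((0.6 t^2)^2 + (20 - 4t)^2) dt
t = np.linspace(0, 8, 100001)
integrand = np.sqrt((0.6 * t**2) ** 2 + (20 - 4 * t) ** 2)

print(round(np.trapz(integrand, t), 1))  # 144.7 -- matches the value quoted above
```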
Arc Length of a Curve in Polar Coordinates
Once again we start with an example to get a sense of what we are trying to find.
Example 2 - Golden Spiral
A Golden Spiral has the characteristic that for every quarter turn (`90^@` or `π/2` in radians), the distance from the center of the spiral increases by a factor of the golden ratio `Phi = 1.6180`.
The formula for a golden spiral is as follows:
r(θ) = 1.618013 e^(0.30635θ)
Find the length of the spiral from the center to the point where it has rotated two complete revolutions.
Following is the spiral whose length we need to find. It traces out the angle from `θ = 0` to `θ = 4π` (2 revolutions).
Estimate: It is actually quite difficult to estimate the length of this curve by inspection. But it is reasonable to imagine we can approximate it with a circle, radius 40 and this would give a length (circumference) of
`C = 2πr = 2π(40) = 80π ≈ 251`
Next we'll meet the equation for the length.
General Form of the Length of a Curve in Polar Form
In general, the arc length of a curve r(θ) in polar coordinates is given by:
`L=int_a^bsqrt(r^2+((dr)/(d theta))^2)d theta`
where θ spans from θ = a to θ = b.
Back to Example 2
Use the above formula to find the length of the Golden Spiral, rotated 2 revolutions.
Applying the formula, we have for our Golden Spiral example:
From the question:
`r = 1.618013\ e^(0.30635\ θ)`
`r^2 = (1.618013\ e^(0.30635\ θ))^2` `= 2.61797\ e^(0.6127\ θ)`
`(dr)/(d theta)=0.49568\ e^[0.30635\ theta]`
`((dr)/(d theta))^2=0.245697\ e^[0.6127\ theta]`
Putting it together, the required length is:
`L` `=int_0^[4pi] sqrt[2.61797e^[0.6127theta]+0.2457e^[0.6127 theta]]d theta` `=254.0`
This is actually quite close to our very rough estimate before.
Note: The answer of 254.0 above comes from using a computer algebra system, like Scientific Notebook or Wolfram|Alpha (which gives quite a different answer, 244.2).
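Given the discrepancy noted above, a direct numerical evaluation is a useful cross-check (again a sketch, not from the original article); here it agrees with the 254.0 figure.

```python
import numpy as np

# r(theta) = 1.618013 * exp(0.30635 * theta), theta from 0 to 4*pi
theta = np.linspace(0, 4 * np.pi, 200001)
r = 1.618013 * np.exp(0.30635 * theta)
dr = 0.30635 * r  # derivative of an exponential spiral

print(round(np.trapz(np.sqrt(r**2 + dr**2), theta), 1))  # 254.0
```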
We can see Archimedean Spirals in the spring mechanism of clocks.
Watch mechanism [Image source]
An Archimedean Spiral has general equation in polar coordinates:
`r = a + bθ`
r is the distance from the origin;
a is the start point of the spiral; and
b affects the distance between each arm. (The distance is actually given by `2pib`.)
Find the length of a flat clock spring which is in the shape of a spiral having 7.5 turns, where the inner radius is 5 mm and the outer end radius is 15.5 mm.
(You can see more background on this question at Length of Archimedean Spiral.)
Here is a graph of the situation:
In this case, `a = 5` (since this is where the spiral starts).
The distance between each arm is:
`(15.5 − 5)/7.5 = 10.5/7.5 = 1.4`
and b is found as:
`b = 1.4/2π = 0.22282`
So our formula is
`r = 5 + 0.22282\ θ`
The start angle is `θ = a = 0` and, after `7.5` turns, the end angle is `θ = b = 7.5 × 2π = 15π = 47.12389`.
The derivative is `(dr)/(d theta) = 0.22282`.
Substituting all these in our formula gives:
`L` `=int_0^[15pi] sqrt[(5+0.22282\ theta)^2+(0.22282)^2]\ d theta` ` =483.1`
So the clock spring is `483.1\ "mm"` long.
In mathematics, a Cayley graph, also known as a Cayley color graph, Cayley diagram, group diagram, or color group, is a graph that encodes the abstract structure of a group. Its definition is suggested by Cayley's theorem (named after Arthur Cayley) and uses a specified set of generators for the group. It is a central tool in combinatorial and geometric group theory. The structure and symmetry of Cayley graphs make them particularly good candidates for constructing families of expander graphs.
Given a group G and a generating set S of G, the Cayley graph Γ = Γ(G, S) is a colored directed graph constructed as follows:
- Each element g of G is assigned a vertex: the vertex set of Γ is identified with G.
- Each element s of S is assigned a color c_s.
- For every g in G and s in S, there is a directed edge of color c_s from the vertex corresponding to g to the one corresponding to gs.
Not every source requires that S generate the group. If S is not a generating set for G, then Γ is disconnected and each connected component represents a coset of the subgroup generated by S.
If an element s of S is its own inverse, then it is typically represented by an undirected edge.
In geometric group theory, the set S is often assumed to be finite, which corresponds to Γ being locally finite.
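To make the definition concrete, here is a minimal computational sketch (not from the article); the group, operation, and generating set are illustrative choices, and for a cyclic group the result matches the cycle described in the examples below.

```python
from itertools import product

def cayley_graph(elements, op, generators):
    """Directed, generator-colored edge set: one edge (g, g*s, s) per pair (g, s)."""
    return {(g, op(g, s), s) for g, s in product(elements, generators)}

# Example: the cyclic group Z_6 with the symmetric generating set {1, 5}.
n = 6
edges = cayley_graph(range(n), lambda a, b: (a + b) % n, generators=[1, n - 1])

# Ignoring direction and color, this is exactly the 6-cycle C_6.
undirected = {tuple(sorted((g, h))) for g, h, _ in edges}
print(sorted(undirected))  # [(0, 1), (0, 5), (1, 2), (2, 3), (3, 4), (4, 5)]
```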
- Suppose that G = Z is the infinite cyclic group and the set S consists of the standard generator 1 and its inverse (−1 in the additive notation); then the Cayley graph is an infinite path.
- Similarly, if G = Z_n is the finite cyclic group of order n and the set S consists of two elements, the standard generator of G and its inverse, then the Cayley graph is the cycle C_n. More generally, the Cayley graphs of finite cyclic groups are exactly the circulant graphs.
- The Cayley graph of the direct product of groups (with the cartesian product of generating sets as a generating set) is the cartesian product of the corresponding Cayley graphs. Thus the Cayley graph of the abelian group with the set of generators consisting of four elements is the infinite grid on the plane , while for the direct product with similar generators the Cayley graph is the finite grid on a torus.
- A Cayley graph of the dihedral group on two generators and is depicted to the left. Red arrows represent composition with . Since is self-inverse, the blue lines, which represent composition with , are undirected. Therefore the graph is mixed: it has eight vertices, eight arrows, and four edges. The Cayley table of the group can be derived from the group presentation
- A different Cayley graph of is shown on the right. is still the horizontal reflection and is represented by blue lines, and is a diagonal reflection and is represented by pink lines. As both reflections are self-inverse the Cayley graph on the right is completely undirected. This graph corresponds to the presentation
- The Cayley graph of the free group on two generators and corresponding to the set is depicted at the top of the article, and represents the identity element. Travelling along an edge to the right represents right multiplication by while travelling along an edge upward corresponds to the multiplication by Since the free group has no relations, the Cayley graph has no cycles. This Cayley graph is a 4-regular infinite tree and is a key ingredient in the proof of the Banach–Tarski paradox.
- A Cayley graph of the discrete Heisenberg group
- is depicted to the right. The generators used in the picture are the three matrices given by the three permutations of 1, 0, 0 for the entries . They satisfy the relations , which can also be understood from the picture. This is a non-commutative infinite group, and despite being a three-dimensional space, the Cayley graph has four-dimensional volume growth.
The group G acts on itself by left multiplication (see Cayley's theorem). This may be viewed as the action of G on its Cayley graph. Explicitly, an element h of G maps a vertex g to the vertex hg. The set of edges of the Cayley graph and their colors is preserved by this action: the edge (g, gs) is mapped to the edge (hg, hgs), both having color c_s. The left multiplication action of a group on itself is simply transitive; in particular, Cayley graphs are vertex-transitive. The following is a kind of converse to this:
- Sabidussi's Theorem. An (unlabeled and uncolored) directed graph is a Cayley graph of a group G if and only if it admits a simply transitive action of G by graph automorphisms (i.e. preserving the set of directed edges).
To recover the group G and the generating set S from the unlabeled directed graph, select a vertex v_1 and label it by the identity element of the group. Then label each vertex v of the graph by the unique element of G that maps v_1 to v. The set S of generators of G that yields the graph as the Cayley graph is the set of labels of the out-neighbors of v_1.
- The Cayley graph depends in an essential way on the choice of the set of generators. For example, if the generating set has elements then each vertex of the Cayley graph has incoming and outgoing directed edges. In the case of a symmetric generating set with elements, the Cayley graph is a regular directed graph of degree
- Cycles (or closed walks) in the Cayley graph indicate relations between the elements of In the more elaborate construction of the Cayley complex of a group, closed paths corresponding to relations are "filled in" by polygons. This means that the problem of constructing the Cayley graph of a given presentation is equivalent to solving the Word Problem for .
- If is a surjective group homomorphism and the images of the elements of the generating set for are distinct, then it induces a covering of graphs
- where In particular, if a group has generators, all of order different from 2, and the set consists of these generators together with their inverses, then the Cayley graph is covered by the infinite regular tree of degree corresponding to the free group on the same set of generators.
- For any finite Cayley graph, considered as undirected, the vertex connectivity is at least equal to 2/3 of the degree of the graph. If the generating set is minimal (removal of any element and, if present, its inverse from the generating set leaves a set which is not generating), the vertex connectivity is equal to the degree. The edge connectivity is in all cases equal to the degree.
- If is the left-regular representation with matrix form denoted , the adjacency matrix of is .
- Every group character of the group induces an eigenvector of the adjacency matrix of . When is Abelian, the associated eigenvalue is
- which takes the form for integers
- In particular, the associated eigenvalue of the trivial character (the one sending every element to 1) is the degree of , that is, the order of . If is an Abelian group, there are exactly characters, determining all eigenvalues. The corresponding orthonormal basis of eigenvectors is given by It is interesting to note that this eigenbasis is independent of the generating set .
- More generally for symmetric generating sets, take a complete set of irreducible representations of and let with eigenvalue set . Then the set of eigenvalues of is exactly where eigenvalue appears with multiplicity for each occurrence of as an eigenvalue of
Schreier coset graph
If one, instead, takes the vertices to be right cosets of a fixed subgroup one obtains a related construction, the Schreier coset graph, which is at the basis of coset enumeration or the Todd–Coxeter process.
Connection to group theory
Knowledge about the structure of the group can be obtained by studying the adjacency matrix of the graph and in particular applying the theorems of spectral graph theory. Conversely, for symmetric generating sets, the spectral and representation theory of are directly tied together: take a complete set of irreducible representations of and let with eigenvalues . Then the set of eigenvalues of is exactly where eigenvalue appears with multiplicity for each occurrence of as an eigenvalue of
Geometric group theory
For infinite groups, the coarse geometry of the Cayley graph is fundamental to geometric group theory. For a finitely generated group, this is independent of choice of finite set of generators, hence an intrinsic property of the group. This is only interesting for infinite groups: every finite group is coarsely equivalent to a point (or the trivial group), since one can choose as finite set of generators the entire group.
Formally, for a given choice of generators, one has the word metric (the natural distance on the Cayley graph), which determines a metric space. The coarse equivalence class of this space is an invariant of the group.
When , the Cayley graph is -regular, so spectral techniques may be used to analyze the expansion properties of the graph. In particular for abelian groups, the eigenvalues of the Cayley graph are more easily computable and given by with top eigenvalue equal to , so we may use Cheeger's inequality to bound the edge expansion ratio using the spectral gap.
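As a hedged numerical illustration of the abelian case just described (not from the article): for the cyclic group Z_n with symmetric generating set {1, n-1}, the character formula predicts eigenvalues 2 cos(2πk/n), which can be checked directly against the adjacency matrix.

```python
import numpy as np

# Adjacency matrix of the Cayley graph of Z_n with S = {1, n-1} (the n-cycle).
n = 8
A = np.zeros((n, n))
for g in range(n):
    for s in (1, n - 1):
        A[g, (g + s) % n] = 1

numeric = np.sort(np.linalg.eigvalsh(A))                       # graph spectrum
predicted = np.sort(2 * np.cos(2 * np.pi * np.arange(n) / n))  # character formula

print(np.allclose(numeric, predicted))  # True
```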
- If a discrete group has Kazhdan's property (T), and is a finite, symmetric generating set of , then there exists a constant depending only on such that for any finite quotient of the Cayley graph of with respect to the image of is a -expander.
For example the group has property (T) and is generated by elementary matrices and this gives relatively explicit examples of expander graphs.
An integral graph is one whose eigenvalues are all integers. While the complete classification of integral graphs remains an open problem, the Cayley graphs of certain groups are always integral. Using previous characterizations of the spectrum of Cayley graphs, note that is integral iff the eigenvalues of are integral for every representation of .
Cayley integral simple group
A group is Cayley integral simple (CIS) if the connected Cayley graph is integral exactly when the symmetric generating set is the complement of a subgroup of . A result of Ahmady, Bell, and Mohar shows that all CIS groups are isomorphic to , or for primes . It is important that actually generates the entire group in order for the Cayley graph to be connected. (If does not generate , the Cayley graph may still be integral, but the complement of is not necessarily a subgroup.)
In the example of , the symmetric generating sets (up to graph isomorphism) are
- : is a -cycle with eigenvalues
- : is with eigenvalues
The only subgroups of are the whole group and the trivial group, and the only symmetric generating set that produces an integral graph is the complement of the trivial group. Therefore must be a CIS group.
The proof of the complete CIS classification uses the fact that every subgroup and homomorphic image of a CIS group is also a CIS group.
Cayley integral group
A slightly different notion is that of a Cayley integral group , in which every symmetric subset produces an integral graph . Note that no longer has to generate the entire group.
The complete list of Cayley integral groups is given by , and the dicyclic group of order , where and is the quaternion group. The proof relies on two important properties of Cayley integral groups:
- Subgroups and homomorphic images of Cayley integral groups are also Cayley integral groups.
- A group is Cayley integral iff every connected Cayley graph of the group is also integral.
Normal and Eulerian generating sets
Given a general group , is normal if is closed under conjugation by elements of (generalizing the notion of a normal subgroup), and is Eulerian if for every , the set of elements generating the cyclic group is also contained in . A 2019 result by Guo, Lytkina, Mazurov, and Revin proves that the Cayley graph is integral for any Eulerian normal subset , using purely representation theoretic techniques.
The proof of this result is relatively short: given an Eulerian normal subset, select pairwise nonconjugate so that is the union of the conjugacy classes . Then using the characterization of the spectrum of a Cayley graph, one can show the eigenvalues of are given by taken over irreducible characters of . Each eigenvalue in this set must be an element of for a primitive root of unity (where must be divisible by the orders of each ). Because the eigenvalues are algebraic integers, to show they are integral it suffices to show that they are rational, and it suffices to show is fixed under any automorphism of . There must be some relatively prime to such that for all , and because is both Eulerian and normal, for some . Sending bijects conjugacy classes, so and have the same size and merely permutes terms in the sum for . Therefore is fixed for all automorphisms of , so is rational and thus integral.
Consequently, if is the alternating group and is a set of permutations given by , then the Cayley graph is integral. (This solved a previously open problem from the Kourovka Notebook.) In addition when is the symmetric group and is either the set of all transpositions or the set of transpositions involving a particular element, the Cayley graph is also integral.
Cayley graphs were first considered for finite groups by Arthur Cayley in 1878. Max Dehn in his unpublished lectures on group theory from 1909–10 reintroduced Cayley graphs under the name Gruppenbild (group diagram), which led to the geometric group theory of today. His most important application was the solution of the word problem for the fundamental group of surfaces with genus ≥ 2, which is equivalent to the topological problem of deciding which closed curves on the surface contract to a point.
The Bethe lattice or infinite Cayley tree is the Cayley graph of the free group on generators. A presentation of a group by generators corresponds to a surjective map from the free group on generators to the group and at the level of Cayley graphs to a map from the infinite Cayley tree to the Cayley graph. This can also be interpreted (in algebraic topology) as the universal cover of the Cayley graph, which is not in general simply connected.
- Vertex-transitive graph
- Generating set of a group
- Lovász conjecture
- Cube-connected cycles
- Algebraic graph theory
- Cycle graph (algebra)
- Magnus, Wilhelm; Karrass, Abraham; Solitar, Donald (2004) . Combinatorial Group Theory: Presentations of Groups in Terms of Generators and Relations. Courier. ISBN 978-0-486-43830-6.
- Cayley, Arthur (1878). "Desiderata and suggestions: No. 2. The Theory of groups: graphical representation". American Journal of Mathematics. 1 (2): 174–6. doi:10.2307/2369306. JSTOR 2369306. In his Collected Mathematical Papers 10: 403–405.
- Theron, Daniel Peter (1988), An extension of the concept of graphically regular representations, Ph.D. thesis, University of Wisconsin, Madison, p. 46, MR 2636729.
- Sabidussi, Gert (October 1958). "On a class of fixed-point-free graphs". Proceedings of the American Mathematical Society. 9 (5): 800–4. doi:10.1090/s0002-9939-1958-0097068-7. JSTOR 2033090.
- See Theorem 3.7 of Babai, László (1995). "27. Automorphism groups, isomorphism, reconstruction" (PDF). In Graham, Ronald L.; Grötschel, Martin; Lovász, László (eds.). Handbook of Combinatorics. 1. Elsevier. pp. 1447–1540. ISBN 9780444823465.
- White, Arthur T. (1972). "On the genus of a group". Transactions of the American Mathematical Society. 173: 203–214. doi:10.1090/S0002-9947-1972-0317980-2. MR 0317980.
- Proposition 1.12 in Lubotzky, Alexander (2012). "Expander graphs in pure and applied mathematics". Bulletin of the American Mathematical Society. 49: 113–162. arXiv:1105.2389. doi:10.1090/S0273-0979-2011-01359-3.
- Ahmady, Azhvan; Bell, Jason; Mohar, Bojan (2014). "Integral Cayley graphs and groups". SIAM Journal on Discrete Mathematics. 28 (2): 685–701. arXiv:1307.6155. doi:10.1137/130925487. S2CID 207067134.
- Guo, W.; Lytkina, D.V.; Mazurov, V.D.; Revin, D.O. (2019). "Integral Cayley graphs" (PDF). doi:10.1007/s10469-019-09550-2.
- Dehn, Max (2012) . Papers on Group Theory and Topology. Springer-Verlag. ISBN 978-1461291077. Translated from the German and with introductions and an appendix by John Stillwell, and with an appendix by Otto Schreier. |
Closure Postulate of Addition
the sum of a + b is a unique real number
Commutative Property of Addition
Associative Property of Addition
Additive Postulate of Zero
Postulate of Additive Inverses
Closure Postulate of Multiplication
The product of ab is a unique real number
Commutative Property of Multiplication
Associative Postulate of Multiplication
Multiplicative Postulate of One
Postulate of Multiplicative Inverses
when a does not = 0, (1/a)*a=1; a*(1/a)=1
Reflexive Property of Equality
Symmetric Property of Equality
For all real numbers x and y, if x = y, then y = x.
Transitive Property of Equality
For all real numbers x, y, and z , if x = y and y = z, then x = z.
Postulate of Comparison
one and only one of the following statements is true: a<b, a=b, or a>b.
Transitive Postulate of Inequality
if a<b and b<c, then a<c
Additive Postulate of Inequality
if a<b, then a+c<b+c
Multiplicative Postulate of Inequality
If a<b and 0<c, then ac<bc; if a<b and c<0, then bc<ac
Addition Property of Equality
if a=b, then a+c=b+c and c+a=c+b
Subtraction Property of Equality
if a=b, then a-c=b-c and c-a=c-b
Multiplicative Property of Equality
if a=b, then ac=bc and ca=cb
Division Property of Equality
if a=b and c does not equal 0, then a/c=b/c
Subtraction Property of Inequality
if a<b, then a-c<b-c
Division Property of Inequality
if a<b and c does not equal 0, then a/c<b/c
if a=b, "a" may be replaced by "b" and vice versa in any equation or inequality
Zero Product Property
If ab=0, then a=0 or b=0
Segment Addition Postulate
If B is between A and C, then AB + BC= AC
Definition of a midpoint
The midpoint of a segment is the point that divides the segment into two congruent segments.
Definition of a bisector
A bisector of a segment is a line, segment, ray, or plane whose intersection with segment AB is the midpoint of segment AB.
Angles with equal measures are..
Angle Addition Postulate
D is in the interior of angle ABC if and only if m<ABD + m<DBC = m<ABC.
Angles that are next to each other. They share a vertex and a common side.
if M is the midpoint of AB, then AM is congruent to MB
angle bisector theorem
if a point is on the bisector of an angle, then it is equidistant from the two sides of the angle
Definition of a Right Angle
An angle is a right angle if and only if the angle has a measurement of 90 degrees
Definition of Perpendicular lines
Two lines are perpendicular if and only if the two lines meet to form congruent adjacent angles
Definition of Complementary Angles
Two angles are complementary if and only if their sum= 90 degrees
Alternate Exterior Angles Theorem
If two parallel lines are cut by a transversal, then the pairs of alternate exterior angles are congruent.
Consecutive Interior Angles Postulate
When two parallel lines are cut by a transversal, the two interior angles on the same side are supplementary.
Consecutive Exterior Angles Postulate
When two parallel lines are cut by a transversal, the two exterior angles on the same side are supplementary.
Vertical angles theorem
If two angles are vertical angles, then they are congruent.
Corresponding parts of congruent triangles are congruent.
Definition of Midpoint
If a point is the midpoint of a segment, then the two segments it forms are congruent to each other.
Definition of segment bisector
The two sides of the line are congruent to each other when cut by a segment bisector.
Definition of angle bisector
The two sides of the angle are congruent to each other when cut by an angle bisector.
when two segments or two angles are congruent, you can flip them over and they will still be congruent
a(b) = (ab)
if two segments or two angles are congruent to the same segment or angle, they are congruent to each other
Anything equals itself; a shared piece.
SSS triangle congruence postulate
If the 3 sides of one triangle are congruent to the 3 sides of another triangle, then the triangles are congruent.
SAS triangle congruence postulate
If two sides and the included angle of one triangle are congruent to two sides and the included angle of another triangle, then the triangles are congruent.
ASA triangle congruence postulate
Two triangles are congruent if two angles and the included side of one triangle are congruent to the two angles and the included side of the other triangle.
AAS Triangle Congruence Theorem
If two angles and a non-included side of one triangle are congruent to the corresponding two angles and side of another triangle, then the triangles are congruent.
Definition of Right Angle
A right angle has 90 degrees.
The points on any line or line segment can be put into one-to-one correspondence with real numbers.
Given any angle, the measure can be put into one-to-one correspondence with real numbers between 0 and 180.
If the hypotenuse and a corresponding leg of two right triangles are congruent, then the triangles are congruent.
Notebook Paper Theorem
If a transversal cuts two parallel lines and one of the angles measures 90 degrees, then the rest of the angles measure 90 degrees.
Base Angles Theorem
If 2 sides of a triangle are congruent then the angles opposite are congruent (used with isosceles)
d = √[( x₂ - x₁)² + (y₂ - y₁)²]
The formula for finding the slope of a line:
m = (y₂ - y₁) / (x₂ - x₁)
Segment Addition Postulate
True or False? A line segment has finite points.
sum of degrees in any triangle
Complementary angles add up to ________ degrees.
Adjacent angles are angles that share a common ______________, but no ______________ points. |
An Introduction to Macroeconomics. While most of what is in this chapter will be covered again elsewhere, it is a good warm-up chapter and you can begin to learn the vocabulary of macro.
The authors have a statement on page 6 that macroeconomics is about examining the economy as a whole. While the emphasis will not be on individual firms or households, aggregates will be studied.
An aggregate is a collection of specific economic units treated as if they were one unit. So, we will consider households as an aggregate, as well as the business sector and government.
Some macroeconomic ideas we will explore are total output, total employment, total income, aggregate expenditures, and the general level of prices.
When you consider the US economy of the last 100 years, there has been a great deal of progress. Over these last 100 years we have seen, in a macro sense:
1) Long run economic growth – to use a pizza analogy – each year a bigger pie seems to be made and often each person gets a bigger slice, and
2) Short run fluctuations called the business cycle occur. So, while the economy in general has been doing better, around that trend there are ups and downs. The down periods may be called a recession.
When the macro economy is studied the ideas of GDP (output or production), unemployment and inflation will usually be mentioned.
GDP stands for the Gross Domestic Product. The nominal GDP totals the dollar value of all final goods and services produced within the borders of the country using the prices of the year. As an example, in the US orange juice is produced each year. If we are looking at year 2010, then the output of orange juice is evaluated at the price of orange juice in 2010.
If 10 gallons of OJ were produced in 2010 and each gallon had a market value of $5, then in GDP OJ would account for $50 worth of the total.
Say in 2009 it was also the case that 10 gallons of OJ was produced, but the price was $4. Then in 2009 the 10 gallons contributed to GDP by the amount $40. Since the amount produced is the same both years you would think it would be measured the same each year. But, from year 2009 to 2010 in my example the price went from $4 to $5. The GDP went up only because of a price change.
The real GDP, RGDP, corrects for price level changes and therefore has a focus on production level changes. We will see more later how this is accomplished.
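To make the correction concrete, here is a minimal sketch of the OJ example in Python (the variable names are illustrative, not from the text):

```python
# Orange-juice example: 10 gallons produced in both years, priced $4 in 2009 and $5 in 2010.
quantity_2009, price_2009 = 10, 4
quantity_2010, price_2010 = 10, 5

nominal_gdp_2009 = quantity_2009 * price_2009   # $40 at 2009 prices
nominal_gdp_2010 = quantity_2010 * price_2010   # $50 -- rises only because the price rose
real_gdp_2010 = quantity_2010 * price_2009      # $40 at 2009 prices -- no real growth

print(nominal_gdp_2009, nominal_gdp_2010, real_gdp_2010)  # 40 50 40
```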
In macro to be counted as unemployed a person has to not have a job, be willing to work, and actively seeking work. Later we will calculate an unemployment rate.
A major problem with unemployment is that the production of the goods and services that would have occurred is lost forever.
Inflation is an increase in the overall level of prices. It has been the case in the US that inflation occurs most every year, but sometimes it is bigger or smaller than what folks expect.
If it is bigger that folks expect, for example, people who have saved will see that the amount they have saved will not buy as much as they had anticipated.
There has been considerable study about how the economy at the national level performs. With this in mind, there has also been study about the Federal Reserve System use of Monetary Policy and Federal Government use of Fiscal Policy and how these policies might impact the performance of the economy.
Part of our task later will be to consider a model of the economy and what role the Federal Reserve and the Federal Government play in that model.
Economic growth has to do with the growth in production stated as an average, or on a per person (per capita) basis.
The handy rule of 70 tells us something will double in n years if it is growing at x percent each year. The formula here is n = 70/x. So, if something is growing at 2% per year, in 70/2 = 35 years the amount will be doubled.
If the growth rate is 3%, the amount will double in 70/3 ≈ 23.3 years. The higher the growth rate, the sooner our amount is doubled.
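A one-line helper makes the rule of 70 easy to check (an illustrative sketch, not from the text):

```python
def doubling_time_years(growth_rate_percent):
    """Rule of 70: approximate number of years for a quantity to double."""
    return 70 / growth_rate_percent

print(doubling_time_years(2))            # 35.0 years
print(round(doubling_time_years(3), 1))  # 23.3 years
```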
When you remember the basic idea of scarcity, higher growth rates alleviate the scarcity situation.
Remember that the basic resources in an economy are land, labor, capital and entrepreneurial ability. If the economic system can obtain more resources then more goods and services can be made than previously.
The resource capital includes the man made items such as machinery, tools, factories and warehouses that are used in the production of other items. Much of the growth in the US in the last 100 years is due to the greater use of capital goods.
Households are the principal source of saving in the economy, where saving is the income that has not been consumed in the current period for things such as hamburgers, fries, jeans, and tires, for example.
Financial institutions such as banks, insurance companies and “money management firms” reward the household for saving by interest and dividends and maybe even capital gains.
Why do financial institutions do this? They do it because they have the idea that businesses have ideas about buying capital goods and expanding and making their business better. Then financial institutions will lend to business and charge a fee. The businesses undertake economic investment here.
Businesses who take the loans undertake investment in capital goods. This is what we mean by investment in economics.
When you and I are outside an economics class we might talk about how our saving is an investment. It is, but we are undertaking financial investment to make our future better, perhaps.
Perhaps my next slide will help you see what “investment” is all about in a macro class. Investment is not what households do with their saving – this is financial investment! Investment is what businesses do with funding from financial institutions – they buy capital goods!
Remember using capital goods makes our economy so much more productive! We have better growth!
Financial Institutions like banks
Households may have saving that includes various types of financial investments that they get through financial institutions.
Businesses go to financial institutions to borrow and then buy capital goods – businesses undertake investment!
How big of a factory should a company build and what types of tools should it put in the factory?
The question is not easy to answer because no one knows the future with certainty (well, the sun will rise tomorrow!). So, folks study as much as they can and from this study formulate a view of the world. The view is their expectation.
Now, with the expectation, people act!
Do you think people are shocked when their expectation is not met? SURE!
In economics, a shock is when something unexpected happens. Shocks can be good and shocks can be bad.
Businesses want to make profit (they are profit maximizers).
Expectations about business prospects are formed.
Positive demand shocks happen when demand is higher than expected and negative demand shocks are when demand is lower than expected. Most of the attention in our text will be about demand shocks and how they affect the economy.
Part of actually turning a profit involves developing accurate expectations about future market conditions.
The graph on the next slide will be used in a story that will be developed on many slides.
[Graph: demand curves plotted against output in units per week, with tick marks at 700, 900, and 1150 units.]
After much research and thought (and expectations formed) a company builds a facility that has an optimal rate of output of 900 units per week – we see this at the vertical line in the graph.
(By the way, your car gets optimal gas mileage, in terms of miles per gallon, at about 55 miles per hour. Your car can go slower or faster, but miles per gallon will drop and thus it is more costly to drive those miles.)
To make 900 units a week, behind the scenes of the graph the firm will hire a certain amount of labor and other inputs.
The firm expects the demand to be D medium one (slangy, huh?) and will produce the product for a price of $37,000 and collect a profit.
In the graph the firm expected demand to be D medium. If D medium occurs, with 900 units made, all can be sold at $37,000 with profit earned and labor and other inputs used as expected.
So, if expectations are always correct, the output and employment at the firm will be steady. This means there would be no fluctuations.
(This would be at odds with reality, because we see fluctuations!)
If demand is not the expected amount D medium what happens?
The authors of our text have 2 options to consider (and they explain that over the course of time both will play out).
1) If the price can freely fluctuate
-then when demand is lower than expected (shockingly) the price will fall,
-then when demand is higher than expected (shockingly) the price will rise.
Under this option demand shocks have no impact on output and employment, only on the price.
2) If the price is not freely flexible (the case of sticky prices! – focus on the horizontal line in the graph)
-when demand is lower than expected
-inventories build up
-output will be cut below 900
-employment will be cut some
-when demand is higher than expected
-inventories become too low
-output will be cranked up above 900
-employment will be added to.
Demand shocks have no impact on the price; the shocks just have an impact on output and employment.
Why are prices sticky? (If you spill your Coke on them!)
In the short term
-consumers like you and me want stable prices (sure we tolerate gas price gyrations, but don’t mess with the price of our toothpaste and garbage bags!)
-firms like stable prices and don’t want price wars to break out. So, in the short term prices seem sticky!
In the longer term prices tend to adjust to the new reality and expectations are adjusted to see this new reality.
So, in the chapters to come these ideas are more fully explored!
Let’s doooooo iiiiiiiiiitttttttttttttttttttttt! |
How society defines deafness is important because the description defines a deaf person’s social identity and impacts their ability to access the world around them. There are different viewpoints to explain ‘deafness’ – one view defines a person by their (in)ability to hear, the other by a person’s inability to access the world around them. We have added another model that focuses on how some deaf people prefer to communicate. By explaining the different perspectives on deafness we hope to encourage readers to define deafness differently.
1. The medical model of deafness:
The medical model is the most prevalent view and focuses on (the lack of) ability to hear. It is seen as an impairment that affects a person’s ability to function in society. The medical model does not distinguish between the person and what is ‘wrong’ with them.
Healthcare industries dedicated to correcting deafness have grown significantly. In 2018, the hearing aid market was valued at $5.1 billion. Other services include audiology, speech therapy, educational psychology, teachers of the deaf, cochlear implant departments and so on.
Hearing Loss ‘categories’:
The medical model of hearing loss and deafness focuses on correcting the difficulty in speech recognition. Audiology assessments put hearing loss/deafness into four categories:
- Mild: Loss of hearing resulting in difficulty picking up softer speech sounds such as f, s and th.
- Moderate: Loss of hearing which results in losing additional speech sounds and shapes (e.g. m, b, p), particularly in noisy places.
- Severe: Loss of hearing results in losing all speech sounds and some louder sounds e.g. a dog barking.
- Profound: No hearing other than being aware of exceptionally loud noises such as an aeroplane taking off or roadworks drill.
The major problem with the medical model is that everything is focused on the individual. By ‘fixing’ the hearing loss, the individual can manage in the outside world. Unfortunately, the medical model fails to take into account the impact of hearing loss and deafness in the real world. There are a number of barriers that prevent people with any level of hearing loss from achieving their potential and accessing the world around them.
The medical model also fails to explain that equipment such as hearing aids and cochlear implants only enable access to sound, not language. If aids enabled access to language, individuals would not need the additional support of speech therapy, lip-reading classes or British Sign Language.
Whilst deafness itself may not be life-threatening, most people think of deafness as an impairment to be ‘fixed’. This view can have a detrimental effect on a person’s mental and physical well-being.
2. The social model:
The social model of deafness accepts that there are differences in hearing ability and that the barriers preventing access to the world need to be removed. Adjusting services to accommodate these differences removes barriers and enables access. For example, a deaf person going to the cinema would need subtitles to access the dialogue in a film.
The social model acknowledges an individual’s difference, not as a disability or something that is wrong with an individual but as a difference that needs to be accommodated. The model acknowledges that factors such as unconscious bias can lead to negative attitudes and discrimination.
Disability arises when services have not been adequately adjusted to accommodate individual differences. e.g. expecting everyone to use steps – providing a ramp for wheelchair users would be a reasonable adjustment
The social model accepts that individuals are different, including the way they communicate or how they behave and focuses on addressing the barriers that prevent individuals from being part of the community. This is a more holistic approach and acknowledges that individuals are part of the wider community.
Unfortunately, many services simply do not see accessibility as their responsibility. Some services take the view that they are ‘accessible’ in the sense that anyone can use their services. These services fail to understand that they need to make reasonable adjustments, so their service can be accessed and used as intended.
Failing to make reasonable adjustments applies to all areas of life including sport, leisure, council services, health services and businesses open to the public e.g. cinemas.
The social model also applies to employment. Job seekers and employees need to participate in social activities at work. Activities will include meetings, phone calls, training sessions and social events. Unfortunately, there are common misconceptions about communication that can impact a person’s involvement in the social aspects of working. However, someone with hearing loss or deafness can participate in these activities with the right adjustments, (often at little or no cost).
3. A ‘cultural’ model?
Finally, there is the cultural model of deafness. This model usually refers to individuals who are profoundly deaf and use British Sign language (BSL) for communication.
People who identify as Deaf with a big ‘D’ see themselves as part of a linguistic minority who have a rich linguistic culture. Most ‘Deaf’ community members have used BSL from birth to adulthood and use sign language as their first and only method of communication. They usually only socialise or go to activities that involve other people from the Deaf community, as there is a common cultural understanding.
People who are ‘Deaf’ are also more likely to have gone to a school for the Deaf. However, the majority of specialist deaf schools have now closed down. Most deaf children are now educated in mainstream schools. Unfortunately, this can create an isolating experience.
At Access Ambassadors, we focus on the social model of deafness. We focus on helping organisations remove the barriers that prevent individuals from accessing services and employment.
Deaf and hard of hearing people do not like to be defined by a label. Hearing ability is only one part of an individual’s sensory input. Nonetheless, there are a variety of terms that mainstream services use to describe deaf or hard of hearing individuals. The term you use when talking to someone will depend on how the individual views their deafness or hearing loss. Some Deaf people view deafness as a social identity, particularly sign language users. It is important to know the range of terms that are currently in use:
- Hearing-impaired: This term refers to a disability category that describes individuals who have lost some of their hearing.
- Hard of hearing: This term is similar to hearing impaired. It describes people with some level of hearing loss.
- Deafened: This term refers to individuals who were hearing but have lost the function of hearing. For example, soldiers who lose their hearing, are described as deafened.
- d/Deaf: This term usually refers to individuals who have severe or profound hearing loss.
Hearing loss/deafness can happen at any point in a person’s life. There are a range of communication aids:
There are a number of ‘speech to text’ software options, some of which, are free. The software can be used on a smartphone, tablet or PC. Here is a list of options currently available. Of course, technology moves so fast that more and more apps come online and improve accessibility – Google Live captioning is a great example of this:
- Sign Language interpreter:
Sign Language Interpreters are professionals who facilitate communication between deaf and hearing people. Interpreters have extensive knowledge of cultural differences in English and British Sign Language. This enables interpreters to translate from one language to another to achieve linguistic equivalence. There is a national register of qualified interpreters that lists over 1,000 interpreters working across the UK.
- Hearing loop:
People who wear hearing aids will use a hearing loop system. The loop picks up sounds from the room and then emits a signal that a hearing aid picks up. A loop cuts out most background noise making the sound clearer.
Subtitles (or captions) are the text form of dialogue or commentary. They usually appear at the bottom of the screen in films and TV programmes, or as stage text at theatre productions.
This page gives a brief but detailed overview of the information that organisations need to be aware of when making services accessible. Deafness and hearing loss are complex issues that affect each person differently. In other words, there is not a one-size-fits-all approach. This is something many organisations are slowly coming to realise as their customer base shrinks. Customers want to know you have considered their needs, including those who may have lost their hearing. |
This graphic contains an image and illustration of a nearby star, named CoRoT-2a, which has a planet in close orbit around it. The separation between the star and planet is only about three percent of the distance between the Earth and the sun, causing some exotic effects not seen in our solar system.
The planet-hosting star is located in the center of the image. Data from NASA’s Chandra X-ray Observatory are shown in purple, along with optical and infrared data from the Panchromatic Robotic Optical Monitoring and Polarimetry Telescopes, or PROMPT, and the Two Micron All Sky Survey, or 2MASS. CoRoT-2a is surrounded by a purple glow showing that it is an X-ray source.
This star is pummeling its companion planet — not visible in this image — with a barrage of X-rays 100,000 times more intense than the Earth receives from the sun. Data from Chandra suggest that high-energy radiation from CoRoT-2a is evaporating about five million tons of matter from the nearby planet every second, giving insight into the difficult survival path for some planets. The artist’s representation shows the material, in blue, being stripped off the planet.
The Chandra observations provide evidence that CoRoT-2a is a very active star, with bright X-ray emission produced by powerful, turbulent magnetic fields. This magnetic activity is represented by the prominences and eruptions on the surface of the star in the illustration.
Such strong activity is usually found in much younger stars and may be caused by the proximity of the planet. The planet may be speeding up the star’s rotation, causing its magnetic fields to remain active longer than expected. Support for this idea comes from observations of a likely companion star to CoRoT-2a that orbits at a distance about a thousand times greater than the distance between the Earth and the sun. This star is visible in the image as the faint, nearby star located below and to the right of CoRoT-2a. It is also shown as the bright background star in the illustration. This star is not detected in X-rays, perhaps because it does not have a close-in planet like CoRoT-2b to cause it to stay active.
The planet, CoRoT-2b, was discovered by the French Space Agency’s Convection, Rotation and planetary Transits, or CoRoT, satellite in 2008. It is located about 880 light years from Earth and has a mass about three times that of Jupiter.
Credits: Optical: NASA/NSF/IPAC-Caltech/UMass/2MASS, PROMPT; Wide field image: DSS; X-ray: NASA/CXC/Univ of Hamburg/S.Schröter et al; Illustration: CXC/M. Weiss
Viewed from space, the most striking feature of our planet is the water. In both liquid and frozen form, it covers 75% of the Earth’s surface. It fills the sky with clouds. Water is practically everywhere on Earth, from inside the planet’s rocky crust to inside the cells of the human body. This detailed, photo-like view of Earth is based largely on observations from MODIS, the Moderate Resolution Imaging Spectroradiometer, on NASA’s Terra satellite. It is one of many images of our watery world featured in a new story examining water in all of its forms and functions.
Image Credit: NASA
Just in time for Valentine’s Day comes a new image of a ring — not of jewels — but of black holes. This composite image of Arp 147, a pair of interacting galaxies located about 430 million light years from Earth, shows X-rays from the NASA’s Chandra X-ray Observatory (pink) and optical data from the Hubble Space Telescope (red, green, blue) produced by the Space Telescope Science Institute, or STScI.
Arp 147 contains the remnant of a spiral galaxy (right) that collided with the elliptical galaxy on the left. This collision has produced an expanding wave of star formation that shows up as a blue ring containing an abundance of massive young stars. These stars race through their evolution in a few million years or less and explode as supernovas, leaving behind neutron stars and black holes.
A fraction of the neutron stars and black holes will have companion stars, and may become bright X-ray sources as they pull in matter from their companions. The nine X-ray sources scattered around the ring in Arp 147 are so bright that they must be black holes, with masses that are likely ten to twenty times that of the sun.
An X-ray source is also detected in the nucleus of the red galaxy on the left and may be powered by a poorly-fed supermassive black hole. This source is not obvious in the composite image but can easily be seen in the X-ray image. Other objects unrelated to Arp 147 are also visible: a foreground star in the lower left of the image and a background quasar as the pink source above and to the left of the red galaxy.
Infrared observations with NASA’s Spitzer Space Telescope and ultraviolet observations with NASA’s Galaxy Evolution Explorer (GALEX) have allowed estimates of the rate of star formation in the ring. These estimates, combined with the use of models for the evolution of binary stars have allowed the authors to conclude that the most intense star formation may have ended some 15 million years ago, in Earth’s time frame. These results were published in the October 1st, 2010 issue of The Astrophysical Journal. The authors were Saul Rappaport and Alan Levine from the Massachusetts Institute of Technology, David Pooley from Eureka Scientific and Benjamin Steinhorn, also from MIT.
Image Credit: X-ray: NASA/CXC/MIT/S. Rappaport et al., Optical: NASA/STScI
Yes, there are day and night on the moon, but not the same as on our planet. Instead of twenty-four hours, a moon-day is equal to 27.3 earth-days because the moon takes so much time to complete a rotation around its axis. What is more surprising is that the moon takes exactly the same amount of time to complete a circle around the earth too, i.e. 27.3 days. As a result we always see only one side of the moon. At most, 59% of the moon’s surface is visible from the earth; the remaining 41% is never seen by us.
Furthermore, the moon only has a day and a night, and no evening! The reason: There is no atmosphere on the moon. Due to the absence of air the sun-rays do not scatter at all. Hence, where the sun-rays fall there is a dazzling light and the rest is extremely dark! As if sunshine and darkness are partitioned by drawing a line between them. In the same way without air the climate is also not temperate on the moon. The temperature in the light would be 102°C, whereas an inch away in the dark the mercury would drop down to freezing -157°C!
The more clearly we remember a dream, the greater must have been the part of the brain machine that was working at the time we dreamed. On the other hand, there’s no doubt that we have many dreams which we do not remember at all when we wake, which were due to the working of only a very small part of the brain. We see from this that we can judge which dreams are the best kind to have, if we are to have any. The more definite a dream is, the more vivid it is and the better we remember it, the more awake our brain was when we dreamed, the less was the rest it was getting, the poorer and less valuable was our sleep. But when a dream is scarcely remembered, or not remembered at all, and when it is very faint and vague, then our brain was much less awake during the dream and our rest and sleep were so much the less injured.
The placenta is an organ that connects the developing fetus to the uterine wall to allow nutrient uptake, waste elimination, and gas exchange via the mother’s blood supply. Placentas are a defining characteristic of eutherian or “placental” mammals, but are also found in some snakes and lizards with varying levels of development up to mammalian levels. The word placenta comes from the Latin for cake. Some mammals produce a choriovitelline placenta that, while connected to the uterine wall, provides nutrients mainly derived from the egg sac.
In humans, aside from serving as the conduit for oxygen and nutrients for the fetus, the placenta secretes hormones (from the syncytial layer/syncytiotrophoblast of the chorionic villi) that are important during pregnancy.
Human Chorionic Gonadotropin (hCG). The first placental hormone produced is hCG, which can be found in maternal blood and urine as early as the first missed menstrual period (shortly after implantation has occurred) through about the 100th day of pregnancy. This is the hormone analyzed by pregnancy tests; a false-negative result may be obtained before or after this period. Women’s blood serum will be completely negative for hCG by one to two weeks after birth; hCG testing is proof that all placental tissue has been delivered. hCG is only present during pregnancy because it is secreted by the placenta, which of course is present only during pregnancy. hCG also ensures that the corpus luteum continues to secrete progesterone and estrogen. Progesterone is very important during pregnancy because when its secretion decreases, the endometrial lining will slough off and the pregnancy will be lost. hCG suppresses the maternal immunologic response so that the placenta is not rejected.
Human Placental Lactogen (hPL [Human Chorionic Somatomammotropin]). This hormone has lactogenic and growth-promoting properties. It promotes mammary gland growth in preparation for lactation in the mother. It also regulates maternal glucose, protein, and fat levels so that these are always available to the fetus.
Estrogen. It is referred to as the “hormone of woman” because it influences the female appearance. It contributes to the woman’s mammary gland development in preparation for lactation and stimulates uterine growth to accommodate growing fetus.
Progesterone. This is referred to as the “hormone of mothers” because it is necessary to maintain the endometrial lining of the uterus during pregnancy. This hormone prevents preterm labor by reducing myometrial contraction. This hormone is high during pregnancy.
India Develops World’s Cheapest Laptop commercially next year. |
Baker's percentage is a notation method indicating the proportion of an ingredient relative to the flour used in a recipe when making breads, cakes, muffins, and other baked goods. It is also referred to as baker's math, and may be indicated by a phrase such as based on flour weight. It is sometimes called formula percentage, a phrase that refers to the sum of a set of bakers' percentages.[note 1] Baker's percentage expresses a ratio in percentages of each ingredient's weight to the total flour weight:
For example, in a recipe that calls for 10 pounds of flour and 5 pounds of water, the corresponding baker's percentages are 100% for the flour and 50% for the water. Because these percentages are stated with respect to the weight of flour rather than with respect to the weight of all ingredients, the sum of these percentages always exceeds 100%.
Flour-based recipes are more precisely conceived as baker's percentages, and more accurately measured using weight instead of volume. The uncertainty in using volume measurements follows from the fact that flour settles in storage and therefore does not have a constant density.
A yeast-dough formula could call for the following list of ingredients, presented as a series of baker's percentages:
There are several common conversions that are used with baker's percentages. Converting baker's percentages to ingredient weights is one. Converting known ingredient weights to baker percentages is another. Conversion to true percentages, or based on total weight, is helpful to calculate unknown ingredient weights from a desired total or formula weight.
Using baker percentages
| ingredient | % | method 1 | method 2 |
|---|---|---|---|
| flour | 100% | Wf * 1.00 | Wf * 100% |
| water | 35% | Wf * 0.35 | Wf * 35% |
| milk | 35% | Wf * 0.35 | Wf * 35% |
| fresh yeast | 4% | Wf * 0.04 | Wf * 4% |
| salt | 1.8% | Wf * 0.018 | Wf * 1.8% |
In the example below, 2 lb and 10 kg of flour weights have been calculated. Depending on the desired weight unit, only one of the following four weight columns is used:
| ingredient | % | 2 lb flour: lb | 2 lb flour: oz | 10 kg flour: kg | 10 kg flour: g |
|---|---|---|---|---|---|
| flour | 100% | 2 | 32 | 10 | 10000 |
| water | 35% | 0.7 | 11.2 | 3.5 | 3500 |
| milk | 35% | 0.7 | 11.2 | 3.5 | 3500 |
| fresh yeast | 4% | 0.08 | 1.28 | 0.4 | 400 |
| salt | 1.8% | 0.036 | 0.576 | 0.18 | 180 |
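The conversion in the table amounts to one multiplication per ingredient. A minimal Python sketch using the same ingredient list (the function name is illustrative):

```python
bakers_percentages = {"flour": 100, "water": 35, "milk": 35, "fresh yeast": 4, "salt": 1.8}

def ingredient_weights(flour_weight, percentages):
    """Each ingredient weight = flour weight * (baker's percentage / 100)."""
    return {name: flour_weight * pct / 100 for name, pct in percentages.items()}

print(ingredient_weights(10, bakers_percentages))  # kg column: flour 10, water 3.5, ..., salt 0.18
print(ingredient_weights(2, bakers_percentages))   # lb column: flour 2, water 0.7, ..., salt 0.036
```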
Creating baker's percentages
The baker has determined how much a recipe's ingredients weigh, and uses uniform decimal weight units. All ingredient weights are divided by the flour weight to obtain a ratio, then the ratio is multiplied by 100% to yield the baker's percentage for that ingredient:
| ingredient | weight | ingredient mass ÷ flour mass × 100% |
|---|---|---|
| flour | 10 kg | 10 kg ÷ 10 kg = 1.000 = 100% |
| water | 3.5 kg | 3.5 kg ÷ 10 kg = 0.350 = 35% |
| milk | 3.5 kg | 3.5 kg ÷ 10 kg = 0.350 = 35% |
| fresh yeast | 0.4 kg | 0.4 kg ÷ 10 kg = 0.040 = 4% |
| salt | 0.18 kg | 0.18 kg ÷ 10 kg = 0.018 = 1.8% |
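Going the other way, from measured weights to baker's percentages, is one division per ingredient. A small sketch assuming the 10 kg example above (names are illustrative):

```python
weights_kg = {"flour": 10, "water": 3.5, "milk": 3.5, "fresh yeast": 0.4, "salt": 0.18}

def bakers_percentages_from_weights(weights):
    """Divide each ingredient weight by the flour weight and express the ratio as a percentage."""
    flour = weights["flour"]
    return {name: round(w / flour * 100, 2) for name, w in weights.items()}

print(bakers_percentages_from_weights(weights_kg))
# {'flour': 100.0, 'water': 35.0, 'milk': 35.0, 'fresh yeast': 4.0, 'salt': 1.8}
```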
Due to the canceling of uniform weight units, the baker may employ any desired system of measurement (metric or avoirdupois, etc.) when using a baker's percentage to determine an ingredient's weight. Generally, the baker finds it easiest to use the system of measurement that is present on the available tools.
Formula percentage and total mass
| ingredient | baker's % | true % |
|---|---|---|
| flour | 100% | 56.88% |
| water | 35% | 19.91% |
| milk | 35% | 19.91% |
| fresh yeast | 4% | 2.28% |
| salt | 1.8% | 1.02% |
| Total | 175.8% | 100% |
The total or sum of the baker's percentages is called the formula percentage. The sum of the ingredient masses is called the formula mass (or formula "weight"). Here are some interesting calculations:
- The flour's mass times the formula percentage equals the formula mass: formula mass = flour mass × formula percentage (taking the percentage as a decimal, e.g. 10 kg × 1.758 = 17.58 kg).
- An ingredient's mass is obtained by multiplying the formula mass by that ingredient's true percentage; because an ingredient's true percentage is that ingredient's baker's percentage divided by the formula percentage expressed as parts per hundred, an ingredient's mass can also be obtained by multiplying the formula mass by the ingredient's baker's percentage and then dividing the result by the formula percentage: ingredient mass = formula mass × true percentage = formula mass × baker's percentage ÷ formula percentage.
- Thus, it is not necessary to calculate each ingredient's true percentage in order to calculate each ingredient's mass, provided the formula mass and the baker's percentages are known.
- Ingredients' masses can also be obtained by first calculating the mass of the flour then using baker's percentages to calculate remaining ingredient masses: flour mass = formula mass ÷ formula percentage, then ingredient mass = flour mass × baker's percentage.
- The two methods of calculating the mass of an ingredient are equivalent: formula mass × baker's percentage ÷ formula percentage = (formula mass ÷ formula percentage) × baker's percentage.
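A quick numeric check of the relationships in this list, using the 10 kg example formula from the earlier tables (a sketch; the variable names are illustrative):

```python
bakers_pct = {"flour": 100, "water": 35, "milk": 35, "fresh yeast": 4, "salt": 1.8}
formula_pct = sum(bakers_pct.values())          # 175.8, the formula percentage

flour_mass = 10                                 # kg
formula_mass = flour_mass * formula_pct / 100   # 17.58 kg = flour mass * formula percentage

# An ingredient's mass two ways: via its true percentage, and directly via the flour mass.
water_true_pct = bakers_pct["water"] / formula_pct * 100  # ~19.91%
water_mass_a = formula_mass * water_true_pct / 100        # 3.5 kg
water_mass_b = flour_mass * bakers_pct["water"] / 100     # 3.5 kg -- same result
print(round(formula_mass, 2), round(water_mass_a, 2), round(water_mass_b, 2))
```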
Weights and densities
The use of customary U.S. units can sometimes be awkward and the metric system makes these conversions simpler. In the metric system, there are only a small number of basic measures of relevance to cooking: the gram (g) for weight, the liter (L) for volume, the meter (m) for length, and degrees Celsius (°C) for temperature; multiples and sub-multiples are indicated by prefixes, and two commonly used metric cooking prefixes are milli- (m-) and kilo- (k-). Intra-metric conversions involve moving the decimal point.
Common avoirdupois and metric weight equivalences:
- 1 pound (lb) = 16 ounces (oz)
- 1 kilogram (kg) = 1,000 grams (g) = 2.20462262 lb
- 1 lb = 453.59237 g = 0.45359237 kg
- 1 oz = 28.3495231 g.
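As a small sketch, the listed equivalences can be wrapped in helper functions (the names are illustrative):

```python
GRAMS_PER_OUNCE = 28.3495231   # from the list above
GRAMS_PER_POUND = 453.59237

def ounces_to_grams(oz):
    return oz * GRAMS_PER_OUNCE

def pounds_to_kilograms(lb):
    return lb * GRAMS_PER_POUND / 1000

print(round(ounces_to_grams(32), 1))     # 32 oz (2 lb) of flour is about 907.2 g
print(round(pounds_to_kilograms(2), 3))  # about 0.907 kg
```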
In four different English-language countries of recipe and measuring-utensil markets, approximate cup volumes range from 236.59 to 284.1 milliliters (mL). Adaptation of volumetric recipes can be made with density approximations:
Volume-to-mass conversions for some common cooking ingredients:

| ingredient | density (g/mL) | metric cup (250 mL) | imperial cup (≈284 mL) | U.S. customary cup (≈237 mL)[note 6] |
|---|---|---|---|---|
| water[note 7] | 1[note 8] | 249–250 g (8.8 oz) | 283–284 g (10 oz) | 236–237 g (8.3 oz)[note 9] |
| granulated sugar | 0.8 | 200 g (7.0 oz) | 230 g (8.0 oz) | 190 g (6.7 oz) |
| wheat flour | 0.5–0.6 | 120–150 g (4.4–5.3 oz) | 140–170 g (5.0–6.0 oz) | 120–140 g (4.2–5.0 oz) |
| table salt | 1.2 | 300 g (10.6 oz) | 340 g (12.0 oz) | 280 g (10.0 oz) |
Due to volume and density ambiguities, a different approach involves volumetrically measuring the ingredients, then using scales or balances of appropriate accuracy and error ranges to weigh them, and recording the results. With this method, occasionally an error or outlier of some kind occurs.
Baker's percentages do not accurately reflect the impact of the amount of gluten-forming proteins in the flour on the final product and therefore may need to be adjusted from country to country, or even miller to miller, depending on definitions of terms like "bread flour" and actual protein content. Manipulation of known flour-protein levels can be calculated with a Pearson square.
In home baking, the amounts of ingredients such as salt or yeast expressed by mass may be too small to measure accurately on the scales used by most home cooks. For these ingredients, it may be easier to express quantities by volume, based on standard densities. For this reason, many breadmaking books that are targeted to home bakers provide both percentages and volumes for common batch sizes.
Besides the need for appropriate readability scales, a kitchen calculator is helpful when working directly from baker's percentages.
Baker's percentages enable the user to:
- compare recipes more easily (i.e., which are drier, saltier, sweeter, etc.).
- spot a bad recipe, or predict its baked characteristics.
- alter or add a single-ingredient percentage without changing the other ingredients' percentages.
- measure uniformly an ingredient where the quantity per unit may vary (as with eggs).
- scale accurately and easily for different batch sizes.
Common formulations for bread include 100% flour, 60% water/liquid, 1% yeast, 2% salt and 1% oil, lard or butter.
In a recipe, the baker's percentage for water is referred to as the "hydration"; it is indicative of the stickiness of the dough and the "crumb" of the bread. Lower hydration rates (e.g., 50–57%) are typical for bagels and pretzels, and medium hydration levels (58–65%) are typical for breads and rolls. Higher hydration levels are used to produce more and larger holes, as is common in artisan breads such as baguettes or ciabatta. Doughs are also often classified by the terms stiff, firm, soft, and slack. Batters are more liquid doughs. Muffins are a type of drop batter while pancakes are a type of pour batter.
| dough / batter | typical hydration |
|---|---|
| Very stiff | < 57% |
| Stiff to firm | 57–65% |
| Soft | 65–70% |
| Soft to slack | 70–80% |
| Batters: drop | 95% |
| Batters: pour | 190% |
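The consistency table above translates directly into a simple classifier. A sketch in which the boundary handling is my own choice, since the table does not say which category the endpoints belong to:

```python
def dough_consistency(hydration_percent):
    """Classify a dough or batter by its hydration (water as a baker's percentage)."""
    if hydration_percent < 57:
        return "very stiff"
    elif hydration_percent <= 65:
        return "stiff to firm"
    elif hydration_percent <= 70:
        return "soft"
    elif hydration_percent <= 80:
        return "soft to slack"
    return "batter"

print(dough_consistency(52))  # bagel/pretzel range  -> 'very stiff'
print(dough_consistency(62))  # typical bread range  -> 'stiff to firm'
print(dough_consistency(75))  # ciabatta-style dough -> 'soft to slack'
```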
- † Except for creams and custards, when the formula includes milk, bakers almost always use high-heat NFDM (non-fat dry milk). In breads the usage is typically within a range of 5%-12%; fresh whole milk is 3.5% milk fat, 88% water, and 8.5% milk solids.
- †† A yeast flavor in the baked bread is generally not noticeable when the bakers' percent of added yeast is less than 2.5%.
- There is some ambiguity regarding the use of the phrase "formula percentage" in the literature. From the published date of 2004 to the date 2007, Hui's definitions have changed slightly. In 2004 "formula percent" was defined by "total weight of all ingredients"; however by the latter date's usage, the preference was to use the prefix "true" in the phrase "True formula percent (true percent)" when referring to "total weight of all ingredients." In 2005, Ramaswamy & Marcotte used the phrase "typical formula" in reference to a "baker's %" series of ingredients, then drew the semantic and mathematic distinctions that "actual percentage" was one based upon "total mass", which they labeled "% flour", "% water", etc. In 2010, Figoni said that "baker's percentage" was "sometimes called formula percentage...." In 1939, the phrase formula percentage was said to commonly refer to the sum of the particular percentages that would later be called bakers' percentages.
- Derived algebraically from Gisslen's formula.
- Wf denotes a flour weight. In method 1 the percentage was divided by 100%. Method 2 works well when using a calculator. When using a spreadsheet, formatting the cell as percentage versus number automatically handles the per-cent portion of the calculation.
- True percentage values have been rounded and are approximate.
- One gram per millilitre is very close to one avoirdupois ounce per fluid ounce: 1 g/mL ≈ 1.002 av oz/imp fl oz
This is not a numerical coincidence, but comes from the original definition of the kilogram as the mass of one litre of water, and the imperial gallon as the volume occupied by ten avoirdupois pounds of water. The slight difference is due to water at 4 °C (39 °F) being used for the kilogram, and at 62 °F (17 °C) for the imperial gallon. The U.S. fluid ounce is slightly larger.
- 1 g/mL ≈ 1.043 av oz/U.S. fl oz
- From cup (unit). Note the similarity of cup mL to water weight or mass as g. This density relationship can also be useful for determining unknown volumes.
- 1 g/mL is a good rough guide for water-based liquids such as milk (the density of milk is about 1.03–1.04 g/mL).
- The density of water ranges from about 0.96 to 1.00 g/mL dependent on temperature and pressure. The table above assumes a temperature range 0–30 °C (32–86 °F). The variation is too small to make any difference in cooking.
- Since an imperial cup of water weighs approximately 10 avoirdupois ounces and five imperial cups are approximately equal to six U.S. cups, one U.S. cup of water weighs approximately 8⅓ avoirdupois ounces.
- Mathematically converted from liquid-to-dry volumetric ratios on quick bread. 1 cup water weighs 237 g, 1 cup all purpose flour, 125 g, rounding applied. It is worth noting that if the liquid is whole milk of 3.25% milkfat, which is somewhat common in pancake recipes, the actual water content or hydration is about 88% of that value per the USDA National Nutrient database, thus pancake hydrations may be as low as, or lower than, 167% or thereabouts (190% * 88%).
- Paula I. Figoni (2010). How Baking Works: Exploring the Fundamentals of Baking Science. New York: Wiley. pp. 9–11. ISBN 0-470-39267-3. Retrieved 2010-12-06.
Baker's percentage—sometimes called formula percentage or indicated as "on flour weight basis"—is different from the percentages commonly taught in math classes.
- Griffin, Mary Annarose; Gisslen, Wayne (2005). Professional baking (4th ed.). New York: John Wiley. p. 10. ISBN 0-471-46427-9. Retrieved 2011-01-01.
- Corriher, Shirley (2008). BakeWise: The Hows and Whys of Successful Baking with Over 200 Magnificent Recipes. New York: Scribner. p. 32. ISBN 1-4165-6078-5. Retrieved 2010-12-09.
- Hui, Yiu H. (2006). Handbook of food science, technology, and engineering. Washington, DC: Taylor & Francis. p. 16-6. ISBN 0-8493-9849-5. Retrieved 2010-12-09.
- Laura Halpin Rinsky; Glenn Rinsky (2009). The pastry chef's companion: a comprehensive resource guide for the baking and pastry professional. Chichester: John Wiley & Sons. p. 19. ISBN 0-470-00955-1. Retrieved 2010-12-09.
- Daniel T. DiMuzio (2009). Bread Baking: An Artisan's Perspective. New York: Wiley. p. 31. ISBN 0-470-13882-3. Retrieved 2010-12-11.
- Cauvain, Stanley P. (2003). Bread making: improving quality. Boca Raton: CRC Press. p. 475. ISBN 1-85573-553-9. Retrieved 2010-12-08.
Generally the taste of yeast itself is not detectable in bread unless the amount of yeast used is greater than 2.5% based on the weight of flour.
- J. Scott Smith; Yiu H. Hui, eds. (2004). Food processing: principles and applications. Cambridge, MA: Blackwell Pub. p. 178. ISBN 0-8138-1942-3. Retrieved 2010-12-29.
Formula—term used instead of "recipe," by the baking industry; the weight of each ingredient is determined based on the weight of flour at 100%.
Formula percent—term used by the baking industry to describe the amount of each ingredient by weight for a "recipe" or formula compared to the weight of all ingredients.
- Yiu H. Hui, ed. (2007). Handbook of food products manufacturing. New York: Wiley. p. 302. ISBN 0-470-12524-1. Retrieved 2010-12-29.
True formula percent (true percent): Term used by the baking industry to describe the amount of each ingredient by weight for a "recipe" or formula compared with the total weight of all ingredients.
- Michele Marcotte; Hosahalli Ramaswamy (2005). Food Processing: Principles and Applications. Boca Raton: CRC. pp. 14–15. ISBN 1-58716-008-0. Retrieved 2010-12-25.
- Quartermaster Corps, ed. (1939). Army baker. Washington: U.S. Government Printing Office. pp. 38–41. Training Manual No. 2100-151. Retrieved 2012-02-07.
The sum of the percentages of ingredients used in any dough is commonly referred to as the formula percentage (168 percent in example in b above). The sum of the weights of ingredients used in a dough is commonly referred to as formula weight (462 pounds in example in c above).
- Gisslen, Wayne (2007). Professional cooking (Sixth ed.). New York: John Wiley. p. 893. ISBN 0-471-66376-X. Retrieved 2010-12-25.
- Gisslen, Wayne (2009). Professional baking. New York: John Wiley. p. 24. ISBN 0-471-78349-8.
- Stanley P Cauvain (2009). Stanley P. Cauvain; Linda S. Young, eds. The ICC Handbook of Cereals, Flour, Dough & Product Testing: Methods and Applications. BakeTran, High Wycombe, Buckinghamshire, UK. Lancaster, Pennsylvania: DEStech Publications, Inc. p. 69. ISBN 1-932078-99-1. Retrieved 2010-12-26.
Using Cereal Testing at Mill Intake" > "The Bulk Density of Grain (Hectolitre Mass, Bushel Mass, Test Weight, Specific Weight)
- Wihlfahrt, Julius Emil (1913) . A treatise on flour, yeast, fermentation and baking, together with recipes for bread and cakes. THE FLEISCHMANN CO. p. 25. Retrieved 2010-01-22.
- Rees, Nicole; Amendola, Joseph (2003). The baker's manual: 150 master formulas for baking. London: J. Wiley. p. 11. ISBN 0-471-40525-6. Retrieved 2010-12-06.
- "The Metric Kitchen". Retrieved 2010-11-30.
- "Intra-metric Conversions". Archived from the original (Doc) on 2006-09-16. Retrieved 2011-02-15.
- Google Calculator, retrieved 2010-12-18
- L. Fulton, E. Matthews, C. Davis: Average weight of a measured cup of various foods. Home Economics Research Report No. 41, Agricultural Research Service, United States Department of Agriculture, Washington, DC, 1977.
- "KitchenSavvy: Flour Power?". Retrieved 2010-12-09.
- Hosahalli Ramaswamy; Amalendu Chakraverty; Mujumdar, Arun S.; Vijaya Raghavan (2003). Handbook of postharvest technology: cereals, fruits, vegetables, tea, and spices. New York, N.Y: Marcel Dekker. p. 263. ISBN 0-8247-0514-9. Retrieved 2010-01-07.
- Van Loon, Dirk (1976). The family cow. Charlotte, Vt: Garden Way Pub. p. 152. ISBN 0-88266-066-7.
- Reinhart, Peter (2009). Peter Reinhart's Artisan Breads Every Day. Berkeley, Calif: Ten Speed Press. pp. 207–209. ISBN 1-58008-998-4. Retrieved 2010-12-09.
- "Bakers Percentages - Revised". Retrieved 2014-11-28.
- Paula I. Figoni (2010). How Baking Works: Exploring the Fundamentals of Baking Science. New York: Wiley. p. 360. ISBN 0-470-39267-3. Retrieved 2010-12-08.
- Schieberle, Peter (2009). Food Chemistry. Berlin: Springer. p. 716. ISBN 3-540-69933-3. Retrieved 2010-12-11.
- Hui, Yiu H. (2006). Handbook of food science, technology, and engineering. Washington, DC: Taylor & Francis. p. 148-26. ISBN 0-8493-9849-5. Retrieved 2010-12-08.
- Mark Keeney; Jenness, Robert; Marth, Elmer H.; Noble P. Wong (1988). Fundamentals of Dairy Chemistry. Berlin: Springer. p. 760. ISBN 0-8342-1360-5. Retrieved 2010-12-08.
- Daniel T. DiMuzio (2009). Bread Baking: An Artisan's Perspective. New York: Wiley. p. 24. ISBN 0-470-13882-3. Retrieved 2010-12-11.
- Paula I. Figoni (2010). How Baking Works: Exploring the Fundamentals of Baking Science. New York: Wiley. p. 150. ISBN 0-470-39267-3. Retrieved 2010-12-11. |
Nucleic acid structure
Nucleic acid structure refers to the structure of nucleic acids such as DNA and RNA. Chemically speaking, DNA and RNA are very similar. Nucleic acid structure is often divided into four different levels: primary, secondary, tertiary and quaternary.
Primary structure consists of a linear sequence of nucleotides that are linked together by phosphodiester bonds. It is this linear sequence of nucleotides that makes up the primary structure of DNA or RNA. Nucleotides consist of 3 components:
- Nitrogenous base
- A 5-carbon sugar, called deoxyribose (in DNA) or ribose (in RNA).
- One or more phosphate groups.
The nitrogen bases adenine and guanine are purine in structure and form a glycosidic bond between their 9 nitrogen and the 1' -OH group of the deoxyribose. Cytosine, thymine and uracil are pyrimidines, hence the glycosidic bond forms between their 1 nitrogen and the 1' -OH of the deoxyribose. For both the purine and pyrimidine bases, the phosphate group forms a bond with the deoxyribose sugar through an ester bond between one of its negatively charged oxygen groups and the 5' -OH of the sugar. The polarity in DNA and RNA is derived from the oxygen and nitrogen atoms in the backbone. Nucleic acids are formed when nucleotides come together through phosphodiester linkages between the 5' and 3' carbon atoms. A nucleic acid sequence is the order of nucleotides within a DNA (GACT) or RNA (GACU) molecule, denoted by a series of letters. Sequences are presented from the 5' to 3' end and determine the covalent structure of the entire molecule. Sequences can be complementary to another sequence, in that the base at each position is complementary, as well as in the reverse order. An example of a complementary sequence to AGCT is TCGA. DNA is double-stranded, containing both a sense strand and an antisense strand. Therefore, the antisense strand is the complement of the sense strand.
Complexes with alkali metal ions
There are three potential metal binding groups on nucleic acids: phosphate, sugar and base moieties. Solid-state structures of complexes with alkali metal ions have been reviewed.
Secondary structure is the set of interactions between bases, i.e., which parts of strands are bound to each other. In the DNA double helix, the two strands of DNA are held together by hydrogen bonds. The nucleotides on one strand base-pair with the nucleotides on the other strand. The secondary structure is responsible for the shape that the nucleic acid assumes. The bases in the DNA are classified as purines and pyrimidines. The purines are adenine and guanine. Purines consist of a double ring structure, a six membered and a five membered ring containing nitrogen. The pyrimidines are cytosine and thymine. They have a single-ring structure, a six membered ring containing nitrogen. A purine base always pairs with a pyrimidine base (guanine (G) pairs with cytosine (C) and adenine (A) pairs with thymine (T) or uracil (U)). DNA's secondary structure is predominantly determined by base-pairing of the two polynucleotide strands wrapped around each other to form a double helix. Although the two strands are aligned by hydrogen bonds in base pairs, the stronger forces holding the two strands together are stacking interactions between the bases. These stacking interactions are stabilized by van der Waals forces and hydrophobic interactions, and show a large amount of local structural variability. There are also two grooves in the double helix, called the major groove and the minor groove based on their relative size.
The secondary structure of RNA consists of a single polynucleotide. Base pairing in RNA occurs when RNA folds between complementary regions. Both single- and double-stranded regions are often found in RNA molecules. The antiparallel strands form a helical shape. The four basic elements in the secondary structure of RNA are helices, loops, bulges, and junctions. The stem-loop or hairpin loop is the most common element of RNA secondary structure. A stem-loop is formed when an RNA chain folds back on itself to form a double helical tract called the stem; the unpaired nucleotides form a single-stranded region called the loop. The secondary structure of RNA can be predicted from experimental data on the secondary structure elements: helices, loops and bulges. Bulges and internal loops are formed by separation of the double helical tract on either one strand (bulge) or on both strands (internal loops) by unpaired nucleotides. A tetraloop is a hairpin RNA structure whose loop contains four bases. There are three common families of tetraloop in ribosomal RNA: UNCG, GNRA, and CUUG (N is one of the four nucleotides and R is a purine). UNCG is the most stable tetraloop. A pseudoknot is an RNA secondary structure first identified in turnip yellow mosaic virus. Pseudoknots are formed when nucleotides from the hairpin loop pair with a single-stranded region outside of the hairpin to form a helical segment. H-type fold pseudoknots are best characterized. In the H-type fold, nucleotides in the hairpin loop pair with bases outside the hairpin stem, forming a second stem and loop. This causes formation of pseudoknots with two stems and two loops. Pseudoknots are functional elements in RNA structure, having diverse functions, and are found in most classes of RNA. The DotKnot-PW method is used for comparative pseudoknot prediction. The main point of the DotKnot-PW method is scoring the similarities found in stems, secondary elements and H-type pseudoknots.
Tertiary structure refers to the locations of the atoms in three-dimensional space, taking into consideration geometrical and steric constraints. It is a higher order than the secondary structure, in which large-scale folding in a linear polymer occurs and the entire chain is folded into a specific 3-dimensional shape. There are 4 areas in which the structural forms of DNA can differ.
- Handedness – right or left
- Length of the helix turn
- Number of base pairs per turn
- Difference in size between the major and minor grooves
B-DNA is the most common form of DNA in vivo and is a narrower, more elongated helix than A-DNA. Its wide major groove makes it more accessible to proteins. On the other hand, it has a narrow minor groove. B-DNA's favored conformations occur at high water concentrations; the hydration of the minor groove appears to favor B-DNA. B-DNA base pairs are nearly perpendicular to the helix axis. The sugar pucker, which determines whether the helix exists in the A-form or the B-form, occurs at the C2'-endo position in B-DNA.
A-DNA, is a form of the DNA duplex observed under dehydrating conditions. It is shorter and wider than B-DNA. RNA adopts this double helical form, and RNA-DNA duplexes are mostly A-form, but B-form RNA-DNA duplexes have been observed. In localized single strand dinucleotide contexts, RNA can also adopt the B-form without pairing to DNA. A-DNA has a deep, narrow major groove which does not make it easily accessible to proteins. On the other hand, its wide, shallow minor groove makes it accessible to proteins but with lower information content than the major groove. Its favored conformation is at low water concentrations. A-DNAs base pairs are tilted relative to the helix axis, and are displaced from the axis. The sugar pucker occurs at the C3'-endo and in RNA 2'-OH inhibits C2'-endo conformation. Long considered little more than a laboratory artifice, A-DNA is now known to have several biological functions.
Z-DNA is a relatively rare left-handed double-helix. Given the proper sequence and superhelical tension, it can be formed in vivo but its function is unclear. It has a more narrow, more elongated helix than A or B. Z-DNA's major groove is not really a groove, and it has a narrow minor groove. The most favored conformation occurs when there are high salt concentrations. There are some base substitutions but they require an alternating purine-pyrimidine sequence. The N2-amino of G H-bonds to 5' PO, which explains the slow exchange of protons and the need for the G purine. Z-DNA base pairs are nearly perpendicular to the helix axis. Z-DNA does not contain single base-pairs but rather a GpC repeat with P-P distances varying for GpC and CpG. On the GpC stack there is good base overlap, whereas on the CpG stack there is less overlap. Z-DNA's zigzag backbone is due to the C sugar conformation compensating for G glycosidic bond conformation. The conformation of G is syn, C2'-endo and for C it is anti, C3'-endo.
A linear DNA molecule having free ends can rotate, to adjust to changes of various dynamic processes in the cell, by changing how many times the two chains of its double helix twist around each other. Some DNA molecules are circular and are topologically constrained. More recently, circular RNA has also been described as a natural, pervasive class of nucleic acids, expressed in many organisms (see circRNA).
A covalently closed circular DNA (also known as cccDNA) is topologically constrained because the number of times its chains coil around one another cannot change. Such cccDNA can be supercoiled, which is a tertiary structure of DNA. Supercoiling is characterized by the linking number, twist and writhe. The linking number (Lk) of a circular DNA is defined as the number of times one strand would have to pass through the other strand to completely separate the two strands; it can only be changed by breaking a covalent bond in one of the two strands. Always an integer, the linking number of a cccDNA is the sum of two components: twist (Tw) and writhe (Wr).
Twist is the number of times the two strands of DNA are twisted around each other, while writhe is the number of times the DNA helix crosses over itself. DNA in cells is negatively supercoiled and has the tendency to unwind; strand separation is therefore easier in negatively supercoiled DNA than in relaxed DNA. The two forms of supercoiled DNA are solenoidal and plectonemic. The plectonemic supercoil is found in prokaryotes, while solenoidal supercoiling is mostly seen in eukaryotes.
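A minimal worked example of this bookkeeping (the plasmid size and the number of removed turns are hypothetical, chosen only for round numbers; Lk0 denotes the linking number of the relaxed circle):

```latex
% Linking number is partitioned between twist and writhe:
\[ Lk = Tw + Wr \]
% Example: a relaxed 2100-bp circular DNA with about 10.5 bp per helical turn has
%   Lk_0 = 2100 / 10.5 = 200, so Tw = 200 and Wr = 0.
% If four turns are removed without changing the preferred twist (Lk = 196),
% the deficit appears as writhe, i.e. negative supercoiling:
\[ \Delta Lk = Lk - Lk_0 = -4 \quad\Rightarrow\quad Wr \approx -4 \]
```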
The quaternary structure of nucleic acids is similar to protein quaternary structure. Although some of the concepts are not exactly the same, quaternary structure refers to a higher level of organization of nucleic acids, and also to the interactions of nucleic acids with other molecules. The most commonly seen form of higher-level organization is chromatin, in which DNA interacts with the small proteins called histones. The term also covers the interactions between separate RNA units in the ribosome or spliceosome.
- Biomolecular structure
- Crosslinking of DNA
- DNA nanotechnology
- DNA supercoil
- Gene structure
- Non-helical models of DNA structure
- Nucleic acid design
- Nucleic acid double helix
- Nucleic acid structure determination (experimental)
- Nucleic acid structure prediction (computational)
- Nucleic acid thermodynamics
- Protein structure
- Krieger M, Scott MP, Matsudaira PT, Lodish HF, Darnell JE, Lawrence Z, Kaiser C, Berk A (2004). "Section 4.1: Structure of Nucleic Acids". Molecular cell biology. New York: W.H. Freeman and CO. ISBN 978-0-7167-4366-8.
- "Structure of Nucleic Acids". SparkNotes.
- Anthony-Cahill SJ, Mathews CK, van Holde KE, Appling DR (2012). Biochemistry (4th Edition). Englewood Cliffs, N.J: Prentice Hall. ISBN 978-0-13-800464-4.
- Alberts B, Johnson A, Lewis J, Raff M, Roberts K, Walter P (2002). Molecular Biology of the Cell (4th ed.). New York NY: Garland Science. ISBN 978-0-8153-3218-3.
- Mao C (December 2004). "The emergence of complexity: lessons from DNA". PLoS Biology. 2 (12): e431. doi:10.1371/journal.pbio.0020431. PMC 535573. PMID 15597116.
- Aoki, Katsuyuki; Murayama, Kazutaka; Hu, Ning-Hai (2016). "Chapter 3, Section 3. Nucleic Acid Constituent Complexes". In Sigel, Astrid; Sigel, Helmut; Sigel, Roland K.O. (eds.). The Alkali Metal Ions: Their Role in Life. Metal Ions in Life Sciences. 16. Springer. pp. 43–66. doi:10.1007/978-3-319-21756-7_3. ISBN 978-3-319-21755-0. PMID 26860299.
- Sedova A, Banavali NK (2017). "Geometric Patterns for Neighboring Bases Near the Stacked State in Nucleic Acid Strands". Biochemistry. 56 (10): 1426–1443. doi:10.1021/acs.biochem.6b01101. PMID 28187685.
- Tinoco I, Bustamante C (October 1999). "How RNA folds". Journal of Molecular Biology. 293 (2): 271–81. doi:10.1006/jmbi.1999.3001. PMID 10550208.
- "RNA structure (Molecular Biology)".
- Hollyfield JG, Besharse JC, Rayborn ME (December 1976). "The effect of light on the quantity of phagosomes in the pigment epithelium". Experimental Eye Research. 23 (6): 623–35. doi:10.1016/0014-4835(76)90221-9. PMID 1087245.
- Rietveld K, Van Poelgeest R, Pleij CW, Van Boom JH, Bosch L (March 1982). "The tRNA-like structure at the 3' terminus of turnip yellow mosaic virus RNA. Differences and similarities with canonical tRNA". Nucleic Acids Research. 10 (6): 1929–46. doi:10.1093/nar/10.6.1929. PMC 320581. PMID 7079175.
- Staple DW, Butcher SE (June 2005). "Pseudoknots: RNA structures with diverse functions". PLoS Biology. 3 (6): e213. doi:10.1371/journal.pbio.0030213. PMC 1149493. PMID 15941360.
- Sperschneider J, Datta A, Wise MJ (December 2012). "Predicting pseudoknotted structures across two RNA sequences". Bioinformatics. 28 (23): 3058–65. doi:10.1093/bioinformatics/bts575. PMC 3516145. PMID 23044552.
- Dickerson RE, Drew HR, Conner BN, Wing RM, Fratini AV, Kopka ML (April 1982). "The anatomy of A-, B-, and Z-DNA". Science. 216 (4545): 475–85. doi:10.1126/science.7071593. PMID 7071593.
- Chen X; Ramakrishnan B; Sundaralingam M (1995). "Crystal structures of B-form DNA-RNA chimers complexed with distamycin". Nature Structural Biology. 2 (9): 733–735. doi:10.1038/nsb0995-733.
- Sedova A, Banavali NK (2016). "RNA approaches the B-form in stacked single strand dinucleotide contexts". Biopolymers. 105 (2): 65–82. doi:10.1002/bip.22750. PMID 26443416.
- Mirkin SM (2001). DNA Topology: Fundamentals. Encyclopedia of Life Sciences. doi:10.1038/npg.els.0001038. ISBN 978-0470016176.
- "Strucual Biochemistry/Nucleic Acid/DNA/DNA Structure". Retrieved 11 December 2012. |
In 1896, Becquerel, a French physicist, discovered that crystals of uranium salts emitted penetrating rays, similar to X-rays, which could fog photographic plates. Two years later Pierre and Marie Curie discovered other elements with this property: polonium and radium. The emission became known as radioactivity.
Protons and neutrons are held together in the nucleus of an atom by the strong force. This force acts over a very short distance of about 1 fm (10⁻¹⁵ m), and over this short distance it can overcome the electromagnetic repulsion between the positively charged protons. Nuclei with radii within the range of the strong force are stable. As the atomic number increases, the radius of the nucleus also increases and the element becomes unstable. This instability manifests itself as the emission of particles or energy from the nucleus. All elements with atomic number greater than 82 are radioactive.
The decay constant is a measure of how quickly, on average, a radioactive nucleus decays. Since radioactive decay is a random process, the decay of a single nucleus may happen at any time, but for a large number of undecayed nuclei the average decay rate is given by the decay constant, λ, which has units of [s⁻¹], [h⁻¹] or [year⁻¹].
The activity of a radioactive material is the number of decays per unit time. It is determined by two factors: the number of undecayed nuclei present, N, and the decay constant, λ, so that A = λN.
The activity, A, is measured in becquerels [Bq], equivalent to [s⁻¹].
The corrected activity is the activity after subtracting the background radiation.
Consider a block of radioactive material in which the number of undecayed nuclei is initially N0. On the basis of our reasoning above, the number that decay in a brief period of time depends on the overall number of nuclei, N, and on the length of that period: the more nuclei there are, and the longer the time period, the more nuclei will decay. Let us denote the number that decay by dN and the small time interval by dt. The number of radioactive nuclei that decay during the interval from t to t + dt must therefore be proportional to N and to dt; in symbols, −dN ∝ N dt. Turning the proportionality into an equality, we can write −dN = λN dt.
Dividing across by N, we can rewrite this equation as −dN/N = λ dt.
So this equation describes the situation for any brief time interval dt. To find out what happens over all periods of time we simply add up what happens in each brief interval; in other words, we integrate the above equation. Expressing this more formally, for the period of time from t = 0 to any later time t, the number of radioactive nuclei decreases from N0 to Nt, so that ln(Nt/N0) = −λt, and hence Nt = N0 exp(−λt).
The final expression, Nt = N0 exp(−λt), is known as the radioactive decay law. It has the form of an exponential decay curve like the one we saw in the discharge of a capacitor.
The Flash animation below simulates the random decay of a small number of radioactive nuclei. Each nucleus has a constant probability of decaying. During the simulation, a uniformly distributed random number is generated for each undecayed nucleus; if the number is less than the decay probability, the nucleus decays.
Initially, the large number of undecayed nuclei generates a large number of decays. As the number of undecayed nuclei decreases, the number of decays also falls.
The red curve shows the theoretical exponential decay curve for the array of nuclei, plotting the number of undecayed nuclei against time. The blue curve is the simulated decay. Due to the statistical nature of the process there are small variations between the curves, but overall the agreement is good.
The graph also shows the half-life of the decay process. In each half-life the number of remaining nuclei falls by half: from an initial number N0 to N0/2 in the first half-life, then to N0/4 in the second, N0/8 in the third, and so on.
Run the simulation several times to observe the statistical decay process.
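If the animation is unavailable, the same experiment can be reproduced with a short Python sketch: each undecayed nucleus is tested against a fixed decay probability per time step, and the simulated count is compared with the analytic law Nt = N0·exp(−λt). The nucleus count, decay constant and step size are arbitrary choices for illustration.

```python
import math
import random

N0 = 1000        # initial number of undecayed nuclei
lam = 0.1        # decay constant, per time step (arbitrary)
steps = 60

p_decay = 1 - math.exp(-lam)   # probability that a nucleus decays in one step
undecayed = N0

for t in range(steps + 1):
    if t % 10 == 0:
        theory = N0 * math.exp(-lam * t)
        print(f"t={t:3d}  simulated={undecayed:5d}  theory={theory:7.1f}")
    # decide the fate of each remaining nucleus for the next step
    undecayed = sum(1 for _ in range(undecayed) if random.random() > p_decay)

print("half-life t1/2 = ln(2)/lambda =", math.log(2) / lam)   # about 6.93 steps here
```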
You can simulate the decay process the next time you have a bag of M&Ms. Empty the packet onto a flat surface. The M&Ms that land with the 'M' visible represent the nuclei that have decayed; since each sweet either shows the M or not, the decay probability per throw is 0.5. Record the number of decays (you can eat the decayed ones), then repeat the process with the remaining M&Ms until none remain.
Starting with 50 M&Ms, the decay sequence is, on average, roughly 50, 25, 12, 6, 3, 2, 1, 0, halving with each throw.
The decay law leads to an exponential decay that reaches zero only after an infinite amount of time. A useful measure of the rate at which the material decays is the half-life: the time taken for the number of undecayed nuclei to fall to half of its initial value.
Each successive half-life reduces the number of undecayed nuclei by half again, to N0/4, N0/8, and so on, as illustrated by the decay curve above.
Mathematically, the half-life can be calculated by setting Nt = N0/2 in the radioactive decay equation. Then N0/2 = N0 exp(−λt1/2); taking logs and rearranging for t1/2 gives t1/2 = ln(2)/λ.
There are broadly three types of radioactive emission: alpha (α), beta (β) and gamma (γ) radiation.
Alpha radiation is the emission of two protons and two neutrons from the nucleus, which is the same as a helium nucleus. Because of its heavy mass and charge, α radiation is the least penetrating, being stopped by a sheet of paper; it is, however, the most ionising form of radiation, knocking electrons from their shells in nearby atoms. The danger of alpha radiation comes from it being ingested into the body. When an alpha particle is emitted, the proton number decreases by 2 and the mass number decreases by 4.
$^{A}_{Z}X \rightarrow \; ^{A-4}_{Z-2}Y + \; ^{4}_{2}\alpha$, where Y is the daughter nuclide.
β radiation is the emission of an electron (or a positron) from the nucleus. Since the nucleus does not contain any electrons, either a proton or a neutron must transform, and which one transforms determines which of the two kinds of beta radiation is produced.
Beta radiation therefore occurs in two forms: β⁺ and β⁻.
β⁺ decay: $^{A}_{Z}X \rightarrow \; ^{A}_{Z-1}Y + \; ^{0}_{+1}e^{+} + \; ^{0}_{0}\nu$
β⁻ decay: $^{A}_{Z}X \rightarrow \; ^{A}_{Z+1}Y + \; ^{0}_{-1}e^{-} + \; ^{0}_{0}\bar{\nu}$
The emission of the positron or electron is accompanied by the emission of a ghostly particle, a neutrino (ν) or an antineutrino (ν̄) respectively. Particle tracks photographed in a cloud chamber showed that the energy of the emitted particles appeared not to be conserved. Rather than abandon the principle of the conservation of energy, W. Pauli postulated a new particle in 1930. The lack of interaction between matter and neutrinos meant that it was 26 years later, in 1956, that Reines and Cowan first detected the particle, in the form of the antineutrino.
Beta radiation is more penetrating than α radiation, but it can be stopped by a few millimetres of aluminium. Once again, the main danger of β radiation arises when it is ingested into the body.
After the emission of an α particle or β decay, the nucleus is often left in an excited state, and it releases its excess energy in the form of a γ-ray photon. γ-rays are high-energy electromagnetic waves. They are highly penetrating and are not stopped outright; instead, the probability of absorption is proportional to the thickness of the absorbing medium, leading to an exponential decrease in the number of γ-ray photons passing through.
A box plot is a visual representation of groups of numerical data through their quartiles. It is also used to detect outliers in a data set. A box plot captures the summary of the data efficiently with a simple box and whiskers, and it allows us to compare easily across groups. It summarizes a sample using the 25th, 50th and 75th percentiles, which are also known as the lower quartile, median and upper quartile.
A box plot consists of five things:
- Minimum
- First Quartile or 25%
- Median (Second Quartile) or 50%
- Third Quartile or 75%
- Maximum
Draw the box plot with Pandas:
One way to plot a box plot from a pandas DataFrame is to use the boxplot() function that is part of the pandas library.
(Example plots: box plots grouped by day and by size, as sketched below.)
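A minimal sketch of the pandas approach, using seaborn only to fetch the built-in "tips" dataset that the rest of the article refers to (any DataFrame with a numeric column and a grouping column works the same way):

```python
import seaborn as sns
import matplotlib.pyplot as plt

# Load the example "tips" dataset into a pandas DataFrame.
tips = sns.load_dataset("tips")

# pandas' own boxplot(): one box of total_bill for each value of the grouping column.
tips.boxplot(column="total_bill", by="day")
plt.show()

# Grouping by party size instead:
tips.boxplot(column="total_bill", by="size")
plt.show()
```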
Draw the box plot using the seaborn library:
seaborn.boxplot(x=None, y=None, hue=None, data=None, order=None, hue_order=None, orient=None, color=None, palette=None, saturation=0.75, width=0.8, dodge=True, fliersize=5, linewidth=None, whis=1.5, notch=False, ax=None, **kwargs)
x = feature of dataset
y = feature of dataset
hue = feature of dataset
data = DataFrame or full dataset
color = color name
Let's see how to create the box plot using the seaborn library.
Information about “tips” dataset.
(Example: box plot of the tips data grouped by day; see the sketch below.)
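A minimal seaborn sketch matching the description that follows (again using the built-in "tips" dataset; day and total_bill are column names of that dataset):

```python
import seaborn as sns
import matplotlib.pyplot as plt

tips = sns.load_dataset("tips")

# One box per day; outliers are drawn as individual diamond-shaped points.
sns.boxplot(x="day", y="total_bill", data=tips)
plt.show()
```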
- The bottom black horizontal line of the blue box plot is the minimum value
- The first black horizontal line of the rectangle of the blue box plot is the first quartile (25%)
- The second black horizontal line of the rectangle is the second quartile (50%), i.e. the median
- The third black horizontal line of the rectangle is the third quartile (75%)
- The top black horizontal line of the blue box plot is the maximum value
- The small diamond shapes of the blue box plot are outliers or erroneous data
Mathematical inquiry processes: Interpret; explore; test particular cases. Conceptual field of inquiry: Volume of a pyramid; circumference and perimeter.
The prompt was inspired by the dimensions of the Great Pyramid of Giza, which closely approximate to the condition in the statement. The Ancient Egyptians are believed to have been attempting to 'square the circle' in their construction of the pyramid (see box). There are a number of lines of inquiry that could develop from the prompt:
Define the geometrical terms and understand the meaning of the prompt. The illustration below might help to make it clearer. The semi-circle is folded up in the second picture to show the height of the pyramid equals the radius of the circle.
Draw a net and construct a right square-based pyramid. Work out the perpendicular height of the pyramid using Pythagoras' Theorem and then calculate the circumference of the circle. Compare the perimeter of the square to the circumference of the circle. How closely does the construction satisfy the condition in the prompt? How should we change the dimensions of the net to get closer to satisfying the condition?
Select a side length of the base. Work out the circumference of the circle and the height of the pyramid that satisfy the condition in the prompt as closely as possible.
Calculate the volume of the pyramid and, assuming students selected different side lengths, compare it to the volumes of the mathematically similar pyramids created by other students in the class. What are the scale factors of enlargement for the length and volume in each case? Make a generalisation about the scale factors.
Work out the angles of elevation, the length of the diagonals across the base and the slope heights using Pythagoras' Theorem and trigonometry.
The Great Pyramid of Giza
The Great Pyramid of Giza was completed in 2560 BC. The architects are thought to have based its design on the condition in the prompt: they were attempting to 'square the circle' by finding a square and a circle with equal perimeters. At the end of the nineteenth century, 'squaring the circle', which usually refers to the areas of the shapes, was proved to be impossible because of the transcendental nature of π.
Figure 1 shows a square (side length 1) whose perimeter is 4. The radius of the circle is the perpendicular height of the isosceles triangle. The angles at the base of the triangle are 51.85°, which is the slope angle of each face of the Great Pyramid of Giza. To find the height of the triangle (h):
h = 0.5 × tan(51.85°) = 0.6365 (accurate to 4 decimal places).
As the radius is 0.6365, the circumference of the circle is 2πr = 2π(0.6365) = 3.9992 (also accurate to 4 decimal places). The circumference is, therefore, only 0.02% shorter than the perimeter of the square.
The original dimensions of the Great Pyramid are thought to have been:
base length 230.4 m, height 146.5 m
Using these measurements, the perimeter of the base equals 921.6 m, and the circumference of the circle whose radius is the height of the pyramid is 920.5 m (accurate to one decimal place). The circumference is, therefore, 0.12% shorter than the perimeter. A height of 146.66 m, just 16 cm more, would have given a circumference within 0.012% of the perimeter.
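The percentages quoted above are easy to check with a few lines of Python (a quick verification sketch, not part of the original inquiry prompt):

```python
import math

base, height = 230.4, 146.5            # estimated original dimensions in metres

perimeter = 4 * base                    # 921.6 m
circumference = 2 * math.pi * height    # circle whose radius is the pyramid's height

shortfall = (perimeter - circumference) / perimeter * 100
print(f"circumference = {circumference:.1f} m, {shortfall:.2f}% shorter than the perimeter")
# -> circumference = 920.5 m, 0.12% shorter than the perimeter

# Height that would make the two lengths match exactly:
print(f"ideal height = {perimeter / (2 * math.pi):.2f} m")   # about 146.68 m
```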
More information about the dimensions of the Great Pyramid of Giza, including their connection to the golden ratio, can be found here.
The tree line is the edge of the habitat at which trees are capable of growing. It is found at high elevations and in frigid environments. Beyond the tree line, trees cannot tolerate the environmental conditions (usually cold temperatures or lack of moisture). The tree line should not be confused with a lower timberline or forest line, which is the line where trees form a forest with a closed canopy.
The tree line, like many other natural lines (lake boundaries, for example), appears well-defined from a distance, but upon sufficiently close inspection, it is a gradual transition in most places. Trees grow shorter towards the inhospitable climate until they simply stop growing.
- 1 Types
- 2 Typical vegetation
- 3 Worldwide distribution
- 4 Long term monitoring of alpine treelines
- 5 See also
- 6 References
- 7 Further reading
An alpine tree line is the highest elevation that sustains trees; higher up, it is too cold, or the snow cover lasts for too much of the year, to sustain trees. The climate above the tree line of mountains is called an alpine climate, and the terrain can be described as alpine tundra. In the Northern Hemisphere, treelines on north-facing slopes are lower than on south-facing slopes, because the increased shade on north-facing slopes means the snowpack takes longer to melt, which shortens the growing season for trees. In the Southern Hemisphere, the south-facing slopes have the shorter growing season.
The alpine tree line boundary is seldom abrupt: it usually forms a transition zone between closed forest below and treeless alpine tundra above. This zone of transition occurs "near the top of the tallest peaks in the northeastern United States, high up on the giant volcanoes in central Mexico, and on mountains in each of the 11 western states and throughout much of Canada and Alaska". Environmentally dwarfed shrubs (krummholz) commonly form the upper limit.
The decrease in air temperature due to increasing elevation causes the alpine climate. The rate of decrease can vary in different mountain chains, from 3.5 °F (1.9 °C) per 1,000 feet (300 m) of elevation gain in the dry mountains of the Western United States, to 1.4 °F (0.78 °C) per 1,000 feet (300 m) in the moister mountains of the Eastern United States. Skin effects and topography can create microclimates that alter the general cooling trend.
Compared with arctic timberlines, alpine timberlines may receive fewer than half of the number of degree days (>10 °C) based on air temperature because solar radiation intensities are greater at alpine than at arctic timberlines. However, the number of degree days calculated from leaf temperatures may be very similar in the two kinds of timberlines.
Summer warmth generally sets the limit to which tree growth can occur, for while timberline conifers are very frost-hardy during most of the year, they become sensitive to just 1 or 2 degrees of frost in mid-summer. A series of warm summers in the 1940s seems to have permitted the establishment of “significant numbers” of spruce seedlings above the previous treeline in the hills near Fairbanks, Alaska. Survival depends on a sufficiency of new growth to support the tree. The windiness of high-elevation sites is also a potent determinant of the distribution of tree growth. Wind can mechanically damage tree tissues directly, including blasting with wind-borne particles, and may also contribute to the desiccation of foliage, especially of shoots that project above snow cover.
At the alpine timberline, tree growth is inhibited when excessive snow lingers and shortens the growing season to the point where new growth would not have time to harden before the onset of fall frost. Moderate snowpack, however, may promote tree growth by insulating the trees from extreme cold during the winter, curtailing water loss, and prolonging a supply of moisture through the early part of the growing season. However, snow accumulation in sheltered gullies in the Selkirk Mountains of southeastern British Columbia causes timberline to be 400 metres (1,300 ft) lower than on exposed intervening shoulders.
In a desert, the tree line marks the driest places where trees can grow; drier desert areas having insufficient rainfall to sustain them. These tend to be called the "lower" tree line and occur below about 5,000 ft (1,500 m) elevation in the Desert Southwestern United States. The desert treeline tends to be higher on pole-facing slopes than equator-facing slopes, because the increased shade on the former keeps those cooler and prevents moisture from evaporating as quickly, giving trees a longer growing season and more access to water.
In some mountainous areas, higher elevations above the condensation line or on equator-facing and leeward slopes can result in low rainfall and increased exposure to solar radiation. This dries out the soil, resulting in a localized arid environment unsuitable for trees. Many south-facing ridges of the mountains of the Western U.S. have a lower treeline than the northern faces because of increased sun exposure and aridity.
Double tree line
Different tree species have different tolerances to drought and cold. Mountain ranges isolated by oceans or deserts may have restricted repertoires of tree species with gaps that are above the alpine tree line for some species yet below the desert tree line for others. For example, several mountain ranges in the Great Basin of North America have lower belts of Pinyon Pines and Junipers separated by intermediate brushy but treeless zones from upper belts of Limber and Bristlecone Pines.
On coasts and isolated mountains the tree line is often much lower than in corresponding altitudes inland and in larger, more complex mountain systems, because strong winds reduce tree growth. In addition the lack of suitable soil, such as along talus slopes or exposed rock formations, prevents trees from gaining an adequate foothold and exposes them to drought and sun.
The arctic tree line is the northernmost latitude in the Northern Hemisphere where trees can grow; farther north, it is too cold all year round to sustain trees. Extremely cold temperatures, especially when prolonged, can result in freezing of the internal sap of trees, killing them. In addition, permafrost in the soil can prevent trees from getting their roots deep enough for the necessary structural support.
Unlike alpine timberlines, the northern timberline occurs at low elevations. The arctic forest–tundra transition zone in northwestern Canada varies in width, perhaps averaging 145 kilometres (90 mi) and widening markedly from west to east, in contrast with the telescoped alpine timberlines. North of the arctic timberline lies the low-growing tundra, and southwards lies the boreal forest.
Two zones can be distinguished in the arctic timberline: a forest–tundra zone of scattered patches of krummholz or stunted trees, with larger trees along rivers and on sheltered sites set in a matrix of tundra; and “open boreal forest” or “lichen woodland”, consisting of open groves of erect trees underlain by carpet of Cladonia spp. lichens. The proportion of trees to lichen mat increases southwards towards the “forest line”, where trees cover 50 per cent or more of the landscape.
A southern treeline exists in the New Zealand Subantarctic Islands and the Australian Macquarie Island, with places where mean annual temperatures above 5 °C (41 °F) support trees and woody plants, and those below 5 °C (41 °F) don't. Another treeline exists in the southwestern most parts of the Magellanic subpolar forests ecoregion, where the forest merges into the subantarctic tundra (termed Magellanic moorland or Magellanic tundra). For example, the northern half of Hoste Island has a Nothofagus antarctica forest but the southern part is tundra.
Other tree lines
Several other reasons may cause the environment to be too extreme for trees to grow. This can include geothermal exposure associated with hot springs or volcanoes, such as at Yellowstone; high soil acidity near bogs; high salinity associated with playas or salt lakes; or ground that is saturated with groundwater that excludes oxygen from the soil, which most tree roots need for growth. The margins of muskegs and bogs are common examples of these types of open area. However, no such line exists for swamps, where trees, such as Bald cypress and the many mangrove species, have adapted to growing in permanently waterlogged soil. In some colder parts of the world there are tree lines around swamps, where there are no local tree species that can develop. There are also man-made pollution tree lines in weather-exposed areas, where new tree lines have developed because of the increased stress of pollution. Examples are found around Nikel in Russia and previously in the Erzgebirge.
Some typical Arctic and alpine tree line tree species (note the predominance of conifers):
- Dahurian Larch (Larix gmelinii)
- Macedonian Pine (Pinus peuce)
- Swiss Pine (Pinus cembra)
- Mountain Pine (Pinus mugo)
- Arctic White Birch (Betula pubescens subsp. tortuosa)
- Rowan (Sorbus aucuparia)
- Subalpine fir (Abies lasiocarpa)
- Subalpine Larch (Larix lyallii)
- Engelmann Spruce (Picea engelmannii)
- Whitebark Pine (Pinus albicaulis)
- Great Basin Bristlecone Pine (Pinus longaeva)
- Rocky Mountains Bristlecone Pine (Pinus aristata)
- Foxtail Pine (Pinus balfouriana)
- Limber Pine (Pinus flexilis)
- Potosi Pinyon (Pinus culminicola)
- Black spruce (Picea mariana)
- White spruce (Picea glauca)
- Tamarack Larch (Larix laricina)
- Hartweg's Pine (Pinus hartwegii)
- Antarctic Beech (Nothofagus antarctica)
- Lenga Beech (Nothofagus pumilio)
- Alder (Alnus acuminata)
- Pino del cerro (Podocarpus parlatorei)
- Polylepis (Polylepis tarapacana)
- Snow Gum (Eucalyptus pauciflora)
Alpine tree lines
The alpine tree line at a location is dependent on local variables, such as aspect of slope, rain shadow and proximity to either geographical pole. In addition, in some tropical or island localities, the lack of biogeographical access to species that have evolved in a subalpine environment can result in lower tree lines than one might expect by climate alone.
Averaging over many locations and local microclimates, the treeline rises 75 metres (246 ft) when moving 1 degree south from 70 to 50°N, and 130 metres (430 ft) per degree from 50 to 30°N. Between 30°N and 20°S, the treeline is roughly constant, between 3,500 and 4,000 metres (11,500 and 13,100 ft).
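Those averaged gradients can be turned into a rough rule of thumb for how much the tree line shifts between two northern latitudes (a sketch only; as the rest of the article stresses, real tree lines vary widely with local conditions):

```python
def treeline_shift_m(lat_from, lat_to):
    """Approximate change in treeline elevation (m) when moving from lat_from
    to lat_to (degrees N), using the averaged gradients quoted above:
    75 m per degree between 70 and 50 N, 130 m per degree between 50 and 30 N.
    Positive means the treeline is higher at lat_to."""
    lo, hi = sorted((lat_from, lat_to))
    shift = max(0.0, min(hi, 70) - max(lo, 50)) * 75     # overlap with the 50-70 N band
    shift += max(0.0, min(hi, 50) - max(lo, 30)) * 130   # overlap with the 30-50 N band
    return shift if lat_from > lat_to else -shift

# Moving from 57 N (Scotland) to 42 N (Pyrenees):
print(treeline_shift_m(57, 42))   # 7*75 + 8*130 = 1565.0 m higher expected
```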
Here is a list of approximate tree lines from locations around the globe:
| Location | Approx. latitude | Approx. elevation of tree line (m) | (ft) | Notes |
|---|---|---|---|---|
| Finnmarksvidda, Norway | 69°N | 500 | 1,600 | At 71°N, near the coast, the tree line is below sea level (Arctic tree line). |
| Chugach Mountains, Alaska | 61°N | 700 | 2,300 | Tree line around 1,500 feet (460 m) or lower in coastal areas |
| Southern Norway | 61°N | 1,100 | 3,600 | Much lower near the coast, down to 500–600 metres (1,600–2,000 ft). |
| Scotland | 57°N | 500 | 1,600 | Strong maritime influence serves to cool summer and restrict tree growth |
| Olympic Mountains WA, United States | 47°N | 1,500 | 4,900 | Heavy winter snowpack buries young trees until late summer |
| Mount Katahdin, Maine, United States | 46°N | 1,150 | 3,800 | |
| Eastern Alps, Austria, Italy | 46°N | 1,750 | 5,700 | More exposure to Russian cold winds than the Western Alps |
| Alps of Piedmont, Northwestern Italy | 45°N | 2,100 | 6,900 | |
| New Hampshire, United States | 44°N | 1,350 | 4,400 | Some peaks have even lower treelines because of fire and subsequent loss of soil, such as Grand Monadnock and Mount Chocorua. |
| Wyoming, United States | 43°N | 3,000 | 9,800 | |
| Rila and Pirin Mountains, Bulgaria | 42°N | 2,300 | 7,500 | Up to 2,600 m (8,500 ft) in favorable locations. Mountain Pine is the most common tree line species. |
| Pyrenees: Spain, France, Andorra | 42°N | 2,300 | 7,500 | Mountain Pine is the tree line species |
| Wasatch Mountains, Utah, United States | 40°N | 2,900 | 9,500 | Higher (nearly 11,000 feet or 3,400 metres) in the Uintas |
| Rocky Mountain NP, CO, United States | 40°N | 3,550 | 11,600 | On warm southwest slopes |
| Rocky Mountain NP, CO, United States | 40°N | 3,250 | 10,700 | On northeast slopes |
| Yosemite, CA, United States | 38°N | 3,200 | 10,500 | West side of Sierra Nevada |
| Yosemite, CA, United States | 38°N | 3,600 | 11,800 | East side of Sierra Nevada |
| Sierra Nevada, Spain | 37°N | 2,400 | 7,900 | Precipitation low in summer |
| Yushan, Taiwan | 23°N | 3,600 | 11,800 | Strong winds and poor soil restrict further growth of trees. |
| Hawaii, United States | 20°N | 3,000 | 9,800 | Geographic isolation and no local tree species with high tolerance to cold temperatures. |
| Pico de Orizaba, Mexico | 19°N | 4,000 | 13,100 | |
| Mount Kilimanjaro, Tanzania | 3°S | 3,950 | 13,000 | |
| Andes, Peru | 11°S | 3,900 | 12,800 | East side; on the west side tree growth is restricted by dryness |
| Andes, Bolivia | 18°S | 5,200 | 17,100 | Western Cordillera; highest treeline in the world, on the slopes of Sajama Volcano (Polylepis tarapacana) |
| Andes, Bolivia | 18°S | 4,100 | 13,500 | Eastern Cordillera; treeline is lower because of lower solar radiation (more humid climate) |
| Sierra de Córdoba, Argentina | 31°S | 2,000 | 6,600 | Precipitation low above the trade winds, also high exposure |
| Australian Alps, Australia | 36°S | 2,000 | 6,600 | West side of the Australian Alps |
| Australian Alps, Australia | 36°S | 1,700 | 5,600 | East side of the Australian Alps |
| Andes, Laguna del Laja, Chile | 37°S | 1,600 | 5,200 | Temperature rather than precipitation restricts tree growth |
| Mount Taranaki, North Island, New Zealand | 39°S | 1,500 | 4,900 | Strong maritime influence serves to cool summer and restrict tree growth |
| Tasmania, Australia | 41°S | 1,200 | 3,900 | Cold winters, strong cold winds and cool summers with occasional summer snow restrict tree growth |
| Fiordland, South Island, New Zealand | 45°S | 950 | 3,100 | Cold winters, strong cold winds and cool summers with occasional summer snow restrict tree growth |
| Torres del Paine, Chile | 51°S | 950 | 3,100 | Strong influence from the Southern Patagonian Ice Field serves to cool summer and restrict tree growth |
| Navarino Island, Chile | 55°S | 600 | 2,000 | Strong maritime influence serves to cool summer and restrict tree growth |
Arctic tree lines
Like the alpine tree lines shown above, polar tree lines are heavily influenced by local variables such as aspect of slope and degree of shelter. In addition, permafrost has a major impact on the ability of trees to place roots into the ground. When roots are too shallow, trees are susceptible to windthrow and erosion. Trees can often grow in river valleys at latitudes where they could not grow on a more exposed site. Maritime influences such as ocean currents also play a major role in determining how far from the equator trees can grow as well as the warm summers experienced in extreme continental climates. In northern inland Scandinavia there is substantial maritime influence on high parallels that keep winters relatively mild, but enough inland effect to have summers well above the threshold for the tree line. Here are some typical polar treelines:
| Location | Approx. longitude | Approx. latitude of tree line | Notes |
|---|---|---|---|
| Norway | 24°E | 70°N | The North Atlantic Current makes Arctic climates in this region warmer than other coastal locations at a comparable latitude. In particular, the mildness of winters prevents permafrost. |
| West Siberian Plain | 75°E | 66°N | |
| Central Siberian Plateau | 102°E | 72°N | Extreme continental climate means the summer is warm enough to allow tree growth at higher latitudes, extending to the northernmost forests of the world at 72°28'N at Ary-Mas (102°15'E) in the Novaya River valley, a tributary of the Khatanga River, and the more northern Lukunsky grove at 72°31'N, 105°03'E, east of the Khatanga River. |
| Russian Far East (Kamchatka and Chukotka) | 160°E | 60°N | The Oyashio Current and strong winds affect summer temperatures to prevent tree growth. The Aleutian Islands are almost completely treeless. |
| Alaska | 152°W | 68°N | Trees grow north to the south-facing slopes of the Brooks Range. The mountains block cold air coming off the Arctic Ocean. |
| Northwest Territories, Canada | 132°W | 69°N | Reaches north of the Arctic Circle because of the continental nature of the climate and warmer summer temperatures. |
| Nunavut | 95°W | 61°N | Influence of the very cold Hudson Bay moves the treeline southwards. |
| Labrador Peninsula | 72°W | 56°N | Very strong influence of the Labrador Current on summer temperatures, as well as altitude effects (much of Labrador is a plateau). In parts of Labrador, the treeline extends as far south as 53°N. |
| Greenland | 50°W | 64°N | Determined by experimental tree planting in the absence of native trees because of isolation from natural seed sources; a very few trees are surviving, but growing slowly, at Søndre Strømfjord, 67°N. |
Antarctic tree lines
Kerguelen Island (49°S), South Georgia (54°S), and other subantarctic islands are all so heavily exposed to wind, and have such cold summer climates (tundra), that none has any indigenous tree species. The summer temperature of the Falkland Islands (51°S) is near the limit, but the islands are also treeless, although some planted trees exist.
The Antarctic Peninsula is the northernmost point of Antarctica (63°S) and has its mildest weather. It is located 1,080 kilometres (670 mi) from Cape Horn on Tierra del Fuego, yet no trees live in Antarctica; only a few species of grass, mosses, and lichens survive on the peninsula. In addition, no trees survive on any of the subantarctic islands near the peninsula.
Southern rata forests exist on Enderby Island and the Auckland Islands (both 50°S), where they grow up to an elevation of 370 metres (1,200 ft) in sheltered valleys. These trees seldom grow above 3 m (9.8 ft) in height, and they get smaller as one gains altitude, so that by 180 m (600 ft) they are waist high. The islands have only 600–800 hours of sun annually. Campbell Island (52°S), further south, is treeless except for one stunted pine planted by scientists. The climate on these islands is not severe, but tree growth is limited by almost continual rain and wind. Summers are very cold, with an average January temperature of 9 °C (48 °F); winters are mild, around 5 °C (41 °F), but wet. Macquarie Island (Australia), at 54°S, has no vegetation beyond snow grass and alpine grasses and mosses.
Long term monitoring of alpine treelines
- Ecotone: a transition between two adjacent ecological communities
- Edge effects: the effect of contrasting environments on an ecosystem
- Massenerhebung effect
- Elliott-Fisk, D.L. (2000). "The Taiga and Boreal Forest". In Barbour, M.G.; Billings, M.D. North American Terrestrial Vegetation (2nd ed.). Cambridge University Press. ISBN 978-0-521-55986-7.
- Jørgensen, S.E. (2009). Ecosystem Ecology. Academic Press. ISBN 978-0-444-53466-8.
- Körner, C.; Riedl, S. (2012). Alpine Treelines: Functional Ecology of the Global High Elevation Tree Limits. Springer. ISBN 9783034803960.
- Zwinger, A.; Willard, B. E. (1996). Land Above the Trees: A Guide to American Alpine Tundra. Big Earth Publishing. ISBN 1-55566-171-8.
- Körner, C (2003). Alpine plant life: functional plant ecology of high mountain ecosystems. Springer. ISBN 3-540-00347-9.
- "Alpine Tundra Ecosystem". Rocky Mountain National Park. National Park Service. Retrieved 2011-05-13.
- Peet, R.K. (2000). "Forests and Meadows of the Rocky Mountains". In Barbour, M.G.; Billings, M.D. North American Terrestrial Vegetation (2nd ed.). Cambridge University Press. ISBN 978-0-521-55986-7.
- Arno, S.F. (1984). Timberline: Mountain and Arctic Forest Frontiers. Seattle, WA: The Mountaineers. ISBN 0-89886-085-7.
- Baker, F.S. (1944). "Mountain climates of the western United States". Ecol. Monogr. 14: 223–254. JSTOR 1943534.
- Geiger, R. (1950). The Climate near the Ground. Harvard Univ. Press (Cambridge MA).
- Tranquillini, W. (1979). Physiological Ecology of the Alpine Timberline: tree existence at high altitudes with special reference to the European Alps. New York NY: Springer-Verlag. ISBN 3642671071.
- Coates, K.D.; Haeussler, S.; Lindeburgh, S; Pojar, R.; Stock, A.J. (1994). Ecology and silviculture of interior spruce in British Columbia. OCLC 66824523.
- Viereck, L.A. (1979). "Characteristics of treeline plant communities in Alaska" (PDF). Holarctic Ecol. 2: 228–238. JSTOR 3682417.
- Viereck, L.A.; Van Cleve, K.; Dyrness, C.T (1986). "Forest ecosystem distribution in the taiga environment". In Van Cleve, K.; Chapin, F.S.; Flanagan, P.W.; Viereck, L.A.; Dyrness, C.T. Forest Ecosystems in the Alaskan Taiga. New York NY: Springer-Verlag. pp. 22–43. doi:10.1007/978-1-4612-4902-3_3. ISBN 1461249023.
- Sowell, J.B.; McNulty, S.P.; Schilling, B.K. (1996). "The role of stem recharge in reducing the winter desiccation of Picea engelmannii (Pinaceae) needles at alpine timberline". Amer. J. Bot. 83: 1351–1355. JSTOR 2446122.
- Shaw, C.H. (1909). "The causes of timberline on mountains: the role of snow". Plant World 12: 169–181.
- Bradley, Raymond S. (1999). Paleoclimatology: reconstructing climates of the Quaternary 68. Academic Press. p. 344. ISBN 0123869951.
- Baldwin, B.G. (2002). The Jepson desert manual: vascular plants of southeastern California. University of California Press. ISBN 0-520-22775-1.
- Pienitz, Reinhard; Douglas, Marianne S. V.; Smol, John P. (2004). Long-term environmental change in Arctic and Antarctic lakes. Springer. p. 102. ISBN 1402021267.
- Timoney, K.P.; La Roi, G.H.; Zoltai, S.C.; Robinson, A.L. (1992). "The high subarctic forest–tundra of northwestern Canada: position, width, and vegetation gradients in relation to climate". Arctic 45: 1–9. JSTOR 40511186.
- Löve, Dd (1970). "Subarctic and subalpine: where and what?". Arctic and Alpine Res. 2: 63–73. JSTOR 1550141.
- Hare, F.K.; Ritchie, J (1972). "The boreal bioclimates". Geogr. Rev. 62: 333–365. JSTOR 213287.
- R.A., Black; Bliss, L.C. (1978). "Recovery sequence of Picea mariana–Vaccinium uliginosum forests after burning near Inuvik, Northwest Territories, Canada". Can. J. Bot. 56 (6): 2020–2030. doi:10.1139/b78-243.
- "Antipodes Subantarctic Islands tundra". Terrestrial Ecoregions. World Wildlife Fund.
- "Magellanic subpolar Nothofagus forests". Terrestrial Ecoregions. World Wildlife Fund.
- Chalupa, V. (1992). "Micropropagation of European Mountain Ash (Sorbus aucuparia L.) and Wild Service Tree [Sorbus torminalis (L.) Cr.]". In Bajaj, Y.P.S. High-Tech and Micropropagation II. Biotechnology in Agriculture and Forestry 18. Springer Berlin Heidelberg. pp. 211–226. doi:10.1007/978-3-642-76422-6_11. ISBN 978-3-642-76424-0.
- "Treeline". The Canadian Encyclopedia. Archived from the original on 2011-10-21. Retrieved 2011-06-22.
- Fajardo, A; Piper, FI; Cavieres, LA (2011). "Distinguishing local from global climate influences in the variation of carbon status with altitude in a tree line species". Global ecology and biogeography 20 (2): 307–318. doi:10.1111/j.1466-8238.2010.00598.x.
- Körner, Ch (1998). "A re-assessment of high elevation treeline positions and their explanation" (PDF). Oecologia 115 (4): 445–459. doi:10.1007/s004420050540.
- "Action For Scotland's Biodiversity" (PDF).
- Körner, Ch. "High Elevation Treeline Research". Archived from the original on 2011-05-14. Retrieved 2010-06-14.
- "Physiogeography of the Russian Far East".
- "Mount Washington State Park". New Hampshire State Parks. Archived from the original on 2013-04-03. Retrieved 2013-08-22.
Tree line, the elevation above which trees do not grow, is about 4,400 feet in the White Mountains, nearly 2,000 feet below the summit of Mt. Washington.
- Schoenherr, Allan A. (1995). A Natural History of California. UC Press. ISBN 0-520-06922-6.
- "台灣地帶性植被之區劃與植物區系之分區" (PDF).
- Lara, Antonio; Villalba, Ricardo; Wolodarsky-Franke, Alexia; Aravena, Juan Carlos; Luckman, Brian H.; Cuq, Emilio (2005). "Spatial and temporal variation in Nothofagus pumilio growth at tree line along its latitudinal range (35°40′–55° S) in the Chilean Andes" (PDF). Journal of Biogeography 32 (5): 879–893. doi:10.1111/j.1365-2699.2005.01191.x.
- Aravena, Juan C.; Lara, Antonio; Wolodarsky-Franke, Alexia; Villalba, Ricardo; Cuq, Emilio (2002). "Tree-ring growth patterns and temperature reconstruction from Nothofagus pumilio (Fagaceae) forests at the upper tree line of southern, Chilean Patagonia". Revista Chilena de Historia Natural (Santiago) 75 (2). doi:10.4067/S0716-078X2002000200008.
- Arno, S.F.; Hammerly, R.P. (1984). Timberline. Mountain and Arctic Forest Frontiers. Seattle: The Mountaineers. ISBN 0-89886-085-7.
- Beringer, Jason; Tapper, Nigel J.; McHugh, Ian; Chapin, F. S., III; et al. (2001). "Impact of Arctic treeline on synoptic climate". Geophysical Research Letters 28 (22): 4247–4250. Bibcode:2001GeoRL..28.4247B. doi:10.1029/2001GL012914.
- Ødum, S (1979). "Actual and potential tree line in the North Atlantic region, especially in Greenland and the Faroes". Holarctic Ecology 2 (4): 222–227. doi:10.1111/j.1600-0587.1979.tb01293.x.
- Ødum, S (1991). "Choice of species and origins for arboriculture in Greenland and the Faroe Islands". Dansk Dendrologisk Årsskrift 9: 3–78.
- Singh, C.P.; Panigrahy, S.; Parihar, J.S.; Dharaiya, N. (2013). "Modeling environmental niche of Himalayan birch and remote sensing based vicarious validation" (PDF). Tropical Ecology 54 (3): 321–329.
- Singh, C.P.; Panigrahy, S.; Thapliyal, A.; Kimothi, M.M.; Soni, P.; Parihar, J.S. (2012). "Monitoring the alpine treeline shift in parts of the Indian Himalayas using remote sensing" (PDF). Current Science 102 (4): 559–562. Archived from the original (PDF) on 2013-05-16.
- Panigrahy, S.; Singh, C.P.; Kimothi, M.M.; Soni, P.; Parihar, J.S. (2010). "The Upward migration of alpine vegetation as an indicator of climate change: observations for Indian Himalayan region using remote sensing data" (PDF). Nnrms(B) 35: 73–80. Archived from the original on November 24, 2011.
- Singh, C.P. (2008). "Alpine ecosystems in relation to climate change". ISG Newsletter 14: 54–57.
- Ameztegui A, Coll L, Brotons L, Ninot JM. "Land-use legacies rather than climate change are driving the recent upward shift of the mountain tree line in the Pyrenees". Global Ecology and Biogeography. doi:10.1111/geb.12407.
Compounds and molecules are made up of many atoms bonded together. The chemical formula describes the type and number of these atoms. For example, take the chemical formula for water: H2O. This formula shows that water consists of one oxygen atom (O) and two hydrogen atoms (H). The number of atoms of each element is given as a subscript after its symbol, unless the number is 1 (as for oxygen here).
As you read, here are some questions you will find answers to:
How do teachers explain chemical processes and balance equations to young learners?
How do they make students comfortable with the idea that chemical reactions are part of our everyday lives?
What are the benefits of a virtual lab?
What Is The Difficulty In Learning Formula And Equation Balancing?
There are three reasons why formula and equation balancing can be difficult for even the most committed student.
1. It feels abstract
Because the balancing of formulas and equations happens at the molecular level, you cannot see or feel it. Not being able to observe the process, or to see its relevance to the real world, can frustrate learning and make it tough for learners to stay motivated.
2. It is content heavy
There are three main types of chemical formulas that contain varying degrees of structural information:
The empirical formula of a compound is the smallest whole-number ratio of the atoms in that compound. In other words, the empirical formula only gives you the ratio between each type of element in the compound, not the actual number of atoms in each molecule. For example, the empirical formula for glucose is CH2O, which means there are twice as many hydrogen atoms as there are carbon or oxygen atoms.
The molecular formula shows the total number of each atom present in a molecule, and is therefore an integer multiple of the corresponding empirical formula. For example, glucose has the empirical formula CH2O, while its molecular formula is C6H12O6, i.e. the empirical formula multiplied by six.
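One way to make the distinction concrete is to recover the multiplier from molar masses (a small sketch using rounded atomic weights):

```python
# Empirical formula of glucose: CH2O, with mass roughly 12 + 2*1 + 16 = 30 g/mol.
empirical_mass = 12.0 + 2 * 1.0 + 16.0

# The measured molar mass of glucose is about 180 g/mol.
molecular_mass = 180.0

multiplier = round(molecular_mass / empirical_mass)
print(multiplier)   # 6, so the molecular formula is C6H12O6
```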
The structural formula provides more information about the structure of the molecule. It is an extended molecular formula that shows the grouping of, and/or the bonds between, atoms. The structural formula for propane describes the bonds between carbon and hydrogen atoms with lines, while the abbreviated (condensed) structural formula conveys almost the same information through the arrangement of the atoms.
Figure: The molecular structure of propane can be illustrated by its structural formula and abbreviated structural formula.
3. It's complicated
A chemical reaction equation describes how one or more substances change their chemical composition to create new ones. Although new substances are formed, atoms are neither created nor destroyed. An example of a chemical reaction equation is
2H2 + O2 → 2H2O.
This equation depicts the reaction between hydrogen and oxygen to form water. Hydrogen and oxygen are reactants whereas water is a product of this reaction.
5 Ways To Improve The Understanding Of Formulas And Equation Balancing
With those points in mind, here are things you can incorporate into formulas and equations balancing lessons to make them more engaging, accessible, and fun for you and your students.
1. Show the people behind the science
Modern chemical nomenclature begins with Berzelius, one of the so-called fathers of chemistry, who argued that chemicals should be named based on what they are, not where they came from. (After all, malic acid is found elsewhere too, such as in cherries.) He created the one- and two-letter atomic symbol system taught in high school today, with letters taken from the element's Latin name (hence "Pb" for plumbum, which is lead). This method of writing chemical formulas was developed by the 19th-century Swedish chemist Jöns Jacob Berzelius.
2. Relate to the real world
Coefficients and subscripts:
There are two types of numbers in a chemical equation: coefficients, which indicate the total number of molecules/atoms in a reaction, and subscripts, which indicate the amount of each element in the molecule. So 2H2O means we have two water molecules, each with two hydrogen atoms and one oxygen atom. This gives a total of four hydrogen atoms and two oxygen atoms.
3. Seeing is believing
Visualization can make all the difference when a topic is as complex and abstract as formulas and equation balancing.
Conservation of Mass:
When reactants react to form products, atoms are neither created nor destroyed. This principle is known as the law of conservation of mass. As a result, all chemical equations must be balanced. In a balanced reaction equation, there is an equal number of atoms for each element in the equation. For the reaction
2H2 + O2 → 2H2O,
The coefficients are added in front of the H2 and H2O to balance the equation. Combined with the subscripts, these coefficients give four hydrogen atoms and two oxygen atoms on both the reactant and the product side of the equation. A common example: when wood burns, the combined mass of the soot, ash and gases produced is equal to the mass of the wood and the oxygen that reacted. The mass of the products therefore equals the mass of the reactants.
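The same bookkeeping can be automated. The sketch below counts atoms on each side of 2H2 + O2 → 2H2O and confirms the equation is balanced; the tiny parser only handles simple formulas like these and is meant for illustration, not as a general chemistry tool.

```python
import re
from collections import Counter

def count_atoms(formula, coefficient=1):
    """Count atoms in a simple formula such as 'H2O' or 'O2'."""
    atoms = Counter()
    for element, number in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        atoms[element] += coefficient * (int(number) if number else 1)
    return atoms

def side_total(terms):
    """terms is a list of (coefficient, formula) pairs for one side of the equation."""
    total = Counter()
    for coeff, formula in terms:
        total += count_atoms(formula, coeff)
    return total

reactants = [(2, "H2"), (1, "O2")]
products = [(2, "H2O")]

print(side_total(reactants))   # Counter({'H': 4, 'O': 2})
print(side_total(products))    # Counter({'H': 4, 'O': 2})
print("balanced:", side_total(reactants) == side_total(products))   # True
```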
4. Make it stick with word-play
Visualization can help with this lesson, but when learning chemical reactions, chemical formulas, and equation balancing there are few options other than memory aids. For example, the three types of chemical formulas can be recalled with the mnemonic "StEM" (ignoring the lowercase letters), as shown:
S - Structural formula
E - Empirical formula
M - Molecular formula
5. Use virtual laboratory simulation
A unique way to teach formulas and equation balancing is through virtual laboratory simulations. At Labster, we strive to provide highly interactive lab presentations using game-based elements such as storytelling and scoring systems in an immersive 3D world.
Check out Labster's formulas and equation balancing simulation. You will learn the difference between reactants and products, how the law of conservation of mass guides us, and how to balance equations.
Learn more about formulas and equation balancing simulations, or contact us to find out how to get started using virtual labs.
In the first triangle video, we focused on what is true for all triangles. In this video, we will discuss different types of triangles and what makes them special and distinct.
Transcript: Types of Triangles
Okay, now back to triangles. In the first triangle video, we talked about ideas that were true for all triangles. In this video, we will begin with two special categories of triangles: isosceles and equilateral triangles.
Types of Triangles: Isosceles Triangle
An isosceles triangle is one in which two sides are equal. Now it happens to be true that, if the two sides are equal, then the angles opposite those sides must also be equal.
So those would be these two angles here, they would have to be equal. In fact, Euclid’s remarkable theorem on this topic, is a two way theorem. What do I mean by that?
If a triangle has two equal sides, then it must have the opposite angles equal. And if it has two equal angles, then the opposite sides must be equal. So from the side, you can deduce the angles, from the angles you can deduce the sides.
It’s a two-way theorem. Could an isosceles triangle have a right angle? Could it have an obtuse angle? You might wanna pause the video and think about this for a moment.
Of course, the answer is yes in both cases. Here are isosceles triangles, one with the right angle, one with an obtuse angle, and of course, we can even find the other angles in that right triangle.
We have 90 degrees, so the other two angles have to add up to 90 degrees, which means each one must be 45. In the obtuse triangle, we have 110 degrees, so the other two angles must add up to 70 degrees, which means each one is 35 degrees.
Types of Triangles: Equilateral Triangle
An equilateral triangle has three equal sides, and three equal angles. Since the three angles have to add up to 180 degrees, each one must equal 60 degrees.
So an equilateral always has three 60 degree angles. Here’s an example equilateral triangle. Is an equilateral triangle also an isosceles triangle? Technically, yes. Technically, the definition of isosceles is a triangle with two or more equal sides.
That’s the technical definition of isosceles. In all likelihood, the test will not ask directly about this distinction. But you must know that every special fact about isosceles triangles, also applies to equilateral triangles, that’s important to know. Now, we will return to all triangles. We’ll talk about area.
Area of Triangles
The area of a triangle is how much space the triangle takes up. If units are used, this area is measured in square units, something like square inches or centimeters squared, something of that sort. You may even remember the formula, area equals one-half base times height, where b is the base and h is the height. But these problematic terms need to be clarified.
There are some very naive understandings of these, that lead to all kinds of mistakes. So we have to be very clear, of exactly what do we mean by a base, and what do we mean by a height. First of all, what exactly is the base of a triangle? Naive students will say the bottom side, but this is not the whole answer.
Part of the problem, is that some textbooks often insist on representing all triangles with a horizontal side on the bottom. And this over-representation of one orientation, misleads students into mistakenly thinking, that every triangle comes with a horizontal bottom side. In fact, if we look at a triangle like this, this is a general triangle, there is no horizontal bottom side.
The word base has absolutely nothing to do with being horizontal. In fact, in any triangle, any one of the three sides could be the base. The base is simply the side of the triangle we choose when we find the area. So any triangle has three possible bases: the three sides. Now, what exactly is the height of a triangle? Just as students naively assume that the base is horizontal, so they naively assume that the height is vertical.
Neither of these is necessarily true. In fact, h is the length of a segment called the altitude. Now the rough definition is, an altitude is a segment from a vertex to the opposite side, that is perpendicular to that side. So we’ll expand that definition in a moment. But for the moment that rough definition is fine.
So just as any side can be the base, we can draw an altitude from any vertex to the opposite side. Here is a triangle with the three altitudes drawn. So notice each one is a different length, so there is three different heights, and that’s not a problem. Each height is paired uniquely with a particular base.
So in the first one we draw the altitude from B to the base AC. Whatever side the altitude is hitting, that's what we're gonna use as our base paired with that height. So one way to calculate the area would be one half the length of base AC times the altitude BD. Now, we draw the altitude from A to BC, as we have in the middle triangle.
Now we're choosing BC as the base, because that's the side the altitude is hitting, so the area is gonna be one half BC times AE. And similarly on the right, we're drawing the altitude from vertex C to side AB, so we're choosing AB as our base. And so here, the base will be AB and the area will be one half AB times CF.
If we calculate the area in these three different ways, we’ll always get the exact same number. The triangle has only one area, so these three different ways of calculating the area, will always lead to the same answer. Now this rough definition of altitude is fine in triangles, in which all three angles are acute.
Types of Triangles: Obtuse Triangle
Think about an obtuse triangle. There's no way to draw a segment from P to MN that is perpendicular to MN; it's just impossible. In other words, any segment that starts at P and intersects anywhere on segment MN is not gonna be perpendicular to MN. So we have a problem: the altitude is not simply a segment from a vertex to the opposite side.
What we have to do is extend that segment, and then draw a segment from P that is perpendicular to the extended segment. So notice that the altitude is not actually intersecting segment MN itself; it's intersecting the line along which MN lies. So PR is in fact the altitude. It is perpendicular to that line, though technically it doesn't intersect the segment MN.
It intersects the line on which that segment lies. So we could legitimately say that the area of this triangle, is one half the base MN times the altitude PR, that is 100% true. Similarly, we could rearrange, we could draw altitudes from other vertices. We draw one from N, that just intersects the opposite side, that’s easy.
If we draw one from M, well, again we have to extend PN the other way, and then the altitude intersects that extended line. So for the second triangle, we can just say the area is one-half MP times SN. And for the third triangle, we can say the area is one-half PN times MT. Notice that, in a right triangle, two of the altitudes are the legs, the sides that meet at the right angle.
If one leg is the base, the other leg is the altitude, and either way, the area equals 1/2 times the product of the legs. So it doesn't really matter whether we call AC the base and BC the height, or call BC the base and AC the height. Either way we're gonna wind up with an area of one half AC times BC. So that's how you find the area of a triangle.
Sometimes students worry about how to find the length of an altitude. The test will not give you simply the three sides of a triangle, and expect you to find the altitude. Now technically there would be ways to do this involving trigonometry, and other more advanced forms of math. This is not something you’re expected to do on the test.
An altitude always forms smaller right triangles within any triangle. That means sometimes we might use the Pythagorean theorem, or another right-triangle fact, to find the altitude. We will discuss this more in the next video. So, yes, there are ways using the Pythagorean theorem, but there's no way, just from a general triangle, to automatically compute the altitude.
Here’s a practice problem. Pause the video and then we’ll talk about this.
Okay, in the diagram, two altitudes are drawn, and we want the length of the base GF. Well, let's see: we have the altitude GX, which has a length of 6, and that's drawn to the line on which FH lies.
So one way to compute the area is one half times 6 times 14. Now, GF we'll just leave as GF, because that's unknown. It lies along the line GFY, and HY is the altitude to that line, so the area is also one half times 7 times GF. So those are two different ways to calculate the area, and here is the big idea.
If we calculate the area of the same triangle in two different ways, those two ways must lead to the same answer. So we can calculate the area one way, calculate it the other way, and set those two equal. Then we'll get rid of the one half by multiplying both sides by two, and we're just left with this product here.
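In symbols, the equation being set up is the following (6 is the altitude GX, 14 is the base it is paired with, 7 is the altitude HY, and GF is the unknown base):

    (1/2) · 6 · 14 = (1/2) · 7 · GF
    6 · 14 = 7 · GF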
Plugging in the numbers we have and dividing both sides by seven, what we're left with is just two times six, which is 12, and that must be the length of FG, or GF. Now we're gonna talk about four kinds of lines in a triangle. You will not need to know these names, but you will need to keep these ideas straight.
Special Lines in Types of Triangles
So the first line is the altitude; we just talked about this. It goes through the vertex, and it's perpendicular to the opposite side. Technically, it's perpendicular to the line that contains the opposite side if the triangle is obtuse. But it definitely goes through the vertex and definitely makes a right angle. Obviously, the point where the altitude intersects the opposite side usually isn't the midpoint of the opposite side.
As we see here, S is not even close to being the midpoint of PR. The second special line is the perpendicular bisector of a side. The perpendicular bisector of a side passes through the midpoint and is perpendicular to the side at that midpoint. But in general it does not pass through the opposite vertex at all. So that green line isn't even pretending to pass through Q.
It just goes right by Q without passing through it. So in general we can construct a perpendicular bisector of a side, but in general it's not going to pass through the opposite vertex.
The third special line goes from the vertex to the midpoint of the opposite side, and this is called a median.
And so here M is definitely the midpoint of the opposite side, so we're drawing the line from Q to M. In general, the median divides the opposite side in half but does not divide the angle at Q in half. In fact, if you look at those angles, it's pretty easy to tell that the angle on the left, PQM, is a pretty small angle, and the angle on the right,
MQR, is a much larger angle. In fact, that one might be almost 90 degrees, or slightly more than 90 degrees. So clearly those two angles are not equal. So this line divides the opposite side in half, but it does not divide the opposite angle in half at all.
The fourth special line is the angle bisector, and this is the opposite situation. The angle bisector divides the angle in half, but it usually doesn't divide the opposite side in half. Notice here where it intersects the opposite side, at point V. Point V is not even pretending to be the midpoint of PR; it's nowhere close to dividing PR in half.
So we do divide the angle in half, but we do not divide the opposite side in half. It's very important to keep all those ideas straight. So that's what's true in a general triangle. In a general triangle, for any side and opposite vertex pair, we can construct these four lines.
The altitude, the perpendicular bisector, the median, and the angle bisector are the four special lines of a triangle. They do four different things in a general triangle.
More on Types of Triangles
You have to appreciate what is and what isn’t true in most triangles, in order to appreciate the very special thing that happens only in an isosceles triangle. The line down the middle of an isosceles triangle, from the vertex to the midpoint of the base, plays all four of these roles at once. So point T is the midpoint of the base. So if we draw the line from G to T, this is gonna be perpendicular to the base, so it’s gonna be an altitude.
Of course it’s aligned to the midpoint of the opposite side, so it’s a median. It also bisects the angle, so it’s an angle bisector, and so it plays all four roles. Median altitude, angle bisector and perpendicular by sector. All four of these roles is played by the single line, down the middle of an isosceles triangle.
Now, in general, those four special lines are four different lines in ordinary triangles. So if we get any information that one segment is playing more than one role, for example that the perpendicular bisector and the angle bisector are the same segment, that's enough to prove that the triangle is isosceles. So any time any two of those roles are played by the same line,
That’s an indication the triangle must be isosceles. Because in general, those four different roles are four completely different lines, in most triangles. The mid-line in an isosceles triangles is a line of symmetry, which divides the bigger triangle into two congruent right triangles. Of course, all these isosceles triangle facts apply also to the equilateral triangle, which is a special case of isosceles.
In summary, isosceles means two sides are equal and the angles opposite them are equal, and the fact goes both ways. If you know the sides are equal, then you can prove that the opposite angles are equal. If you know the angles are equal, you can prove that the opposite sides are equal.
You can go either way. Equilateral means three equal sides and all 60-degree angles. Area equals one half base times height, but we have to be very careful about this and can't afford to be naive: any side can be the base, the altitude is perpendicular to that side, and the length of the altitude is the height.
And so base and height have absolutely nothing to do with being horizontal or vertical. The altitude, the perpendicular bisector, the median (the line from the vertex to the opposite midpoint), and the angle bisector are four completely different lines in most triangles. But the line of symmetry in an isosceles triangle plays all four of those roles at once.
And so that is something very, very special. |
In genetics, a promoter is a region of DNA that initiates transcription of a particular gene. Promoters are located near the transcription start sites of genes, on the same strand and upstream on the DNA (towards the 5' region of the sense strand). Promoters can be about 100–1000 base pairs long.
For the transcription to take place, the enzyme that synthesizes RNA, known as RNA polymerase, must attach to the DNA near a gene. Promoters contain specific DNA sequences and response elements that provide a secure initial binding site for RNA polymerase and for proteins called transcription factors that recruit RNA polymerase. These transcription factors have specific activator or repressor sequences of corresponding nucleotides that attach to specific promoters and regulate gene expression.
As promoters are typically immediately adjacent to the gene in question, positions in the promoter are designated relative to the transcriptional start site, where transcription of DNA begins for a particular gene (i.e., positions upstream are negative numbers counting back from -1, for example -100 is a position 100 base pairs upstream).
In the cell nucleus, it seems that promoters are distributed preferentially at the edge of the chromosomal territories, likely for the co-expression of genes on different chromosomes. Furthermore, in humans, promoters show certain structural features characteristic for each chromosome.
These promoter sequences are recognized only by RNA polymerase holoenzyme containing sigma-70. RNA polymerase holoenzymes containing other sigma factors recognize different core promoter sequences.
     <-- upstream                                                              downstream -->
5'-XXXXXXXPPPPPXXXXXXPPPPPPXXXXGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGXXXX-3'
           -35         -10     Gene to be transcribed
(Note that the optimal spacing between the -35 and -10 sequences is 17 bp.)
Consensus for the -10 sequence (with the frequency of each base at its position):
  T (77%)  A (76%)  T (60%)  A (61%)  A (56%)  T (82%)
Consensus for the -35 sequence (with the frequency of each base at its position):
  T (69%)  T (79%)  G (61%)  A (56%)  C (54%)  A (54%)
Gene promoters are typically located upstream of the gene and can have regulatory elements several kilobases away from the transcriptional start site (enhancers). In eukaryotes, the transcriptional complex can cause the DNA to bend back on itself, which allows for placement of regulatory sequences far from the actual site of transcription. Eukaryotic RNA-polymerase-II-dependent promoters can contain a TATA element (consensus sequence TATAAA), which is recognized by the general transcription factor TATA-binding protein (TBP); and a B recognition element (BRE), which is recognized by the general transcription factor TFIIB. The TATA element and BRE typically are located close to the transcriptional start site (typically within 30 to 40 base pairs).
Eukaryotic promoter regulatory sequences typically bind proteins called transcription factors that are involved in the formation of the transcriptional complex. An example is the E-box (sequence CACGTG), which binds transcription factors in the basic helix-loop-helix (bHLH) family (e.g. BMAL1-Clock, cMyc).
Bidirectional promoters are short (<1 kbp), intergenic regions of DNA between the 5' ends of the genes in a bidirectional gene pair. A “bidirectional gene pair” refers to two adjacent genes coded on opposite strands, with their 5' ends oriented toward one another. The two genes are often functionally related, and modification of their shared promoter region allows them to be co-regulated and thus co-expressed. Bidirectional promoters are a common feature of mammalian genomes. About 11% of human genes are bidirectionally paired.
Bidirectionally paired genes in the Gene Ontology database shared at least one database-assigned functional category with their partners 47% of the time. Microarray analysis has shown bidirectionally paired genes to be co-expressed to a higher degree than random genes or neighboring unidirectional genes. Although co-expression does not necessarily indicate co-regulation, methylation of bidirectional promoter regions has been shown to downregulate both genes, and demethylation to upregulate both genes. There are exceptions to this, however. In some cases (about 11%), only one gene of a bidirectional pair is expressed. In these cases, the promoter is implicated in suppression of the non-expressed gene. The mechanism behind this could be competition for the same polymerases, or chromatin modification. Divergent transcription could shift nucleosomes to upregulate transcription of one gene, or remove bound transcription factors to downregulate transcription of one gene.
Some functional classes of genes are more likely to be bidirectionally paired than others. Genes implicated in DNA repair are five times more likely to be regulated by bidirectional promoters than by unidirectional promoters. Chaperone proteins are three times more likely, and mitochondrial genes are more than twice as likely. Many basic housekeeping and cellular metabolic genes are regulated by bidirectional promoters. The overrepresentation of bidirectionally paired DNA repair genes associates these promoters with cancer. Forty-five percent of human somatic oncogenes seem to be regulated by bidirectional promoters - significantly more than non-cancer causing genes. Hypermethylation of the promoters between gene pairs WNT9A/CD558500, CTDSPL/BC040563, and KCNK15/BF195580 has been associated with tumors.
Certain sequence characteristics have been observed in bidirectional promoters, including a lack of TATA boxes, an abundance of CpG islands, and a symmetry around the midpoint of dominant Cs and As on one side and Gs and Ts on the other. CCAAT boxes are common, as they are in many promoters that lack TATA boxes. In addition, the motifs NRF-1, GABPA, YY1, and ACTACAnnTCCC are represented in bidirectional promoters at significantly higher rates than in unidirectional promoters. The general absence of TATA boxes suggests that they play a role in determining the directionality of promoters, but the existence of bidirectional promoters that do possess TATA boxes, and of unidirectional promoters without them, indicates that they cannot be the only factor.
Although the term "bidirectional promoter" refers specifically to promoter regions of mRNA-encoding genes, luciferase assays have shown that over half of human genes do not have a strong directional bias. Research suggests that non-coding RNAs are frequently associated with the promoter regions of mRNA-encoding genes. It has been hypothesized that the recruitment and initiation of RNA Polymerase II usually begins bidirectionally, but divergent transcription is halted at a checkpoint later during elongation. Possible mechanisms behind this regulation include sequences in the promoter region, chromatin modification, and the spatial orientation of the DNA.
A subgenomic promoter is a promoter added to a virus for a specific heterologous gene, resulting in the formation of mRNA for that gene alone.
A wide variety of algorithms have been developed to facilitate detection of promoters in genomic sequence, and promoter prediction is a common element of many gene prediction methods. A promoter region is located before the -35 and -10 consensus sequences. The more closely a promoter's sequence matches the consensus sequences, the more often transcription of that gene will take place. There is not a set pattern for promoter regions as there is for the consensus sequences.
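To illustrate the kind of scoring such detection methods can build on, here is a minimal, hypothetical sketch (not any particular published tool) that counts matches between candidate hexamers and the sigma-70 -35 and -10 consensus sequences quoted earlier; the example sequence and function names are invented for illustration:

```python
# Minimal sketch: score candidate hexamers by how many positions match
# the sigma-70 consensus sequences (TTGACA at -35, TATAAT at -10).
CONSENSUS_35 = "TTGACA"
CONSENSUS_10 = "TATAAT"

def match_score(hexamer: str, consensus: str) -> int:
    """Number of positions (0-6) at which the hexamer matches the consensus."""
    return sum(1 for a, b in zip(hexamer.upper(), consensus) if a == b)

def best_site(sequence: str, consensus: str):
    """Slide a 6 bp window along the sequence; return (offset, score) of the best match."""
    best = (0, -1)
    for i in range(len(sequence) - 5):
        score = match_score(sequence[i:i + 6], consensus)
        if score > best[1]:
            best = (i, score)
    return best

if __name__ == "__main__":
    # Hypothetical upstream fragment, for illustration only.
    upstream = "CGTTGACTAGGCATCGTAGCTATACTGCAT"
    print("best -35 match:", best_site(upstream, CONSENSUS_35))
    print("best -10 match:", best_site(upstream, CONSENSUS_10))
```

A real predictor would go further, for example weighting each position by the conservation percentages listed above and checking for the roughly 17 bp spacing between the two elements.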
A major question in evolutionary biology is how important tinkering with promoter sequences is to evolutionary change, for example, the changes that have occurred in the human lineage after separating from chimps.
Some evolutionary biologists, for example Allan Wilson, have proposed that evolution in promoter or regulatory regions may be more important than changes in coding sequences over such time frames.
A key reason for the importance of promoters is the potential to incorporate endocrine and environmental signals into changes in gene expression. A great variety of changes in the extracellular or intracellular environment may have an impact on gene expression, depending on the exact configuration of a given promoter: the combination and arrangement of specific DNA sequences that constitute the promoter defines the exact groups of proteins that can be bound to it at a given time point. Once the cell receives a physiological, pathological, or pharmacological stimulus, a number of cellular proteins are modified biochemically by signaling cascades. Through these changes in structure, specific proteins acquire the capability to enter the nucleus of the cell and bind to promoter DNA, or to other proteins that are themselves already bound to a given promoter. The multi-protein complexes that are formed have the potential to change levels of gene expression. As a result, the gene product may increase or decrease inside the cell.
The binding of RNAP (R) to a promoter (P) is a two-step process:
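As a sketch of the standard textbook description of these two steps (hedged, since the scheme itself is not reproduced at this point in the text): RNA polymerase first binds the promoter reversibly to form a closed complex, which then isomerizes to an open complex in which the DNA strands around the transcription start site are separated. Using the common textbook notation RP_c and RP_o for the closed and open complexes:

    R + P ⇌ RP_c (closed complex) → RP_o (open complex)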
Though OMIM is a major resource for gathering information on the relationship between mutations and natural variation in gene sequence and susceptibility to hundreds of diseases, a sophisticated search strategy is required to extract diseases associated with defects in transcriptional control where the promoter is believed to have direct involvement.
This is a list of diseases where evidence suggests some promoter malfunction, through either direct mutation of a promoter sequence or mutation in a transcription factor or transcriptional co-activator.
Most diseases are heterogeneous in etiology, meaning that one "disease" is often many different diseases at the molecular level, though symptoms exhibited and response to treatment may be identical. How diseases of different molecular origin respond to treatments is partially addressed in the discipline of pharmacogenomics.
Not listed here are the many kinds of cancers involving aberrant transcriptional regulation owing to creation of chimeric genes through pathological chromosomal translocation. Importantly, intervention on the number or structure of promoter-bound proteins is one key to treating a disease without affecting expression of unrelated genes sharing elements with the target gene. Genes where change is not desirable are capable of influencing the potential of a cell to become cancerous and form a tumor.
The usage of the term canonical sequence for a promoter is often problematic, and can lead to misunderstandings about promoter sequences, because canonical implies perfect, in some sense.
In the case of a transcription factor binding site, there may be a single sequence that binds the protein most strongly under specified cellular conditions. This might be called canonical.
However, natural selection may favor less energetic binding as a way of regulating transcriptional output. In this case, we may call the most common sequence in a population the wild-type sequence. It may not even be the most advantageous sequence to have under prevailing conditions.
Some cases of many genetic diseases are associated with variations in promoters or transcription factors.
Some promoters are called constitutive, as they are active in all circumstances in the cell, while others are regulated, becoming active only in response to specific stimuli.
When referring to a promoter, some authors actually mean promoter + operator; i.e., when they say the lac promoter is IPTG-inducible, this means that besides the lac promoter, the lac operator is also present. If the lac operator were not present, IPTG would not have an inducing effect. Another example is the tac promoter system (Ptac). Notice how it is written as "tac promoter", while in fact it denotes both promoter and operator. |
Childhood obesity is a serious medical condition that affects children and adolescents. Children who are obese are above the normal weight for their age and height.
Childhood obesity is particularly troubling because the extra pounds often start children on the path to health problems that were once considered adult problems — diabetes, high blood pressure and high cholesterol. Many obese children become obese adults, especially if one or both parents are obese. Childhood obesity can also lead to poor self-esteem and depression.
One of the best strategies to reduce childhood obesity is to improve the eating and exercise habits of your entire family. Treating and preventing childhood obesity helps protect your child's health now and in the future.
Not all children carrying extra pounds are overweight or obese. Some children have larger than average body frames. And children normally carry different amounts of body fat at the various stages of development. So you might not be able to tell just by looking at your child whether weight is a health concern.
The body mass index (BMI), which provides a guideline of weight in relation to height, is the accepted measure of overweight and obesity. Your child's doctor can use growth charts, the BMI and, if necessary, other tests to help you figure out if your child's weight could pose health problems.
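Since BMI is just a formula, here is a minimal sketch of the calculation (weight in kilograms divided by the square of height in meters); the numbers below are made up purely for illustration, and for children the resulting value still has to be read against the age- and sex-specific growth chart rather than adult cutoffs:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms divided by height in meters squared."""
    return weight_kg / (height_m ** 2)

# Made-up example: a child weighing 40 kg who is 1.40 m tall.
print(round(bmi(40.0, 1.40), 1))  # prints 20.4
```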
When to see a doctor
If you're worried that your child is putting on too much weight, talk to his or her doctor. The doctor will consider your child's history of growth and development, your family's weight-for-height history, and where your child lands on the growth charts. This can help determine if your child's weight is in an unhealthy range.
Lifestyle issues — too little activity and too many calories from food and drinks — are the main contributors to childhood obesity. But genetic and hormonal factors might play a role as well. For example, recent research has found that changes in digestive hormones can affect the signals that let you know you're full.
Many factors — usually working in combination — increase your child's risk of becoming overweight:
- Diet. Regularly eating high-calorie foods, such as fast foods, baked goods and vending machine snacks, can cause your child to gain weight. Candy and desserts also can cause weight gain, and more and more evidence points to sugary drinks, including fruit juices, as culprits in obesity in some people.
- Lack of exercise. Children who don't exercise much are more likely to gain weight because they don't burn as many calories. Too much time spent in sedentary activities, such as watching television or playing video games, also contributes to the problem.
- Family factors. If your child comes from a family of overweight people, he or she may be more likely to put on weight. This is especially true in an environment where high-calorie foods are always available and physical activity isn't encouraged.
- Psychological factors. Personal, parental and family stress can increase a child's risk of obesity. Some children overeat to cope with problems or to deal with emotions, such as stress, or to fight boredom. Their parents might have similar tendencies.
- Socioeconomic factors. People in some communities have limited resources and limited access to supermarkets. As a result, they might buy convenience foods that don't spoil quickly, such as frozen meals, crackers and cookies. Also, people who live in lower income neighborhoods might not have access to a safe place to exercise.
Childhood obesity can have complications for your child's physical, social and emotional well-being.
- Type 2 diabetes. This chronic condition affects the way your child's body uses sugar (glucose). Obesity and a sedentary lifestyle increase the risk of type 2 diabetes.
- Metabolic syndrome. This cluster of conditions can put your child at risk of heart disease, diabetes or other health problems. Conditions include high blood pressure, high blood sugar, high triglycerides, low HDL ("good") cholesterol and excess abdominal fat.
- High cholesterol and high blood pressure. A poor diet can cause your child to develop one or both of these conditions. These factors can contribute to the buildup of plaques in the arteries, which can cause arteries to narrow and harden, possibly leading to a heart attack or stroke later in life.
- Asthma. Children who are overweight or obese might be more likely to have asthma.
- Sleep disorders. Obstructive sleep apnea is a potentially serious disorder in which a child's breathing repeatedly stops and starts during sleep.
- Nonalcoholic fatty liver disease (NAFLD). This disorder, which usually causes no symptoms, causes fatty deposits to build up in the liver. NAFLD can lead to scarring and liver damage.
- Bone fractures. Obese children are more likely to break bones than are children of normal weight.
Social and emotional complications
- Low self-esteem and being bullied. Children often tease or bully their overweight peers, who suffer a loss of self-esteem and an increased risk of depression as a result.
- Behavior and learning problems. Overweight children tend to have more anxiety and poorer social skills than normal-weight children do. These problems might lead children who are overweight either to act out and disrupt their classrooms or to withdraw socially.
- Depression. Low self-esteem can create overwhelming feelings of hopelessness, which can lead to depression in some children who are overweight.
Whether your child is at risk of becoming overweight or is currently at a healthy weight, you can take measures to get or keep things on the right track.
- Limit your child's consumption of sugar-sweetened beverages or avoid them
- Provide plenty of fruits and vegetables
- Eat meals as a family as often as possible
- Limit eating out, especially at fast-food restaurants, and when you do eat out, teach your child how to make healthier choices
- Adjust portion sizes appropriately for age
- Limit TV and other "screen time" to less than 2 hours a day for children older than 2, and don't allow television for children younger than 2
- Be sure your child gets enough sleep
Also, be sure your child sees the doctor for well-child checkups at least once a year. During this visit, the doctor measures your child's height and weight and calculates his or her BMI. An increase in your child's BMI or in his or her percentile rank over one year is a possible sign that your child is at risk of becoming overweight.
As part of regular well-child care, the doctor calculates your child's BMI and determines where it falls on the BMI-for-age growth chart. The BMI helps indicate if your child is overweight for his or her age and height.
Using the growth chart, your doctor determines your child's percentile, meaning how your child compares with other children of the same sex and age. For example, if your child is in the 80th percentile, it means that compared with other children of the same sex and age, 80 percent have a lower weight or BMI.
Cutoff points on these growth charts, established by the Centers for Disease Control and Prevention, help identify children who are overweight and obese:
- BMI between 85th and 94th percentiles — overweight
- BMI 95th percentile or above — obesity
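Restated as a small helper function (a sketch only; the percentile value is assumed to come from the CDC BMI-for-age growth chart lookup, which is not shown here):

```python
def weight_category(bmi_percentile: float) -> str:
    """Classify a BMI-for-age percentile using the CDC cutoffs listed above."""
    if bmi_percentile >= 95:
        return "obesity"
    if bmi_percentile >= 85:
        return "overweight"
    return "below the overweight cutoff"

print(weight_category(88))  # prints "overweight"
```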
Because BMI doesn't consider things such as being muscular or having a larger than average body frame and because growth patterns vary greatly among children, your doctor also factors in your child's growth and development. This helps determine whether your child's weight is a health concern.
In addition to BMI and charting weight on the growth charts, the doctor evaluates:
- Your family's history of obesity and weight-related health problems, such as diabetes
- Your child's eating habits
- Your child's activity level
- Other health conditions your child has
- Psychosocial history, including incidences of depression, sleep disturbances, and sadness and whether your child feels isolated or alone or is the target of bullying
Your child's doctor might order blood tests if he or she finds that your child is obese. These tests might include:
- A cholesterol test
- A blood sugar test
- Other blood tests to check for hormone imbalances, vitamin D deficiency or other conditions associated with obesity
Some of these tests require that your child not eat or drink anything before the test. Ask if your child needs to fast before a blood test and for how long.
Treatment for childhood obesity is based on your child's age and if he or she has other medical conditions. Treatment usually includes changes in your child's eating habits and physical activity level. In certain circumstances, treatment might include medications or weight-loss surgery.
Treatment for children who are overweight
The American Academy of Pediatrics recommends that children older than 2 and adolescents whose weight falls in the overweight category be put on a weight-maintenance program to slow the progress of weight gain. This strategy allows the child to add inches in height but not pounds, causing the BMI to drop over time into a healthier range.
Treatment for children who are obese
Children ages 6 to 11 who are obese might be encouraged to modify their eating habits for gradual weight loss of no more than 1 pound (or about 0.5 kilogram) a month. Older children and adolescents who are obese or severely obese might be encouraged to modify their eating habits to aim for weight loss of up to 2 pounds (or about 1 kilogram) a week.
The methods for maintaining your child's current weight or losing weight are the same: Your child needs to eat a healthy diet — both in terms of type and amount of food — and increase physical activity. Success depends largely on your commitment to helping your child make these changes.
Parents are the ones who buy groceries, cook meals and decide where the food is eaten. Even small changes can make a big difference in your child's health.
- When food shopping, choose fruits and vegetables. Cut back on convenience foods — such as cookies, crackers and prepared meals — which are often high in sugar, fat and calories. Always have healthy snacks available.
- Limit sweetened beverages. This includes those that contain fruit juice. These drinks provide little nutritional value in exchange for their high calories. They can also make your child feel too full to eat healthier foods.
- Limit fast food. Many of the menu options are high in fat and calories.
- Sit down together for family meals. Make it an event — a time to share news and tell stories. Discourage eating in front of a TV, computer or video game screen, which can lead to fast eating and lowered awareness of the amount eaten.
- Serve appropriate portion sizes. Children don't need as much food as adults do. Allow your child to eat only until full, even if that means leaving food on the plate. And remember, when you eat out, restaurant portion sizes are often way too large.
A critical part of achieving and maintaining a healthy weight, especially for children, is physical activity. It burns calories, strengthens bones and muscles, and helps children sleep well at night and stay alert during the day.
Good habits established in childhood help adolescents maintain healthy weights despite the hormonal changes, rapid growth and social influences that often lead to overeating. And active children are more likely to become fit adults.
To increase your child's activity level:
- Limit TV and recreational computer time to no more than 2 hours a day for children older than 2. Don't allow children younger than 2 to watch television. Other sedentary activities — playing video and computer games, talking on the phone, or texting — also should be limited.
- Emphasize activity, not exercise. Children should be moderately to vigorously active for at least an hour a day. Your child's activity doesn't have to be a structured exercise program — the object is to get him or her moving. Free-play activities — such as playing hide-and-seek, tag or jump-rope — can be great for burning calories and improving fitness.
- Find activities your child likes. For instance, if your child is artistically inclined, go on a nature hike to collect leaves and rocks that your child can use to make a collage. If your child likes to climb, head for the nearest neighborhood jungle gym or climbing wall. If your child likes to read, then walk or bike to the neighborhood library for a book.
Medication might be prescribed for some adolescents as part of an overall weight-loss plan. The risks of taking prescription medication over the long term are unknown, and the effects of medications on weight loss and weight maintenance for adolescents are still in question.
Weight-loss surgery might be an option for severely obese adolescents who have been unable to lose weight through lifestyle changes. However, as with any type of surgery, there are potential risks and long-term complications. Discuss the pros and cons with your child's doctor.
Your doctor might recommend this surgery if your child's weight poses a greater health threat than do the potential risks of surgery. It's important that a child being considered for weight-loss surgery meet with a team of pediatric specialists, including a pediatric endocrinologist, psychologist and dietitian.
Weight-loss surgery isn't a miracle cure. It doesn't guarantee that an adolescent will lose all of his or her excess weight or be able to keep it off long term. And surgery doesn't replace the need for a healthy diet and regular physical activity.
If you're overweight and thinking of becoming pregnant, losing weight and eating well might affect your child's future. Eating well throughout pregnancy might also have a positive impact on your baby's later food choices.
To give your infant a healthy start, the World Health Organization recommends exclusively breast-feeding for 6 months.
For children who are overweight or obese, their best chance of achieving and maintaining a healthy weight is to start eating a healthy diet and moving more. Here are some steps you can take at home to help your child succeed:
- Be a role model. Choose healthy foods and active pastimes for yourself. If you need to lose weight, doing so will motivate your child to do likewise.
- Involve the whole family. Make healthy eating a priority and emphasize how important it is for everyone to be physically active. This avoids singling out the child who is overweight.
Parents play a crucial role in helping children who are obese feel loved and in control of their weight. Take advantage of every opportunity to build your child's self-esteem. Don't be afraid to bring up the topic of health and fitness, but do be sensitive that a child may view your concern as an insult. Talk to your kids directly, openly, and without being critical or judgmental.
In addition, consider the following:
- Avoid weight talk. Negative comments about your own, someone else's or your child's weight — even if well-intended — can hurt your child. Negative talk about weight can lead to poor body image. Instead, focus your conversation on healthy eating and positive body image.
- Discourage dieting and skipping meals. Instead, encourage and support healthy eating and increased physical activity.
- Find reasons to praise your child's efforts. Celebrate small, incremental changes in behavior but don't reward with food. Choose other ways to mark your child's accomplishments, such as going to the bowling alley or a local park.
- Talk to your child about his or her feelings. Help your child find ways other than eating to deal with emotions.
- Help your child focus on positive goals. For example, point out that he or she can now bike for more than 20 minutes without getting tired or can run the required number of laps in gym class.
- Be patient. Realize that an intense focus on your child's eating habits and weight can easily backfire, leading a child to overeat even more or possibly making him or her prone to developing an eating disorder.
Your child's family doctor or pediatrician will probably make the initial diagnosis of childhood obesity. If your child has complications from being obese, you might be referred to additional specialists to help manage these complications.
Here's some information to help you get ready for your appointment.
What you can do
When you make the appointment, ask if there's anything your child needs to do in advance, such as fast before having certain tests and for how long. Make a list of:
- Your child's symptoms, if any, and when they began
- Key personal information, including a family medical history and history of obesity
- All medications, vitamins or other supplements your child takes, including doses
- What your child typically eats in a week, and how active he or she is
- Questions to ask your doctor
Bring a family member or friend along, if possible, to help you remember all the information you're given.
For childhood obesity, some basic questions to ask your doctor include:
- What other health problems is my child likely to develop?
- What are the treatment options?
- Are there medications that might help manage my child's weight and other health conditions?
- How long will treatment take?
- What can I do to help my child lose weight?
- Are there brochures or other printed material I can have? What websites do you recommend?
Don't hesitate to ask other questions.
What to expect from your doctor
Your child's doctor or other health provider is likely to ask you a number of questions about your child's eating and activity, including:
- What does your child eat in a typical day?
- How much activity does your child get in a typical day?
- What factors do you believe affect your child's weight?
- What diets or treatments, if any, have you tried to help your child lose weight?
- Are you ready to make changes in your family's lifestyle to help your child lose weight?
- What might prevent your child from losing weight?
- How often does the family eat together? Does the child help prepare the food?
- Does your child, or family, eat while watching TV, texting or using a computer?
What you can do in the meantime
If you have days or weeks before your child's scheduled appointment, keep a record of what your child eats and how active he or she is. |
How To Make Layers Of The Earth For Kids :
Making the layers of the earth is a cool science fair idea for kids. Children can learn about science with simple materials available around them. Today we are making a 3D model of the earth. A 3D model of the earth's layers can be made easily at home. We have used simple children's play dough to make this awesome kids' science project.
If you are searching for science fair ideas for kids, then this layers of the earth project can be a great idea. A 3D model of the earth is best suited for school students in classes 4, 5 and 6.
What is an earth layers working model ?
An earth layers working model is a simple 3D model of the earth's layers. We have learned that there are four different layers in the earth. The four layers of the earth are the crust, the mantle, the outer core and the inner core.
A working model of the earth's layers helps us to answer many questions, like:
- How do you make the layers of the earth for kids?
- What are the different layers of the earth?
- It can also be a great science fair idea for kids.
A working model of the earth's layers is a great way to teach students about the different layers of the earth. From this school science project we can learn about the earth's structure. Now let us make this science project.
Materials Required to Make Layers Of The Earth For Kids :
This working model can be made from different materials. We have made this simple science project using easily available materials. Some of the materials needed for the layers of the earth for kids are:
- Ten different colors of play dough. I suggest you take large pieces of each color. You can also make the dough at home and color it as required.
- We have used a DIY knife for cutting the play dough. You can also use any other type of knife; an X-Acto knife (DIY knife) is the best choice.
- A sheet of white paper. This paper is used for labeling the names of the different layers.
- Some other materials such as a metal scale, scissors, a pen or pencil, etc.
How To Make Layers Of The Earth For Kids :
- First of all, take a cup of blue play dough. Press this dough into a spherical "ball" shape. This color represents the oceans and seas, which are on the outer surface of the earth's crust.
- Take a scale or any rectangular object and press it in lightly to make a V-shaped cut in the spherical ball.
- Take a small piece of yellow play dough and stick it at the center of the ball, inside the V-shaped cut. This is the outer core layer of the earth.
- After the yellow play dough, orange dough is used as the inner core.
- Brown play dough is finally stuck on the outer surface of the V-shaped cut. This layer is the mantle layer of our 3D model.
- Lastly, different colors of play dough are used to make the continents. These continents are attached to the outer surface.
- Finally, our layers of the earth model for kids is ready for a demo.
layers of the earth for kids Video :
For a better demonstration of how to prepare this project, we have embedded our working process below in video format.
Here is the full process of making this school science project, layers of the earth for kids, in video form. The video is on our YouTube channel, DIY Projects. We have also created many other school science projects on our channel, and we provide many science fair ideas for school students.
School science projects layers of the earth for kids :
A working model of the earth's layers is a great science project. It can be made at home or in the classroom. A 3D model of the earth's layers is also a good kids' science project idea. There are four layers of the earth according to our textbook.
The crust is the outermost layer of the earth. This is the layer in which we live. The thickness of the crust is said to be from 8 km to 32 km. It consists of all the oceans, cultivable land, mountains, etc.
The mantle is the second layer of the earth. This is the thickest layer of the earth, and it is a hard, rocky part. There is no possibility of living things in this layer of the earth.
Outer Core :
The outer core is the third layer of the earth. Its thickness is said to be about 2,200 km. This layer is very hot. It contains molten nickel and iron.
Inner Core :
The inner core is the fourth layer of the earth. It is the innermost layer and the hottest of all the layers. Its thickness is about 1,250 km.
We can also divide the earth into 5 different layers: the lithosphere, asthenosphere, mesosphere, outer core and inner core.
What is the lithosphere ?
The lithosphere is the outermost layer of the earth. It consists of the crust and the uppermost part of the mantle. This is the rocky part, and it also includes the soil.
What is the asthenosphere ?
The asthenosphere is the weak layer of the upper mantle. It is in a semi-molten state.
Advantages of Working model of earth layers :
There are lots of advantages to this working model of the earth's layers. We have mentioned some of the merits below:
- We gain knowledge about the concept of the earth's layers.
- The 3D model helps us learn about the different layers of the earth.
- We can better understand, and more easily explain, the earth's layers.
- This science project can be a great science fair idea for kids.
- It also gives us practice making different shapes with play dough.
Safety tips while making layers of the earth for kids :
Our first priority before doing any science project must always be safety. We always suggest that you protect yourself while performing any science project.
- Always wear safety glasses to protect your eyes.
- Handle the knife carefully; otherwise it might cut your hand.
- Do this science project with your parents, teachers or an older brother or sister.
Alternative process of making 3d model of the earth layer :
There are lots of ways to make a working model of the earth's layers as a school science project. We can also make the 3D model of the earth's layers using thermocol. For this, simply use a thermocol sphere and paint the different layers of the earth on it.
A working model of the earth's layers can also be made using clay.
Some school science projects for kids :
- How To make Hydraulic Jack
- How To Make a Wheel And Axle
- School Science project Lungs Model
- Floating Egg Experiment
- Balloon Rocket Experiment
- How To make Compass At Home
- Pin Wheel Science Experiment |
Different techniques and resources to make easy raised-line drawings for students who are blind and visually impaired.
Guidelines to make science and math graphs accessible to students who are blind or visually impaired
Suggestions for fall themed sensory trays for students who are blind or visually impaired.
Overview of the SOFIA space mission and making astronomy more accessible to students who are blind or visually impaired
This interactive model allows students to tactually balance a simple chemical equation.
This activity provides students with a hands-on exercise that relies on shapes rather than the ability to read letters in the study of genetics.
Activities for students with severe multiple disabilities to learn about and explore leaves
This activity helps students to understand the adding and removal of thermal energy, during an introductory chemistry course.
In the world of physics, some properties don't always behave like we think they should. A liquid should flow and fill a container; a solid should not move.
This review is appropriate for students after they have both learned the AZER model and the APH Periodic Table.
In this lab, students use both critical thinking and math skills as they measure volume using tools adapted for visual impairment.
This simple activity allows students to display their knowledge of basic concepts related to the atom using the APH model.
This activity provides a hands-on experience for students to better understand the concept of the "atom". It is based on the meaning of the word atom.
This simple activity uses a calendar to introduce the rows (periods) and columns (groups) of the Periodic Table.
Practical instructional tips to support English Language Arts skills in science class with students who are blind or who have low vision.
A guide to making the APH Periodic Table Reference Booklet easier to use for students who are blind or visually impaired.
This is an ongoing opportunity for students to ask, WHY? Students both post and answer questions posted by other students.
Hydroponic systems grow food when the resources needed, such as rain and rich soil, do not exist. In this activity, students build a basic hydroponics system.
Is there a connection between the sense of smell and the sense of taste? In this activity, students will discover that there is.
This assignment came about in order to encourage my students to consider the work of scientists and mathematicians who are visually impaired.
Suggestions for teaching science to students who are deafblind
In this activity, students identify sedimentary and metamorphic rock based on the feel of the rock.
Tips for adapting rulers for students who are blind or visually impaired.
In this short activity, students who are blind or visually impaired become the players (products and reactants) in cellular respiration.
Many students are not familiar with the Table of Contents. This activity supports both students' interest in learning new science concepts and literacy skills.
Students of Biology observe movement of water into and out of an egg to represent osmosis.
Every child can use the engineering design process to help something or someone! In this activity, a plant that is very droopy needs some help!
Before using engineering and technology skills to design and build projects, students benefit from experience working with a variety of materials and fasteners.
This outdoor activity is a great way to introduce the study of the Gas Laws!
Tips on making tactile graphic organizers accessible to students who are blind or visually impaired
In this activity, students explore how the development of seed pods protect the seeds and ensure the survival of the plant species.
Discussion of the values of field testing products before using them with students who are blind or visually impaired.
Students build a model of each stage of mitosis using simple materials. This activity can also modified to build a model of meiosis.
A simple activity designed to teach a student who is deafblind about the layers of the Earth. Active learning is encouraged through the building of a model.
Introduction to Soundscapes as part of the science curriculum for students who are blind or visually impaired
An entry about students who are blind or visually impaired participating in a Science Camp in Taiwan.
A short activity for students who are blind or visually impaired about the difference between learned and inherited traits.
Science experiment for students who are blind or visually impaired to study how smell affects heart rate and stress.
Learn about activities at a science camp in Taiwan for students who are blind or visually impaired
Explore which type of soda will cause the biggest explosion with Mentos with this hands-on science experiment for students who are blind or visually impaired.
An experiment for students who are blind or visually impaired to using balloons to determine if different vinegars react differently with baking soda.
Description of science program in Taiwan for children who are blind or visually impaired
Science project done by a student with visual impairments to explore what materials affect the magnetic field.
Science project testing whether students reading print or braille could memorize more non-consecutive numbers.
Activities to teach concepts involved in flying to children who are blind or deafblind.
A science project by a student who is blind to see if people with functional vision can be outranked by those with none when playing audio games.
Hands-on activity for students who are blind or visually impaired to explore what kind of music lowers heart rate.
Science project done by a student who is visually impaired to explore how wingspan affects flight distance.
Science experiment to see what kind of chocolate melts the fastest.
Science project asking "What is the most common fingerprint on humans?", done by a student at TSBVI. |
The proportions and expressions of the human face are important to identify origin, emotional tendencies, health qualities, and some social information. From birth, faces are important in the individual's social interaction. Face perception is very complex, as the recognition of facial expressions involves extensive and diverse areas in the brain. Sometimes, damage to parts of the brain can cause specific impairments in understanding faces, a condition known as prosopagnosia.
- 1 Development
- 2 Adult
- 3 Face advantage in memory recall
- 4 Ethnicity
- 5 In individuals with autism spectrum disorder
- 6 Artificial
- 7 See also
- 8 Further reading
- 9 References
- 10 External links
From birth, infants possess rudimentary facial processing capacities. Infants as young as two days of age are capable of mimicking the facial expressions of an adult, displaying their capacity to note details like mouth and eye shape as well as to move their own muscles in a way that produces similar patterns in their faces [this claim is highly controversial; it certainly should not be taken as fact]. However, despite this ability, newborns are not yet aware of the emotional content encoded within facial expressions. Five-month-olds, when presented with an image of a person making a fearful expression and a person making a happy expression, pay the same amount of attention to and exhibit similar event-related potentials for both. However, when seven-month-olds are given the same treatment, they focus more on the fearful face, and their event-related potential for the scared face shows a stronger initial negative central component than that for the happy face. This result indicates an increased attentional and cognitive focus toward fear that reflects the threat-salient nature of the emotion. In addition, infants' negative central components did not differ for new faces that varied in the intensity of an emotional expression but portrayed the same emotion as a face they had been habituated to; they were, however, stronger for different-emotion faces, showing that seven-month-olds regarded happy and sad faces as distinct emotive categories.
The recognition of faces is an important neurological mechanism that an individual uses every day. Jeffrey and Rhodes said that faces "convey a wealth of information that we use to guide our social interactions." Emotions play a large role in our social interactions. The perception of a positive or negative emotion on a face affects the way that an individual perceives and processes that face. For example, a face that is perceived to have a negative emotion is processed in a less holistic manner than a face displaying a positive emotion. The ability of face recognition is apparent even in early childhood. The neurological mechanisms responsible for face recognition are present by age five. Research shows that the way children process faces is similar to that of adults, but adults process faces more efficiently. The reason for this may be because of advancements in memory and cognitive functioning that occur with age.
Infants are able to comprehend facial expressions as social cues representing the feelings of other people before they are a year old. At seven months, the object of an observed face’s apparent emotional reaction is relevant in processing the face. Infants at this age show greater negative central components to angry faces that are looking directly at them than elsewhere, although the direction of fearful faces’ gaze produces no difference. In addition, two ERP components in the posterior part of the brain are differently aroused by the two negative expressions tested. These results indicate that infants at this age can at least partially understand the higher level of threat from anger directed at them as compared to anger directed elsewhere. By at least seven months of age, infants are also able to use facial expressions to understand others' behavior. Seven-month olds will look to facial cues to understand the motives of other people in ambiguous situations, as shown by a study in which they watched an experimenter’s face longer if she took a toy from them and maintained a neutral expression than if she made a happy expression. Interest in the social world is increased by interaction with the physical environment. Training three-month-old infants to reach for objects with Velcro-covered “sticky mitts” increases the amount of attention that they pay to faces as compared to passively moving objects through their hands and non-trained control groups.
In following with the notion that seven-month-olds have categorical understandings of emotion, they are also capable of associating emotional prosodies with corresponding facial expressions. When presented with a happy or angry face, shortly followed by an emotionally-neutral word read in a happy or angry tone, their ERPs follow different patterns. Happy faces followed by angry vocal tones produce more changes than the other incongruous pairing, while there was no such difference between happy and angry congruous pairings, with the greater reaction implying that infants held greater expectations of a happy vocal tone after seeing a happy face than an angry tone following an angry face. Considering an infant’s relative immobility and thus their decreased capacity to elicit negative reactions from their parents, this result implies that experience has a role in building comprehension of facial expressions.
Several other studies indicate that early perceptual experience is crucial to the development of capacities characteristic of adult visual perception, including the ability to identify familiar people and to recognize and comprehend facial expressions. The capacity to discern between faces, much like language, appears to have a broad potential in early life that is whittled down to kinds of faces that are experienced in early life. Infants can discern between macaque faces at six months of age, but, without continued exposure, cannot at nine months of age. Being shown photographs of macaques during this three-month period gave nine-month-olds the ability to reliably distinguish between unfamiliar macaque faces.
The neural substrates of face perception in infants are likely similar to those of adults, but the limits of imaging technology that are feasible for use with infants currently prevent very specific localization of function as well as specific information from subcortical areas like the amygdala, which is active in the perception of facial expression in adults. In a study on healthy adults, it was shown that faces are likely to be processed, in part, via a retinotectal (subcortical) pathway.
However, there is activity near the fusiform gyrus, as well as in occipital areas, when infants are exposed to faces, and it varies depending on factors including facial expression and eye gaze direction.
Theories about the processes involved in adult face perception have largely come from two sources: research on normal adult face perception and the study of impairments in face perception that are caused by brain injury or neurological illness. Novel optical illusions such as the Flashed Face Distortion Effect, in which scientific phenomenology outpaces neurological theory, also provide areas for research.
One of the most widely accepted theories of face perception argues that understanding faces involves several stages: from basic perceptual manipulations on the sensory information to derive details about the person (such as age, gender or attractiveness), to being able to recall meaningful details such as their name and any relevant past experiences of the individual.
This model (developed by psychologists Vicki Bruce and Andrew Young) argues that face perception might involve several independent sub-processes working in unison.
- A "view centered description" is derived from the perceptual input. Simple physical aspects of the face are used to work out age, gender or basic facial expressions. Most analysis at this stage is on a feature-by-feature basis.
- That initial information is used to create a structural model of the face, which allows it to be compared to other faces in memory, and across views. This explains why the same person seen from a novel angle can still be recognized. This structural encoding can be seen to be specific for upright faces, as demonstrated by the Thatcher effect.
- The structurally encoded representation is transferred to notional "face recognition units" that are used with "personal identity nodes" to identify a person through information from semantic memory. The natural ability to produce someone's name when presented with their face has been shown in experimental research to be damaged in some cases of brain injury, suggesting that naming may be a separate process from the memory of other information about a person.
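The staged structure of this model lends itself to a simple illustration. The following is a minimal, hypothetical Python sketch of the stages just listed; every class and function name is invented for illustration and does not correspond to any published implementation of the Bruce and Young model.

```python
from dataclasses import dataclass
from typing import Dict, Optional

# Minimal, hypothetical sketch of the Bruce and Young stage model.
# All names below are invented for illustration only.

@dataclass
class ViewCenteredDescription:
    """View-dependent percept: simple physical attributes of the face."""
    apparent_age: int
    apparent_gender: str
    expression: str

@dataclass(frozen=True)
class StructuralCode:
    """View-independent structural description compared across views."""
    signature: tuple  # stand-in for configural measurements of the face

@dataclass
class PersonIdentityNode:
    """Semantic knowledge about a known person; naming is a further step."""
    name: str
    occupation: str

# Hypothetical long-term stores linking structural codes to identities.
FACE_RECOGNITION_UNITS: Dict[StructuralCode, str] = {}
IDENTITY_NODES: Dict[str, PersonIdentityNode] = {}

def encode_structure(view: ViewCenteredDescription) -> StructuralCode:
    # Structural encoding abstracts away viewpoint; here a couple of raw
    # attributes stand in for genuine configural information.
    return StructuralCode(signature=(view.expression, view.apparent_gender))

def recognize(view: ViewCenteredDescription) -> Optional[PersonIdentityNode]:
    # Match against stored face recognition units, then follow the link to
    # a person identity node holding semantic information about the person.
    code = encode_structure(view)
    person_id = FACE_RECOGNITION_UNITS.get(code)
    return IDENTITY_NODES.get(person_id) if person_id else None
```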
The study of prosopagnosia (an impairment in recognizing faces which is usually caused by brain injury) has been particularly helpful in understanding how normal face perception might work. Individuals with prosopagnosia may differ in their abilities to understand faces, and it has been the investigation of these differences that has suggested that theories positing several stages might be correct.
Face perception is an ability that involves many areas of the brain; however, some areas have been shown to be particularly important. Brain imaging studies typically show a great deal of activity in an area of the temporal lobe known as the fusiform gyrus, an area also known to cause prosopagnosia when damaged (particularly when damage occurs on both sides). This evidence has led to a particular interest in this area and it is sometimes referred to as the fusiform face area for that reason.
Neuroanatomy of facial processing
There are several parts of the brain that play a role in face perception. Rossion, Hanseeuw, and Dricot used BOLD fMRI mapping to identify activation in the brain when subjects viewed both cars and faces. The majority of such studies use blood oxygen level dependent (BOLD) contrast to determine which areas of the brain are activated by various cognitive functions. They found that the occipital face area, located in the occipital lobe, the fusiform face area, the superior temporal sulcus, the amygdala, and the anterior/inferior cortex of the temporal lobe all played roles in contrasting the faces from the cars, with initial face perception beginning in the fusiform face area and the occipital face area. This entire region links to form a network that acts to distinguish faces. The processing of faces in the brain is known as a "sum of parts" perception; however, the individual parts of the face must be processed first in order to put all of the pieces together. In early processing, the occipital face area contributes to face perception by recognizing the eyes, nose, and mouth as individual pieces. Arcurio, Gold, and James used BOLD fMRI mapping to determine the patterns of activation in the brain when parts of the face were presented in combination and when they were presented singly. The occipital face area is activated by the visual perception of single features of the face, for example, the nose and mouth, and prefers the combination of two eyes over other combinations. This research supports the idea that the occipital face area recognizes the parts of the face at the early stages of recognition. The fusiform face area, on the contrary, shows no preference for single features, because it is responsible for "holistic/configural" information, meaning that it puts all of the processed pieces of the face together in later processing. This theory is supported by the work of Gold et al., who found that, regardless of the orientation of a face, subjects were influenced by the configuration of the individual facial features. Subjects were also influenced by the coding of the relationships between those features. This shows that processing is done by a summation of the parts in the later stages of recognition.
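A BOLD contrast of the kind these studies rely on reduces, at its simplest, to comparing a voxel's responses across conditions. The toy sketch below runs on simulated data and is not the cited authors' pipeline; it only computes a voxelwise faces-versus-cars t statistic, the kind of map from which face-selective regions such as the FFA and OFA are read off.

```python
import numpy as np

# Toy sketch on simulated data (not the cited authors' pipeline):
# a voxelwise faces-versus-cars contrast on hypothetical single-trial
# BOLD response amplitudes.
rng = np.random.default_rng(0)
n_trials, n_voxels = 40, 1000
faces = rng.normal(0.6, 1.0, size=(n_trials, n_voxels))  # simulated face trials
cars = rng.normal(0.4, 1.0, size=(n_trials, n_voxels))   # simulated car trials

# Voxelwise two-sample t statistic: large positive values mark voxels that
# respond more to faces than to cars (as reported for the FFA and OFA).
mean_diff = faces.mean(axis=0) - cars.mean(axis=0)
pooled_se = np.sqrt(faces.var(axis=0, ddof=1) / n_trials +
                    cars.var(axis=0, ddof=1) / n_trials)
t_map = mean_diff / pooled_se
face_selective = np.flatnonzero(t_map > 3.0)  # crude illustrative threshold
print(f"{face_selective.size} of {n_voxels} simulated voxels pass the threshold")
```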
Facial perception has well-identified neuroanatomical correlates in the brain. During the perception of faces, major activations occur in the extrastriate areas bilaterally, particularly in the fusiform face area (FFA), the occipital face area (OFA), and the superior temporal sulcus (fSTS).
The FFA is located in the lateral fusiform gyrus. It is thought that this area is involved in holistic processing of faces and that it is sensitive to the presence of facial parts as well as the configuration of these parts. The FFA is also necessary for successful face detection and identification. This is supported by fMRI activation studies and by studies of prosopagnosia, which involves lesions in the FFA.
The OFA is located in the inferior occipital gyrus. Similar to the FFA, this area is also active during successful face detection and identification, a finding that is supported by fMRI activation. The OFA is involved in, and necessary for, the analysis of facial parts but not the spacing or configuration of facial parts. This suggests that the OFA may be involved in a facial processing step that occurs prior to FFA processing.
The fSTS is involved in recognition of facial parts and is not sensitive to the configuration of these parts. It is also thought that this area is involved in gaze perception. The fSTS has demonstrated increased activation when attending to gaze direction.
Bilateral activation is generally shown in all of these specialized facial areas. However, some studies report increased activation in one side over the other. For instance, McCarthy and colleagues (1997) showed that the right fusiform gyrus is more important for facial processing in complex situations.
Gorno-Tempini and Price have shown that the fusiform gyri are preferentially responsive to faces, whereas the parahippocampal/lingual gyri are responsive to buildings.
It is important to note that while certain areas respond selectively to faces, facial processing involves many neural networks, including visual and emotional processing systems. Research on emotional face processing has demonstrated that some of these other functions are at work. When looking at faces displaying emotions (especially those with fearful expressions), compared to neutral faces, there is increased activity in the right fusiform gyrus. This increased activity also correlates with increased amygdala activity in the same situations. The emotional processing effects observed in the fusiform gyrus are decreased in patients with amygdala lesions. This demonstrates possible connections between the amygdala and facial processing areas.
Another aspect that affects both the fusiform gyrus and the amygdala activation is the familiarity of faces. Having multiple regions that can be activated by similar face components indicates that facial processing is a complex process.
Ishai and colleagues have proposed the object form topology hypothesis, which posits that there is a topological organization of neural substrates for object and facial processing. However, Gauthier disagrees and suggests that the category-specific and process-map models could accommodate most other proposed models for the neural underpinnings of facial processing.
Most neuroanatomical substrates for facial processing are perfused by the middle cerebral artery (MCA). Therefore, facial processing has been studied using measurements of mean cerebral blood flow velocity in the middle cerebral arteries bilaterally. During facial recognition tasks, greater changes in the right middle cerebral artery (RMCA) than the left (LMCA) have been observed. It has been demonstrated that men were right lateralized and women left lateralized during facial processing tasks.
Just as memory and cognitive function separate the abilities of children and adults to recognize faces, the familiarity of a face may also play a role in face perception. Zheng, Mondloch, and Segalowitz recorded event-related potentials in the brain to determine the timing of face recognition. The results of the study showed that familiar faces elicit a stronger N250, a specific waveform response that plays a role in the visual memory of faces. Similarly, Moulson et al. found that all faces elicit the N170 response in the brain.
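Components such as the N170 and N250 are typically quantified as mean amplitudes within fixed latency windows of the averaged ERP. The sketch below, on simulated single-trial data, shows only that generic computation; the window boundaries are rough conventions assumed for illustration, not the parameters used in the cited studies.

```python
import numpy as np

# Illustrative sketch on simulated data: measuring ERP component amplitudes
# in fixed latency windows, as in N170/N250 analyses.
rng = np.random.default_rng(1)
sfreq = 500                                 # samples per second
times = np.arange(-0.1, 0.6, 1 / sfreq)     # epoch from -100 ms to 600 ms

def mean_amplitude(epochs: np.ndarray, window: tuple) -> float:
    """Average voltage across trials and samples within a latency window (s)."""
    mask = (times >= window[0]) & (times <= window[1])
    return float(epochs[:, mask].mean())

familiar = rng.normal(0.0, 1.0, size=(100, times.size))    # simulated trials
unfamiliar = rng.normal(0.0, 1.0, size=(100, times.size))

# Assumed windows: N170 roughly 130-200 ms, N250 roughly 230-330 ms.
for label, window in [("N170", (0.13, 0.20)), ("N250", (0.23, 0.33))]:
    fam = mean_amplitude(familiar, window)
    unfam = mean_amplitude(unfamiliar, window)
    print(f"{label}: familiar {fam:.2f} vs unfamiliar {unfam:.2f} (arbitrary units)")
```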
Hemispheric asymmetries in facial processing capability
The mechanisms underlying gender-related differences in facial processing have not been studied extensively.
Studies using electrophysiological techniques have demonstrated gender-related differences during a face recognition memory (FRM) task and a facial affect identification task (FAIT). The male subjects used a right, while the female subjects used a left, hemisphere neural activation system in the processing of faces and facial affect. Moreover, in facial perception there was no association with estimated intelligence, suggesting that face recognition performance in women is unrelated to several basic cognitive processes. Gender-related differences may suggest a role for sex hormones. In females there may be variability in psychological functions related to differences in hormonal levels during different phases of the menstrual cycle.
Data obtained from normal and pathological populations support asymmetric face processing. Gorno-Tempini and others, in 2001, suggested that the left inferior frontal cortex and the bilateral occipitotemporal junction respond equally to all face conditions. Some neuroscientists contend that both the left inferior frontal cortex (Brodmann area 47) and the occipitotemporal junction are implicated in facial memory. The right inferior temporal/fusiform gyrus responds selectively to faces but not to non-faces. The right temporal pole is activated during the discrimination of familiar faces and scenes from unfamiliar ones. Right asymmetry in the mid temporal lobe for faces has also been shown using 133-Xenon measurements of cerebral blood flow (CBF). Other investigators have observed right lateralization for facial recognition in previous electrophysiological and imaging studies.
The implication of the observation of asymmetry for facial perception is that different hemispheric strategies are implemented. The right hemisphere would be expected to employ a holistic strategy, and the left an analytic strategy. In 2007, Philip Njemanze, using a novel functional transcranial Doppler (fTCD) technique called functional transcranial Doppler spectroscopy (fTCDS), demonstrated that men were right lateralized for object and facial perception, while women were left lateralized for facial tasks but showed a right tendency or no lateralization for object perception. Using fTCDS, Njemanze demonstrated summation of responses related to facial stimulus complexity, which could be presumed to be evidence for topological organization of these cortical areas in men. It may suggest that the latter extends from the area implicated in object perception to a much greater area involved in facial perception.
This agrees with the object form topology hypothesis proposed by Ishai and colleagues in 1999. However, the relatedness of object and facial perception was process based, and appears to be associated with their common holistic processing strategy in the right hemisphere. Moreover, when the same men were presented with facial paradigm requiring analytic processing, the left hemisphere was activated. This agrees with the suggestion made by Gauthier in 2000, that the extrastriate cortex contains areas that are best suited for different computations, and described as the process-map model. Therefore, the proposed models are not mutually exclusive, and this underscores the fact that facial processing does not impose any new constraints on the brain other than those used for other stimuli.
It may be suggested that each stimulus was mapped by category into face or non-face, and by process into holistic or analytic. Therefore, a unified category-specific process-mapping system was implemented for either right or left cognitive styles. Njemanze, in 2007, concluded that, for facial perception, men used a category-specific process-mapping system for the right cognitive style, whereas women used the same for the left.
Face-specific recognition theories
Cognitive neuroscientists Isabel Gauthier and Michael Tarr are two of the major proponents of the view that face recognition involves expert discrimination of similar objects (see the Perceptual Expertise Network). Other scientists, in particular Nancy Kanwisher and her colleagues, argue that face recognition involves processes that are face-specific and that are not recruited by expert discriminations in other object classes (see domain specificity).
Studies by Gauthier have shown that an area of the brain known as the fusiform gyrus (sometimes called the "fusiform face area" (FFA) because it is active during face recognition) is also active when study participants are asked to discriminate between different types of birds and cars, and even when participants become expert at distinguishing computer-generated nonsense shapes known as greebles. This suggests that the fusiform gyrus may have a general role in the recognition of similar visual objects. Yaoda Xu, then a postdoctoral fellow with Nancy Kanwisher, replicated the car and bird expertise study using an improved fMRI design that was less susceptible to attentional accounts.
The activity found by Gauthier when participants viewed non-face objects was not as strong as when participants were viewing faces; however, this could be because we have much more expertise for faces than for most other objects. Furthermore, not all findings of this research have been successfully replicated; for example, other research groups using different study designs have found that the fusiform gyrus is specific to faces and that other nearby regions deal with non-face objects.
However, these failures to replicate are difficult to interpret, because studies vary on too many aspects of the method. It has been argued that some studies test experts with objects that are slightly outside of their domain of expertise. More to the point, failures to replicate are null effects and can occur for many different reasons. In contrast, each replication adds a great deal of weight to a particular argument. With regard to "face specific" effects in neuroimaging, there are now multiple replications with Greebles, with birds and cars, and two unpublished studies with chess experts.
Although it is sometimes found that expertise recruits the FFA (e.g. as hypothesized by a proponent of this view in the preceding paragraph), a more common and less controversial finding is that expertise leads to focal category-selectivity in the fusiform gyrus, a pattern similar in terms of antecedent factors and neural specificity to that seen for faces. As such, it remains an open question whether face recognition and expert-level object recognition recruit similar neural mechanisms across different subregions of the fusiform or whether the two domains literally share the same neural substrates. Moreover, at least one study argues that the issue of whether expertise-predicated category-selective areas overlap with the FFA is nonsensical, in that multiple measurements of the FFA within an individual person often overlap no more with each other than do measurements of the FFA and expertise-predicated regions. At the same time, numerous studies have failed to replicate these expertise effects altogether. For example, four published fMRI studies have asked whether expertise has any specific connection to the FFA in particular, by testing for expertise effects in both the FFA and a nearby but not face-selective region called the LOC (Rhodes et al., JOCN 2004; Op de Beeck et al., JN 2006; Moore et al., JN 2006; Yue et al., VR 2006). In all four studies, expertise effects were significantly stronger in the LOC than in the FFA; indeed, expertise effects were only borderline significant in the FFA in two of the studies, while the effects were robust and significant in the LOC in all four studies.
Therefore, it is still not clear in exactly which situations the fusiform gyrus becomes active, although it is certain that face recognition relies heavily on this area and damage to it can lead to severe face recognition impairment.
Face advantage in memory recall
During face perception, neural networks in the brain make connections to recall memories. According to the seminal model of face perception, there are three stages of face processing: recognition of the face, recall of memories and information linked with that face, and finally name recall. There are, however, exceptions to this order. For example, names are recalled faster than semantic information in cases of highly familiar stimuli. While the face is a powerful identifier of individuals, the voice also helps in the recognition of people and is an identifier for important information.
Research has been conducted to see whether faces or voices make it easier to identify individuals and to recall semantic memory and episodic memory. These experiments look at all three stages of face processing. The experimental method was to show two groups celebrity and familiar faces or voices, with a between-group design, and ask the participants to recall information about them. The participants are first asked if the stimulus is familiar. If they answer yes, then they are asked to recall information (semantic memory) and memories they have of the person (episodic memory) that fit the face or voice presented. These experiments all demonstrate the strong phenomenon of the face advantage and how it persists through different follow-up studies with different experimental controls and variables.
After the first experiments on the advantage of faces over voices in memory recall, errors and gaps were found in the methods used. For one, there was not a clear face advantage for the recognition stage of face processing. Participants showed a familiarity-only response to voices more often than to faces. In other words, voices were recognized (about 60-70% of the time), but it was much harder to recall biographical information from them. The results were interpreted as remember versus know judgements: many more familiarity-only ("know") responses occurred with voices, and more recollection ("remember") responses occurred with faces. This phenomenon persists through experiments dealing with criminal line-ups in prisons. Witnesses are more likely to say that a suspect's voice sounded familiar than his or her face, even though they cannot remember anything about the suspect. This discrepancy is due to a larger amount of guesswork and false alarms that occur with voices.
To give faces a similar ambiguity to that of voices, the face stimuli were blurred in the follow-up experiment. This experiment followed the same procedures as the first, presenting two groups with sets of stimuli made up of half celebrity faces and half unfamiliar faces. The only difference was that the face stimuli were blurred so that detailed features could not be seen. Participants were then asked to say if they recognized the person, if they could recall specific biographical information about them, and finally if they knew the person's name. The results were completely different from those of the original experiment, supporting the view that there were problems in the first experiment's methods. According to the results of the follow-up, the same amount of information and memory could be recalled through voices and faces, dismantling the face advantage. However, these results are flawed and premature because other methodological issues in the experiment still needed to be fixed.
Content of speech
The process of controlling the content of speech extracts has proven to be more difficult than the elimination of non-facial cues in photographs. Thus the findings of experiments that did not control this factor led to misleading conclusions regarding voice recognition relative to face recognition. For example, in one experiment it was found that 40% of the time participants could easily pair a celebrity voice with the celebrity's occupation just by guessing. In order to eliminate these errors, experimenters removed parts of the voice samples that could possibly give clues to the identity of the target, such as catchphrases. Even after controlling the voice samples as well as the face samples (using blurred faces), studies have shown that semantic information is more accessible to retrieve when individuals are recognizing faces than voices.
Another technique to control the content of the speech extracts is to present the faces and voices of personally familiar individuals, such as the participant's teachers or neighbors, instead of the faces and voices of celebrities. In this way, similar words are used for the speech extracts; for example, the familiar targets are asked to read exactly the same scripted speech for their voice extracts. The results showed again that semantic information is easier to retrieve when individuals are recognizing faces than voices.
Another factor that has to be controlled in order for the results to be reliable is the frequency of exposure. If we take the example of celebrities, people are exposed to celebrities' faces more often than to their voices because of the mass media. Through magazines, newspapers and the Internet, individuals are exposed to celebrities' faces without their voices on an everyday basis, rather than to their voices without their faces. Thus, someone could argue that for all of the experiments done so far, the findings were a result of the frequency of exposure to the faces of celebrities rather than to their voices.
To overcome this problem, researchers decided to use personally familiar individuals as stimuli instead of celebrities. Personally familiar individuals, such as participants' teachers, are for the most part heard as well as seen. Studies that used this type of control also demonstrated the face advantage. Students were able to retrieve semantic information more readily when recognizing their teachers' faces (both normal and blurred) rather than their voices.
However, researchers over the years have found an even more effective way to control not only the frequency of exposure but also the content of the speech extracts: the associative learning paradigm. Participants are asked to link semantic information as well as names with pre-experimentally unknown voices and faces. In a recent experiment that used this paradigm, a name and a profession were given together with, accordingly, a voice, a face or both to three participant groups. The associations were repeated four times. The next step was a cued recall task in which every stimulus learned in the previous phase was presented and participants were asked to give the profession and the name for every stimulus. Again, the results showed that semantic information is more accessible to retrieve when individuals are recognizing faces than voices, even when the frequency of exposure was controlled.
Extension to episodic memory and explanation for existence
Episodic memory is our ability to remember specific, previously experienced events. In recognition of faces as it pertains to episodic memory, there has been shown to be activation in the left lateral prefrontal cortex, the parietal lobe, and the left medial frontal/anterior cingulate cortex. It was also found that left lateralization during episodic memory retrieval in the parietal cortex correlated strongly with success in retrieval. This may possibly be because the link between face recognition and episodic memory is stronger than that between voice and episodic memory. This hypothesis can also be supported by the existence of specialized face recognition devices thought to be located in the temporal lobes. There is also evidence of the existence of two separate neural systems for face recognition: one for familiar faces and another for newly learned faces. One explanation for this link between face recognition and episodic memory is that, since face recognition is a major part of human existence, the brain creates a link between the two in order to be better able to communicate with others.
Differences in own- versus other-race face recognition and perceptual discrimination were first researched in 1914. Humans tend to perceive people of races other than their own as all looking alike:
Other things being equal, individuals of a given race are distinguishable from each other in proportion to our familiarity, to our contact with the race as a whole. Thus, to the uninitiated American all Asiatics look alike, while to the Asiatics, all White men look alike.
This phenomenon is known as the cross-race effect, own-race effect, other-race effect, own-race bias or interracial-face-recognition deficit. The effect is observed as early as 170 ms after stimulus onset, in the N170 brain response to faces.
In a meta-analysis, Mullen found evidence that the other-race effect is larger among White subjects than among African American subjects, whereas Brigham and Williamson (1979, cited in Shepherd, 1981) obtained the opposite pattern. Shepherd also reviewed studies that found a main effect of face race like that of the present study, with better performance on White faces, other studies in which no difference was found, and yet other studies in which performance was better on African American faces. Overall, Shepherd reports a reliable positive correlation between the size of the effect of target race (indexed by the difference in proportion correct on same- and other-race faces) and self-ratings of amount of interaction with members of the other race, r(30) = .57, p < .01. This correlation is at least partly an artifact of the fact that African American subjects, who performed equally well on faces of both races, almost always responded with the highest possible self-rating of amount of interaction with White people (M = 4.75), whereas their White counterparts both demonstrated an other-race effect and reported less other-race interaction (M = 2.13); the difference in ratings was reliable, t(30) = 7.86, p < .01.
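For readers unfamiliar with how such an index is formed, the following toy sketch (with entirely hypothetical ratings and accuracies; the computation, not the values, is the point) shows how a per-subject target-race effect and its correlation with self-rated other-race contact would be computed.

```python
import numpy as np

# Hypothetical data for 32 subjects: self-rated other-race contact (1-5)
# and proportion correct for same-race and other-race faces. The numbers
# are invented; only the computation is illustrated.
rng = np.random.default_rng(2)
n_subjects = 32
contact = rng.integers(1, 6, size=n_subjects).astype(float)
same_race_acc = rng.uniform(0.70, 0.95, size=n_subjects)
other_race_acc = rng.uniform(0.60, 0.95, size=n_subjects)

# "Effect of target race" index: same-race minus other-race proportion correct.
target_race_effect = same_race_acc - other_race_acc

# Pearson correlation with contact ratings, reported with n - 2 degrees of
# freedom as in the text above.
r = np.corrcoef(target_race_effect, contact)[0, 1]
print(f"r({n_subjects - 2}) = {r:.2f}")
```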
Further research points to the importance of other-race experience in own- versus other-race face processing (O'Toole et al., 1991; Slone et al., 2000; Walker & Tanaka, 2003). In a series of studies, Walker and colleagues showed the relationship between amount and type of other-race contact and the ability to perceptually differentiate other-race faces (Walker & Tanaka, 2003; Walker & Hewstone, 2006a,b; 2007). Participants with greater other-race experience were consistently more accurate at discriminating between other-race faces than were participants with less other-race experience.
In addition to other-race contact, there is a suggestion that the own-race effect is linked to increased ability to extract information about the spatial relationships between different features. Richard Ferraro writes that facial recognition is an example of a neuropsychological measure that can be used to assess cognitive abilities that are salient within African-American culture. Daniel T. Levin writes that the deficit occurs because people emphasize visual information specifying race at the expense of individuating information when recognizing faces of other races. Further research using perceptual tasks could shed light on the specific cognitive processes involved in the other-race effect. The question of whether the own-race effect can be overcome was already indirectly answered by Ekman & Friesen in 1976 and by Ducci, Arcuri, Georgis & Sineshaw in 1982. They observed that people from New Guinea and Ethiopia who had had prior contact with white people had a significantly better emotion recognition rate.
Studies on adults have also shown sex differences in face recognition. Men tend to recognize fewer faces of women than women do, whereas there are no sex differences with regard to male faces.
In individuals with autism spectrum disorder
Autism spectrum disorder (ASD) is a pervasive neurodevelopmental disorder that produces many deficits, including social, communicative, and perceptual deficits. Of specific interest, individuals with autism exhibit difficulties in various aspects of facial perception, including facial identity recognition and recognition of emotional expressions. These deficits are suspected to be a product of abnormalities occurring in both the early and late stages of facial processing.
Speed and methods
People with ASD process face and non-face stimuli at the same speed. In typically developing individuals, there is a preference for face processing, resulting in a faster processing speed in comparison to non-face stimuli. These individuals primarily utilize holistic processing when perceiving faces. By contrast, individuals with ASD employ part-based or bottom-up processing, focusing on individual features rather than the face as a whole. When focusing on the individual parts of the face, persons with ASD direct their gaze primarily to the lower half of the face, specifically the mouth, in contrast to the eye-directed gaze of typically developing people. This deviation from holistic face processing does not employ the use of facial prototypes, which are templates stored in memory that make for easy retrieval.
Additionally, individuals with ASD display difficulty with recognition memory, specifically memory that aids in identifying faces. The memory deficit is selective for faces and does not extend to other objects or visual inputs. Some evidence lends support to the theory that these face-memory deficits are products of interference between connections of face processing regions.
The atypical facial processing style of people with ASD often manifests in constrained social ability, due to decreased eye contact, joint attention, interpretation of emotional expression, and communicative skills. These deficiencies can be seen in infants as young as 9 months, specifically in terms of poor eye contact and difficulties engaging in joint attention. Some experts have even used the term 'face avoidance' to describe the phenomenon whereby infants who are later diagnosed with ASD preferentially attend to non-face objects over faces. Furthermore, some have proposed that the demonstrated impairment in the ability of children with ASD to grasp the emotional content of faces is not a reflection of an incapacity to process emotional information, but rather the result of a general inattentiveness to facial expression. The constraints of these processes, which are essential to the development of communicative and social-cognitive abilities, are viewed as the cause of impaired social engagement and responsivity. Furthermore, research suggests that there exists a link between decreased face processing abilities in individuals with ASD and later deficits in theory of mind; for example, while typically developing individuals are able to relate others' emotional expressions to their actions, individuals with ASD do not demonstrate this skill to the same extent.
There is some contention about this causation, however, resembling a chicken-or-egg dispute. Others theorize that social impairment leads to perceptual problems rather than vice versa. In this perspective, a biological lack of social interest inherent to ASD inhibits the development of facial recognition and perception processes due to underutilization. Continued research is necessary to determine which theory is best supported.
Many of the obstacles that individuals with ASD face in terms of facial processing may be derived from abnormalities in the fusiform face area and amygdala, which have been shown to be important in face perception as discussed above. Typically, the fusiform face area in individuals with ASD has reduced volume compared to normally developed persons. This volume reduction has been attributed to deviant amygdala activity that does not flag faces as emotionally salient and thus decreases activation levels of the fusiform face area. This hypoactivity in the fusiform face area has been found in several studies.
Studies are not conclusive as to which brain areas people with ASD use instead. One study found that, when looking at faces, people with ASD exhibit activity in brain regions normally active when typically developing individuals perceive objects. Another study found that during facial perception, people with ASD use different neural systems, with each one of them using their own unique neural circuitry.
As individuals with ASD age, scores on behavioral tests assessing ability to perform face-emotion recognition increase to levels similar to those of controls. Yet it is apparent that the recognition mechanisms of these individuals are still atypical, though often effective. In terms of face identity recognition, compensation can take many forms, including a more pattern-based strategy, which was first seen in face inversion tasks. Alternatively, evidence suggests that older individuals compensate by mimicking others' facial expressions and relying on motor feedback from their own facial muscles for face-emotion recognition. These strategies help overcome the obstacles individuals with ASD face in interacting within social contexts.
A great deal of effort has been put into developing software that can recognize human faces. Much of the work has been done by a branch of artificial intelligence known as computer vision, which uses findings from the psychology of face perception to inform software design. Recent work using noninvasive functional transcranial Doppler spectroscopy, as demonstrated by Njemanze (2007), to locate specific responses to facial stimuli has led to improved systems for facial recognition. The new system uses input responses called cortical long-term potentiation (CLTP), derived from Fourier analysis of mean blood flow velocity, to trigger a target face search from a computerized face database system. Such a system provides a brain-machine interface for facial recognition, and the method has been referred to as cognitive biometrics.
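The specifics of the CLTP feature are not given here, so the sketch below shows only the general idea on simulated data: Fourier analysis of a mean flow velocity trace, a band-power feature extracted from the spectrum, and a hypothetical threshold used to trigger a database search. The sampling rate, band limits, and threshold are all assumptions made purely for illustration, not Njemanze's algorithm.

```python
import numpy as np

# General-idea sketch only (not the published CLTP algorithm): Fourier
# analysis of a simulated mean blood flow velocity trace, followed by a
# band-power feature and a hypothetical trigger threshold.
rng = np.random.default_rng(3)
fs = 100                                    # sampling rate in Hz (assumed)
t = np.arange(0, 30, 1 / fs)                # 30 seconds of signal
velocity = 55 + 2 * np.sin(2 * np.pi * 0.1 * t) + rng.normal(0, 0.5, t.size)

spectrum = np.abs(np.fft.rfft(velocity - velocity.mean())) ** 2
freqs = np.fft.rfftfreq(velocity.size, d=1 / fs)

# Hypothetical feature: power in a low-frequency band of interest.
band = (freqs >= 0.05) & (freqs <= 0.15)
feature = spectrum[band].sum()

TRIGGER_THRESHOLD = 1e4                     # illustrative value only
if feature > TRIGGER_THRESHOLD:
    print("feature exceeds threshold: trigger a face-database search")
else:
    print("feature below threshold: no search triggered")
```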
Another interesting application is the estimation of human age from face images. As an important cue for human communication, facial images contain a great deal of useful information, including gender, expression, and age. Unfortunately, compared with other cognition problems, age estimation from facial images is still very challenging. This is mainly because the aging process is influenced not only by a person's genes but also by many external factors: physical condition, lifestyle, and so on may accelerate or slow the aging process. Moreover, since the aging process is slow and of long duration, collecting sufficient data for training is fairly demanding work.
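In practice, age estimation is usually framed as supervised regression from image-derived features to an age label. The minimal sketch below uses random stand-in features and a closed-form ridge fit purely to illustrate that framing; it is not any particular published system.

```python
import numpy as np

# Minimal sketch: age estimation framed as supervised regression on
# face-derived features. Features and labels are synthetic stand-ins;
# no real face-analysis pipeline is implied.
rng = np.random.default_rng(4)
n_images, n_features = 500, 64
X = rng.normal(size=(n_images, n_features))                 # stand-in features
true_weights = rng.normal(size=n_features)
ages = 30 + X @ true_weights + rng.normal(0, 3, n_images)   # synthetic labels

# Ridge-style closed-form fit: w = (X^T X + lambda * I)^{-1} X^T y
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ ages)
predicted = X @ w
mae = np.abs(predicted - ages).mean()
print(f"mean absolute error on the synthetic training set: {mae:.2f} years")
```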
- Capgras delusion
- Fregoli syndrome
- Cognitive neuropsychology
- Delusional misidentification syndrome
- Facial recognition system
- Prosopagnosia, or face blindness
- Recognition of human individuals
- Social cognition
- Thatcher effect
- The Greebles
- Pareidolia, perceiving faces in random objects and shapes
- Apophenia, seeing meaningful patterns in random data
- Hollow face illusion
- N170, an event-related potential associated with viewing faces
- Cross-race effect
- Bruce, V. and Young, A. (2000) In the Eye of the Beholder: The Science of Face Perception. Oxford: Oxford University Press. ISBN 0-19-852439-0
- Tiffany M. Field, Robert Woodson, Reena Greenberg, Debra Cohen (8 October 1982). "Discrimination and imitation of facial expressions by neonates". Science 218 (4568): 179–181. doi:10.1126/science.7123230. PMID 7123230.
- Mikko J. Peltola, Jukka M. Leppanen, Silja Maki & Jari K. Hietanen (June 2009). "Emergence of enhanced attention to fearful faces between 5 and 7 months of age". Social cognitive and affective neuroscience 4 (2): 134–142. doi:10.1093/scan/nsn046. PMC 2686224. PMID 19174536.
- Leppanen, Jukka; Richmond, Jenny; Vogel-Farley, Vanessa; Moulson, Margaret; Nelson, Charles (May 2009). "Categorical representation of facial expressions in the infant brain". Infancy : the official journal of the International Society on Infant Studies 14 (3): 346–362. doi:10.1080/15250000902839393. PMC 2954432. PMID 20953267.
- Jeffery, L.; Rhodes, G. (2011). "Insights into the development of face recognition mechanisms revealed by face after effects.". British Journal of Psychology 102 (4): 799–815. doi:10.1111/j.2044-8295.2011.02066.x.
- Curby, K.M.; Johnson, K.J., & Tyson A. (2012). "Face to face with emotion: Holistic face processing is modulated by emotional state". Cognition and Emotion 26 (1): 93–102. doi:10.1080/02699931.2011.555752.
- Stefanie Hoehl & Tricia Striano (November–December 2008). "Neural processing of eye gaze and threat-related emotional facial expressions in infancy". Child development 79 (6): 1752–1760. doi:10.1111/j.1467-8624.2008.01223.x. PMID 19037947.
- Tricia Striano & Amrisha Vaish (2010). "Seven- to 9-month-old infants use facial expressions to interpret others' actions". British Journal of Developmental Psychology 24 (4): 753–760. doi:10.1348/026151005X70319.
- Klaus Libertus & Amy Needham (November 2011). "Reaching experience increases face preference in 3-month-old infants". Developmental science 14 (6): 1355–1364. doi:10.1111/j.1467-7687.2011.01084.x. PMID 22010895.
- Tobias Grossmann, Tricia Striano & Angela D. Friederici (May 2006). "Crossmodal integration of emotional information from face and voice in the infant brain". Developmental science 9 (3): 309–315. doi:10.1111/j.1467-7687.2006.00494.x. PMID 16669802.
- Charles A. Nelson (March–June 2001). "The development and neural bases of face recognition". Infant and Child Development 10 (1–2): 3–18. doi:10.1002/icd.239.
- O. Pascalis, L. S. Scott, D. J. Kelly, R. W. Shannon, E. Nicholson, M. Coleman & C. A. Nelson (April 2005). "Plasticity of face processing in infancy". Proceedings of the National Academy of Sciences of the United States of America 102 (14): 5297–5300. doi:10.1073/pnas.0406627102. PMC 555965. PMID 15790676.
- Emi Nakato, Yumiko Otsuka, So Kanazawa, Masami K. Yamaguchi & Ryusuke Kakigi (January 2011). "Distinct differences in the pattern of hemodynamic response to happy and angry facial expressions in infants--a near-infrared spectroscopic study". NeuroImage 54 (2): 1600–1606. doi:10.1016/j.neuroimage.2010.09.021. PMID 20850548.
- Awasthi B, Friedman J, Williams, MA (2011). "Processing of low spatial frequency faces at periphery in choice reaching tasks". Neuropsychologia 49 (7): 2136–41. doi:10.1016/j.neuropsychologia.2011.03.003. PMID 21397615.
- Bruce V, Young A (August 1986). "Understanding face recognition". Br J Psychology 77 (Pt 3): 305–27. doi:10.1111/j.2044-8295.1986.tb02199.x. PMID 3756376.
- Kanwisher N, McDermott J, Chun MM (1 June 1997). "The fusiform face area: a module in human extrastriate cortex specialized for face perception". J. Neurosci. 17 (11): 4302–11. PMID 9151747.
- Rossion, B.; Hanseeuw, B.; Dricot, L. (2012). "Defining face perception areas in the human brain: A large scale factorial fMRI face localizer analysis.". Brain and Cognition 79 (2): 138–157. doi:10.1016/j.bandc.2012.01.001.
- Kannurpatti, S.S.; Rypma, B.; Biswal, B.B. (March 2012). "Prediction of task-related BOLD fMRI with amplitude signatures of resting-state fMRI". Frontiers in Systems Neuroscience 6: 1–7. doi:10.3389/fnsys.2012.00007.
- Gold, J.M.; Mundy, P.J.; Tjan, B.S. (2012). "The perception of a face is no more than the sum of its parts". Psychological Science 23 (4): 427–434. doi:10.1177/0956797611427407.
- Pitcher, D.; Walsh, V.; Duchaine, B. (2011). "The role of the occipital face area in the cortical face perception network". Experimental Brain Research 209 (4): 481–493. doi:10.1007/s00221-011-2579-1.
- Arcurio, L.R.; Gold, J.M.; James, T.W. (2012). "The response of face-selective cortex with single face parts and part combinations". Neuropsychologia 50 (10): 2454–2459. doi:10.1016/j.neuropsychologia.2012.06.016.
- Liu J, Harris A, Kanwisher N. (2010). Perception of face parts and face configurations: An fmri study. Journal of Cognitive Neuroscience. (1), 203–211.
- Rossion, B., Caldara, R., Seghier, M., Schuller, A-M., Lazeyras, F., Mayer, E., (2003). A network of occipito-temporal face-sensitive areas besides the right middle fusiform gyrus is necessary for normal face processing. A Journal of Neurology, 126 11 2381-2395
- McCarthy, G., Puce, A., Gore, J., Allison, T., (1997). Face-Specific Processing in the Human Fusiform Gyrus. Journal of Cognitive Neuroscience, 9 5 605-610
- Campbell, R., Heywood, C.A., Cowey, A., Regard, M., and Landis, T. (1990). Sensitivity to eye gaze in prosopagnosic patients and monkeys with superior temporal sulcus ablation" Neuropsychologia 28(11), 1123-1142
- Neural substrates of facial recognition 8 (2). 1996. pp. 139–46. PMID 9081548.
- Haxby JV, Horwitz B, Ungerleider LG, Maisog JM, Pietrini P, Grady CL (1 November 1994). "The functional organization of human extrastriate cortex: a PET-rCBF study of selective attention to faces and locations". J. Neurosci. 14 (11 Pt 1): 6336–53. PMID 7965040.
- Haxby JV, Ungerleider LG, Clark VP, Schouten JL, Hoffman EA, Martin A (January 1999). "The effect of face inversion on activity in human neural systems for face and object perception". Neuron 22 (1): 189–99. doi:10.1016/S0896-6273(00)80690-X. PMID 10027301.
- Puce A, Allison T, Asgari M, Gore JC, McCarthy G (15 August 1996). "Differential sensitivity of human visual cortex to faces, letterstrings, and textures: a functional magnetic resonance imaging study". J. Neurosci. 16 (16): 5205–15. PMID 8756449.
- Puce A, Allison T, Gore JC, McCarthy G (September 1995). "Face-sensitive regions in human extrastriate cortex studied by functional MRI". J. Neurophysiol. 74 (3): 1192–9. PMID 7500143.
- Sergent J, Ohta S, MacDonald B (February 1992). "Functional neuroanatomy of face and object processing. A positron emission tomography study". Brain 115 (Pt 1): 15–36. doi:10.1093/brain/115.1.15. PMID 1559150.
- Gorno-Tempini ML, Price CJ (October 2001). "Identification of famous faces and buildings: a functional neuroimaging study of semantically unique items". Brain 124 (Pt 10): 2087–97. doi:10.1093/brain/124.10.2087. PMID 11571224.
- Vuilleumier P, Pourtois G, Distributed and interactive brain mechanisms during emotion face perception: Evidence from functional neuroimaging" Neuropsychologia 45 (2007) 174–194
- Ishai A, Ungerleider LG, Martin A, Schouten JL, Haxby JV (August 1999). "Distributed representation of objects in the human ventral visual pathway". Proc. Natl. Acad. Sci. U.S.A. 96 (16): 9379–84. doi:10.1073/pnas.96.16.9379. PMC 17791. PMID 10430951.
- Gauthier I (January 2000). "What constrains the organization of the ventral temporal cortex?". Trends Cogn. Sci. (Regul. Ed.) 4 (1): 1–2. doi:10.1016/S1364-6613(99)01416-3. PMID 10637614.
- Droste DW, Harders AG, Rastogi E (August 1989). "A transcranial Doppler study of blood flow velocity in the middle cerebral arteries performed at rest and during mental activities". Stroke 20 (8): 1005–11. doi:10.1161/01.STR.20.8.1005. PMID 2667197.
- Harders AG, Laborde G, Droste DW, Rastogi E (July 1989). "Brain activity and blood flow velocity changes: a transcranial Doppler study". Int. J. Neurosci. 47 (1–2): 91–102. doi:10.3109/00207458908987421. PMID 2676884.
- Njemanze PC (September 2004). "Asymmetry in cerebral blood flow velocity with processing of facial images during head-down rest". Aviat Space Environ Med 75 (9): 800–5. PMID 15460633.
- Zheng, X.; Mondloch, C.J. & Segalowitz, S.J. (2012). "The timing of individual face recognition in the brain". Neuropsychologia 50 (7): 1451–1461. doi:10.1016/j.neuropsychologia.2012.02.030.
- Eimer, M.; Gosling, A.; Duchaine, B. (2012). "Electrophysiological markers of covert face recognition in developmental prosopagnosia". Brain: A Journal of Neurology 135 (2): 542–554. doi:10.1093/brain/awr347.
- Moulson, M.C.; Balas, B.; Nelson, C.; Sinha, P. (2011). "EEG correlates of categorical and graded face perception.". Neuropsychologia 49 (14): 3847–3853. doi:10.1016/j.neuropsychologia.2011.09.046.
- Everhart DE, Shucard JL, Quatrin T, Shucard DW (July 2001). "Sex-related differences in event-related potentials, face recognition, and facial affect processing in prepubertal children". Neuropsychology 15 (3): 329–41. doi:10.1037/0894-4184.108.40.2069. PMID 11499988.
- Herlitz A, Yonker JE (February 2002). "Sex differences in episodic memory: the influence of intelligence". J Clin Exp Neuropsychol 24 (1): 107–14. doi:10.1076/jcen.220.127.116.110. PMID 11935429.
- Smith WM (July 2000). "Hemispheric and facial asymmetry: gender differences". Laterality 5 (3): 251–8. doi:10.1080/135765000406094. PMID 15513145.
- Voyer D, Voyer S, Bryden MP (March 1995). "Magnitude of sex differences in spatial abilities: a meta-analysis and consideration of critical variables". Psychol Bull 117 (2): 250–70. doi:10.1037/0033-2909.117.2.250. PMID 7724690.
- Hausmann M (2005). "Hemispheric asymmetry in spatial attention across the menstrual cycle". Neuropsychologia 43 (11): 1559–67. doi:10.1016/j.neuropsychologia.2005.01.017. PMID 16009238.
- De Renzi E (1986). "Prosopagnosia in two patients with CT scan evidence of damage confined to the right hemisphere". Neuropsychologia 24 (3): 385–9. doi:10.1016/0028-3932(86)90023-0. PMID 3736820.
- De Renzi E, Perani D, Carlesimo GA, Silveri MC, Fazio F (August 1994). "Prosopagnosia can be associated with damage confined to the right hemisphere--an MRI and PET study and a review of the literature". Neuropsychologia 32 (8): 893–902. doi:10.1016/0028-3932(94)90041-8. PMID 7969865.
- Mattson AJ, Levin HS, Grafman J (February 2000). "A case of prosopagnosia following moderate closed head injury with left hemisphere focal lesion". Cortex 36 (1): 125–37. doi:10.1016/S0010-9452(08)70841-4. PMID 10728902.
- Barton JJ, Cherkasova M (July 2003). "Face imagery and its relation to perception and covert recognition in prosopagnosia". Neurology 61 (2): 220–5. doi:10.1212/01.WNL.0000071229.11658.F8. PMID 12874402.
- Sprengelmeyer R, Rausch M, Eysel UT, Przuntek H (October 1998). "Neural structures associated with recognition of facial expressions of basic emotions". Proc. Biol. Sci. 265 (1409): 1927–31. doi:10.1098/rspb.1998.0522. PMC 1689486. PMID 9821359.
- Verstichel P (2001). "[Impaired recognition of faces: implicit recognition, feeling of familiarity, role of each hemisphere]". Bull. Acad. Natl. Med. (in French) 185 (3): 537–49; discussion 550–3. PMID 11501262.
- Nakamura K, Kawashima R, Sato N et al. (September 2000). "Functional delineation of the human occipito-temporal areas related to face and scene processing. A PET study". Brain 123 (Pt 9): 1903–12. doi:10.1093/brain/123.9.1903. PMID 10960054.
- Gur RC, Jaggi JL, Ragland JD et al. (September 1993). "Effects of memory processing on regional brain activation: cerebral blood flow in normal subjects". Int. J. Neurosci. 72 (1–2): 31–44. doi:10.3109/00207459308991621. PMID 8225798.
- Ojemann JG, Ojemann GA, Lettich E (February 1992). "Neuronal activity related to faces and matching in human right nondominant temporal cortex". Brain 115 (Pt 1): 1–13. doi:10.1093/brain/115.1.1. PMID 1559147.
- Bogen JE (April 1969). "The other side of the brain. I. Dysgraphia and dyscopia following cerebral commissurotomy". Bull Los Angeles Neurol Soc 34 (2): 73–105. PMID 5792283.
- Bogen JE (1975). "Some educational aspects of hemispheric specialization". UCLA Educator 17: 24–32.
- Bradshaw JL, Nettleton NC (1981). "The nature of hemispheric specialization in man". Behavioral and Brain Science 4: 51–91. doi:10.1017/S0140525X00007548.
- Galin D (October 1974). "Implications for psychiatry of left and right cerebral specialization. A neurophysiological context for unconscious processes". Arch. Gen. Psychiatry 31 (4): 572–83. doi:10.1001/archpsyc.1974.01760160110022. PMID 4421063.
- Njemanze PC (January 2007). "Cerebral lateralisation for facial processing: gender-related cognitive styles determined using Fourier analysis of mean cerebral blood flow velocity in the middle cerebral arteries". Laterality 12 (1): 31–49. doi:10.1080/13576500600886796. PMID 17090448.
- Gauthier I, Skudlarski P, Gore JC, Anderson AW (February 2000). "Expertise for cars and birds recruits brain areas involved in face recognition". Nat. Neurosci. 3 (2): 191–7. doi:10.1038/72140. PMID 10649576.
- Gauthier I, Tarr MJ, Anderson AW, Skudlarski P, Gore JC (June 1999). "Activation of the middle fusiform 'face area' increases with expertise in recognizing novel objects". Nat. Neurosci. 2 (6): 568–73. doi:10.1038/9224. PMID 10448223.
- Grill-Spector K, Knouf N, Kanwisher N (May 2004). "The fusiform face area subserves face perception, not generic within-category identification". Nat. Neurosci. 7 (5): 555–62. doi:10.1038/nn1224. PMID 15077112.
- Xu Y (August 2005). "Revisiting the role of the fusiform face area in visual expertise". Cereb. Cortex 15 (8): 1234–42. doi:10.1093/cercor/bhi006. PMID 15677350.
- Righi G, Tarr MJ (2004). "Are chess experts any different from face, bird, or greeble experts?". Journal of Vision 4 (8): 504–504. doi:10.1167/4.8.504.
- My Brilliant Brain, partly about grandmaster Susan Polgar, shows brain scans of the fusiform gyrus while Polgar viewed chess diagrams.
- Kung CC, Peissig JJ, Tarr MJ (December 2007). "Is region-of-interest overlap comparison a reliable measure of category specificity?". J Cogn Neurosci 19 (12): 2019–34. doi:10.1162/jocn.2007.19.12.2019. PMID 17892386.
- Mansour, Jamal; Lindsay, Roderick (30 January 2010). "Facial Recognition". Corsini Encyclopedia of Psychology 1–2. doi:10.1002/9780470479216.corpsy0342.
- Bruce, V.; Young, A (1986). "Understanding Face Recognition". British Journal of Psychology. 77 ( Pt 3) (77): 305–327. doi:10.1111/j.2044-8295.1986.tb02199.x. PMID 3756376.
- Calderwood, L; Burton, A.M. (November 2006). "Children and adults recall the names of highly familiar faces faster than semantic information". British Journal of Psychology 96 (4): 441–454. doi:10.1348/000712605X84124.
- Ellis, Hadyn; Jones, Dylan (February 1997). "Intra- and Inter-modal repetition priming of familiar faces and voices". British Journal of Psychology 88 (1): 143. doi:10.1111/j.2044-8295.1997.tb02625.x.
- Nadal, Lynn (2005). "Speaker Recognition". Encyclopedia of Cognitive Science 4: 142–145.
- Bredart, S.; Barsics, C. (3 December 2012). "Recalling Semantic and Episodic Information From Faces and Voices: A Face Advantage". Current Directions in Psychological Science 21 (6): 378–381. doi:10.1177/0963721412454876.
- Hanley, J. Richard; Damjanovic, Ljubica (November 2009). "It is more difficult to retrieve a familiar person's name and occupation from their voice than from their blurred face". Memory 17 (8): 830–839. doi:10.1080/09658210903264175.
- Yarmey, Daniel A. (1 January 1994). "Face and Voice Identifications in showups and lineups". Applied cognitive Psychology 8 (5): 453–464. doi:10.1002/acp.2350080504.
- Van Lancker, Diana; Kreiman, Jody (January 1987). "Voice discrimination and recognition are separate abilities". Neuropsychologia 25 (5): 829–834. doi:10.1016/0028-3932(87)90120-5.
- Barsics, Catherine; Brédart, Serge (June 2011). "Recalling episodic information about personally known faces and voices". Consciousness and Cognition 20 (2): 303–308. doi:10.1016/j.concog.2010.03.008.
- Belin, Pascal; Campanella, Salvatore; Ethofer, Thomas, eds. Integrating face and voice in person perception. New York: Springer. ISBN 978-1-4614-3584-6.
- Brédart, Serge; Barsics, Catherine; Hanley, Rick (November 2009). "Recalling semantic information about personally known faces and voices". European Journal of Cognitive Psychology 21 (7): 1013–1021. doi:10.1080/09541440802591821.
- Barsics, Catherine; Brédart, Serge (July 2012). "Recalling semantic information about newly learned faces and voices". Memory 20 (5): 527–534. doi:10.1080/09658211.2012.683012.
- "Learning.". Encyclopedia of Insects. Oxford: Elsevier Science & Technology. Retrieved 6 December 2013.
- "Memory, Explicit and Implicit.". Encyclopedia of the Human Brain. Oxford: Elsevier Science & Technology. Retrieved 6 December 2013.
- "Episodic Memory, Computational Models of". Encyclopedia of Cognitive Science. Chichester, UK: John Wiley & Sons. 2005.
- Leube, Dirk T.; Erb, Michael; Grodd, Wolfgang; Bartels, Mathias; Kircher, Tilo T.J. (December 2003). "Successful episodic memory retrieval of newly learned faces activates a left fronto-parietal network". Cognitive Brain Research 18 (1): 97–101. doi:10.1016/j.cogbrainres.2003.09.008.
- Hofer, Alex; Siedentopf, Christian M.; Ischebeck, Anja; Rettenbacher, Maria A.; Verius, Michael; Golaszewski, Stefan M.; Felber, Stephan; Fleischhacker, W. Wolfgang (March 2007). "Neural substrates for episodic encoding and recognition of unfamiliar faces". Brain and Cognition 63 (2): 174–181. doi:10.1016/j.bandc.2006.11.005.
- "Face Perception, Neural Basis of". Encyclopedia of Cognitive Science. John Wiley & Sons. 2005.
- "Face Perception, Psychology of". Encyclopedia of Cognitive Science. John Wiley & Sons. 2005.
- Feingold CA (1914). "The influence of environment on identification of persons and things". Journal of Criminal Law and Police Science 5: 39–51. doi:10.2307/1133283.
- Walker PM, Tanaka JW (2003). "An encoding advantage for own-race versus other-race faces". Perception 32 (9): 1117–25. doi:10.1068/p5098. PMID 14651324.
- Vizioli L, Rousselet GA, Caldara R (2010). "Neural repetition suppression to identity is abolished by other-race faces". Proc. Natl. Acad. Sci. U.S.A. 107 (46): 20081–20086. doi:10.1073/pnas.1005751107. PMC 2993371. PMID 21041643.
- Malpass & Kravitz, 1969; Cross, Cross, & Daly, 1971; Shepherd, Deregowski, & Ellis, 1974; all cited in Shepherd, 1981
- Chance, Goldstein, & McBride, 1975; Feinman & Entwistle, 1976; cited in Shepherd, 1981
- Brigham & Karkowitz, 1978; Brigham & Williamson, 1979; cited in Shepherd, 1981
- Lindsay, D. Stephen; Jack, Philip C., Jr.; Christian, Marcus A. "Other-Race Face Perception". Williams College.
- Diamond & Carey, 1986; Rhodeset al.,1989
- F. Richard Ferraro (2002). Minority and Cross-cultural Aspects of Neuropsychological Assessment. Studies on Neuropsychology, Development and Cognition 4. East Sussex: Psychology Press. p. 90. ISBN 90-265-1830-7.
- Levin DT (December 2000). "Race as a visual feature: using visual search and perceptual discrimination tasks to understand face categories and the cross-race recognition deficit". J Exp Psychol Gen 129 (4): 559–74. doi:10.1037/0096-3418.104.22.1689. PMID 11142869.
- Rehnman J, Herlitz A (April 2006). "Higher face recognition ability in girls: Magnified by own-sex and own-ethnicity bias". Memory 14 (3): 289–96. doi:10.1080/09658210500233581. PMID 16574585.
- Tanaka, J.W.; Lincoln, S.; Hegg, L. (2003). "A framework for the study and treatment of face processing deficits in autism". In Schwarzer, G.; Leder, H. The development of face processing. Ohio: Hogrefe & Huber Publishers. pp. 101–119. ISBN 9780889372641.
- Behrmann, Marlene; Avidan, Galia; Leonard, Grace L.; Kimchi, Rutie; Beatriz, Luna; Humphreys, Kate; Minshew, Nancy (2006). "Configural processing in autism and its relationship to face processing". Neuropsychologia 44: 110–129. doi:10.1016/j.neuropsychologia.2005.04.002.
- Schreibman, Laura (1988). Autism. Newbury Park: Sage Publications. pp. 14–47. ISBN 0803928092.
- Weigelt, Sarah; Koldewyn, Kami; Kanwisher, Nancy (2012). "Face identity recognition in autism spectrum disorders: A review of behavioral studies". Neuroscience & Biobehavioral Reviews 36: 1060–1084. doi:10.1016/j.neubiorev.2011.12.008.
- Dawson, Geraldine; Webb, Sara Jane; McPartland, James (2005). "Understanding the nature of face processing impairment in autism: Insights from behavioral and electrophysiological studies". Developmental Neuropsychology 27 (3): 403–424. doi:10.1207/s15326942dn2703_6. PMID 15843104.
- Kita, Yosuke; Inagaki, Masumi (2012). "Face recognition in patients with Autism Spectrum Disorder". Brain and Nerve 64 (7): 821–831. PMID 22764354.
- Grelotti, David; Gauthier, Isabel; Schultz, Robert (2002). "Social interest and the development of cortical face specialization: What autism teaches us about face processing". Developmental Psychobiology 40: 213–235. doi:10.1002/dev.10028. Retrieved 2012-2-24.
- Riby, Deborah; Doherty-Sneddon Gwyneth (2009). "The eyes or the mouth? Feature salience and unfamiliar face processing in Williams syndrome and autism". The Quarterly Journal of Experimental Psychology 62: 189–203. doi:10.1080/17470210701855629.
- Joseph, Robert; Tanaka, James (2003). "Holistic and part-based face recognition in children with autism". Journal of Child Psychology and Psychiatry 44: 529–542. doi:10.1111/1469-7610.00142.
- Langdell, Tim (1978). "Recognition of Faces: An approach to the study of autism". Journal of Psychology and Psychiatry and Allied Disciplines (Blackwell) 19: 255–265. doi:10.1111/j.1469-7610.1978.tb00468.x. Retrieved 2/12/2013.
- Spezio, Michael; Adolphs, Ralph; Hurley, Robert; Piven, Joseph (28 Sep 2006). "Abnormal use of facial information in high functioning autism". Journal of Autism and Developmental Disorders 37: 929–939. doi:10.1007/s10803-006-0232-9.
- Revlin, Russell (2013). Cognition: Theory and Practice. Worth Publishers. pp. 98–101. ISBN 9780716756675.
- Triesch, Jochen; Teuscher, Christof; Deak, Gedeon O.; Carlson, Eric (2006). "Gaze following: why (not) learn it?". Developmental Science 9: 125–157. doi:10.1111/j.1467-7687.2006.00470.x.
- Volkmar, Fred; Chawarska, Kasia; Klin, Ami (2005). "Autism in infancy and early childhood". Annual Reviews of Psychology 56: 315–316. doi:10.1146/annurev.psych.56.091103.070159. PMID 15709938.
- Nader-Grosbois, N.; Day, J.M. (2011). "Emotional cognition: theory of mind and face recognition". In Matson, J.L; Sturmey, R. International handbook of autism and pervasive developmental disorders. New York: Springer Science & Business Media. pp. 127–157. ISBN 9781441980649.
- Pierce, Karen; Muller, R.A., Ambrose, J., Allen, G.,Chourchesne (2001). "Face processing occurs outside the fusiform 'face area' in autism: evidence from functional MRI". Brain 124 (10): 2059–2073. doi:10.1093/brain/124.10.2059. Retrieved 2013-2-13.
- Harms, Madeline; Martin, Alex; Wallace, Gregory (2010). "Facial emotion recognition in autism spectrum disorders: A review of behavioral and neuroimaging studies". Neuropsychology Review 20: 290–322. doi:10.1007/s11065-010-9138-6.
- Wright, Barry; Clarke, Natalie; Jordan, Jo; Young, Andrew; Clarke, Paula; Miles, Jermey; Nation, Kate; Clarke, Leesa; Williams, Christine (2008). "Emotion recognition in faces and the use of visual context Vo in young people with high-functioning autism spectrum disorders". Autism 12: 607-. doi:10.1177/1362361308097118.
- Njemanze, P.C. Transcranial doppler spectroscopy for assessment of brain cognitive functions. United States Patent Application No. 20040158155, August 12th, 2004
- Njemanze, P.C. Noninvasive transcranial doppler ultrasound face and object recognition testing system. United States Patent No. 6,773,400, August 10th, 2004
- YangJing Long (2009). "Human age estimation by metric learning for regression problems". Proc. International Conference on Computer Analysis of Images and Patterns: 74–82.
- Face Recognition Homepage
- Are Faces a "Special" Class of Objects?
- Science Aid: Face Recognition
- FaceResearch – Scientific research and online studies on face perception
- Face Blind Prosopagnosia Research Centers at Harvard and University College London
- Face Recognition Tests - online tests for self-assessment of face recognition abilities.
- Perceptual Expertise Network (PEN) Collaborative group of cognitive neuroscientists studying perceptual expertise, including face recognition.
- Face Lab at the University of Western Australia
- Perception Lab at the University of St Andrews, Scotland
- The effect of facial expression and identity information on the processing of own and other race faces by Yoriko Hirose, PhD thesis from the University of Stirling
- Global Emotion Online-Training to overcome Caucasian-Asian other-race effect |
FARM TENANCY. Since the colonial period, there have always been some Texas farmers who rented the land they farmed rather than owning it. Although no statistical information was collected until 1880, when United States census officials began to include that information in their returns, it is clear from letters, court cases, and newspaper advertisements that tenants rented land for a variety of reasons and paid in a variety of ways. Some farmers who possessed the resources to buy land rented until they were more familiar with Texas before making a permanent commitment to a specific location, while others rented because they lacked the resources necessary to obtain land of their own. Some tenants, sharecroppers, paid for rented land by promising a share of the crop or labor, while others paid in cash. In antebellum Texas most farm tenants probably lived outside the plantation areas of the state, since most plantations involved in the production of commercial crops utilized slaves. Precise figures are impossible to obtain, but it seems clear that only a relatively small percentage of farmers were tenants. Thus they are rarely if ever mentioned in newspapers or descriptions of the state.
The end of the Civil War and the demise of slavery brought a need for new labor arrangements in the production of commercial crops. Texas plantation owners, like others in the South, had little or no cash, and they wanted to assure themselves of a stable labor supply throughout the growing and harvesting season. A system of tenant farming evolved that met these needs. The most common arrangement after the Civil War was a share tenant or sharecropping arrangement. Since the crop would not be split until after the harvest, tenants could only receive payment for their labor after the crops were in. Most tenants in the period just after the Civil War were black, and the Freedmen's Bureau supervised the signing and implementation of tenant-farming agreements in areas where it had local agents until it closed its local offices in December 1868. Although the agents sometimes complained that black women did not want to work as long in the fields as they had before the war and that many blacks did not want to work as many hours as they had as slaves, they generally reported that African Americans worked well as tenants when treated fairly. Blacks and, later, whites seem to have preferred tenancy arrangements over other forms of agricultural labor because tenancy gave them greater independence and flexibility than wage labor. It also directly rewarded them for their hard work with better crops. They seem to have viewed tenancy as an agricultural ladder that could lead to farm ownership under the right conditions.
As tenant farming became more common, it also became more systematic. In Texas, as in other Southern states, a hierarchy of tenant farmers developed, according to what tenants provided for themselves. At the top were share and cash tenants who supplied the mules, plows, seed, feed, and other supplies needed. Share tenants typically paid the landlord a third of the cotton crop and a fourth of the grain. At the bottom were sharecroppers who supplied only their labor. They typically received half the crops. The differences were critical, not only because share tenants received a larger portion of the crops, but also because they were considered the owners of the crops. Sharecroppers were generally considered laborers whose wages were paid with a share of the crops, which were owned by the landlord. A sharecropping arrangement gave owners greater control over how their land was worked. By 1880, when the first systematic data were collected, approximately 38 percent of all farmers were tenants. More than 80 percent of these rented for a share of the crops. Although statistics on farm tenure by race were not collected until 1900, blacks comprised a much higher proportion of the total number of tenant farmers than their proportion of the population. The highest percentages of tenant farmers were in counties with black majorities. In Fort Bend, Harrison, and Marion counties, for example, tenant farmers comprised 74, 60, and 51 percent of all farmers, respectively. Census returns did not differentiate between sharecroppers and share tenants until 1920, so it is impossible to determine what percentage of the group listed as share tenants were actually sharecroppers.
In addition to paying out a portion of the crop as rent, many tenants also mortgaged their share of the cotton crop to a furnishing merchant or their landlord for food and other supplies. Because the crops were of an uncertain size, and the price of cotton at harvest was unknown, cotton crops were risky collateral for the lender. Consequently, the interest on the loans was quite high, sometimes as high as 150 percent. Once forced to make a crop lien, many tenants could never get away from the system, as they found themselves just breaking even or owing more than the total received for their crops. One economist estimated that by 1914 half the tenants borrowed 100 percent of their income. As the population of the state grew and the state's vast lands were claimed, acquiring ownership of a farm became more difficult. This led to a corresponding increase in the proportion of tenants. By 1900, half of all Texas farmers were tenants. Again as in 1880, more than 80 percent of these were either share tenants or sharecroppers. The biggest proportional increase was probably the increase in white tenants. In 1900, 47 percent of all white farmers and 69 percent of all black farmers were tenants.
The conditions under which tenant farmers lived and worked became political issues with the rise of the People's party in the 1890s and became even more prominent when James E. Ferguson used them as part of his successful campaign for governor in 1914. Despite the rhetoric, except for the temporary prosperity the high cotton prices of World War I brought, conditions for tenant farmers did not materially change, and the number of tenants continued to rise. Each census recorded a larger proportion of tenants among Texas farmers. The census of 1930 recorded the highest percentage of tenants ever reported in the state. That year, almost 61 percent of all Texas farmers were tenant farmers, and one third of these were sharecroppers. In terms of the population as a whole, in 1930, nearly one quarter of all Texans lived on tenant farms. With the coming of the New Deal, however, the number of tenant farmers began to fall. Although Franklin D. Roosevelt saw tenant farming as evidence of economic problems and supported programs designed to make tenants into owners, the programs that had the highest impact on tenants were those that paid farmers to restrict crop acreage. These programs reduced the need for labor and caused many owners to push sharecroppers off the land. By 1935, the proportion of tenant farmers in Texas had dropped to 57 percent. This drop was due to a much larger decrease in the number of sharecroppers, as the number of other types of tenants rose slightly. The changes brought on by the New Deal signaled the beginning of a rapid drop in the proportion of Texas farmers who rented their land rather than owned it and a dramatic change in the types of farmers who were tenants. By the time of the 1945 census, the pull of the job market and the armed services in World War II had accelerated the changes already begun by the Great Depression and New Deal programs. The proportion of farmers who were tenants fell from almost 61 percent in 1930 to a little over 37½ percent by 1945. The number of sharecroppers fell from more than a third to 16 percent of all tenant farmers. By 1987 tenants comprised just under 12 percent of all farmers.
As the numbers fell, the very nature of tenant farming also changed. The most striking changes came in the number of part owners listed in the census. Part owners (that is, farmers who owned some of the land they farmed and rented the rest) comprised less than 10 percent of all Texas farmers until 1940, when they accounted for 11 percent of all farmers. By 1978 part owners made up almost 30 percent of all farm operators. During this same period, Texas farming became more highly mechanized. In 1929, for example, fewer than 10 percent of all Texas farms had tractors. By 1960, Texas had more tractors than farms. As farming became more mechanized and thus more capital-intensive, viable economic units became too large for one family to own enough farmland and provide machinery and working capital at the same time. Therefore, many of the larger operations were run by part owners or by tenants who owned no land at all. In 1987 full owners comprised 56 percent of all farmers but farmed only a quarter of all harvested cropland. Tenants, who comprised just 12 percent of all farmers, also farmed approximately a fourth of all harvested cropland. Part owners made up about 32 percent of all farmers and harvested a little over half of all harvested cropland. See also AGRICULTURE.
Barry A. Crouch, The Freedmen's Bureau and Black Texans (Austin: University of Texas Press, 1992). James Edward Ferguson, The Need of Outside Capital for Turning Landless Men of Texas into Home Owning Farmers (Temple, Texas: Telegram Print, 1915). Louis Ferleger, "Sharecropping Contracts in the Late Nineteenth Century South," Agricultural History 67 (Summer 1993). Neil F. Foley, The New South in the Southwest: Anglos, Blacks and Mexicans in Central Texas, 1880–1930 (Ph.D. dissertation, University of Michigan, 1990). Cecil Harper, Jr., Farming Someone Else's Land: Farm Tenancy in the Texas Brazos River Valley, 1850–1880 (Ph.D. dissertation, University of North Texas, 1988). Virgil Lee, Farm Mortgage Financing in Texas (College Station: Texas Agricultural Experiment Station, 1925). Richard G. Lowe and Randolph B. Campbell, Planters and Plain Folk: Agriculture in Antebellum Texas (Dallas: Southern Methodist University Press, 1987). Studies in Farm Tenancy in Texas (Austin: Division of Public Welfare, Department of Extension, 1915). Texas Historic Crop Statistics, 1866–1975 (Austin: Texas Crop and Livestock Reporting Service, 1976). Harold D. Woodman, King Cotton and His Retainers: Financing and Marketing the Cotton Crop of the South, 1800–1925 (Columbia: University of South Carolina Press, 1990). Robert Yantis, Farm Acreage, Values, Ownership and Tenancy (Austin: Texas Department of Agriculture, 1927).
Apparent weight. Consider a man standing on a spring scale. The only forces acting on the man are the weight force W and the spring force F_sp.
Now imagine he weighs himself in a lift which is accelerating upwards. Since he is accelerating, there must be a net force: F_net = F_sp − W = ma_y, or F_sp = W + ma_y, i.e. the scale reads heavier. Apparent weight is given by the magnitude of the normal force.
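To make the relation concrete, here is a minimal Python sketch of the scale reading in an accelerating lift; the 70 kg mass and the accelerations used are illustrative values, not figures from the slides.

```python
G = 9.8  # gravitational acceleration, m/s^2

def apparent_weight(mass, a_y):
    """Scale reading (N) for a person of given mass in a lift
    accelerating upward with acceleration a_y (negative = downward)."""
    # F_sp = W + m*a_y, where W = m*g
    return mass * (G + a_y)

mass = 70.0                          # kg (illustrative)
print(apparent_weight(mass, 0.0))    # at rest: 686 N, the true weight
print(apparent_weight(mass, 2.0))    # accelerating up: 826 N, reads heavier
print(apparent_weight(mass, -2.0))   # accelerating down: 546 N, reads lighter
```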
Angular position. If an object moves in a circle of radius r, then after travelling a distance s it has moved an angular displacement θ = s/r. θ is measured in radians (2π radians = 360°). KJF §3.8
Tangential velocity. If the motion is uniform and the object takes time t to travel a distance s along the circle, then it has tangential velocity of magnitude v = s/t; over one complete revolution, v = 2πr/T. Period of motion T = time to complete one revolution (units: s). Frequency f = number of revolutions per second (units: s⁻¹ or Hz).
Angular velocity. Define an angular velocity ω = θ/t (so that ω = 2π/T = 2πf). Uniform circular motion is when ω is constant. Combining the last three equations: v = rω. KJF §6.1
Question. You place a beetle on a uniformly rotating record. Is the beetle's tangential velocity different or the same at different radial positions? Is the beetle's angular velocity different or the same at different radial positions? Remember: all points on a rigid rotating object experience the same angular velocity.
Consider an object moving in uniform circular motion – its tangential speed is constant. Is the object accelerating? Velocity is a vector, ∴ changing direction ⇒ acceleration ⇒ net force.
The change in velocity is Δv = v₂ − v₁, and Δv points towards the centre of the circle. The angle between the velocity vectors is θ, so Δv = vθ, and so a = Δv/Δt = vθ/Δt = vω = v²/r. KJF §3.8
Acceleration points towards the centre – this is the centripetal acceleration a_c = v²/r = ω²r. Since the object is accelerating, there must be a force to keep it moving in a circle. KJF §6.2. This centripetal force may be provided by friction, tension in a string, gravity, etc., or combinations. Examples?
Note that centripetal force is the name given to the resultant force: it is not a separate force in the free-body diagram. The centripetal force has to be provided by some other force (tension, friction, normal force) in order for circular motion to occur.
Solving circular motion problems. Draw a free-body diagram. If the object is moving in a circle, there must be a net force pointing towards the centre of the circle. The magnitude of this net force is F_net = ma_c = mv²/r.
Problem 1. You enter the carnival ride called "The Rotor". The circular room is spinning and you and other riders are stuck to the circular wall. Draw a free-body diagram of the woman in red. Is she in equilibrium? Explain. What force is providing the centripetal force?
Whirling bucket. A bucket of water is whirled around in a vertical circle with radius 1 m. What is the minimum speed at which it can be whirled so the water remains in the bucket? [3 m s⁻¹, or rotation period 2 s]
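A quick numerical check of the quoted answer, under the standard assumption that at the top of the circle gravity alone supplies the centripetal force (mg = mv²/r at the minimum speed):

```python
import math

G = 9.8  # m/s^2

def bucket_min_speed(radius):
    """Minimum speed at the top of a vertical circle so that
    gravity alone provides the required centripetal acceleration."""
    # m*g = m*v^2/r  =>  v = sqrt(g*r)
    return math.sqrt(G * radius)

r = 1.0                       # m
v = bucket_min_speed(r)       # ~3.1 m/s, matching the quoted 3 m/s
T = 2 * math.pi * r / v       # ~2.0 s rotation period
print(f"v_min = {v:.2f} m/s, period = {T:.2f} s")
```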
A beetle is sitting on a rotating turntable. Looking at the turntable side on, so the centre is towards the right: which diagram, (a)–(d) or (e) other, correctly shows the forces acting on the beetle?
There is a centripetal force acting on the beetle. What provides this force? (a) the angular velocity of the turntable, (b) gravity, (c) tangential velocity, (d) friction, (e) centripetal acceleration, (f) normal force.
The turntable starts to spin faster. Which direction should the beetle move so as not to slip? (a) inwards, (b) outwards, (c) forward in the direction of motion, (d) backwards, away from the direction of motion.
Car around a corner. A car of mass 1.6 t travels at a constant speed of 72 km/h around a horizontal curved road with radius of curvature 190 m. (Draw a free-body diagram.) What is the minimum value of μ_s between the road and the tyres that will prevent slipping? [0.21]
Car over a hill. A car is driving at constant speed over a hill, which is a circular dome of radius 240 m. Above what speed will the car leave the road at the top of the hill? [175 km/h]
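The two car problems above can be checked with a few lines of Python: on the flat curve, friction must supply mv²/r (so the 1.6 t mass cancels out), and over the hill the car leaves the road when gravity alone equals the required centripetal force.

```python
import math

G = 9.8  # m/s^2

# Car around a corner: friction supplies m*v^2/r, so mu_s_min = v^2 / (g*r).
v = 72 / 3.6          # 72 km/h in m/s
r_curve = 190.0       # m
mu_s_min = v**2 / (G * r_curve)
print(f"minimum mu_s = {mu_s_min:.2f}")          # ~0.21, as quoted

# Car over a hill: at the limit the normal force is zero, so m*g = m*v^2/r.
r_hill = 240.0        # m
v_max = math.sqrt(G * r_hill)                    # m/s
print(f"max speed = {v_max * 3.6:.0f} km/h")     # ~175 km/h, as quoted
```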
Banked road. On a curve, if the road surface is "banked" (tilted towards the curve centre) then the horizontal component of the normal force can provide some (or all) of the required centripetal force. Choose v and θ so that little or no static friction is required. KJF example 6.6
KJF example 6.6. A curve of radius 70 m is banked at a 15° angle. At what speed can a car take this curve without assistance from friction? [14 m s⁻¹ = 50 km h⁻¹]
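A short check of this worked example, assuming a frictionless banked curve where the horizontal component of the normal force supplies all the centripetal force, which gives tan θ = v²/(gr):

```python
import math

G = 9.8  # m/s^2

def banked_curve_speed(radius, bank_angle_deg):
    """Speed at which no friction is needed on a banked curve:
    N*sin(theta) = m*v^2/r and N*cos(theta) = m*g  =>  v = sqrt(g*r*tan(theta))."""
    theta = math.radians(bank_angle_deg)
    return math.sqrt(G * radius * math.tan(theta))

v = banked_curve_speed(70.0, 15.0)
print(f"v = {v:.1f} m/s = {v * 3.6:.0f} km/h")  # ~13.6 m/s ~ 49 km/h, close to the quoted 14 m/s ~ 50 km/h
```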
Next lecture: Centre of mass and Torque. Read: KJF §7.2, 7.3
Coordinate proof. Figures are placed in the coordinate plane, and algebra is used to prove results, using the Slope Formula, the Midpoint Formula, and the Distance Formula.
Example: Name the missing coordinates of each triangle.
Placing Figures in the Coordinate Plane. Suggestions: use the origin as a vertex; place at least one side on an axis; keep the figure in the first quadrant if possible; use coordinates that make computations as simple as possible.
Example: Position and label each triangle on the coordinate plane: (a) isosceles with a base that is 2b units long; (b) equilateral with sides a units long.
Coordinate proof. Tami and Juan are hiking. Tami hikes 300 feet east of camp and then hikes 500 feet north. Juan hikes 500 feet west of camp and then 300 feet north. Prove that Juan, Tami, and the camp form a right triangle.
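Placing the camp at the origin puts Tami at (300, 500) and Juan at (−500, 300). A minimal sketch of the slope check (perpendicular segments have slopes whose product is −1):

```python
def slope(p, q):
    """Slope of the segment from p to q (assumes the segment is not vertical)."""
    return (q[1] - p[1]) / (q[0] - p[0])

camp, tami, juan = (0, 0), (300, 500), (-500, 300)

m1 = slope(camp, tami)   # 500/300 = 5/3
m2 = slope(camp, juan)   # 300/-500 = -3/5
print(m1 * m2)           # ~ -1, so the segments meeting at camp are perpendicular
                         # and Juan, Tami, and the camp form a right triangle
```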
If a line segment joins the midpoints of two sides of a triangle, then it is parallel to the third side.
In general, as faults (short circuits) occur, currents increase in magnitude, and voltages go down. Besides these magnitude changes of the AC quantities, other changes may occur in one or more of the following parameters: phase angles of current and voltage phasors, harmonic components, active and reactive power, frequency of the power system, and so on.
Relay operating principles may be based upon detecting these changes, and identifying the changes with the possibility that a fault may exist inside its assigned zone of protection.
We will divide relay operating principles into categories based upon which of these input quantities a particular relay responds.
- Level Detection
- Magnitude Comparison
- Differential Comparison
- Phase Angle Comparison
- Distance Measurement
- Pilot Relaying
- Harmonic Content
- Frequency Sensing
This is the simplest of all relay operating principles. As indicated above, fault current magnitudes are almost always greater than the normal load currents that exist in a power system. Consider the motor connected to a 4 kV power system as shown in Figure 1.
The full-load current for the motor is 245 A. Allowing for an emergency overload capability of 25%, a current of 1.25 × 245 = 306 A or lower should correspond to normal operation. Any current above a set level (chosen to be above 306 A by a safety margin in the present example) may be taken to mean that a fault, or some other abnormal condition, exists inside the zone of protection of the motor.
The level above which the relay operates is known as the pickup setting of the relay. For all currents above the pickup, the relay operates, and for currents smaller than the pickup value, the relay takes no action. It is of course possible to arrange the relay to operate for values smaller than the pickup value, and take no action for values above the pickup.
An undervoltage relay is an example of such a relay.
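A minimal sketch of the level-detection idea applied to the motor example above; the pickup is set above the 306 A overload figure from the text, and the extra 15% safety margin used here is an assumed illustrative value:

```python
FULL_LOAD_CURRENT = 245.0                   # A, motor full-load current (from the text)
OVERLOAD_LIMIT = 1.25 * FULL_LOAD_CURRENT   # ~306 A emergency overload (from the text)
PICKUP = 1.15 * OVERLOAD_LIMIT              # pickup set above 306 A; the 15% margin is assumed

def overcurrent_relay_operates(measured_current):
    """Simple level detector: operate for any current above the pickup setting."""
    return measured_current > PICKUP

print(round(PICKUP))                        # ~352 A with the assumed margin
print(overcurrent_relay_operates(300.0))    # False - normal load current
print(overcurrent_relay_operates(2500.0))   # True  - fault-level current
```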
The operating characteristics of an overcurrent relay can be presented as a plot of the operating time of the relay versus the current in the relay. It is best to normalize the current as a ratio of the actual current to the pickup setting.
The operating time for (normalized) currents less than 1.0 is infinite, while for values greater than 1.0 the relay operates. The actual time for operation will depend upon the design of the relay. The ideal level detector relay would have a characteristic as shown by the solid line in Figure 2.
In practice, the relay characteristic has a less abrupt transition, as shown by the dotted line.
This operating principle is based upon the comparison of one or more operating quantities with each other. For example, a current balance relay may compare the current in one circuit with the current in another circuit, which should have equal or proportional magnitudes under normal operating conditions.
The relay will operate when the current division in the two circuits varies by a given tolerance. Figure 3 shows two identical parallel lines that are connected to the same bus at either end.
If the current in line A exceeds the current in line B by more than the set tolerance while line B is in service, line A is tripped. Similar logic would be used to trip line B if its current exceeds that in line A, when the latter is not open. Another instance in which this relay can be used is when the windings of a machine have two identical parallel sub-windings per phase.
Differential comparison is one of the most sensitive and effective methods of providing protection against faults. The concept of differential comparison is quite simple, and can be best understood by referring to the generator winding shown in Figure 4.
As the winding is electrically continuous, the current entering one end, I1, must equal the current leaving the other end, I2. One could use a magnitude comparison relay, as described above, to test for a fault on the protected winding; alternatively, one could form the difference (I1 − I2), which is practically zero in normal operation, and use a level detector to sense when a fault inside the winding makes that difference significant.
In either case, the protection is termed a differential protection. In general, the differential protection principle is capable of detecting very small magnitudes of fault currents. Its only drawback is that it requires currents from the extremities of a zone of protection, which restricts its application to power apparatus, such as transformers, generators, motors, buses, capacitors, and reactors.
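The comparison itself is simple to express. The sketch below trips when the difference between the current entering and leaving the zone exceeds a small fraction of the through current; the 5% threshold is an assumed illustrative setting, not one from the text:

```python
def differential_relay_operates(i_in, i_out, threshold_fraction=0.05):
    """Operate when |I1 - I2| exceeds a fraction of the average (restraint) current.
    The 5% default is illustrative only."""
    operate_quantity = abs(i_in - i_out)
    restraint = (abs(i_in) + abs(i_out)) / 2 or 1e-9  # avoid division by zero
    return operate_quantity / restraint > threshold_fraction

print(differential_relay_operates(1000.0, 998.0))  # False - normal through current
print(differential_relay_operates(1000.0, 650.0))  # True  - current diverted into an internal fault
```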
This type of relay compares the relative phase angle between two AC quantities. Phase angle comparison is commonly used to determine the direction of a current with respect to a reference quantity.
For instance, the normal power flow in a given direction will result in the phase angle between the voltage and the current varying around its power factor angle, say approximately ±30°. When the power flows in the opposite direction, this angle will become (180° ± 30°).
Similarly, for a fault in the forward or reverse direction, the phase angle of the current with respect to the voltage will be −φ and (180° − φ), respectively, where φ, the impedance angle of the fault circuit, is close to 90° for power transmission networks.
These relationships are explained for two transmission lines in Figure 5.
This difference in phase relationships created by a fault is exploited by making relays that respond to phase angle differences between two input quantities – such as the fault voltage and the fault current in the present example.
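A small sketch of a phase-angle comparison using complex phasors; the expected line angle of 85° and the ±90° operating window are assumed illustrative values:

```python
import cmath
import math

def is_forward_fault(v_phasor, i_phasor, line_angle_deg=85.0, window_deg=90.0):
    """Declare 'forward' if the current lags the voltage by roughly the line
    impedance angle; the window width here is illustrative."""
    angle = math.degrees(cmath.phase(v_phasor / i_phasor))  # angle by which I lags V
    return abs(angle - line_angle_deg) < window_deg

V = cmath.rect(1.0, 0.0)                      # reference voltage phasor
I_fwd = cmath.rect(10.0, math.radians(-85))   # fault current lagging V by 85 degrees
I_rev = cmath.rect(10.0, math.radians(95))    # current for a fault behind the relay
print(is_forward_fault(V, I_fwd))             # True
print(is_forward_fault(V, I_rev))             # False
```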
As discussed above, the most positive and reliable type of protection compares the current entering the circuit with the current leaving it. On transmission lines and feeders, the length, voltage, and configuration of the line may make this principle uneconomical.
Instead of comparing the local line current with the far-end line current, the relay compares the local current with the local voltage. This, in effect, is a measurement of the impedance of the line as seen from the relay terminal.
An impedance relay relies on the fact that the length of the line (i.e., its distance) for a given conductor diameter and spacing determines its impedance.
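A minimal sketch of the distance-measurement idea: compare the magnitude of the apparent impedance V/I seen by the relay with a reach setting. The 80% reach and the numerical values below are illustrative assumptions, not figures from the text:

```python
def distance_relay_operates(v_phasor, i_phasor, line_impedance_ohms, reach_fraction=0.8):
    """Operate when the apparent impedance magnitude falls inside the reach setting.
    The 80% reach is a common illustrative choice, not a value from the text."""
    z_apparent = abs(v_phasor / i_phasor)
    return z_apparent < reach_fraction * line_impedance_ohms

line_z = 10.0                                        # ohms, total line impedance (assumed)
print(distance_relay_operates(66.0, 30.0, line_z))   # 2.2 ohm apparent -> fault well inside the zone: True
print(distance_relay_operates(66.0, 5.0, line_z))    # 13.2 ohm apparent -> beyond the zone: False
```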
Certain relaying principles are based upon the information obtained by the relay from a remote location. The information is usually – although not always – in the form of contact status (open or closed). The information is sent over a communication channel using power line carrier, microwave, or telephone circuits.
Currents and voltages in a power system usually have a sinusoidal waveform of the fundamental power system frequency. There are, however, deviations from a pure sinusoid, such as the third harmonic voltages and currents produced by the generators that are present during normal system operation.
Other harmonics occur during abnormal system conditions, such as the odd harmonics associated with transformer saturation, or transient components caused by the energization of transformers.
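A harmonic component can be extracted from sampled current or voltage by correlating the waveform with a sine and cosine at the harmonic frequency (a one-bin Fourier sum). The sketch below is illustrative; the 60 Hz fundamental, the sampling rate, and the 30% third-harmonic content are assumed values:

```python
import math

def harmonic_magnitude(samples, sample_rate, harmonic, fundamental=60.0):
    """Magnitude of one harmonic of the fundamental, found by correlating the
    sampled waveform with sine and cosine at that frequency (one-bin Fourier sum)."""
    f = harmonic * fundamental
    n = len(samples)
    re = sum(x * math.cos(2 * math.pi * f * k / sample_rate) for k, x in enumerate(samples))
    im = sum(x * math.sin(2 * math.pi * f * k / sample_rate) for k, x in enumerate(samples))
    return 2 * math.hypot(re, im) / n

fs = 1920.0                                   # samples per second (32 per 60 Hz cycle, illustrative)
t = [k / fs for k in range(int(fs / 60))]     # one fundamental cycle
wave = [math.sin(2*math.pi*60*x) + 0.3*math.sin(2*math.pi*180*x) for x in t]  # 30% third harmonic
print(harmonic_magnitude(wave, fs, 1))        # ~1.0  (fundamental)
print(harmonic_magnitude(wave, fs, 3))        # ~0.3  (third harmonic)
```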
Normal power system operation is at 50 or 60 Hz, depending upon the country. Any deviation from these values indicates that a problem exists or is imminent. Frequency can be measured by filter circuits, by counting zero crossings of waveforms in a unit of time, or by special sampling and digital computer techniques.
Frequency-sensing relays may be used to take corrective actions that will bring the system frequency back to normal.
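The zero-crossing approach mentioned above can be sketched in a few lines; the sampling rate and the 49.5 Hz test signal are illustrative assumptions:

```python
import math

def estimate_frequency(samples, sample_rate):
    """Estimate frequency from negative-to-positive zero crossings:
    whole cycles between the first and last crossing, divided by the elapsed time."""
    crossings = [k for k in range(1, len(samples)) if samples[k - 1] < 0 <= samples[k]]
    if len(crossings) < 2:
        return None
    cycles = len(crossings) - 1
    seconds = (crossings[-1] - crossings[0]) / sample_rate
    return cycles / seconds

fs = 1000.0                                                          # samples per second (illustrative)
wave = [math.sin(2 * math.pi * 49.5 * k / fs) for k in range(2000)]  # 2 s of a 49.5 Hz signal
print(estimate_frequency(wave, fs))   # ~49.5 Hz - below a 50 Hz nominal, so corrective action may be needed
```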
Relays may be constructed from electromechanical elements such as solenoids, hinged armatures, induction discs, solid-state elements such as diodes, silicon-controlled rectifiers (SCRs), transistors or magnetic or operational amplifiers, or digital computers using analog-to-digital converters and microprocessors.
It will be seen that, because the electromechanical relays were developed early on in the development of protection systems, the description of all relay characteristics is often in terms of electromechanical relays. The construction of a relay does not inherently change the protection concept, although there are advantages and disadvantages associated with each type.
Reference // Power System Relaying by Stanley H. Horowitz (retired consulting engineer, American Electric Power) and Arun G. Phadke (University Distinguished Research Professor)
Subtle gravitational differences across Earth’s surface are being measured, with unprecedented accuracy, by the Gravity Field and Steady-State Ocean Circulation Explorer (GOCE) satellite, built and operated by the European Space Agency. The data will provide scientists with a powerful foundation for further research into ocean circulation, sea level change, the structure and dynamics of Earth’s interior, as well as movements of Earth’s tectonic plates to better understand earthquakes and volcanoes.
GOCE was launched on March 17, 2009, from the Plesetsk Cosmodrome in northern Russia. It was carried into orbit by a modified intercontinental ballistic missile (decommissioned following the Strategic Arms Reduction Treaty). The satellite’s main data collection instrument is called a gradiometer; it detects very small variations in gravitational force as it travels over Earth’s surface. There’s also a Global Positioning System (GPS) receiver that works with other satellites to identify non-gravitational forces that may affect GOCE, as well as a laser reflector that allows GOCE to be tracked by ground-based lasers.
Animation of the GOCE geoid. Credit: ESA.
This animation of a rotating “potato-like” Earth shows a very precise model of Earth’s geoid created from data obtained by GOCE and released on March 31, 2011, at the Fourth International GOCE User Workshop in Munich, Germany. Colours represent deviations in height (–100 to +100 meters) from an “ideal” geoid. The blue colours represent low values and the reds/yellows represent high values. This geoid does not represent actual surface features on Earth. Instead, it’s a complex mathematical model built from GOCE data that show, in a highly exaggerated way, the relative differences in gravity across Earth’s surface. It can also be thought of as the surface of an “ideal” global ocean shaped only by gravity, without the influence of tides and currents.
Scientifically, a geoid is defined as an equipotential surface, that is, a surface that’s always perpendicular to Earth’s gravitational field. An illustration in the Wikipedia entry about it, shown below, provides a high-level description: in the figure, the plumb line (a weight attached to a cord) at each location always points down towards the Earth’s center of gravity. Therefore, a hypothetical surface that’s perpendicular to that plumb line is a local geoid surface. When mathematically stitched together and calibrated to a mean sea level, those perpendicular surfaces at many locations around the Earth form a geoid, a model of how gravity changes over the surface of the Earth.
The gravitational “landscape” of a geoid is based solely on the Earth’s mass and morphology. If the Earth were not rotating, if there were no movement of air, sea, or land, and if the Earth’s interior were uniformly dense, a geoid would be a perfect sphere. But Earth’s rotation causes the polar regions to flatten down slightly, making Earth an ellipsoid instead of a sphere. As a result, the force of gravity is slightly stronger at the poles compared to the equator. Smaller variations in gravity across the Earth’s surface are caused by differences in the thickness and rock density of Earth’s crust, as well as density differences and convection deep in Earth’s interior.
Scientists can use the high resolution geoid based on GOCE’s data as a gravitational reference frame for other Earth sciences investigations. Ocean circulation, sea level changes, and the melting of ice caps — important indicators for climate change — cause variations in actual ocean surface heights that can be measured by other Earth observatories. These observations, calibrated against a good geoid model, will significantly help in better understanding Earth’s climate dynamics.
Density differences and convection in Earth’s mantle also affect the gravitational field. For instance, the GOCE geoid model show a “depression” in the Indian Ocean and “plateaus” in the North Atlantic and Western Pacific. Gravity data could show signatures of powerful earthquakes and volcanoes, providing knowledge that may someday help scientists predict these natural disasters. There are also important applications in geo-information systems, civil engineering, mapping, and exploration that will be enhanced by a more refined geoid model.
Since its launch in March of 2009, except for a brief period for spacecraft systems checks and a temporary operational glitch, GOCE has been collecting data on our planet’s gravitational field as it orbits Earth in an approximate north-south direction (polar orbit), at an altitude of just 250 kilometers. This is unusually low for a low-Earth orbit but it’s required because the best gravitational field measurements are obtained when GOCE gets as close as possible to Earth’s surface while still maintaining its orbit. The satellite’s aerodynamic shape helps stabilize it as it skims atop the atmosphere’s edge, but inevitably, the rarefied air causes a drag on the satellite that slows it down. Therefore, to maintain its orbital velocity, GOCE uses its ion propulsion system to give itself an occasional boost.
The mission was originally supposed to last 20 months, the estimated time it would have taken for GOCE to use up all its fuel. But an unusually quiet solar cycle minimum had thinned the upper atmosphere, reducing drag on the satellite, which enabled it to conserve fuel. Because it has fuel reserves remaining, the mission has been extended till the end of 2012, allowing GOCE to continue collecting data that will increase the already high precision of its gravity measurements.
Shireen Gonzaga is a freelance writer who enjoys writing about natural history. She is also a technical editor at an astronomical observatory where she works on documentation for astronomers. Shireen has many interests and hobbies related to the natural world. She lives in Cockeysville, Maryland. |
The features of the scientific method
Researchers re-observe because they want to be accurate and definite about their findings. Re-observation may be made by the same researcher at a different time and place, or done by other professionals at some other time or place.
Nevertheless, science can begin from some random observation. This contrasts with methods that rely on pure reason, including that proposed by Plato, and with methods that rely on emotional or other subjective factors. It also contrasts with methods that rely on experiences that are unique to a particular individual or a small group of individuals.

The more unlikely it is that a prediction would be correct simply by coincidence, the more convincing it is if the prediction is fulfilled; evidence is also stronger if the answer to the prediction is not already known, due to the effects of hindsight bias (see also postdiction). Occam's Razor serves as a rule of thumb for choosing the most desirable amongst a group of equally explanatory hypotheses. To minimize the confirmation bias which results from entertaining a single hypothesis, strong inference emphasizes the need for entertaining multiple alternative hypotheses. In general, scientists tend to look for theories that are "elegant" or "beautiful".

Some realities can be directly observed, such as the number of students present in a class and how many of them are male and how many female. The purpose of an experiment is to determine whether observations of the real world agree with or conflict with the predictions derived from a hypothesis. For example, while a hypothesis on the existence of other intelligent species may be convincing with scientifically based speculation, there is no known experiment that can test this hypothesis. Verifiability means that an experiment must be replicable by another researcher.
Those who want to be definite and positive are often referred to as positivists. The entire community, especially the scientific community, must have access to the results of the method so that knowledge does not stagnate and can progress.
For example, Newton's laws explained thousands of years of scientific observations of the planets almost perfectly. Franklin immediately spotted the flaws which concerned the water content. Scientists are free to use whatever resources they have — their own creativity, ideas from other fields, inductive reasoning, Bayesian inference, and so on — to imagine possible explanations for a phenomenon under study.
These outward trappings are part of science. Most experimental results do not produce large changes in human understanding; improvements in theoretical scientific understanding typically result from a gradual process of development over time, sometimes across different domains of science.
The scientific method allows psychological data to be replicated and confirmed in many instances, under different circumstances, and by a variety of researchers.
Newton was able to include those measurements as consequences of his laws of motion. A new technology or theory might make the necessary experiments feasible. Even taking a plane from New York to Paris is an experiment which tests the aerodynamical hypotheses used for constructing the plane.

The major precepts of the scientific method employed by all scientific disciplines are verifiability, predictability, falsifiability, and fairness. Applying the scientific method to psychology, therefore, helps to standardize the approach to understanding its very different types of information. Therefore, science itself can have little to say about the possibility. The conjecture might be that a new drug will cure the disease in some of those people. In subsequent modifications, it has also subsumed aspects of many other fields such as biochemistry and molecular biology. Any interference of researchers' personal likes and dislikes in their research can contaminate the purity of the data, which ultimately can affect the predictions made by the researcher.

The basic steps in the scientific method are:
- Observe a natural phenomenon and define a question about it
- Make a hypothesis, or potential solution to the question
- Test the hypothesis
- If the hypothesis is true, find more evidence or find counter-evidence
- If the hypothesis is false, create a new hypothesis or try again
- Draw conclusions and repeat; the scientific method is never-ending, and no result is ever considered perfect

In order to ask an important question that may improve our understanding of the world, a researcher must first observe natural phenomena. A statistical hypothesis is a conjecture about a given statistical population. Scientists assume an attitude of openness and accountability on the part of those conducting an experiment. However, these laws were then determined to be special cases of a more general theory (relativity), which explained both the previously unexplained exceptions to Newton's laws and predicted and explained other observations such as the deflection of light by gravity. Replicable experiments: scientific experiments are replicable.
Halley’s Comet, proud parent of two meteor showers, swings into the inner solar system about every 76 years. At such times, the sun’s heat causes the comet to loosen its icy grip over its mountain-sized conglomeration of ice, dust and gas. At each pass near the sun, the crumbly comet sheds a fresh trail of debris into its orbital stream. It lost about 1/1,000th of its mass during its last flyby in 1986. It’s because comets like Halley are so crumbly that we see annual meteor showers, like the Eta Aquariid meteor shower that’s going on now.
Keep reading to learn more about Comet Halley, the meteor showers it spawns, and about how astronomers calculate the velocities of meteors streaking across our sky.
Comet Halley’s 2 meteor showers. Because Comet Halley has circled the sun innumerable times over countless millennia, cometary fragments litter its orbit. That’s why the comet doesn’t need to be anywhere near the Earth or the sun in order to produce a meteor shower. Instead, whenever our Earth in its orbit intersects Comet Halley’s orbit, cometary bits and pieces – oftentimes no larger than grains of sand or granules of gravel – smash into Earth’s upper atmosphere, to vaporize as fiery streaks across our sky: meteors.
It so happens we intersect Comet Halley’s orbit not once, but twice each year. In early May, we see bits of this comet as the annual Eta Aquariid meteor shower.
Then some six months later, in October, Earth in its orbit again intersects the orbital path of Comet Halley. This time around, these broken-up chunks from Halley’s Comet burn up in Earth’s atmosphere as the annual Orionid meteor shower.
By the way, these small fragments are called meteoroids when in outer space, and meteors when they vaporize in the Earth’s atmosphere.
Meteors in annual showers – made from the icy debris of comets – don’t hit the ground. They vaporize high in Earth’s atmosphere. The more rocky or metallic meteors are what sometimes hit the ground intact, and then they are called meteorites.
Where is Comet Halley now? Often, astronomers like to give distances of solar system objects in terms of astronomical units (AU), which is the sun-Earth distance. Comet Halley lodges 0.587 AU from the sun at its closest point to the sun (perihelion) and 35.3 AU at its farthest point (aphelion).
In other words, Halley’s Comet resides about 60 times farther from the sun at its closest than it does at its farthest.
It was last at perihelion in 1986, and will again return to perihelion in 2061.
At present, Comet Halley lies outside the orbit of Neptune, and not far from its aphelion point. See the image at the top of this post – for May 2019 – via Fourmilab.
Even so, meteoroids swim throughout Comet Halley's orbital stream, so each time Earth crosses the orbit of Halley's Comet, in May and October, these meteoroids turn into incandescent meteors once they plunge into the Earth's upper atmosphere.
Of course, Comet Halley isn’t the only comet that produces a major meteor shower …
Parent bodies of other major meteor showers
| Meteor Shower | Parent Body | Semi-major axis | Orbital Period | Perihelion | Aphelion |
|---|---|---|---|---|---|
| Quadrantids | 2003 EH1 (asteroid) | 3.12 AU | 5.52 years | 1.19 AU | 5.06 AU |
| Lyrids | Comet Thatcher | 55.68 AU | 415 years | 0.92 AU | 110 AU |
| Eta Aquariids | Comet 1P/Halley | 17.8 AU | 75.3 years | 0.59 AU | 35.3 AU |
| Delta Aquariids | Comet 96P/Machholz | 3.03 AU | 5.28 years | 0.12 AU | 5.94 AU |
| Perseids | Comet 109P/Swift-Tuttle | 26.09 AU | 133 years | 0.96 AU | 51.23 AU |
| Draconids | Comet 21P/Giacobini–Zinner | 3.52 AU | 6.62 years | 1.04 AU | 6.01 AU |
| Orionids | Comet 1P/Halley | 17.8 AU | 75.3 years | 0.59 AU | 35.3 AU |
| Taurids | Comet 2P/Encke | 2.22 AU | 3.30 years | 0.33 AU | 4.11 AU |
| Leonids | Comet 55P/Tempel-Tuttle | 10.33 AU | 33.22 years | 0.98 AU | 19.69 AU |
| Geminids | 3200 Phaethon (asteroid) | 1.27 AU | 1.43 years | 0.14 AU | 2.40 AU |
How fast do meteors from Comet Halley travel? If we can figure how fast Comet Halley travels at the Earth’s distance from the sun, we should also be able to figure out how fast these meteors fly in our sky.
Some of you may know that a solar system body, such as a planet or comet, goes faster in its orbit as it nears the sun and more slowly in its orbit as it gets farther away. Halley’s Comet swings inside the orbit of Venus at perihelion – the comet’s nearest point to the sun. At aphelion – its most distant point – Halley’s Comet goes all the way beyond the orbit of Neptune, the solar system’s outermost (known) planet.
When the meteoroids from the orbital stream of Halley’s Comet streak across the sky as Eta Aquariid or Orionid meteors, we know these meteoroids/meteors have to be one astronomical unit (Earth’s distance) from the sun. It might be tempting to assume that these meteoroids at one astronomical unit from the sun travel through space at the same speed Earth does: 67,000 miles per hour (108,000 km/h).
However, the velocity of these meteoroids through space does not equal that of Earth at the Earth’s distance from the sun. For that to happen, Earth and Halley’s Comet would have to orbit the sun in the same period of time. But the orbital periods of Earth and Comet Halley are vastly different. Earth takes one year to orbit the sun whereas Halley’s Comet takes about 76 years.
However, thanks to the great genius, Isaac Newton, we can compute the velocity of these meteoroids/meteors at the Earth’s distance from the sun by using Newton’s Vis-viva equation, his poetic rendition of instantaneous motion.
The answer, giving the velocity of these meteoroids through space at the Earth’s distance from the sun, is virtually at our fingertips. All we need to know is Comet Halley’s semi-major axis (mean distance from the sun) in astronomical units. Here you have it:
Comet Halley’s semi-major axis = 17.8 astronomical units.
In the easy-to-use Vis-viva equation below, r = distance from sun in astronomical units, and a = semi-major axis of Comet Halley’s orbit in astronomical units. In other words, r = 1 AU and a = 17.8 AU.
Vis-viva equation (r = distance from sun = 1 AU; and a = semi-major axis = 17.8 AU):
Velocity = 67,000 x the square root of (2/r – 1/a)
Velocity = 67,000 x the square root of (2/1 – 1/17.8)
Velocity = 67,000 x the square root of (2 – 0.056)
Velocity = 67,000 x the square root of 1.944
Velocity = 67,000 x 1.39
Velocity = 93,130 miles per hour or 25.87 miles per second
The above answer gives the velocity of these meteoroids through space at the Earth’s distance from the sun. However, if these meteoroids were to hit Earth’s atmosphere head-on, that would push the velocity up to an incredible 160,130 miles per hour (257,704 km/h) because 93,130 + 67,000 = 160,130. NASA gives the velocity for the Eta Aquariid meteors and Orionid meteors at 148,000 miles per hour (238,000 km/h), which suggests the collision of these meteoroids/meteors with Earth is not all that far from head-on.
We can also use the Vis-viva equation to find out the velocity of Halley’s Comet (or its meteoroids) at the perihelion distance of 0.59 AU and aphelion distance of 35.3 AU.
Perihelion velocity = 122,331 miles per hour (200,000 km/h)
Aphelion velocity = 1,464 miles per hour (2,400 km/h)
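For readers who want to reproduce these numbers, the same vis-viva arithmetic can be written as a short Python sketch (using the 67,000 mph figure for Earth's orbital speed, as above):

```python
import math

EARTH_ORBITAL_SPEED_MPH = 67_000  # Earth's mean orbital speed, as used in the article

def vis_viva_speed_mph(r_au, a_au):
    """Orbital speed at distance r (AU) for an orbit of semi-major axis a (AU),
    scaled to Earth's orbital speed."""
    return EARTH_ORBITAL_SPEED_MPH * math.sqrt(2 / r_au - 1 / a_au)

a_halley = 17.8  # semi-major axis of Comet Halley's orbit, AU
print(vis_viva_speed_mph(1.0, a_halley))    # ~93,000 mph at Earth's distance
print(vis_viva_speed_mph(0.59, a_halley))   # ~122,000 mph at perihelion
print(vis_viva_speed_mph(35.3, a_halley))   # ~1,500 mph near aphelion
```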
Bottom line: The famous Comet Halley spawns the Eta Aquariids – going on now – and the Orionids in October. Plus where the comet is now, parent bodies of other meteor showers … and Isaac Newton’s Vis-viva equation, his poetic rendition of instantaneous motion.
Bruce McClure has served as lead writer for EarthSky's popular Tonight pages since 2004. He's a sundial aficionado, whose love for the heavens has taken him to Lake Titicaca in Bolivia and sailing in the North Atlantic, where he earned his celestial navigation certificate through the School of Ocean Sailing and Navigation. He also writes and hosts public astronomy programs and planetarium programs in and around his home in upstate New York. |
Mirrors in orbit would reflect sunlight onto huge solar panels.
Imagine looking at Tokyo Bay from high above and seeing a man-made island in the harbor, 3 kilometers long. There is a massive net stretched over the island and studded with 5 billion tiny rectifying antennas, which convert microwave energy into DC electricity. Also on the island is a substation that sends that electricity coursing through a submarine cable to Tokyo, to help keep the factories of the Keihin industrial zone humming and the neon lights of Shibuya shining bright.
But you can’t even see the most interesting part. Several giant solar collectors in geosynchronous orbit are beaming microwaves down to the island from 36 000 km above Earth.
It’s been the subject of many previous studies and the stuff of sci-fi for decades, but space-based solar power could at last become a reality—and within 25 years, according to a proposal from researchers at the Japan Aerospace Exploration Agency (JAXA). The agency, which leads the world in research on space-based solar power systems, now has a technology road map that suggests a series of ground and orbital demonstrations leading to the development in the 2030s of a 1-gigawatt commercial system—about the same output as a typical nuclear power plant.
It’s an ambitious plan, to be sure. But a combination of technical and social factors is giving it currency, especially in Japan. On the technical front, recent advances in wireless power transmission allow moving antennas to coordinate in order to send a precise beam across vast distances. At the same time, heightened public concerns about the climatic effects of greenhouse gases produced by the burning of fossil fuels are prompting a look at alternatives. Renewable energy technologies to harvest the sun and the wind are constantly improving, but large-scale solar and wind farms occupy huge swaths of land, and they provide only intermittent power. Space-based solar collectors in geosynchronous orbit, on the other hand, could generate power nearly 24 hours a day. Japan has a particular interest in finding a practical clean energy source: The accident at the Fukushima Daiichi nuclear power plant prompted an exhaustive and systematic search for alternatives, yet Japan lacks both fossil fuel resources and empty land suitable for renewable power installations.
Soon after we humans invented silicon-based photovoltaic cells to convert sunlight directly into electricity, more than 60 years ago, we realized that space would be the best place to perform that conversion. The concept was first proposed formally in 1968 by the American aerospace engineer Peter Glaser. In a seminal paper, he acknowledged the challenges of constructing, launching, and operating these satellites but argued that improved photovoltaics and easier access to space would soon make them achievable. In the 1970s, NASA and the U.S. Department of Energy carried out serious studies on space-based solar power, and over the decades since, various types of solar power satellites (SPSs) have been proposed. No such satellites have been orbited yet because of concerns regarding costs and technical feasibility. The relevant technologies have made great strides in recent years, however. It’s time to take another look at space-based solar power.
A commercial SPS capable of producing 1 GW would be a magnificent structure weighing more than 10 000 metric tons and measuring several kilometers across. To complete and operate an electricity system based on such satellites, we would have to demonstrate mastery of six different disciplines: wireless power transmission, space transportation, construction of large structures in orbit, satellite attitude and orbit control, power generation, and power management. Of those six challenges, it’s the wireless power transmission that remains the most daunting. So that’s where JAXA has focused its research.
Wireless power transmission has been the subject of investigation since Nikola Tesla’s experiments at the end of the 19th century. Tesla famously began building a 57-meter tower on New York’s Long Island in 1901, hoping to use it to beam power to such targets as moving airships, but his funding was canceled before he could realize his dream.
To send power over distances measured in millimeters or centimeters—for example, to charge an electric toothbrush from its base or an electric vehicle from a roadway—electromagnetic induction works fine. But transmitting power over longer distances can be accomplished efficiently only by converting electricity into either a laser or a microwave beam.
The laser method’s main advantages and disadvantages both relate to its short wavelength, which would be around 1 micrometer for this application. Such wavelengths can be transmitted and received by relatively small components: The transmitting optics in space would measure about 1 meter for a 1-GW installation, and the receiving station on the ground would be several hundred meters long. However, the short-wavelength laser would often be blocked by the atmosphere; water molecules in clouds would absorb or scatter the laser beam, as they do sunlight. No one wants a space-based solar power system that works only when the sky is clear.
But microwaves—for example, ones with wavelengths between 5 and 10 centimeters—would have no such problems in transmission. Microwaves also have an efficiency advantage for a space-based solar power system, where power must be converted twice: first from DC power to microwaves aboard the satellite, then from microwaves to DC power on the ground. In lab conditions, researchers have achieved about 80 percent efficiency in that power conversion on both ends. Electronics companies are now striving to achieve such rates in commercially available components, such as in power amplifiers based on gallium nitride semiconductors, which could be used in the microwave transmitters.
In their pursuit of an optimal design for the satellite, JAXA researchers are working on two different concepts. In the more basic one, a huge square panel (measuring 2 km per side) would be covered with photovoltaic elements on its top surface and transmission antennas on its bottom. This panel would be suspended by 10-km-long tether wires from a small bus, which would house the satellite’s controls and communication systems.
Using a technique called gravity gradient stabilization, the bus would act as a counterweight to the huge panel. The panel, which would be closer to Earth, would experience more gravitational pull down toward the planet and less centrifugal force away from it, while the bus would be tugged upward by the opposite effects. This balance of forces would keep the satellite in a stable orbit, so it wouldn’t need any active attitude-control system, saving millions of dollars in fuel costs.
The problem with this basic SPS configuration is its inconstant rate of power generation. Because the photovoltaic panel’s orientation is fixed, the amount of sunlight that hits it varies greatly as the geosynchronous satellite and Earth spin.
So JAXA has come up with a more advanced SPS concept that solves the solar collection problem by employing two huge reflective mirrors. These would be positioned so that between the two of them, they would direct light onto two photovoltaic panels 24 hours a day. The two mirrors would be free flying, not tethered to the solar panels or the separate transmission unit, which means that we would have to master a sophisticated kind of formation flying to implement this system. Space agencies have some experience with formation flying, most notably in the docking maneuvers performed at the International Space Station, but coordinating a formation flight involving kilometer-scale structures is a big step from today’s docking procedures.
We would also have to make several other breakthroughs before this advanced type of SPS could be built. We’d need very light materials for the mirror structures to allow for the formation flight, as well as extremely high-voltage power transmission cables that could channel the power from the solar panels to the transmission unit with minimal resistive losses. Such technologies would take years to develop, so if one or more nations do embark on a long-term project to exploit space-based solar power, they may employ a two-phase program that begins with the basic model while researchers work on the technologies that will allow for next-generation systems.
To generate the microwaves, researchers have proposed vacuum tubes such as magnetrons, klystrons, or traveling wave tubes, because their power conversion efficiency is reasonably high—typically 70 percent or higher—and they’re relatively inexpensive. Semiconductor amplifiers are getting better all the time, however; their efficiencies are going up, and their costs are coming down. Cost is important here because a 1-GW commercial SPS would have to include at least 100 million 10-watt semiconductor amplifiers.
To choose a microwave frequency for transmission, we have to weigh several factors. Low-frequency microwaves penetrate the atmosphere well, but they require very large antennas, which would make construction and maintenance more complicated. Frequencies in the range of 1 to 10 gigahertz offer the best compromise between antenna size and atmospheric attenuation. Within this range, 2.45 and 5.8 GHz are the potential candidates because they are in the bands set aside for industrial, scientific, and medical uses. Of these, 5.8 GHz seems particularly desirable because the transmitting antennas can be smaller.
Making a powerful beam of microwaves is important, of course, but the next step is a lot trickier: aiming the beam precisely so that it travels the 36 000 km to hit the rectifying antennas spot on.
Consider that the microwave transmission system would be composed of a number of antenna panels, each measuring perhaps 5 meters long, that would be covered in tiny antennas: In total, more than 1 billion antennas would likely be installed on a single SPS. Coordinating the microwaves generated by this vast swarm of antennas won’t be easy. To produce a single, precisely focused beam, the phases of the microwaves sent from all the antenna panels must be synchronized. That would be hard to manage, as these panels would move relative to each other.
This challenge of precisely directing a beam from a moving source is unique and hasn’t been solved by existing communication technologies. The beam must have very little divergence to prevent it from spreading out over too large an area. To send power at the 5.8-GHz frequency to a rectifying antenna, or rectenna, with a diameter of 3 km, the divergence must be limited to 100 microradians and the beam must have a pointing accuracy of 10 µrad.
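For a sense of scale (a back-of-the-envelope estimate of mine, using only the distance and tolerances quoted above): at 5.8 GHz the wavelength is \( \lambda = c/f \approx 5.2\ \text{cm} \), and over a path of \( d \approx 36\,000\ \text{km} \) a divergence of \( \theta = 100\ \mu\text{rad} \) spreads the beam by roughly \( d\,\theta \approx 3.6\ \text{km} \), comparable to the 3-km rectenna, while a pointing error of \( 10\ \mu\text{rad} \) would shift the spot by about 360 meters.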
JAXA’s solution involves a pilot signal that would be sent from the rectenna on the ground. As each individual antenna panel on the satellite received the pilot signal, it would calculate the necessary phases for its microwaves and adjust accordingly. The sum of all these adjustments is a tight beam that would zing down through the atmosphere to hit the rectenna. Such phase-adjusting technologies, known as retrodirective systems, have been used in small-scale antenna arrays in space, but additional work would be needed before they could coordinate several kilometers of orbital transmitters.
Once the beam reaches the receiving site, the rest of the process would be relatively easy. Arrays of rectennas would convert the microwave power to DC power with an efficiency greater than 80 percent. Then the DC power would be converted to AC and fed into the electrical grid.
When laypeople hear these orbital solar farms described, they often ask if it would be safe to send a powerful beam of microwaves down to Earth. Wouldn’t it cook whatever’s in its path, like food in a microwave oven? Some people have a grisly mental image of roasted seagulls dropping from the sky. In fact, the beam wouldn’t even be intense enough to heat your coffee. In the center of the beam in a commercial SPS system, the power density would be 1 kilowatt per square meter, which is about equal to the intensity of sunlight. As the regulatory limit for sustained human exposure to microwaves is typically set at 10 watts per square meter, however, the rectenna site would have to be a restricted area, and maintenance workers who enter that zone would have to take simple precautions, such as donning protective clothing. But the land outside the rectenna site would be perfectly safe. At a distance of 2 km from its center, the beam’s power density will have already dropped below the regulatory threshold.
In 2008, on a mountaintop on Hawaii’s main island, a rectenna received a beam of microwaves sent from the slopes of a volcano on the island of Maui, about 150 km away. That demonstration project, led by former NASA physicist John Mankins and recorded for a show on the Discovery Channel, was modest in its ambitions: Only 20 W of power were generated by the solar panels on Maui and beamed across the ocean. This setup was far from ideal because the microwaves’ phases were disturbed during this horizontal transmission through the dense atmosphere. Most of the power was lost in transmission, and less than a microwatt was received on the Big Island. But the experiment did demonstrate the general principle to an admiring public. And it’s worth remembering that in a space-based system, the microwaves would pass through dense atmosphere only for the last few kilometers of their journey.
In Japan, we are now planning a series of demonstrations for the next few years. By the end of this year, researchers expect to perform a ground experiment in which a beam of hundreds of watts will be transmitted over about 50 meters. This project, funded by JAXA and Japan Space Systems, will be the world’s first demonstration of high-power and long-range microwave transmission with the critical addition of retrodirective beam control. The microwave transmitter consists of four individual panels that can move in relation to one another in order to simulate antenna motion in orbit. Each panel, measuring 0.6 meter by 0.6 meter, contains hundreds of tiny transmitting antennas and receiving antennas to detect the pilot signal, as well as phase controllers and power management systems. Each panel will transmit 400 W, so that the total beam will carry 1.6 kW; in this early-stage experiment, we expect the rectenna to have a power output of 350 W.
Next, JAXA researchers hope to conduct the first microwave power transmission experiment in space, sending several kilowatts from low Earth orbit to the ground. This step, proposed for 2018, should test out the hardware: We hope to demonstrate microwave beam control, evaluate the system’s overall efficiency, and verify that the microwave beam doesn’t interfere with existing communications infrastructure. We also have some space science to conduct. We want to be sure that the intense microwave beam isn’t distorted or absorbed by the plasma of the ionosphere, the upper-atmosphere layer that contains electrically charged particles. We’re pretty sure that the beam won’t interact with this plasma, but our hypothesis can be confirmed only in the space environment.
If all goes well with these initial ground and space demonstrations, things will really start to get interesting. JAXA’s technology road map calls for work to begin on a 100-kW SPS demonstration around 2020. Engineers would verify all the basic technologies required for a commercial space-based solar power system during this stage.
Constructing and orbiting a 2-megawatt and then a 200-MW plant, the next likely steps, would require an international consortium, like the ones that fund the world’s giant particle physics experiments. Under such a scenario, a global organization could begin the construction of a 1-GW commercial SPS in the 2030s.
It would be difficult and expensive, but the payoff would be immense, and not just in economic terms. Throughout human history, the introduction of each new energy source—beginning with firewood, and moving on through coal, oil, gas, and nuclear power—has caused a revolution in our way of living. If humanity truly embraces space-based solar power, a ring of satellites in orbit could provide nearly unlimited energy, ending the biggest conflicts over Earth’s energy resources. As we place more of the machinery of daily life in space, we’ll begin to create a prosperous and peaceful civilization beyond Earth’s surface.
Via IEEE Spectrum
Volume is the quantity of three-dimensional space enclosed by some closed boundary, for example, the space that a substance (solid, liquid, gas, or plasma) or shape occupies or contains. Volume is often quantified numerically using the SI derived unit, the cubic metre. The volume of a container is generally understood to be the capacity of the container, i.e., the amount of fluid (gas or liquid) that the container could hold, rather than the amount of space the container itself displaces.
Three-dimensional mathematical shapes are also assigned volumes. Volumes of some simple shapes, such as regular, straight-edged, and circular shapes, can be easily calculated using arithmetic formulas. The volume of a more complicated shape can be calculated by integral calculus if a formula exists for the shape's boundary. Where a variance in shape and volume occurs, such as between different human beings, volume can be calculated using three-dimensional techniques such as the Body Volume Index. One-dimensional figures (such as lines) and two-dimensional shapes (such as squares) are assigned zero volume in three-dimensional space.
The volume of a solid (whether regularly or irregularly shaped) can be determined by fluid displacement. Displacement of liquid can also be used to determine the volume of a gas. The combined volume of two substances is usually greater than the volume of one of the substances. However, sometimes one substance dissolves in the other and the combined volume is not additive.
In differential geometry, volume is expressed by means of the volume form, and is an important global Riemannian invariant. In thermodynamics, volume is a fundamental parameter, and is a conjugate variable to pressure.
Any unit of length gives a corresponding unit of volume, namely the volume of a cube whose side has the given length. For example, a cubic centimetre (cm3) would be the volume of a cube whose sides are one centimetre (1 cm) in length.
In the International System of Units (SI), the standard unit of volume is the cubic metre (m3). The metric system also includes the litre (L) as a unit of volume, where one litre is the volume of a 10-centimetre cube. Thus
- 1 litre = (10 cm)3 = 1000 cubic centimetres = 0.001 cubic metres,
- 1 cubic metre = 1000 litres.
Small amounts of liquid are often measured in millilitres, where
- 1 millilitre = 0.001 litres = 1 cubic centimetre.
Various other traditional units of volume are also in use, including the cubic inch, the cubic foot, the cubic mile, the teaspoon, the tablespoon, the fluid ounce, the fluid dram, the gill, the pint, the quart, the gallon, the minim, the barrel, the cord, the peck, the bushel, and the hogshead.
Volume and capacity are sometimes distinguished, with capacity being used for how much a container can hold (with contents measured commonly in litres or its derived units), and volume being how much space an object displaces (commonly measured in cubic metres or its derived units).
Volume and capacity are also distinguished in capacity management, where capacity is defined as volume over a specified time period. However, in this context the term volume may be more loosely interpreted to mean quantity.
The density of an object is defined as mass per unit volume. The inverse of density is specific volume which is defined as volume divided by mass. Specific volume is a concept important in thermodynamics where the volume of a working fluid is often an important parameter of a system being studied.
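Expressed as formulas (a direct restatement of the definitions above), with \( m \) the mass and \( V \) the volume, the density is \( \rho = m/V \) and the specific volume is \( v = V/m = 1/\rho \).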
Volume in Calculus
The volume integral in cylindrical coordinates is \( V = \iiint r \, dr \, d\theta \, dz. \)
| Shape | Volume formula | Variables |
|---|---|---|
| Cube | \(a^3\) | a = length of any side (or edge) |
| Cylinder | \(\pi r^2 h\) | r = radius of circular face, h = height |
| Prism | \(B h\) | B = area of the base, h = height |
| Rectangular prism | \(l w h\) | l = length, w = width, h = height |
| Triangular prism | \(\tfrac{1}{2} b h l\) | b = base length of triangle, h = height of triangle, l = length of prism or distance between the triangular bases |
| Sphere | \(\tfrac{4}{3}\pi r^3\) | r = radius of sphere (this is the integral of the surface area of a sphere) |
| Ellipsoid | \(\tfrac{4}{3}\pi a b c\) | a, b, c = semi-axes of ellipsoid |
| Torus | \(2\pi^2 R r^2\) | r = minor radius (radius of the tube), R = major radius (distance from center of tube to center of torus) |
| Pyramid | \(\tfrac{1}{3} B h\) | B = area of the base, h = height of pyramid |
| Square pyramid | \(\tfrac{1}{3} s^2 h\) | s = side length of base, h = height |
| Rectangular pyramid | \(\tfrac{1}{3} l w h\) | l = length, w = width, h = height |
| Cone | \(\tfrac{1}{3}\pi r^2 h\) | r = radius of circle at base, h = distance from base to tip or height |
| Parallelepiped | \(a b c \sqrt{1 + 2\cos\alpha \cos\beta \cos\gamma - \cos^2\alpha - \cos^2\beta - \cos^2\gamma}\) | a, b, and c are the parallelepiped edge lengths, and α, β, and γ are the internal angles between the edges |
| Any volumetric sweep | \(\int_a^b A(h)\,dh\) | h = any dimension of the figure; A(h) = area of the cross-sections perpendicular to h, described as a function of the position along h; a and b are the limits of integration. (This will work for any figure if its cross-sectional area can be determined from h.) |
| Any rotated figure (washer method) | \(\pi \int_a^b \big(R_O(x)^2 - R_I(x)^2\big)\,dx\) | \(R_O(x)\) and \(R_I(x)\) are functions expressing the outer and inner radii, respectively |
Volume ratios for a cone, sphere and cylinder of the same radius and height
Let the radius be r and the height be h (which is 2r for the sphere); then the volume of the cone is
\( \tfrac{1}{3}\pi r^2 h = \tfrac{2}{3}\pi r^3, \)
the volume of the sphere is
\( \tfrac{4}{3}\pi r^3, \)
while the volume of the cylinder is
\( \pi r^2 h = 2\pi r^3, \)
so the cone, sphere, and cylinder volumes stand in the ratio 1 : 2 : 3.
Volume formula derivations
The volume of a sphere is the integral of an infinite number of infinitesimally small circular disks of thickness dx. The calculation for the volume of a sphere with center 0 and radius r is as follows.
The surface area of the circular disk is \( \pi y^2 \).
The radius of the circular disks, defined such that the x-axis cuts perpendicularly through them, is
\( y = \sqrt{r^2 - x^2} \quad \text{or} \quad z = \sqrt{r^2 - x^2}, \)
where y or z can be taken to represent the radius of a disk at a particular x value.
Using y as the disk radius, the volume of the sphere can be calculated as
\( \int_{-r}^{r} \pi y^2 \, dx = \int_{-r}^{r} \pi \left( r^2 - x^2 \right) dx = \pi \left[ r^2 x - \frac{x^3}{3} \right]_{-r}^{r} = \frac{4}{3}\pi r^3. \)
This formula can be derived more quickly using the formula for the sphere's surface area, which is \( 4\pi r^2 \). The volume of the sphere consists of layers of infinitesimally thin spherical shells, and the sphere volume is equal to
\( \int_0^r 4\pi s^2 \, ds = \frac{4}{3}\pi r^3, \)
where s is the radius of an infinitesimally thin spherical shell.
The cone is a type of pyramidal shape. The fundamental equation for pyramids, one-third times base times altitude, applies to cones as well.
However, using calculus, the volume of a cone is the integral of an infinite number of infinitesimally thin circular disks of thickness dx. The calculation for the volume of a cone of height h, whose base is centered at (0,0,0) with radius r, is as follows.
The radius of each circular disk is r if x = 0 and 0 if x = h, and varying linearly in between—that is,
\( r\,\frac{h - x}{h}. \)
The surface area of the circular disk is then
\( \pi \left( r\,\frac{h - x}{h} \right)^2 = \pi r^2 \,\frac{(h - x)^2}{h^2}. \)
The volume of the cone can then be calculated as
\( \int_0^h \pi r^2 \,\frac{(h - x)^2}{h^2}\, dx, \)
and after extraction of the constants:
\( \frac{\pi r^2}{h^2} \int_0^h (h - x)^2 \, dx. \)
Integrating gives us
\( \frac{\pi r^2}{h^2} \left[ -\frac{(h - x)^3}{3} \right]_0^h = \frac{\pi r^2}{h^2} \cdot \frac{h^3}{3} = \frac{1}{3}\pi r^2 h. \)
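As an illustrative check of these closed-form results (this sketch is my own addition, not part of the original text; the dimensions r = 2 and h = 5 are arbitrary), the disk method can also be evaluated numerically by summing thin disks and comparing the sums with \( \tfrac{4}{3}\pi r^3 \) and \( \tfrac{1}{3}\pi r^2 h \):

#include <cmath>
#include <iostream>

const double PI = 3.14159265358979323846;
const double r = 2.0;   // example sphere/cone radius (arbitrary)
const double h = 5.0;   // example cone height (arbitrary)

// Radius of the slicing disk at position x for each solid.
double sphereRadius(double x) { return std::sqrt(r * r - x * x); }
double coneRadius(double x)   { return r * (1.0 - x / h); }

// Approximate a volume by summing thin circular disks of thickness dx.
double diskSum(double (*radius)(double), double a, double b, int n) {
    double dx = (b - a) / n;
    double total = 0.0;
    for (int i = 0; i < n; i++) {
        double x = a + (i + 0.5) * dx;   // midpoint of the i-th slab
        double rad = radius(x);
        total += PI * rad * rad * dx;    // disk area times thickness
    }
    return total;
}

int main() {
    std::cout << "sphere: " << diskSum(sphereRadius, -r, r, 100000)
              << " vs (4/3)*pi*r^3 = " << 4.0 / 3.0 * PI * r * r * r << std::endl;
    std::cout << "cone:   " << diskSum(coneRadius, 0.0, h, 100000)
              << " vs (1/3)*pi*r^2*h = " << PI * r * r * h / 3.0 << std::endl;
    return 0;
}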
Volume in differential geometry
In differential geometry, a branch of mathematics, a volume form on a differentiable manifold is a differential form of top degree (i.e. whose degree is equal to the dimension of the manifold) that is nowhere equal to zero. A manifold has a volume form if and only if it is orientable. An orientable manifold has infinitely many volume forms, since multiplying a volume form by a non-vanishing function yields another volume form. On non-orientable manifolds, one may instead define the weaker notion of a density. Integrating the volume form gives the volume of the manifold according to that form.
In local coordinates, a volume form can be written as \( \omega = \sqrt{|\det g|}\; dx^1 \wedge \dots \wedge dx^n, \) where the \( dx^i \) are the 1-forms providing an oriented basis for the cotangent bundle of the n-dimensional manifold, and \( |\det g| \) is the absolute value of the determinant of the matrix representation of the metric tensor on the manifold.
Volume in thermodynamics
In thermodynamics, the volume of a system is an important extensive parameter for describing its thermodynamic state. The specific volume, an intensive property, is the system's volume per unit of mass. Volume is a function of state and is interdependent with other thermodynamic properties such as pressure and temperature. For example, volume is related to the pressure and temperature of an ideal gas by the ideal gas law.
- "Your Dictionary entry for "volume"". Retrieved 2010-05-01.
- One litre of sugar (about 970 grams) can dissolve in 0.6 litres of hot water, producing a total volume of less than one litre. "Solubility". Retrieved 2010-05-01.
Up to 1800 grams of sucrose can dissolve in a liter of water.
- "General Tables of Units of Measurement". NIST Weights and Measures Division. Retrieved 2011-01-12.
- Coxeter, H. S. M.: Regular Polytopes (Methuen and Co., 1948). Table I(i).
- Rorres, Chris. "Tomb of Archimedes: Sources". Courant Institute of Mathematical Sciences. Retrieved 2007-01-02.
|The Wikibook Geometry has a page on the topic of: Perimeters, Areas, Volumes|
|The Wikibook Calculus has a page on the topic of: Volume| |
Silver standard, monetary standard under which the basic unit of currency is defined as a stated quantity of silver and which is usually characterized by the coinage and circulation of silver, unrestricted convertibility of other money into silver, and the free import and export of silver for the settlement of international obligations.
No country presently operates under a silver standard. During the 1870s most European countries adopted the gold standard, and by the early 1900s only China and Mexico and a few small countries still used the silver standard. In 1873 the U.S. Treasury stopped coining silver. This led to the Free Silver Movement, whose supporters (miners, farmers, and debtors) advocated the return of silver coin. After the defeat of William Jennings Bryan, who ran for U.S. president on a platform advocating free and unlimited coinage of silver in 1896, agitation for free silver died in the United States. The U.S. Congress adopted the gold standard in 1900.
Artist conception of the James Webb Space Telescope. Credit: NASA GSFC/CIL/Adriana Manrique Gutierrez
We recently saw the amazing image of the black hole in the center of our Milky Way galaxy, taken by the Event Horizon Telescope. One of the puzzles of contemporary astronomy is how every big galaxy came to have a giant central black hole, and how some of these black holes are remarkably large even at very early times in the universe.
“One of the most interesting areas of discovery that Webb will open is the search for primeval black holes in the early universe. These are the seeds of the far more massive black holes that astronomers have found in galactic nuclei. Many (probably all) galaxies host black holes at their centers, with masses ranging from millions to billions of times the mass of our Sun. These supermassive black holes have grown to be so large both by gobbling up matter around them and through the merging of smaller black holes.
“An interesting recent finding has been the discovery of hyper-massive black holes, with masses of several billion solar masses, already in place when the universe was only about 700 million years old, a small fraction of its current age of 13.8 billion years. This is a puzzling result, as at such early epochs there is not enough time to grow such hyper-massive black holes, according to standard theories. Several scenarios have been proposed to solve this dilemma.
“One possibility is that black holes, resulting from the death of the very first generation of stars in the early universe, have accreted material at extremely high rates. Another scenario is that primeval, pristine gas clouds, not yet enriched by chemical elements heavier than helium, could directly collapse to form a black hole with a mass of a few hundred thousand solar masses, and subsequently accrete matter to evolve into the hyper-massive black holes observed at later epochs. Finally, dense nuclear star clusters at the centers of infant galaxies might have produced intermediate-mass black hole seeds, through stellar collisions or the merging of stellar-mass black holes, and then grown much more massive via accretion.
This illustration shows the populations of known black holes (large black dots) and the candidate black hole progenitors in the early universe (shaded regions). Credit: Roberto Maiolino, University of Cambridge
It is possible that the first black hole seeds formed in the infant universe, within just a few million years after the big bang. Webb’s extraordinary sensitivity makes it capable of detecting extremely distant galaxies, and because of the time required for the light emitted by the galaxies to travel to us, we will see them as they were in the remote past.
“Webb’s NIRSpec instrument is particularly well suited to identify primeval black hole seeds. My colleagues in the NIRSpec Instrument Science Team and I will be searching for their signatures during their active phases, when they are voraciously gobbling up matter and growing rapidly. In these phases the material surrounding them becomes very hot and luminous and ionizes the atoms in their surroundings and in their host galaxies.
“NIRSpec will disperse the light from these systems into spectra, or rainbows. The rainbow of active black hole seeds will be characterized by specific fingerprints, features of highly ionized atoms. NIRSpec will also measure the velocity of the gas orbiting in the vicinity of these primeval black holes. Smaller black holes will be characterized by lower orbital velocities. Black hole seeds formed in pristine clouds will be identified by the absence of features associated with any element heavier than helium.
“I look forward to using Webb’s unmatched capabilities to search for these black hole progenitors, with the ultimate goal of understanding their nature and origin. The early universe and the realm of black hole seeds is a completely uncharted territory that my colleagues and I are very excited to explore with Webb.”
— Roberto Maiolino, professor of experimental astrophysics and director of the Kavli Institute for Cosmology, University of Cambridge
Jonathan Gardner, Webb deputy senior project scientist, NASA’s Goddard Space Flight Center
Stefanie Milam, Webb deputy project scientist for planetary science, NASA’s Goddard Space Flight Center
2.1. Using Data in C++¶
C++ requires that users specify the data type of each variable
before it is used.
The primary C++ built-in atomic data types are: integer (int), floating point (float), double precision floating point (double), Boolean (bool), and character (char). There is also a special data type which holds a memory location, called a pointer. C++ also has collection or compound data types, which will be discussed in a future chapter.
2.2. Numeric Data¶
Numeric C++ data types include int for integer, float for floating point, and double for double precision floating point.
The standard arithmetic operations, +, -, *, and / are used with optional parentheses to force the order of operations away from normal operator precedence.
In Python we can use
// to get integer division.
In C++, we declare all data types. When two integers are divided in C++, the integer portion of the quotient is returned and the fractional portion is removed; i.e., when two integers are divided, integer division is used. To get the full quotient including the fractional part, declaring one of the numbers as a float will convert the entire result into floating point.
Exponentiation in C++ is done using pow() from the cmath library, and the remainder (modulo) operation is done with the % operator.
Run the following code to see that you understand each result.
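The interactive example referred to here is not included in this text; the following sketch (my own, not the original code) exercises the operations just described:

#include <cmath>     // for pow()
#include <iostream>
using namespace std;

int main() {
    cout << 2 + 3 * 4 << endl;     // 14: * binds tighter than +
    cout << (2 + 3) * 4 << endl;   // 20: parentheses force the order
    cout << 6 / 5 << endl;         // 1: integer division drops the fraction
    cout << 6.0 / 5 << endl;       // 1.2: a float operand gives a float result
    cout << 6 % 5 << endl;         // 1: remainder (modulo)
    cout << pow(2, 10) << endl;    // 1024: exponentiation with pow()
    return 0;
}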
When declaring numeric variables in C++, modifiers like short, long, and unsigned can optionally be used to help ensure that space is used as efficiently as possible.
2.3. Boolean Data¶
Boolean data types are named after George Boole who was an English mathematician,
so the word “Boolean” should be capitalized. However,
the Boolean data type in C++ uses the keyword bool, which is not capitalized. The possible state values for a C++ Boolean are lower case true and false. Be sure to note the difference in capitalization from Python. In Python, these same truth values are capitalized, while in C++, they are lower case.
C++ uses the standard Boolean operators, but they are represented differently than in Python: “and” is given by &&, “or” is given by ||, and “not” is given by !. Note that the internally stored values representing true and false are 1 and 0, respectively. Hence, we see this in output as well.
Boolean data objects are also used as results for comparison operators such as equality (==) and greater than (>). In addition, relational operators and logical operators can be combined together to form complex logical questions. Table 1 shows the relational and logical operators, with examples shown in the session that follows.
| Operation Name | Operator | Explanation |
|---|---|---|
| less than | < | Less than operator |
| greater than | > | Greater than operator |
| less than or equal | <= | Less than or equal to operator |
| greater than or equal | >= | Greater than or equal to operator |
| equal | == | Equality operator |
| not equal | != | Not equal operator |
| logical and | && | Both operands true for result to be true |
| logical or | \|\| | One or the other operand is true for the result to be true |
| logical not | ! | Negates the truth value, false becomes true, true becomes false |
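The session referred to above is not reproduced in this text; a small sketch of my own showing the relational and logical operators in action:

#include <iostream>
using namespace std;

int main() {
    cout << (5 == 10) << endl;                // 0 (false)
    cout << (10 > 5) << endl;                 // 1 (true)
    cout << ((5 >= 1) && (5 <= 10)) << endl;  // 1: both comparisons are true
    cout << ((1 < 5) || (10 < 1)) << endl;    // 1: at least one is true
    cout << !(5 == 10) << endl;               // 1: negation of false
    return 0;
}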
When a C++ variable is declared, space in memory is set aside to hold this type of value. A C++ variable can optionally be initialized in the declaration by using a combination of a declaration and an assignment statement.
Consider the following session:
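The session itself is not included in this text; the sketch below is my own reconstruction of the idea. It declares and initializes variables and ends with the pitfall discussed shortly, where assigning a floating-point value to an int silently truncates it:

#include <iostream>
using namespace std;

int main() {
    int theSum = 0;          // declaration combined with initialization
    theSum = theSum + 1;
    cout << theSum << endl;  // 1

    bool theBool = true;
    cout << theBool << endl; // 1: true is stored internally as 1

    theSum = 2.7;            // pitfall: the double 2.7 is truncated to 2
    cout << theSum << endl;  // 2 (the final output mentioned below)
    return 0;
}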
The statement int theSum = 0; creates a variable called theSum and initializes it to hold the data value of 0.
As in Python, the right-hand side of each assignment
statement is evaluated and the resulting data value is
“assigned” to the variable named on the left-hand side.
Here the type of the variable is integer.
Because Python is dynamically typed, if the type of the data
changes in the program, so does the type of the variable.
However, in C++, the data type cannot change.
This is a characteristic of C++’s static typing. A
variable can only ever hold one type of data.
Pitfall: C++ will often simply try to do the assignment you requested without complaining. Note what happened in the code above in the final output.
2.4. Character Data¶
In Python strings can be created with single or double quotes.
In C++ single quotes are used for the character (char) data type, and double quotes are used for the string data type.
Consider the following code.
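The code itself is omitted here; a short sketch of my own making the single-quote versus double-quote distinction:

#include <iostream>
#include <string>
using namespace std;

int main() {
    char letter = 'a';           // single quotes: a single character
    string greeting = "Hello";   // double quotes: a string
    cout << letter << endl;
    cout << greeting << endl;
    cout << greeting + " world" << endl;   // strings can be concatenated
    return 0;
}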
2.5. Pointers¶
A C++ pointer is a variable that stores a memory address and can be used to indirectly access data stored at that memory location.
We know that variables in a computer program are used to label data with a descriptive identifier so that the data can be accessed and used by that computer program.
Let’s look at some examples of storing an integer in Python and C++.
In Python every single thing is stored as an object. A Python variable is therefore actually a reference to an object that is stored in memory, so each Python variable requires two memory locations: one to store the reference, and the other to store the variable value itself in an object.
In C++ the value of each variable is stored directly in memory without the need for either a reference or an object. This makes access faster, but it is one of the reasons we need to declare each variable because different types take differing amounts of space in memory!
The following code declares a variable called varN that holds the value 100:
# Python reference for a single integer value
varN = 100
// C++ variable declaration and assignment of an integer value
int varN = 100;
In C++ the results of running this code will look like the diagram below:
In each case, when we want to output the value to the console, we use the variable name to do so.
But, we can also identify the memory location of the variable by its address. In both Python and C++, this address may change each time the program is run. In C++, the address will always look odd because it will be the actual memory address written in hexadecimal, a base-16 code such as 0x7ffd93f25244. In Python the form of the address is implementation dependent: it is sometimes a hexadecimal code and sometimes just a count or another way to reference the address.
In Python we use id to reference the address, while in C++ we use the address-of operator, &.
In both Python and C++, variables are stored in memory locations which are dependent upon the run itself. If you repeatedly run the above code in either C++ or Python, you may see the location change.
As suggested above, in Python, it is impossible to store a variable directly. Instead, we must use a variable name and a reference to the data object. (Hence the arrow in the image.) In C++, variables store values directly, because they are faster to reference.
References are slower, but they are sometimes useful. If, in C++, we want to create an analogous reference to a memory location, we must use a special data type called a pointer.
2.5.1. Pointer Syntax¶
When declaring a pointer in C++ that will “point” to the memory address of some data type, you will use the same rules of declaring variables and data types. The key difference is that there must be an asterisk (*) between the data type and the identifier.
variableType *identifier; // syntax to declare a C++ pointer
int *ptrx; // example of a C++ pointer to an integer
White space in C++ generally does not matter, so the following pointer declarations are identical:
SOMETYPE *variablename; // preferable
SOMETYPE * variablename;
However, the first declaration is preferable because it is clearer to the programmer that the variable is in fact a pointer because the asterisk is closer to the variable name.
2.5.1.1. The address-of operator, &¶
Now that we know how to declare pointers, how do we give them the address of
where the value is going to be stored? One way to do this is to have a pointer
refer to another variable by using the address-of operator, which is denoted by the ampersand symbol, &. The address-of operator does exactly what it indicates, namely it returns the address.
The syntax is shown below, where varN stores the value, and ptrN stores the address of where varN is located:
variableType varN; // a variable to hold the value
variableType *ptrN = &varN; // a variable pointing to the address of varN
Keep in mind that when declaring a C++ pointer, the pointer needs to reference the same type as the variable or constant to which it points.
Expanding on the example above, suppose varN now has the value of 9:
// variable declaration for a single integer value
int varN = 9;
// pointer declaration and assignment to the address of varN
int *ptrN = &varN;
The results of running this C++ code will look like the diagram below.
2.5.2. Accessing Values from Pointers¶
Once you have a C++ pointer, you use the asterisk before the pointer variable to dereference the pointer, which means go to the location pointed at by the pointer.
In other words, varN and *ptrN (note the asterisk in front!) refer to the same value in the code above.
Let’s extend the example above to output the value of a variable and its address in memory:
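The extended example is not reproduced in this text; a sketch of my own consistent with the description that follows:

#include <iostream>
using namespace std;

int main() {
    int varN = 9;
    int *ptrN = &varN;

    cout << "varN holds the value " << varN << endl;
    cout << "ptrN holds the address " << ptrN << endl;   // e.g. 0x7ffd93f25244
    cout << "the value stored at that address is " << *ptrN << endl;
    return 0;
}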
Compiling and running the above code will have the program output the value in varN, what is in ptrN (the memory address of varN), and what value is located at that memory location.
The second output sentence is the address of varN, which would most likely be different if you run the program on your machine.
WARNING: What happens if you forget the ampersand when assigning a value to a pointer and have the following instructions instead?
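The offending instructions are not shown in this text; presumably they looked something like the following sketch (hypothetical, reusing the variable names from earlier, and with the original presumably also printing varN and *ptrN with cout, which is what the next paragraph refers to). The missing ampersand makes the integer value itself be treated as a memory address, and a conforming compiler should reject the assignment, which is exactly the error mentioned below:

int varN = 100;
int *ptrN;
ptrN = varN;    // BAD: uses the value 100 as if it were a memory address
                // (the intent was ptrN = &varN;)
*ptrN = 50;     // tries to write 50 into whatever lives at address 100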
This is BAD, BAD, BAD!
If your compiler does not catch that error (the one for this class may),
the first cout instruction outputs
After changing *ptrN, varN now has: 50
which is expected because you changed where ptrN is pointing to and NOT the contents of where it is pointing.
The second cout instruction is a disaster because
(1) You don’t know what is stored in location 100 in memory, and
(2) that location is outside of your segment (area in memory reserved
for your program), so the operating system will jump in with a message
about a “segmentation fault”. Although such an error message looks bad,
a “seg fault” is in fact a helpful error because unlike the elusive logical
errors, the reason is fairly localized.
2.5.3. The null pointer¶
Like None in Python, the null pointer (nullptr) in C++ points to nothing. Older editions of C++ also used NULL (all caps) or 0, but we will use the keyword nullptr because the compiler can do better error handling with the keyword. The null pointer is often used in conditions and/or in logical operations.
The following example demonstrates how the null pointer works.
The variable ptrx initially has the address of x when it is declared.
On the first iteration of the loop, it is assigned the value of
nullptr, which evaluates to a false value, thereby ending the loop:
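The example itself is not included in this text; a minimal sketch of my own matching that description:

#include <iostream>
using namespace std;

int main() {
    int x = 12345;
    int *ptrx = &x;        // ptrx initially holds the address of x

    while (ptrx) {         // a non-null pointer evaluates to true
        cout << "ptrx points to the address " << ptrx << endl;
        ptrx = nullptr;    // nullptr evaluates to false, ending the loop
    }

    cout << "ptrx now points to nothing" << endl;
    return 0;
}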
Helpful Tip: The null pointer becomes very useful when you must test the state of a pointer, such as whether the assignment to an address is valid or not.
- All variables must be declared before use in C++.
- C++ has typical built-in numeric types: int is for integers, and float and double are used for floating point, depending on the number of digits desired.
- C++ has the Boolean type bool, which holds true or false.
- The character data type char holds a single character, which is encased in single quotes.
- Pointers are a type of variable that stores a memory address. To declare a pointer, an * is used before the variable name that is supposed to store the location.
Confirmation bias, also called myside bias, is the tendency to search for or interpret information in a way that confirms one's beliefs or hypotheses.[Note 1] People display this bias when they gather or remember information selectively, or when they interpret it in a biased way. The effect is stronger for emotionally charged issues and for deeply entrenched beliefs. People also tend to interpret ambiguous evidence as supporting their existing position. Biased search, interpretation and memory have been invoked to explain attitude polarization (when a disagreement becomes more extreme even though the different parties are exposed to the same evidence), belief perseverance (when beliefs persist after the evidence for them is shown to be false), the irrational primacy effect (a greater reliance on information encountered early in a series) and illusory correlation (when people falsely perceive an association between two events or situations).
A series of experiments in the 1960s suggested that people are biased toward confirming their existing beliefs. Later work re-interpreted these results as a tendency to test ideas in a one-sided way, focusing on one possibility and ignoring alternatives. In certain situations, this tendency can bias people's conclusions. Explanations for the observed biases include wishful thinking and the limited human capacity to process information. Another explanation is that people show confirmation bias because they are weighing up the costs of being wrong, rather than investigating in a neutral, scientific way.
Confirmation biases contribute to overconfidence in personal beliefs and can maintain or strengthen beliefs in the face of contrary evidence. Poor decisions due to these biases have been found in political and organizational contexts.[Note 2]
- 1 Types
- 2 Related effects
- 3 Individual differences
- 4 History
- 5 Explanations
- 6 Consequences
- 7 See also
- 8 Notes
- 9 References
- 10 Sources
- 11 Further reading
- 12 External links
Confirmation biases are effects in information processing. They differ from the behavioral confirmation effect, also called "self-fulfilling prophecy", in which behavior, influenced by expectations, causes those expectations to come true. Some psychologists use "confirmation bias" to refer to the tendency to avoid rejecting beliefs, while searching for evidence, interpreting it, or recalling it from memory. Other psychologists restrict the term to selective collection of evidence.[Note 3]
Biased search for information
Experiments have found repeatedly that people tend to test hypotheses in a one-sided way, by searching for evidence consistent with their current hypothesis. Rather than searching through all the relevant evidence, they phrase questions to receive an affirmative answer that supports their hypothesis. They look for the consequences that they would expect if their hypothesis were true, rather than what would happen if it were false. For example, someone using yes/no questions to find a number he or she suspects to be the number 3 might ask, "Is it an odd number?" People prefer this type of question, called a "positive test", even when a negative test such as "Is it an even number?" would yield exactly the same information. However, this does not mean that people seek tests that guarantee a positive answer. In studies where subjects could select either such pseudo-tests or genuinely diagnostic ones, they favored the genuinely diagnostic.
The preference for positive tests in itself is not a bias, since positive tests can be highly informative. However, in combination with other effects, this strategy can confirm existing beliefs or assumptions, independently of whether they are true. In real-world situations, evidence is often complex and mixed. For example, various contradictory ideas about someone could each be supported by concentrating on one aspect of his or her behavior. Thus any search for evidence in favor of a hypothesis is likely to succeed. One illustration of this is the way the phrasing of a question can significantly change the answer. For example, people who are asked, "Are you happy with your social life?" report greater satisfaction than those asked, "Are you unhappy with your social life?"
Even a small change in a question's wording can affect how people search through available information, and hence the conclusions they reach. This was shown using a fictional child custody case. Participants read that Parent A was moderately suitable to be the guardian in multiple ways. Parent B had a mix of salient positive and negative qualities: a close relationship with the child but a job that would take him or her away for long periods of time. When asked, "Which parent should have custody of the child?" the majority of participants chose Parent B, looking mainly for positive attributes. However, when asked, "Which parent should be denied custody of the child?" they looked for negative attributes and the majority answered that Parent B should be denied custody, implying that Parent A should have custody.
Similar studies have demonstrated how people engage in a biased search for information, but also that this phenomenon may be limited by a preference for genuine diagnostic tests. In an initial experiment, participants rated another person on the introversion–extroversion personality dimension on the basis of an interview. They chose the interview questions from a given list. When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, "What do you find unpleasant about noisy parties?" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, "What would you do to liven up a dull party?" These loaded questions gave the interviewees little or no opportunity to falsify the hypothesis about them. A later version of the experiment gave the participants less presumptive questions to choose from, such as, "Do you shy away from social interactions?" Participants preferred to ask these more diagnostic questions, showing only a weak bias towards positive tests. This pattern, of a main preference for diagnostic tests and a weaker preference for positive tests, has been replicated in other studies.
Personality traits influence and interact with biased search processes. Individuals vary in their abilities to defend their attitudes from external attacks in relation to selective exposure. Selective exposure occurs when individuals search for information that is consistent, rather than inconsistent, with their personal beliefs. An experiment examined the extent to which individuals could refute arguments that contradicted their personal beliefs. People with high confidence levels more readily seek out contradictory information to their personal position to form an argument. Individuals with low confidence levels do not seek out contradictory information and prefer information that supports their personal position. People generate and evaluate evidence in arguments that are biased towards their own beliefs and opinions. Heightened confidence levels decrease preference for information that supports individuals' personal beliefs.
Another experiment gave participants a complex rule-discovery task that involved moving objects simulated by a computer. Objects on the computer screen followed specific laws, which the participants had to figure out. So, participants could "fire" objects across the screen to test their hypotheses. Despite making many attempts over a ten-hour session, none of the participants figured out the rules of the system. They typically attempted to confirm rather than falsify their hypotheses, and were reluctant to consider alternatives. Even after seeing objective evidence that refuted their working hypotheses, they frequently continued doing the same tests. Some of the participants were taught proper hypothesis-testing, but these instructions had almost no effect.
Confirmation biases are not limited to the collection of evidence. Even if two individuals have the same information, the way they interpret it can be biased.
A team at Stanford University conducted an experiment involving participants who felt strongly about capital punishment, with half in favor and half against it. Each participant read descriptions of two studies: a comparison of U.S. states with and without the death penalty, and a comparison of murder rates in a state before and after the introduction of the death penalty. After reading a quick description of each study, the participants were asked whether their opinions had changed. Then, they read a more detailed account of each study's procedure and had to rate whether the research was well-conducted and convincing. In fact, the studies were fictional. Half the participants were told that one kind of study supported the deterrent effect and the other undermined it, while for other participants the conclusions were swapped.
The participants, whether supporters or opponents, reported shifting their attitudes slightly in the direction of the first study they read. Once they read the more detailed descriptions of the two studies, they almost all returned to their original belief regardless of the evidence provided, pointing to details that supported their viewpoint and disregarding anything contrary. Participants described studies supporting their pre-existing view as superior to those that contradicted it, in detailed and specific ways. Writing about a study that seemed to undermine the deterrence effect, a death penalty proponent wrote, "The research didn't cover a long enough period of time", while an opponent's comment on the same study said, "No strong evidence to contradict the researchers has been presented". The results illustrated that people set higher standards of evidence for hypotheses that go against their current expectations. This effect, known as "disconfirmation bias", has been supported by other experiments.
Another study of biased interpretation occurred during the 2004 US presidential election and involved participants who reported having strong feelings about the candidates. They were shown apparently contradictory pairs of statements, either from Republican candidate George W. Bush, Democratic candidate John Kerry or a politically neutral public figure. They were also given further statements that made the apparent contradiction seem reasonable. From these three pieces of information, they had to decide whether or not each individual's statements were inconsistent. There were strong differences in these evaluations, with participants much more likely to interpret statements from the candidate they opposed as contradictory.
In this experiment, the participants made their judgments while in a magnetic resonance imaging (MRI) scanner which monitored their brain activity. As participants evaluated contradictory statements by their favored candidate, emotional centers of their brains were aroused. This did not happen with the statements by the other figures. The experimenters inferred that the different responses to the statements were not due to passive reasoning errors. Instead, the participants were actively reducing the cognitive dissonance induced by reading about their favored candidate's irrational or hypocritical behavior.
Biases in belief interpretation are persistent, regardless of intelligence level. Participants in an experiment took the SAT test (a college admissions test used in the United States) to assess their intelligence levels. They then read information regarding safety concerns for vehicles, and the experimenters manipulated the national origin of the car. American participants gave their opinion on whether the car should be banned on a six-point scale, where one indicated "definitely yes" and six indicated "definitely no." Participants first evaluated whether they would allow a dangerous German car on American streets and a dangerous American car on German streets. Participants believed that the dangerous German car on American streets should be banned more quickly than the dangerous American car on German streets. There was no difference among intelligence levels in the rate at which participants would ban a car.
Biased interpretation is not restricted to emotionally significant topics. In another experiment, participants were told a story about a theft. They had to rate the evidential importance of statements arguing either for or against a particular character being responsible. When they hypothesized that character's guilt, they rated statements supporting that hypothesis as more important than conflicting statements.
Even if people gather and interpret evidence in a neutral manner, they may still remember it selectively to reinforce their expectations. This effect is called "selective recall", "confirmatory memory" or "access-biased memory". Psychological theories differ in their predictions about selective recall. Schema theory predicts that information matching prior expectations will be more easily stored and recalled than information that does not match. Some alternative approaches say that surprising information stands out and so is memorable. Predictions from both these theories have been confirmed in different experimental contexts, with no theory winning outright.
In one study, participants read a profile of a woman which described a mix of introverted and extroverted behaviors. They later had to recall examples of her introversion and extroversion. One group was told this was to assess the woman for a job as a librarian, while a second group were told it was for a job in real estate sales. There was a significant difference between what these two groups recalled, with the "librarian" group recalling more examples of introversion and the "sales" groups recalling more extroverted behavior. A selective memory effect has also been shown in experiments that manipulate the desirability of personality types. In one of these, a group of participants were shown evidence that extroverted people are more successful than introverts. Another group were told the opposite. In a subsequent, apparently unrelated, study, they were asked to recall events from their lives in which they had been either introverted or extroverted. Each group of participants provided more memories connecting themselves with the more desirable personality type, and recalled those memories more quickly.
Changes in emotional states can also influence memory recall. Participants rated how they felt when they had first learned that O.J. Simpson had been acquitted of murder charges. They described their emotional reactions and confidence regarding the verdict one week, two months, and one year after the trial. Results indicated that participants' assessments for Simpson's guilt changed over time. The more that participants' opinion of the verdict had changed, the less stable were the participant's memories regarding their initial emotional reactions. When participants recalled their initial emotional reactions two months and a year later, past appraisals closely resembled current appraisals of emotion. People demonstrate sizable myside bias when discussing their opinions on controversial topics. Memory recall and construction of experiences undergo revision in relation to corresponding emotional states.
Myside bias has been shown to influence the accuracy of memory recall. In an experiment, widows and widowers rated the intensity of their experienced grief six months and five years after the deaths of their spouses. Participants noted a higher experience of grief at six months rather than at five years. Yet, when the participants were asked after five years how they had felt six months after the death of their significant other, the intensity of grief participants recalled was highly correlated with their current level of grief. Individuals appear to utilize their current emotional states to analyze how they must have felt when experiencing past events. Emotional memories are reconstructed by current emotional states.
One study showed how selective memory can maintain belief in extrasensory perception (ESP). Believers and disbelievers were each shown descriptions of ESP experiments. Half of each group were told that the experimental results supported the existence of ESP, while the others were told they did not. In a subsequent test, participants recalled the material accurately, apart from believers who had read the non-supportive evidence. This group remembered significantly less information and some of them incorrectly remembered the results as supporting ESP.
Polarization of opinion
When people with opposing views interpret new information in a biased way, their views can move even further apart. This is called "attitude polarization". The effect was demonstrated by an experiment that involved drawing a series of red and black balls from one of two concealed "bingo baskets". Participants knew that one basket contained 60% black and 40% red balls; the other, 40% black and 60% red. The experimenters looked at what happened when balls of alternating color were drawn in turn, a sequence that does not favor either basket. After each ball was drawn, participants in one group were asked to state out loud their judgments of the probability that the balls were being drawn from one or the other basket. These participants tended to grow more confident with each successive draw—whether they initially thought the basket with 60% black balls or the one with 60% red balls was the more likely source, their estimate of the probability increased. Another group of participants were asked to state probability estimates only at the end of a sequence of drawn balls, rather than after each ball. They did not show the polarization effect, suggesting that it does not necessarily occur when people simply hold opposing positions, but rather when they openly commit to them.
A less abstract study was the Stanford biased interpretation experiment in which participants with strong opinions about the death penalty read about mixed experimental evidence. Twenty-three percent of the participants reported that their views had become more extreme, and this self-reported shift correlated strongly with their initial attitudes. In later experiments, participants also reported their opinions becoming more extreme in response to ambiguous information. However, comparisons of their attitudes before and after the new evidence showed no significant change, suggesting that the self-reported changes might not be real. Based on these experiments, Deanna Kuhn and Joseph Lao concluded that polarization is a real phenomenon but far from inevitable, only happening in a small minority of cases. They found that it was prompted not only by considering mixed evidence, but by merely thinking about the topic.
Charles Taber and Milton Lodge argued that the Stanford team's result had been hard to replicate because the arguments used in later experiments were too abstract or confusing to evoke an emotional response. The Taber and Lodge study used the emotionally charged topics of gun control and affirmative action. They measured the attitudes of their participants towards these issues before and after reading arguments on each side of the debate. Two groups of participants showed attitude polarization: those with strong prior opinions and those who were politically knowledgeable. In part of this study, participants chose which information sources to read, from a list prepared by the experimenters. For example they could read the National Rifle Association's and the Brady Anti-Handgun Coalition's arguments on gun control. Even when instructed to be even-handed, participants were more likely to read arguments that supported their existing attitudes than arguments that did not. This biased search for information correlated well with the polarization effect.
The "backfire effect" is a name for the finding that, given evidence against their beliefs, people can reject the evidence and believe even more strongly. The phrase was first coined by Brendan Nyhan and Jason Reifler.
Persistence of discredited beliefs
Confirmation biases can be used to explain why some beliefs persist when the initial evidence for them is removed. This belief perseverance effect has been shown by a series of experiments using what is called the "debriefing paradigm": participants read fake evidence for a hypothesis, their attitude change is measured, then the fakery is exposed in detail. Their attitudes are then measured once more to see if their belief returns to its previous level.
A common finding is that at least some of the initial belief remains even after a full debrief. In one experiment, participants had to distinguish between real and fake suicide notes. The feedback was random: some were told they had done well while others were told they had performed badly. Even after being fully debriefed, participants were still influenced by the feedback. They still thought they were better or worse than average at that kind of task, depending on what they had initially been told.
In another study, participants read job performance ratings of two firefighters, along with their responses to a risk aversion test. This fictional data was arranged to show either a negative or positive association: some participants were told that a risk-taking firefighter did better, while others were told they did less well than a risk-averse colleague. Even if these two case studies were true, they would have been scientifically poor evidence for a conclusion about firefighters in general. However, the participants found them subjectively persuasive. When the case studies were shown to be fictional, participants' belief in a link diminished, but around half of the original effect remained. Follow-up interviews established that the participants had understood the debriefing and taken it seriously. Participants seemed to trust the debriefing, but regarded the discredited information as irrelevant to their personal belief.
Preference for early information
Experiments have shown that information is weighted more strongly when it appears early in a series, even when the order is unimportant. For example, people form a more positive impression of someone described as "intelligent, industrious, impulsive, critical, stubborn, envious" than when they are given the same words in reverse order. This irrational primacy effect is independent of the primacy effect in memory in which the earlier items in a series leave a stronger memory trace. Biased interpretation offers an explanation for this effect: seeing the initial evidence, people form a working hypothesis that affects how they interpret the rest of the information.
One demonstration of irrational primacy used colored chips supposedly drawn from two urns. Participants were told the color distributions of the urns, and had to estimate the probability of a chip being drawn from one of them. In fact, the colors appeared in a pre-arranged order. The first thirty draws favored one urn and the next thirty favored the other. The series as a whole was neutral, so rationally, the two urns were equally likely. However, after sixty draws, participants favored the urn suggested by the initial thirty.
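A minimal sketch in Python (with illustrative urn proportions, not the study's actual materials) shows why the rational answer after the full sequence is 50–50: under Bayesian updating, the draws favoring one urn are exactly cancelled by the draws favoring the other, regardless of the order in which they arrive.

```python
from math import prod

# Illustrative urn compositions (not the original study's): urn A is 60% red,
# urn B is 60% blue. A draw's likelihood depends only on its color, so the
# order of an otherwise balanced sequence cannot change the posterior.
P_RED = {"A": 0.6, "B": 0.4}

def posterior_A(draws, prior_A=0.5):
    """P(urn A | draws) for a sequence of 'red'/'blue' draws (with replacement)."""
    like_A = prod(P_RED["A"] if d == "red" else 1 - P_RED["A"] for d in draws)
    like_B = prod(P_RED["B"] if d == "red" else 1 - P_RED["B"] for d in draws)
    return prior_A * like_A / (prior_A * like_A + (1 - prior_A) * like_B)

# Thirty draws that favor urn A, followed by thirty that favor urn B.
early = ["red"] * 20 + ["blue"] * 10    # 20 red, 10 blue
late  = ["red"] * 10 + ["blue"] * 20    # 10 red, 20 blue

print(posterior_A(early))          # well above 0.5 after the first thirty
print(posterior_A(early + late))   # back to exactly 0.5: the evidence cancels
```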
Another experiment involved a slide show of a single object, seen as just a blur at first and in slightly better focus with each succeeding slide. After each slide, participants had to state their best guess of what the object was. Participants whose early guesses were wrong persisted with those guesses, even when the picture was sufficiently in focus that the object was readily recognizable to other people.
Illusory association between events
Illusory correlation is the tendency to see non-existent correlations in a set of data. This tendency was first demonstrated in a series of experiments in the late 1960s. In one experiment, participants read a set of psychiatric case studies, including responses to the Rorschach inkblot test. They reported that the homosexual men in the set were more likely to report seeing buttocks, anuses or sexually ambiguous figures in the inkblots. In fact the case studies were fictional and, in one version of the experiment, had been constructed so that the homosexual men were less likely to report this imagery. In a survey, a group of experienced psychoanalysts reported the same set of illusory associations with homosexuality.
Another study recorded the symptoms experienced by arthritic patients, along with weather conditions over a 15-month period. Nearly all the patients reported that their pains were correlated with weather conditions, although the real correlation was zero.
This effect is a kind of biased interpretation, in that objectively neutral or unfavorable evidence is interpreted to support existing beliefs. It is also related to biases in hypothesis-testing behavior. In judging whether two events, such as illness and bad weather, are correlated, people rely heavily on the number of positive-positive cases: in this example, instances of both pain and bad weather. They pay relatively little attention to the other kinds of observation (of no pain and/or good weather). This parallels the reliance on positive tests in hypothesis testing. It may also reflect selective recall, in that people may have a sense that two events are correlated because it is easier to recall times when they happened together.
In a fictional example of this kind, the days can be sorted into a two-by-two table according to whether it rained and whether symptoms occurred, arranged so that arthritic symptoms are actually more likely on days with no rain. However, people are likely to focus on the relatively large number of days which have both rain and symptoms. By concentrating on that one cell of the table rather than all four, people can misperceive the relationship, in this case associating rain with arthritic symptoms.
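A short sketch (with hypothetical day counts, not data from the study above) shows how the conclusion changes when all four cells of the table are used rather than just the salient one:

```python
# Hypothetical day counts for the arthritis-and-weather example (not real data).
# Rows: rain / no rain; columns: symptoms / no symptoms.
rain_sym, rain_nosym     = 14, 6
norain_sym, norain_nosym = 40, 15

# Judged by the single "rain and symptoms" cell, 14 days looks like a pattern.
# Comparing conditional rates across all four cells tells a different story.
p_sym_given_rain   = rain_sym / (rain_sym + rain_nosym)        # 0.70
p_sym_given_norain = norain_sym / (norain_sym + norain_nosym)  # ~0.73

print(f"P(symptoms | rain)    = {p_sym_given_rain:.2f}")
print(f"P(symptoms | no rain) = {p_sym_given_norain:.2f}")
# Symptoms are, if anything, slightly more likely on dry days here, so rain
# and pain are not positively associated despite the salient cell.
```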
Individual differences
Myside bias was once believed to be associated with greater intelligence; however, studies have shown that it is more strongly influenced by the ability to think rationally than by the level of intelligence. Myside bias can cause an inability to effectively and logically evaluate the opposite side of an argument. Studies have suggested that myside bias reflects an absence of "active open-mindedness," meaning the active search for reasons why an initial idea may be wrong. Typically, myside bias is operationalized in empirical studies as the quantity of evidence a person generates in support of their own side compared with the opposite side.
A study has found individual differences in myside bias. This study investigated individual differences that are acquired through learning in a cultural context and are mutable. The researcher found important individual differences in argumentation. Studies have suggested that individual differences such as deductive reasoning ability, the ability to overcome belief bias, epistemological understanding, and thinking disposition are significant predictors of reasoning and of generating arguments, counterarguments, and rebuttals.
A study by Christopher Wolfe and Anne Britt also investigated how participants' views of "what makes a good argument?" can be a source of myside bias that influences the way a person creates their own arguments. The study investigated individual differences in argumentation schema and asked participants to write essays. The participants were randomly assigned to write essays either for or against their preferred side of an argument and were given either balanced or unrestricted research instructions. The balanced research instructions directed participants to create a balanced argument that included both pros and cons; the unrestricted research instructions gave no particular guidance on how to create the argument.
Overall, the results revealed that the balanced research instructions significantly increased the likelihood that participants would add opposing information to their arguments. These data also reveal that personal belief is not a source of myside bias. Furthermore, participants who believed that good arguments were based on facts were more likely to exhibit myside bias than participants who did not agree with this statement. This evidence is consistent with the claims proposed in Baron's article: people's opinions about what constitutes good thinking can influence how arguments are generated.
Before psychological research on confirmation bias, the phenomenon had been observed anecdotally by writers, including the Greek historian Thucydides (c. 460 BC – c. 395 BC), Italian poet Dante Alighieri (1265–1321), English philosopher and scientist Francis Bacon (1561–1626), and Russian author Leo Tolstoy (1828–1910). Thucydides, in The Peloponnesian War, wrote: "... for it is a habit of mankind to entrust to careless hope what they long for, and to use sovereign reason to thrust aside what they do not fancy." In the Divine Comedy, St. Thomas Aquinas cautions Dante when they meet in Paradise, "opinion—hasty—often can incline to the wrong side, and then affection for one's own opinion binds, confines the mind." Bacon, in the Novum Organum, wrote:
The human understanding when it has once adopted an opinion ... draws all things else to support and agree with it. And though there be a greater number and weight of instances to be found on the other side, yet these it either neglects or despises, or else by some distinction sets aside or rejects[.]
Tolstoy, in What is Art?, wrote:
I know that most men—not only those considered clever, but even those who are very clever, and capable of understanding most difficult scientific, mathematical, or philosophic problems—can very seldom discern even the simplest and most obvious truth if it be such as to oblige them to admit the falsity of conclusions they have formed, perhaps with much difficulty—conclusions of which they are proud, which they have taught to others, and on which they have built their lives.
Wason's research on hypothesis-testing
The term "confirmation bias" was coined by English psychologist Peter Wason. For an experiment published in 1960, he challenged participants to identify a rule applying to triples of numbers. At the outset, they were told that (2,4,6) fits the rule. Participants could generate their own triples and the experimenter told them whether or not each triple conformed to the rule.
While the actual rule was simply "any ascending sequence", the participants had a great deal of difficulty in finding it, often announcing rules that were far more specific, such as "the middle number is the average of the first and last". The participants seemed to test only positive examples—triples that obeyed their hypothesized rule. For example, if they thought the rule was, "Each number is two greater than its predecessor", they would offer a triple that fit this rule, such as (11,13,15) rather than a triple that violates it, such as (11,12,19).
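The logic of the positive test strategy can be made concrete with a small sketch using the example triples above. Because every triple that fits the narrow guess also fits the broader true rule, positive tests can never reveal that the guess is wrong; only a triple that violates the guess can do that.

```python
def true_rule(t):
    """Wason's actual rule: any strictly ascending triple."""
    a, b, c = t
    return a < b < c

def hypothesis(t):
    """A typical over-specific guess: each number is two more than the last."""
    a, b, c = t
    return b == a + 2 and c == b + 2

positive_tests = [(11, 13, 15), (20, 22, 24), (1, 3, 5)]   # all fit the guess
negative_test  = (11, 12, 19)                              # violates the guess

# Every positive test also fits the broader true rule, so the experimenter's
# feedback ("yes, it fits") never reveals that the guess is too narrow.
for t in positive_tests:
    assert hypothesis(t) and true_rule(t)

# Only a triple that breaks the guess can expose the mismatch: the rule-violating
# triple still fits the true rule, contradicting the hypothesis.
print(hypothesis(negative_test), true_rule(negative_test))   # False True
```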
Wason accepted falsificationism, according to which a scientific test of a hypothesis is a serious attempt to falsify it. He interpreted his results as showing a preference for confirmation over falsification, hence the term "confirmation bias".[Note 4] Wason also used confirmation bias to explain the results of his selection task experiment. In this task, participants are given partial information about a set of objects, and have to specify what further information they would need to tell whether or not a conditional rule ("If A, then B") applies. It has been found repeatedly that people perform badly on various forms of this test, in most cases ignoring information that could potentially refute the rule.
Klayman and Ha's critique
A 1987 paper by Joshua Klayman and Young-Won Ha argued that the Wason experiments had not actually demonstrated a bias towards confirmation. Instead, Klayman and Ha interpreted the results in terms of a tendency to make tests that are consistent with the working hypothesis. They called this the "positive test strategy". This strategy is an example of a heuristic: a reasoning shortcut that is imperfect but easy to compute. Klayman and Ha used Bayesian probability and information theory as their standard of hypothesis-testing, rather than the falsificationism used by Wason. According to these ideas, each answer to a question yields a different amount of information, which depends on the person's prior beliefs. Thus a scientific test of a hypothesis is one that is expected to produce the most information. Since the information content depends on initial probabilities, a positive test can either be highly informative or uninformative. Klayman and Ha argued that when people think about realistic problems, they are looking for a specific answer with a small initial probability. In this case, positive tests are usually more informative than negative tests. However, in Wason's rule discovery task the answer—three numbers in ascending order—is very broad, so positive tests are unlikely to yield informative answers. Klayman and Ha supported their analysis by citing an experiment that used the labels "DAX" and "MED" in place of "fits the rule" and "doesn't fit the rule". This avoided implying that the aim was to find a low-probability rule. Participants had much more success with this version of the experiment.
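A toy illustration (a simplification, not Klayman and Ha's actual analysis) of the information-theoretic point: the information carried by an answer depends on how probable it was beforehand, so a test whose outcome is nearly certain in advance yields almost nothing, whatever the result.

```python
from math import log2

def surprisal(p):
    """Information (in bits) carried by an answer that had prior probability p."""
    return -log2(p)

def expected_information(p_yes):
    """Expected bits from a yes/no test whose 'yes' outcome has probability p_yes."""
    p_no = 1 - p_yes
    return p_yes * surprisal(p_yes) + p_no * surprisal(p_no)

# A "yes" you were already 95% sure of carries little information;
# the same answer at 50/50 carries a full bit.
print(f"{surprisal(0.95):.2f} bits")   # ~0.07
print(f"{surprisal(0.50):.2f} bits")   # 1.00

# Expected information peaks for maximally uncertain tests. A positive test is
# informative when its outcome is genuinely uncertain (plausible for a narrow
# target rule) and near-useless when a "fits the rule" answer is almost
# guaranteed (as with a very broad rule like Wason's).
for p in (0.05, 0.5, 0.95):
    print(f"P(yes)={p}: {expected_information(p):.2f} expected bits")
```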
In light of this and other critiques, the focus of research moved away from confirmation versus falsification to examine whether people test hypotheses in an informative way, or an uninformative but positive way. The search for "true" confirmation bias led psychologists to look at a wider range of effects in how people process information.
Confirmation bias is often described as a result of automatic, unintentional strategies rather than deliberate deception. According to Robert Maccoun, most biased evidence processing occurs through a combination of both "cold" (cognitive) and "hot" (motivated) mechanisms.
Cognitive explanations for confirmation bias are based on limitations in people's ability to handle complex tasks, and the shortcuts, called heuristics, that they use. For example, people may judge the reliability of evidence by using the availability heuristic—i.e., how readily a particular idea comes to mind. It is also possible that people can only focus on one thought at a time, so find it difficult to test alternative hypotheses in parallel. Another heuristic is the positive test strategy identified by Klayman and Ha, in which people test a hypothesis by examining cases where they expect a property or event to occur. This heuristic avoids the difficult or impossible task of working out how diagnostic each possible question will be. However, it is not universally reliable, so people can overlook challenges to their existing beliefs.
Motivational explanations involve an effect of desire on belief, sometimes called "wishful thinking". It is known that people prefer pleasant thoughts over unpleasant ones in a number of ways: this is called the "Pollyanna principle". Applied to arguments or sources of evidence, this could explain why desired conclusions are more likely to be believed true. According to experiments that manipulate the desirability of the conclusion, people demand a high standard of evidence for unpalatable ideas and a low standard for preferred ideas. In other words, they ask, "Can I believe this?" for some suggestions and, "Must I believe this?" for others. Although consistency is a desirable feature of attitudes, an excessive drive for consistency is another potential source of bias because it may prevent people from neutrally evaluating new, surprising information. Social psychologist Ziva Kunda combines the cognitive and motivational theories, arguing that motivation creates the bias, but cognitive factors determine the size of the effect.
Explanations in terms of cost-benefit analysis assume that people do not just test hypotheses in a disinterested way, but assess the costs of different errors. Using ideas from evolutionary psychology, James Friedrich suggests that people do not primarily aim at truth in testing hypotheses, but try to avoid the most costly errors. For example, employers might ask one-sided questions in job interviews because they are focused on weeding out unsuitable candidates. Yaacov Trope and Akiva Liberman's refinement of this theory assumes that people compare the two different kinds of error: accepting a false hypothesis or rejecting a true hypothesis. For instance, someone who underestimates a friend's honesty might treat him or her suspiciously and so undermine the friendship. Overestimating the friend's honesty may also be costly, but less so. In this case, it would be rational to seek, evaluate or remember evidence of their honesty in a biased way. When someone gives an initial impression of being introverted or extroverted, questions that match that impression come across as more empathic. This suggests that when talking to someone who seems to be an introvert, it is a sign of better social skills to ask, "Do you feel awkward in social situations?" rather than, "Do you like noisy parties?" The connection between confirmation bias and social skills was corroborated by a study of how college students get to know other people. Highly self-monitoring students, who are more sensitive to their environment and to social norms, asked more matching questions when interviewing a high-status staff member than when getting to know fellow students.
Psychologists Jennifer Lerner and Philip Tetlock distinguish two different kinds of thinking process. Exploratory thought neutrally considers multiple points of view and tries to anticipate all possible objections to a particular position, while confirmatory thought seeks to justify a specific point of view. Lerner and Tetlock say that when people expect to justify their position to others whose views they already know, they will tend to adopt a similar position to those people, and then use confirmatory thought to bolster their own credibility. However, if the external parties are overly aggressive or critical, people will disengage from thought altogether, and simply assert their personal opinions without justification. Lerner and Tetlock say that people only push themselves to think critically and logically when they know in advance they will need to explain themselves to others who are well-informed, genuinely interested in the truth, and whose views they don't already know. Because those conditions rarely exist, they argue, most people are using confirmatory thought most of the time.
In finance
Confirmation bias can lead investors to be overconfident, ignoring evidence that their strategies will lose money. In studies of political stock markets, investors made more profit when they resisted bias. For example, participants who interpreted a candidate's debate performance in a neutral rather than partisan way were more likely to profit. To combat the effect of confirmation bias, investors can try to adopt a contrary viewpoint "for the sake of argument". In one technique, they imagine that their investments have collapsed and ask themselves why this might happen.
In physical and mental health
Raymond Nickerson, a psychologist, blames confirmation bias for the ineffective medical procedures that were used for centuries before the arrival of scientific medicine. If a patient recovered, medical authorities counted the treatment as successful, rather than looking for alternative explanations such as that the disease had run its natural course. Biased assimilation is a factor in the modern appeal of alternative medicine, whose proponents are swayed by positive anecdotal evidence but treat scientific evidence hyper-critically.
Cognitive therapy was developed by Aaron T. Beck in the early 1960s and has become a popular approach. According to Beck, biased information processing is a factor in depression. His approach teaches people to treat evidence impartially, rather than selectively reinforcing negative outlooks. Phobias and hypochondria have also been shown to involve confirmation bias for threatening information.
In politics and law
Nickerson argues that reasoning in judicial and political contexts is sometimes subconsciously biased, favoring conclusions that judges, juries or governments have already committed to. Since the evidence in a jury trial can be complex, and jurors often reach decisions about the verdict early on, it is reasonable to expect an attitude polarization effect. The prediction that jurors will become more extreme in their views as they see more evidence has been borne out in experiments with mock trials. Both inquisitorial and adversarial criminal justice systems are affected by confirmation bias.
Confirmation bias can be a factor in creating or extending conflicts, from emotionally charged debates to wars: by interpreting the evidence in their favor, each opposing party can become overconfident that it is in the stronger position. On the other hand, confirmation bias can result in people ignoring or misinterpreting the signs of an imminent or incipient conflict. For example, psychologists Stuart Sutherland and Thomas Kida have each argued that US Admiral Husband E. Kimmel showed confirmation bias when playing down the first signs of the Japanese attack on Pearl Harbor.
A two-decade study of political pundits by Philip E. Tetlock found that, on the whole, their predictions were not much better than chance. Tetlock divided experts into "foxes" who maintained multiple hypotheses, and "hedgehogs" who were more dogmatic. In general, the hedgehogs were much less accurate. Tetlock blamed their failure on confirmation bias—specifically, their inability to make use of new information that contradicted their existing theories.
In the 2013 murder trial of David Camm, the defense argued that Camm was charged with the murders of his wife and two children solely because of confirmation bias within the investigation. Camm was arrested three days after the murders on the basis of faulty evidence. Despite the discovery that almost every piece of evidence on the probable cause affidavit was inaccurate or unreliable, the charges against him were not dropped. A sweatshirt found at the crime scene was subsequently discovered to contain the DNA of a convicted felon, his prison nickname, and his department of corrections number. Investigators looked for Camm's DNA on the sweatshirt, but failed to investigate any other evidence found on it, and the foreign DNA was not run through CODIS until five years after the crime. When the second suspect was discovered, prosecutors charged the two men as co-conspirators in the crime despite finding no evidence linking them. Camm was acquitted of the murders.
In the paranormal
One factor in the appeal of alleged psychic readings is that listeners apply a confirmation bias which fits the psychic's statements to their own lives. By making a large number of ambiguous statements in each sitting, the psychic gives the client more opportunities to find a match. This is one of the techniques of cold reading, with which a psychic can deliver a subjectively impressive reading without any prior information about the client. Investigator James Randi compared the transcript of a reading to the client's report of what the psychic had said, and found that the client showed a strong selective recall of the "hits".
As a striking illustration of confirmation bias in the real world, Nickerson mentions numerological pyramidology: the practice of finding meaning in the proportions of the Egyptian pyramids. There are many different length measurements that can be made of, for example, the Great Pyramid of Giza and many ways to combine or manipulate them. Hence it is almost inevitable that people who look at these numbers selectively will find superficially impressive correspondences, for example with the dimensions of the Earth.
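A rough simulation (random numbers standing in for measurements, and an arbitrary list of "interesting" constants) illustrates how easily such correspondences arise when many measurements, many ways of combining them, and many targets are available:

```python
import itertools, math, random

# Toy illustration of the combinatorial point: with many measurements, many
# ways to combine them, and several constants to match, spurious "hits"
# become almost inevitable. The numbers are random, not pyramid data.
random.seed(1)
measurements = [random.uniform(1, 1000) for _ in range(20)]
targets = {"pi": math.pi, "e": math.e,
           "golden ratio": (1 + 5 ** 0.5) / 2, "sqrt(2)": 2 ** 0.5}
tolerance = 0.005  # count a match within 0.5%

hits = []
for a, b in itertools.permutations(measurements, 2):
    for label, value in (("ratio", a / b), ("product", a * b), ("sum", a + b)):
        # Allow rescaling by a power of ten, as selective searches often do.
        mantissa = value / 10 ** math.floor(math.log10(value))
        for name, t in targets.items():
            t_mantissa = t / 10 ** math.floor(math.log10(t))
            if abs(mantissa - t_mantissa) < tolerance * t_mantissa:
                hits.append((label, name))

print(f"{len(hits)} 'impressive' matches among {20 * 19 * 3} random combinations")
```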
A distinguishing feature of scientific thinking is the search for falsifying as well as confirming evidence. However, many times in the history of science, scientists have resisted new discoveries by selectively interpreting or ignoring unfavorable data. Previous research has shown that the assessment of the quality of scientific studies seems to be particularly vulnerable to confirmation bias. It has been found several times that scientists rate studies that report findings consistent with their prior beliefs more favorably than studies reporting findings inconsistent with their previous beliefs. However, assuming that the research question is relevant, the experimental design adequate, and the data clearly and comprehensively described, the results obtained should be considered important to the scientific community and should not be viewed prejudicially, regardless of whether they conform to current theoretical predictions.
In the context of scientific research, confirmation biases can sustain theories or research programs in the face of inadequate or even contradictory evidence; the field of parapsychology has been particularly affected.
An experimenter's confirmation bias can potentially affect which data are reported. Data that conflict with the experimenter's expectations may be more readily discarded as unreliable, producing the so-called file drawer effect. To combat this tendency, scientific training teaches ways to prevent bias. For example, experimental design of randomized controlled trials (coupled with their systematic review) aims to minimize sources of bias. The social process of peer review is thought to mitigate the effect of individual scientists' biases, even though the peer review process itself may be susceptible to such biases. Confirmation bias may thus be especially harmful to objective evaluations regarding nonconforming results since biased individuals may regard opposing evidence to be weak in principle and give little serious thought to revising their beliefs. Scientific innovators often meet with resistance from the scientific community, and research presenting controversial results frequently receives harsh peer review.
Social psychologists have identified two tendencies in the way people seek or interpret information about themselves. Self-verification is the drive to reinforce the existing self-image and self-enhancement is the drive to seek positive feedback. Both are served by confirmation biases. In experiments where people are given feedback that conflicts with their self-image, they are less likely to attend to it or remember it than when given self-verifying feedback. They reduce the impact of such information by interpreting it as unreliable. Similar experiments have found a preference for positive feedback, and the people who give it, over negative feedback.
- Cherry picking (fallacy)
- Cognitive bias mitigation
- Cognitive inertia
- Experimenter's bias
- Congruence bias
- Filter bubble
- Hostile media effect
- List of biases in judgment and decision making
- List of memory biases
- Observer-expectancy effect
- Reinforcement theory
- Reporting bias
- Publication bias
- Selective exposure theory
- Selective perception
- Semmelweis reflex
- Woozle effect
- David Perkins, a professor and researcher at the Harvard Graduate School of Education, coined the term "myside bias" referring to a preference for "my" side of an issue. (Baron 2000, p. 195)
- Text in cited article: Tuchman (1984) described a form of confirmation bias at work in the process of justifying policies to which a government has committed itself: “Once a policy has been adopted and implemented, all subsequent activity becomes an effort to justify it” (p.245). In the context of a discussion of the policy that drew the United States into war in Vietnam and kept the U.S. military engaged for 16 years despite countless evidences that it was a lost cause from the beginning, Tuchman argued that once a policy has been adopted and implemented by a government, all subsequent activity of that government becomes focused on justification of that policy.
Wooden-headedness, the source of self-deception, is a factor that plays a remarkably large role in government. It consists in assessing a situation in terms of preconceived fixed notions while ignoring or rejecting any contrary signs. It is acting according to wish while not allowing oneself to be deflected by the facts. It is epitomized in a historian’s statement about Philip II of Spain, the surpassing wooden-head of all sovereigns: “no experience of the failure of his policy could shake his belief in its essential excellence”. (p.7)
Folly, she argued, is a form of self-deception characterized by “insistence on a rooted notion regardless of contrary evidence” (p.209)
- "Assimilation bias" is another term used for biased interpretation of evidence. (Risen & Gilovich 2007, p. 113)
- Wason also used the term "verification bias". (Poletiek 2001, p. 73)
- Plous 1993, p. 233
- Nickerson, Raymond S. (June 1998). "Confirmation Bias: A Ubiquitous Phenomenon in Many Guises". Review of General Psychology 2 (2): 175–220. doi:10.1037/1089-2618.104.22.168.
- Darley, John M.; Gross, Paget H. (2000), "A Hypothesis-Confirming Bias in Labelling Effects", in Stangor, Charles, Stereotypes and prejudice: essential readings, Psychology Press, p. 212, ISBN 978-0-86377-589-5, OCLC 42823720
- Risen & Gilovich 2007
- Zweig, Jason (November 19, 2009), "How to Ignore the Yes-Man in Your Head", Wall Street Journal (Dow Jones & Company), retrieved 2010-06-13
- Nickerson 1998, pp. 177–178
- Kunda 1999, pp. 112–115
- Baron 2000, pp. 162–164
- Kida 2006, pp. 162–165
- Devine, Patricia G.; Hirt, Edward R.; Gehrke, Elizabeth M. (1990), "Diagnostic and confirmation strategies in trait hypothesis testing", Journal of Personality and Social Psychology (American Psychological Association) 58 (6): 952–963, doi:10.1037/0022-3522.214.171.1242, ISSN 1939-1315
- Trope, Yaacov; Bassok, Miriam (1982), "Confirmatory and diagnosing strategies in social information gathering", Journal of Personality and Social Psychology (American Psychological Association) 43 (1): 22–34, doi:10.1037/0022-35126.96.36.199, ISSN 1939-1315
- Klayman, Joshua; Ha, Young-Won (1987), "Confirmation, Disconfirmation and Information in Hypothesis Testing", Psychological Review (American Psychological Association) 94 (2): 211–228, doi:10.1037/0033-295X.94.2.211, ISSN 0033-295X, retrieved 2009-08-14
- Oswald & Grosjean 2004, pp. 82–83
- Kunda, Ziva; Fong, G.T.; Sanitoso, R.; Reber, E. (1993), "Directional questions direct self-conceptions", Journal of Experimental Social Psychology (Society of Experimental Social Psychology) 29: 62–63, ISSN 0022-1031 via Fine 2006, pp. 63–65
- Shafir, E. (1993), "Choosing versus rejecting: why some options are both better and worse than others", Memory and Cognition 21 (4): 546–556, doi:10.3758/bf03197186, PMID 8350746 via Fine 2006, pp. 63–65
- Snyder, Mark; Swann, Jr., William B. (1978), "Hypothesis-Testing Processes in Social Interaction", Journal of Personality and Social Psychology (American Psychological Association) 36 (11): 1202–1212, doi:10.1037/0022-35188.8.131.522 via Poletiek 2001, p. 131
- Kunda 1999, pp. 117–118
- Albarracin, D.; Mitchell, A.L. (2004). "The Role of Defensive Confidence in Preference for Proattitudinal Information: How Believing That One Is Strong Can Sometimes Be a Defensive Weakness". Personality and Social Psychology Bulletin 30 (12): 1565–1584. doi:10.1177/0146167204271180.
- Fischer, P.; Fischer, Julia K.; Aydin, Nilüfer; Frey, Dieter (2010). "Physically Attractive Social Information Sources Lead to Increased Selective Exposure to Information". Basic and Applied Social Psychology 32 (4): 340–347. doi:10.1080/01973533.2010.519208.
- Stanovich, K. E.; West, R. F.; Toplak, M. E. (2013). "Myside Bias, Rational Thinking, and Intelligence". Current Directions in Psychological Science 22 (4): 259–264. doi:10.1177/0963721413480174.
- Mynatt, Clifford R.; Doherty, Michael E.; Tweney, Ryan D. (1978), "Consequences of confirmation and disconfirmation in a simulated research environment", Quarterly Journal of Experimental Psychology 30 (3): 395–406, doi:10.1080/00335557843000007
- Kida 2006, p. 157
- Lord, Charles G.; Ross, Lee; Lepper, Mark R. (1979), "Biased assimilation and attitude polarization: The effects of prior theories on subsequently considered evidence", Journal of Personality and Social Psychology (American Psychological Association) 37 (11): 2098–2109, doi:10.1037/0022-35184.108.40.2068, ISSN 0022-3514
- Baron 2000, pp. 201–202
- Vyse 1997, p. 122
- Taber, Charles S.; Lodge, Milton (July 2006), "Motivated Skepticism in the Evaluation of Political Beliefs", American Journal of Political Science (Midwest Political Science Association) 50 (3): 755–769, doi:10.1111/j.1540-5907.2006.00214.x, ISSN 0092-5853
- Westen, Drew; Blagov, Pavel S.; Harenski, Keith; Kilts, Clint; Hamann, Stephan (2006), "Neural Bases of Motivated Reasoning: An fMRI Study of Emotional Constraints on Partisan Political Judgment in the 2004 U.S. Presidential Election", Journal of Cognitive Neuroscience (Massachusetts Institute of Technology) 18 (11): 1947–1958, doi:10.1162/jocn.2006.18.11.1947, PMID 17069484, retrieved 2009-08-14
- Gadenne, V.; Oswald, M. (1986), "Entstehung und Veränderung von Bestätigungstendenzen beim Testen von Hypothesen [Formation and alteration of confirmatory tendencies during the testing of hypotheses]", Zeitschrift für experimentelle und angewandte Psychologie 33: 360–374 via Oswald & Grosjean 2004, p. 89
- Hastie, Reid; Park, Bernadette (2005), "The Relationship Between Memory and Judgment Depends on Whether the Judgment Task is Memory-Based or On-Line", in Hamilton, David L., Social cognition: key readings, New York: Psychology Press, p. 394, ISBN 0-86377-591-8, OCLC 55078722
- Oswald & Grosjean 2004, pp. 88–89
- Stangor, Charles; McMillan, David (1992), "Memory for expectancy-congruent and expectancy-incongruent information: A review of the social and social developmental literatures", Psychological Bulletin (American Psychological Association) 111 (1): 42–61, doi:10.1037/0033-2909.111.1.42
- Snyder, M.; Cantor, N. (1979), "Testing hypotheses about other people: the use of historical knowledge", Journal of Experimental Social Psychology 15 (4): 330–342, doi:10.1016/0022-1031(79)90042-8 via Goldacre 2008, p. 231
- Kunda 1999, pp. 225–232
- Sanitioso, Rasyid; Kunda, Ziva; Fong, G.T. (1990), "Motivated recruitment of autobiographical memories", Journal of Personality and Social Psychology (American Psychological Association) 59 (2): 229–241, doi:10.1037/0022-35220.127.116.11, ISSN 0022-3514, PMID 2213492
- Levine, L.; Prohaska, V.; Burgess, S.L.; Rice, J.A.; Laulhere, T.M. (2001). "Remembering past emotions: The role of current appraisals.". Cognition and Emotion 15: 393–417. doi:10.1080/02699930125955.
- Safer, M.A.; Bonanno, G.A.; Field, N. (2001). ""It was never that bad": Biased recall of grief and long-term adjustment to the death of a spouse". Memory 9 (3): 195–203. doi:10.1080/09658210143000065.
- Russell, Dan; Jones, Warren H. (1980), "When superstition fails: Reactions to disconfirmation of paranormal beliefs", Personality and Social Psychology Bulletin (Society for Personality and Social Psychology) 6 (1): 83–88, doi:10.1177/014616728061012, ISSN 1552-7433 via Vyse 1997, p. 121
- Kuhn, Deanna; Lao, Joseph (March 1996), "Effects of Evidence on Attitudes: Is Polarization the Norm?", Psychological Science (American Psychological Society) 7 (2): 115–120, doi:10.1111/j.1467-9280.1996.tb00340.x
- Baron 2000, p. 201
- Miller, A.G.; McHoskey, J.W.; Bane, C.M.; Dowd, T.G. (1993), "The attitude polarization phenomenon: Role of response measure, attitude extremity, and behavioral consequences of reported attitude change", Journal of Personality and Social Psychology 64 (4): 561–574, doi:10.1037/0022-3518.104.22.1681
- "backfire effect". The Skeptic's Dictionary. Retrieved 26 April 2012.
- Silverman, Craig (2011-06-17). "The Backfire Effect". Columbia Journalism Review. Retrieved 2012-05-01. "When your deepest convictions are challenged by contradictory evidence, your beliefs get stronger."
- Nyhan, Brendan; Reifler, Jason (2010). "When Corrections Fail: The Persistence of Political Misperceptions". Political Behavior 32 (2): 303–330. doi:10.1007/s11109-010-9112-2. Retrieved 1 May 2012.
- Ross, Lee; Anderson, Craig A. (1982), "Shortcomings in the attribution process: On the origins and maintenance of erroneous social assessments", in Kahneman, Daniel; Slovic, Paul; Tversky, Amos, Judgment under uncertainty: Heuristics and biases, Cambridge University Press, pp. 129–152, ISBN 978-0-521-28414-1, OCLC 7578020
- Nickerson 1998, p. 187
- Kunda 1999, p. 99
- Ross, Lee; Lepper, Mark R.; Hubbard, Michael (1975), "Perseverance in self-perception and social perception: Biased attributional processes in the debriefing paradigm", Journal of Personality and Social Psychology (American Psychological Association) 32 (5): 880–892, doi:10.1037/0022-3522.214.171.1240, ISSN 0022-3514, PMID 1185517 via Kunda 1999, p. 99
- Baron 2000, pp. 197–200
- Fine 2006, pp. 66–70
- Plous 1993, pp. 164–166
- Redelmeir, D. A.; Tversky, Amos (1996), "On the belief that arthritis pain is related to the weather", Proceedings of the National Academy of Sciences 93 (7): 2895–2896, doi:10.1073/pnas.93.7.2895 via Kunda 1999, p. 127
- Kunda 1999, pp. 127–130
- Plous 1993, pp. 162–164
- Adapted from Fiedler, Klaus (2004), "Illusory correlation", in Pohl, Rüdiger F., Cognitive Illusions: A Handbook on Fallacies and Biases in Thinking, Judgement and Memory, Hove, UK: Psychology Press, p. 103, ISBN 978-1-84169-351-4, OCLC 55124398
- Stanovich, K. E.; West, R. F.; Toplak, M. E. (5 August 2013). "Myside Bias, Rational Thinking, and Intelligence". Current Directions in Psychological Science 22 (4): 259–264. doi:10.1177/0963721413480174.
- Baron, Jonathan (1995). "Myside bias in thinking about abortion.". Thinking & Reasoning: 221–235.
- Wolfe, Christopher; Anne Britt (2008). "The locus of the myside bias in written argumentation.". Thinking & Reasoning: 1–27.
- Mason, Lucia; Scirica, Fabio (October 2006). "Prediction of students' argumentation skills about controversial topics by epistemological understanding". Learning and Instruction 16 (5): 492–509. doi:10.1016/j.learninstruc.2006.09.007.
- Weinstock, Michael (December 2009). "Relative expertise in an everyday reasoning task: Epistemic understanding, problem representation, and reasoning competence". Learning and Individual Differences 19 (4): 423–434. doi:10.1016/j.lindif.2009.03.003.
- Weinstock, Michael; Neuman, Yair; Tabak, Iris (January 2004). "Missing the point or missing the norms? Epistemological norms as predictors of students' ability to identify fallacious arguments". Contemporary Educational Psychology 29 (1): 77–94. doi:10.1016/S0361-476X(03)00024-9.
- Baron 2000, pp. 195–196
- Thucydides 4.108.4
- Alighieri, Dante. Paradiso canto XIII: 118–120. Trans. Allen Mandelbaum
- Bacon, Francis (1620). Novum Organum. reprinted in Burtt, E.A., ed. (1939), The English philosophers from Bacon to Mill, New York: Random House, p. 36 via Nickerson 1998, p. 176
- Tolstoy, Leo. What is Art? p. 124 (1899). In The Kingdom of God Is Within You (1893), he similarly declared, "The most difficult subjects can be explained to the most slow-witted man if he has not formed any idea of them already; but the simplest thing cannot be made clear to the most intelligent man if he is firmly persuaded that he knows already, without a shadow of doubt, what is laid before him" (ch. 3). Translated from the Russian by Constance Garnett, New York, 1894. Project Gutenberg edition released November 2002. Retrieved 2009-08-24.
- Gale, Maggie; Ball, Linden J. (2002), "Does Positivity Bias Explain Patterns of Performance on Wason's 2-4-6 task?", in Gray, Wayne D.; Schunn, Christian D., Proceedings of the Twenty-Fourth Annual Conference of the Cognitive Science Society, Routledge, p. 340, ISBN 978-0-8058-4581-5, OCLC 469971634
- Wason, Peter C. (1960), "On the failure to eliminate hypotheses in a conceptual task", Quarterly Journal of Experimental Psychology (Psychology Press) 12 (3): 129–140, doi:10.1080/17470216008416717, ISSN 1747-0226
- Nickerson 1998, p. 179
- Lewicka 1998, p. 238
- Oswald & Grosjean 2004, pp. 79–96
- Wason, Peter C. (1968), "Reasoning about a rule", Quarterly Journal of Experimental Psychology (Psychology Press) 20 (3): 273–281, doi:10.1080/14640746808400161, ISSN 1747-0226
- Sutherland, Stuart (2007), Irrationality (2nd ed.), London: Pinter and Martin, pp. 95–103, ISBN 978-1-905177-07-3, OCLC 72151566
- Barkow, Jerome H.; Cosmides, Leda; Tooby, John (1995), The adapted mind: evolutionary psychology and the generation of culture, Oxford University Press US, pp. 181–184, ISBN 978-0-19-510107-2, OCLC 33832963
- Oswald & Grosjean 2004, pp. 81–82, 86–87
- Lewicka 1998, p. 239
- Tweney, Ryan D.; Doherty, Michael E.; Worner, Winifred J.; Pliske, Daniel B.; Mynatt, Clifford R.; Gross, Kimberly A.; Arkkelin, Daniel L. (1980), "Strategies of rule discovery in an inference task", The Quarterly Journal of Experimental Psychology (Psychology Press) 32 (1): 109–123, doi:10.1080/00335558008248237, ISSN 1747-0226 (Experiment IV)
- Oswald & Grosjean 2004, pp. 86–89
- Hergovich, Schott & Burger 2010
- Maccoun 1998
- Friedrich 1993, p. 298
- Kunda 1999, p. 94
- Nickerson 1998, pp. 198–199
- Nickerson 1998, p. 200
- Nickerson 1998, p. 197
- Baron 2000, p. 206
- Matlin, Margaret W. (2004), "Pollyanna Principle", in Pohl, Rüdiger F., Cognitive Illusions: A Handbook on Fallacies and Biases in Thinking, Judgement and Memory, Hove: Psychology Press, pp. 255–272, ISBN 978-1-84169-351-4, OCLC 55124398
- Dawson, Erica; Gilovich, Thomas; Regan, Dennis T. (October 2002), "Motivated Reasoning and Performance on the Wason Selection Task", Personality and Social Psychology Bulletin (Society for Personality and Social Psychology) 28 (10): 1379–1387, doi:10.1177/014616702236869, retrieved 2009-09-30
- Ditto, Peter H.; Lopez, David F. (1992), "Motivated skepticism: use of differential decision criteria for preferred and nonpreferred conclusions", Journal of personality and social psychology (American Psychological Association) 63 (4): 568–584, doi:10.1037/0022-35126.96.36.1998, ISSN 0022-3514
- Nickerson 1998, p. 198
- Oswald & Grosjean 2004, pp. 91–93
- Friedrich 1993, pp. 299, 316–317
- Trope, Y.; Liberman, A. (1996), "Social hypothesis testing: cognitive and motivational mechanisms", in Higgins, E. Tory; Kruglanski, Arie W., Social Psychology: Handbook of basic principles, New York: Guilford Press, ISBN 978-1-57230-100-9, OCLC 34731629 via Oswald & Grosjean 2004, pp. 91–93
- Dardenne, Benoit; Leyens, Jacques-Philippe (1995), "Confirmation Bias as a Social Skill", Personality and Social Psychology Bulletin (Society for Personality and Social Psychology) 21 (11): 1229–1239, doi:10.1177/01461672952111011, ISSN 1552-7433
- Shanteau, James (2003). Sandra L. Schneider, ed. Emerging perspectives on judgment and decision research. Cambridge [u.a.]: Cambridge University Press. p. 445. ISBN 0-521-52718-X.
- Haidt, Jonathan (2012). The Righteous Mind : Why Good People are Divided by Politics and Religion. New York: Pantheon Books. pp. 1473–4 (e–book edition). ISBN 0-307-37790-3.
- Fiske, Susan T.; Gilbert, Daniel T.; Lindzey, Gardner, eds. (2010). The handbook of social psychology (5th ed.). Hoboken, N.J.: Wiley. p. 811. ISBN 0-470-13749-5.
- Pompian, Michael M. (2006), Behavioral finance and wealth management: how to build optimal portfolios that account for investor biases, John Wiley and Sons, pp. 187–190, ISBN 978-0-471-74517-4, OCLC 61864118
- Hilton, Denis J. (2001), "The psychology of financial decision-making: Applications to trading, dealing, and investment analysis", Journal of Behavioral Finance (Institute of Behavioral Finance) 2 (1): 37–39, doi:10.1207/S15327760JPFM0201_4, ISSN 1542-7579
- Krueger, David; Mann, John David (2009), The Secret Language of Money: How to Make Smarter Financial Decisions and Live a Richer Life, McGraw Hill Professional, pp. 112–113, ISBN 978-0-07-162339-1, OCLC 277205993
- Nickerson 1998, p. 192
- Goldacre 2008, p. 233
- Singh, Simon; Ernst, Edzard (2008), Trick or Treatment?: Alternative Medicine on Trial, London: Bantam, pp. 287–288, ISBN 978-0-593-06129-9
- Atwood, Kimball (2004), "Naturopathy, Pseudoscience, and Medicine: Myths and Fallacies vs Truth", Medscape General Medicine 6 (1): 33
- Neenan, Michael; Dryden, Windy (2004), Cognitive therapy: 100 key points and techniques, Psychology Press, p. ix, ISBN 978-1-58391-858-6, OCLC 474568621
- Blackburn, Ivy-Marie; Davidson, Kate M. (1995), Cognitive therapy for depression & anxiety: a practitioner's guide (2 ed.), Wiley-Blackwell, p. 19, ISBN 978-0-632-03986-9, OCLC 32699443
- Harvey, Allison G.; Watkins, Edward; Mansell, Warren (2004), Cognitive behavioural processes across psychological disorders: a transdiagnostic approach to research and treatment, Oxford University Press, pp. 172–173, 176, ISBN 978-0-19-852888-3, OCLC 602015097
- Nickerson 1998, pp. 191–193
- Myers, D.G.; Lamm, H. (1976), "The group polarization phenomenon", Psychological Bulletin 83 (4): 602–627, doi:10.1037/0033-2909.83.4.602 via Nickerson 1998, pp. 193–194
- Halpern, Diane F. (1987), Critical thinking across the curriculum: a brief edition of thought and knowledge, Lawrence Erlbaum Associates, p. 194, ISBN 978-0-8058-2731-6, OCLC 37180929
- Roach, Kent (2010), "Wrongful Convictions: Adversarial and Inquisitorial Themes", North Carolina Journal of International Law and Commercial Regulation 35, SSRN 1619124, "Both adversarial and inquisitorial systems seem subject to the dangers of tunnel vision or confirmation bias."
- Baron 2000, pp. 191,195
- Kida 2006, p. 155
- Tetlock, Philip E. (2005), Expert Political Judgment: How Good Is It? How Can We Know?, Princeton, N.J.: Princeton University Press, pp. 125–128, ISBN 978-0-691-12302-8, OCLC 56825108
- "David Camm Blog: Investigation under fire". WDRB. October 10, 2013.
- Kircher, Travis. "David Camm blogsite: opening statements". WDRB. Retrieved January 3, 2014.
- "David Camm v. State of Indiana". Court of Appeals of Indiana. 2011-11-15.
- Boyd, Gordon (September 10, 2013). "Camm trial 9/10: Defense finds inconsistencies but can't touch Boney's past". WBRC.
- Zambroski, James. "Witness Says Prosecutor In First Camm Trial Blew Up When She Couldn't Link Camm's DNA To Boney's Shirt". WAVE News.
- Eisenmenger, Sarah (September 9, 2013). "Convicted Killer Charles Boney says David Camm was the shooter". wave3. Retrieved January 5, 2014.
- Eisenmenger, Sarah (Sep 9, 2013). "Convicted Killer Charles Boney says David Camm was the shooter". wave3.
- Adams, Harold J. (2011-02-18). "David Camm's attorneys appeal ruling, seek prosecutor's removal". Courier Journal, page B1.
- David Camm verdict: NOT GUILTY, WDRB TV, October 24, 2013
- Smith, Jonathan C. (2009), Pseudoscience and Extraordinary Claims of the Paranormal: A Critical Thinker's Toolkit, John Wiley and Sons, pp. 149–151, ISBN 978-1-4051-8122-8, OCLC 319499491
- Randi, James (1991), James Randi: psychic investigator, Boxtree, pp. 58–62, ISBN 978-1-85283-144-8, OCLC 26359284
- Nickerson 1998, p. 190
- Nickerson 1998, pp. 192–194
- Koehler 1993
- Mahoney 1977
- Proctor, Robert W.; Capaldi, E. John (2006), Why science matters: understanding the methods of psychological research, Wiley-Blackwell, p. 68, ISBN 978-1-4051-3049-3, OCLC 318365881
- Sternberg, Robert J. (2007), "Critical Thinking in Psychology: It really is critical", in Sternberg, Robert J.; Roediger III, Henry L.; Halpern, Diane F., Critical Thinking in Psychology, Cambridge University Press, p. 292, ISBN 0-521-60834-1, OCLC 69423179, "Some of the worst examples of confirmation bias are in research on parapsychology ... Arguably, there is a whole field here with no powerful confirming data at all. But people want to believe, and so they find ways to believe."
- Shadish, William R. (2007), "Critical Thinking in Quasi-Experimentation", in Sternberg, Robert J.; Roediger III, Henry L.; Halpern, Diane F., Critical Thinking in Psychology, Cambridge University Press, p. 49, ISBN 978-0-521-60834-3
- Jüni, P.; Altman, D. G.; Egger, M. (2001). "Systematic reviews in health care: Assessing the quality of controlled clinical trials". BMJ (Clinical research ed.) 323 (7303): 42–46. doi:10.1136/bmj.323.7303.42. PMC 1120670. PMID 11440947.
- Shermer, Michael (July 2006), "The Political Brain", Scientific American, ISSN 0036-8733, retrieved 2009-08-14
- Emerson, G. B.; Warme, W. J.; Wolf, F. M.; Heckman, J. D.; Brand, R. A.; Leopold, S. S. (2010). "Testing for the Presence of Positive-Outcome Bias in Peer Review: A Randomized Controlled Trial". Archives of Internal Medicine 170 (21): 1934–1939. doi:10.1001/archinternmed.2010.406. PMID 21098355.
- Horrobin 1990
- Swann, William B.; Pelham, Brett W.; Krull, Douglas S. (1989), "Agreeable Fancy or Disagreeable Truth? Reconciling Self-Enhancement and Self-Verification", Journal of Personality and Social Psychology (American Psychological Association) 57 (5): 782–791, doi:10.1037/0022-35188.8.131.522, ISSN 0022-3514, PMID 2810025
- Swann, William B.; Read, Stephen J. (1981), "Self-Verification Processes: How We Sustain Our Self-Conceptions", Journal of Experimental Social Psychology (Academic Press) 17 (4): 351–372, doi:10.1016/0022-1031(81)90043-3, ISSN 0022-1031
- Story, Amber L. (1998), "Self-Esteem and Memory for Favorable and Unfavorable Personality Feedback", Personality and Social Psychology Bulletin (Society for Personality and Social Psychology) 24 (1): 51–64, doi:10.1177/0146167298241004, ISSN 1552-7433
- White, Michael J.; Brockett, Daniel R.; Overstreet, Belinda G. (1993), "Confirmatory Bias in Evaluating Personality Test Information: Am I Really That Kind of Person?", Journal of Counseling Psychology (American Psychological Association) 40 (1): 120–126, doi:10.1037/0022-0184.108.40.206, ISSN 0022-0167
- Swann, William B.; Read, Stephen J. (1981), "Acquiring Self-Knowledge: The Search for Feedback That Fits", Journal of Personality and Social Psychology (American Psychological Association) 41 (6): 1119–1128, doi:10.1037/0022-35220.127.116.119, ISSN 0022-3514
- Shrauger, J. Sidney; Lund, Adrian K. (1975), "Self-evaluation and reactions to evaluations from others", Journal of Personality (Duke University Press) 43 (1): 94–108, doi:10.1111/j.1467-6494.1975.tb00574, PMID 1142062
- Baron, Jonathan (2000), Thinking and deciding (3rd ed.), New York: Cambridge University Press, ISBN 0-521-65030-5, OCLC 316403966
- Fine, Cordelia (2006), A Mind of its Own: how your brain distorts and deceives, Cambridge, UK: Icon books, ISBN 1-84046-678-2, OCLC 60668289
- Friedrich, James (1993), "Primary error detection and minimization (PEDMIN) strategies in social cognition: a reinterpretation of confirmation bias phenomena", Psychological Review (American Psychological Association) 100 (2): 298–319, doi:10.1037/0033-295X.100.2.298, ISSN 0033-295X, PMID 8483985
- Goldacre, Ben (2008), Bad Science, London: Fourth Estate, ISBN 978-0-00-724019-7, OCLC 259713114
- Hergovich, Andreas; Schott, Reinhard; Burger, Christoph (2010), "Biased Evaluation of Abstracts Depending on Topic and Conclusion: Further Evidence of a Confirmation Bias Within Scientific Psychology", Current Psychology 29 (3): 188–209, doi:10.1007/s12144-010-9087-5
- Horrobin, David F. (1990), "The philosophical basis of peer review and the suppression of innovation", Journal of the American Medical Association 263 (10): 1438–1441, doi:10.1001/jama.263.10.1438, PMID 2304222
- Kida, Thomas E. (2006), Don't believe everything you think: the 6 basic mistakes we make in thinking, Amherst, New York: Prometheus Books, ISBN 978-1-59102-408-8, OCLC 63297791
- Koehler, Jonathan J. (1993), "The influence of prior beliefs on scientific judgments of evidence quality", Organizational Behavior and Human Decision Processes 56: 28–55, doi:10.1006/obhd.1993.1044
- Kunda, Ziva (1999), Social Cognition: Making Sense of People, MIT Press, ISBN 978-0-262-61143-5, OCLC 40618974
- Lewicka, Maria (1998), "Confirmation Bias: Cognitive Error or Adaptive Strategy of Action Control?", in Kofta, Mirosław; Weary, Gifford; Sedek, Grzegorz, Personal control in action: cognitive and motivational mechanisms, Springer, pp. 233–255, ISBN 978-0-306-45720-3, OCLC 39002877
- Maccoun, Robert J. (1998), "Biases in the interpretation and use of research results", Annual Review of Psychology 49: 259–87, doi:10.1146/annurev.psych.49.1.259, PMID 15012470
- Mahoney, Michael J. (1977), "Publication prejudices: an experimental study of confirmatory bias in the peer review system", Cognitive Therapy and Research 1 (2): 161–175, doi:10.1007/BF01173636
- Nickerson, Raymond S. (1998), "Confirmation Bias: A Ubiquitous Phenomenon in Many Guises", Review of General Psychology (Educational Publishing Foundation) 2 (2): 175–220, doi:10.1037/1089-2618.104.22.168, ISSN 1089-2680
- Oswald, Margit E.; Grosjean, Stefan (2004), "Confirmation Bias", in Pohl, Rüdiger F., Cognitive Illusions: A Handbook on Fallacies and Biases in Thinking, Judgement and Memory, Hove, UK: Psychology Press, pp. 79–96, ISBN 978-1-84169-351-4, OCLC 55124398
- Plous, Scott (1993), The Psychology of Judgment and Decision Making, McGraw-Hill, ISBN 978-0-07-050477-6, OCLC 26931106
- Poletiek, Fenna (2001), Hypothesis-testing behaviour, Hove, UK: Psychology Press, ISBN 978-1-84169-159-6, OCLC 44683470
- Risen, Jane; Gilovich, Thomas (2007), "Informal Logical Fallacies", in Sternberg, Robert J.; Roediger III, Henry L.; Halpern, Diane F., Critical Thinking in Psychology, Cambridge University Press, pp. 110–130, ISBN 978-0-521-60834-3, OCLC 69423179
- Vyse, Stuart A. (1997), Believing in magic: The psychology of superstition, New York: Oxford University Press, ISBN 0-19-513634-9, OCLC 35025826
- Stanovich, Keith (2009), What Intelligence Tests Miss: The Psychology of Rational Thought, New Haven (CT): Yale University Press, ISBN 978-0-300-12385-2, lay summary (21 November 2010)
- Westen, Drew (2007), The political brain: the role of emotion in deciding the fate of the nation, PublicAffairs, ISBN 978-1-58648-425-5, OCLC 86117725
- Keohane, Joe (11 July 2010), "How facts backfire: Researchers discover a surprising threat to democracy: our brains", Boston Globe (NY Times)
- Skeptic's Dictionary: confirmation bias by Robert T. Carroll
- Teaching about confirmation bias, class handout and instructor's notes by K. H. Grobman
- Confirmation bias at You Are Not So Smart
- Confirmation bias learning object, interactive number triples exercise by Rod McFarland for Simon Fraser University
- Brief summary of the 1979 Stanford assimilation bias study by Keith Rollag, Babson College
Math worksheets for pre-kindergarten children
Here is our great collection of well-illustrated math worksheets for pre-kindergarten children. Each worksheet covers a specific topic which parents and teachers can use to introduce basic math concepts to kids. Featured are lessons on counting from one to three, one to ten and more. Children will also learn some basic shapes studied in geometry and their names, e.g. square, rectangle, circle, rhombus, oval and more. We also teach children spatial awareness by asking them to find the position of objects in space, like up, down, middle, bottom etc. Children will also learn to count and read time on clocks to the hour, e.g. one o’clock, two o’clock, three o’clock etc. That’s not all; children will improve their math vocabulary by learning how to spell numbers while tracing and coloring them. It does not end here: children will also learn the main operations needed in basic math, e.g. addition, subtraction and division. Pre-kindergarten is the first step for children when it comes to learning in a structured context, and using well-illustrated worksheets like these will go a long way in strengthening and reinforcing the notions learned. Have fun learning, and keep sharing this free preschool math worksheets page in your networks. Remember, all sheets are in printable, downloadable PDF format.
Fun Math Games
We offer highly interactive math games for kids: snakes and ladders math games, memory math games, football games and more.
Printable PDF math games: Croc Puzzle math game, coordinate math games, math with cards, and printable math games for children.
Parents, teachers and educators can now present their knowledge using these vividly presented short ppt presentations. Simply let the kids watch and learn.
Quizzes are designed around topics like addition, subtraction, geometry, shapes, position, fractions, multiplication, division, arithmetic, algebra etc.
The XENON1T dark matter experiment consists of a giant vat of liquid xenon in an underground chamber beneath the Gran Sasso mountain in Italy. Its task is to search for evidence of dark matter that astronomers cannot see but think fills the universe.
Because the Solar System is moving rapidly through the universe, the Earth ought to be ploughing through this ocean of dark matter. So any dark matter collisions inside XENON1T should come from our direction of travel.
But there is a problem with XENON1T, and other dark matter detectors like it. Although it ought to be able to see evidence of dark matter particles, it cannot tell which direction they are coming from. And that places significant constraints on what physicists can deduce from the data. What they’d like instead is a detector that can map the tracks that dark matter particles make as they pass through.
Now Ciaran O’Hare at the University of Sydney in Australia and colleagues say they think they know how this can be done. The team is working on the design of an exotic new form of detector that can spot not only the presence of dark matter but also the direction in which it is travelling. They have simulated for the first time how dark matter particles would interact inside the machine and say it has significant advantages over the current generation of detectors.
The new detector has a unique design based on DNA strands. It consists of a forest of double-stranded nucleic acids that hang from layers of gold metal sheeting. Each DNA strand is unique, and its position within the detector is known with nanometer resolution.
When a dark matter particle enters the detector, it slices through any DNA strands in its path, causing the broken segments to fall into a microfluidic collection system. “Since the sequences of base pairs in nucleic acid molecules can be precisely amplified and measured using polymerase chain reaction (PCR), the original spatial position of each broken strand inside the detector can be reconstructed with nanometer precision,” say the team. In this way, physicists can reconstruct the track of the dark matter particle through the machine.
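As a rough sketch of what track reconstruction could look like (synthetic break positions and arbitrary scales, not the authors' actual pipeline), the particle's direction can be estimated as the principal component of the recovered strand-break coordinates:

```python
import numpy as np

# Hypothetical break positions recovered via the PCR indexing described above:
# each severed strand contributes one (x, y, z) point along the particle's path.
rng = np.random.default_rng(0)
true_direction = np.array([1.0, 0.5, -2.0])
true_direction /= np.linalg.norm(true_direction)
t = np.linspace(0, 1, 50)[:, None]
points = 10.0 * t * true_direction + rng.normal(scale=0.05, size=(50, 3))

# Fit a straight track: the best-fit direction is the principal component
# of the break positions about their centroid.
centered = points - points.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
fitted_direction = vt[0]
if np.dot(fitted_direction, true_direction) < 0:
    fitted_direction = -fitted_direction

angle = np.degrees(np.arccos(np.clip(np.dot(fitted_direction, true_direction), -1, 1)))
print("angle between true and fitted direction (deg):", angle)
```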
The idea behind the DNA detector was put forward in 2012. The new work is the first simulation to test how the detection would work for dark matter particles of different types, energies and directions. “We conclude that a DNA detector could be a cost-effective, portable, and powerful new particle detection technology,” say O’Hare and co.
The new approach has other advantages over traditional dark matter detectors. The device is tiny compared to the behemoths used to detect dark matter today — portable even. It would also be significantly cheaper.
However, it is by no means perfect. The DNA detector does not provide enough information to easily identify the type of dark matter particle involved or even its precise energy. For that reason, these detectors are likely to be used in conjunction with the data from traditional machines.
Dark Matter Halo
Instead, its main advantage is its ability to determine the direction the particles came from. “Dark matter signals are expected to be strongly directional, a phenomenon generated by the orbit of the Solar System through the dark matter halo that envelopes our galaxy,” say O’Hare and co.
“A search for recoil tracks aligning with the direction of our galactic rotation would permit a convincing test of the veracity of any potential dark matter signal but would also allow it to be cleanly distinguished from sources of background noise such as cosmic rays, radioactive decays, and neutrinos.”
There is much work to be done. Last year, physicists found enigmatic evidence in the data from XENON1T suggesting something unusual. Exactly what, they are not sure and are currently hoping for more data to better study the effect.
If dark matter exists, a range of detectors will be needed to characterize its properties and how the Earth is ploughing through it. If dark matter doesn’t exist, all bets are off!
Ref: Particle Detection And Tracking With DNA: https://arxiv.org/abs/2105.11949
This pack is a great learning tool for math classes/centers at any time during the school year! Or use each center activity to assess students' abilities throughout the year!
For the whole pack, students can print the sheets and write their answers directly on them.
Math center contains the following worksheet activities:
1. Observing the coordinate grid and making ordered pairs using letters and numbers, according to the instructions.
2. Observing the coordinate grid and making ordered pairs using the given pictures and grid numbers, according to the instructions.
3. Drawing and practicing various geometric shapes to make kids familiar with them.
4. Counting pictures of different birds and animals to get hands-on practice with counting.
5. Solving the given division problems and writing each answer in the small space provided in the picture. After writing the answer, coloring the different parts of the picture according to the answers and instructions to complete the whole picture.
6. Observing the clock and writing the correct time on the sheet.
7. Performing single-digit multiplication with odd numbers.
8. Counting the objects and writing the totals on the sheet.
9. Students will count the colorful objects, adding the same objects across (horizontally) first and then adding the different objects down (vertically).
10. Count the number of winter coats, then circle or color the correct number below.
11. Performing single-digit addition.
12. Students will count the pictures first and then add or subtract them.
You will find lots more in my store, including math, science, ELA, activities/craftivities, special education, interactive activities, and more!
You may also like:
• Mean - Median - Mode - Range -Understanding With Easy Steps!!!!
• Fraction (Math Common Core)
• Equivalent Fractions
• 3D shapes : Mini Unit for KG to 2nd Grade Common core
• Noun, Verb and Adjective Sort Out
• Borders & Frames- "Classic" |
Summary
Measuring the dimensions of nano-circuits requires an expensive, high-resolution microscope with an integrated video camera and a computer with sophisticated imaging software, but in this activity students measure nano-circuits using a typical classroom computer and the free-to-download GeoGebra geometry software. Inserting the provided circuit pictures from a high-resolution microscope as backgrounds in GeoGebra's graphing window, students use the application's tools to measure the lengths and widths of circuit elements. To simplify the conversion from on-screen units to the real circuits' units, and the manipulation of the pictures, a GeoGebra measuring interface is provided. Students export their data from GeoGebra to Microsoft® Excel® for graphing and analysis. They test the statistical significance of the difference in circuit dimensions, and obtain a correlation between the average changes in original vs. printed circuits' widths. This activity and its associated lesson are suitable for use during the last six weeks of the AP Statistics course; see the topics and timing note below for details.
Flexible nano-electronics—that is, electronic circuits that bend and take different forms—have seen rapid development during the last few years because of the plentiful range of applications that are difficult if not impossible to achieve with conventional rigid electronics. Nano-wearable electronics are composed of millions of circuits arranged in a thin, lightweight, mechanically flexible, stretchable and conformable structure. They enable comfortable, continuous and mobile monitoring of people and animals. Using wearable electronics, engineers are able to find solutions for challenges such as: What if electronics were soft and pliable? What if electronics conformed to us, instead of us conforming to electronics? In this context, students see a real-world applied use for statistical analysis.
This end-of-the-year activity is designed for students taking AP Statistics. Specific required skills are:
- Hypothesis testing for dependent and independent samples
- Experimental design skills to obtain simple random samples
- Linear regression
- Algebra 2/pre-calculus, specifically logarithmic and exponential functions
In addition, basic computer skills in using Microsoft® Excel® and GeoGebra are necessary.
After this activity, students should be able to:
- Empirically quantify how much diffraction affects circuit dimensions during the printing process, and determine if these changes are statistically significant.
- Determine linear and non-linear correlations between variables.
- Use the GeoGebra geometry software to obtain indirect measurements of objects that cannot be measured directly.
- Use Excel® functions and capabilities to process, graph and perform statistical analysis on data.
- Use PowerPoint® to present results and conclusions.
More Curriculum Like This
Students are introduced to the technology of flexible circuits, some applications and the photolithography fabrication process. They are challenged to determine if the fabrication process results in a change in the circuit dimensions since, as circuits get smaller and smaller (nano-circuits), this c...
Students study how heart valves work and investigate how valves that become faulty over time can be replaced with advancements in engineering and technology. Learning about the flow of blood through the heart, students are able to fully understand how and why the heart is such a powerful organ in ou...
Students learn which contaminants have the greatest health risks and how they enter the food supply. While food supply contaminants can be identified from cultures grown in labs, bioengineers are creating technologies to make the detection of contaminated food quicker, easier and more effective.
Students apply their knowledge of scale and geometry to design wearables that would help people in their daily lives, perhaps for medical reasons or convenience. Like engineers, student teams follow the steps of the design process, to research the wearable technology field (watching online videos an...
Each TeachEngineering lesson or activity is correlated to one or more K-12 science, technology, engineering or math (STEM) educational standards.
All 100,000+ K-12 STEM standards covered in TeachEngineering are collected, maintained and packaged by the Achievement Standards Network (ASN), a project of D2L (www.achievementstandards.org).
In the ASN, standards are hierarchically structured: first by source; e.g., by state; within source by type; e.g., science or mathematics; within type by subtype, then by grade, etc.
Analyze a major global challenge to specify qualitative and quantitative criteria and constraints for solutions that account for societal needs and wants.
(Grades 9 - 12)
This standard focuses on the following Three Dimensional Learning aspects of NGSS:
Science & Engineering Practices Disciplinary Core Ideas Crosscutting Concepts Analyze complex real-world problems by specifying criteria and constraints for successful solutions. Criteria and constraints also include satisfying any requirements set by society, such as taking issues of risk mitigation into account, and they should be quantified to the extent possible and stated in such a way that one can tell if a given design meets them.Humanity faces major global challenges today, such as the need for supplies of clean water and food or for energy sources that minimize pollution, which can be addressed through engineering. These global challenges also may have manifestations in local communities. New technologies can have deep impacts on society and the environment, including some that were not anticipated. Analysis of costs and benefits is a critical aspect of decisions about technology.
Medical technologies include prevention and rehabilitation, vaccines and pharmaceuticals, medical and surgical procedures, genetic engineering, and the systems within which health is protected and maintained.
(Grades 9 - 12)
Use assessment techniques, such as trend analysis and experimentation, to make decisions about the future development of technology.
(Grades 9 - 12)
use a problem-solving model that incorporates analyzing given information, formulating a plan or strategy, determining a solution, justifying the solution, and evaluating the problem-solving process and the reasonableness of the solution;
create and properly display meaningful output;
use Graphical User Interfaces (GUIs) to create interactive interfaces to acquire data from a user and display program results;
Each group needs:
- computer with Internet access, one per student
- Microsoft Office® applications: Excel®, PowerPoint®, Word®
- (optional) video-creating equipment and software, to make final presentations (instead of PowerPoint®)
- GeoGebra freeware geometry software; download from https://www.geogebra.org/
- GeoGebra Measuring Interface, a ggb (GeoGebra) file, loaded on each computer
- Nano-Circuit Picture Sets, a zip file, loaded on each computer
- (optional) graphing calculator with statistical analysis capabilities, such as TI-Nspire CX or CAS
- Pre-Activity Test, one per person
- GeoGebra Measuring Interface Manual, one per person
- GeoGebra Basics Practice, one per person
- Graphing Data and Statistical Analysis with Excel Practice, one per person
- Project Rubric, one per person
To share with the entire class:
- computer that is set up the same as the student computers, but for the teacher's use
- projector, to show the teacher's monitor to the entire class
Modern electronics technology has plentiful and varied real-life applications. Think about medical applications. For instance, if you want to know your blood pressure, engineers have designed blood pressure (BP) monitors to make those measurements. Now suppose you need to monitor your blood pressure continuously over long periods, say one or two days. That's not a problem because portable and wearable BP monitors are available to perform the job for you (see Figure 1). But, if you wear a BP monitor like this for an entire day, it is annoying for an adult, and even worse for an active child. Thinking like an engineer, can you imagine an alternative, better solution for this situation? How could a BP monitor be more comfortable to use? Would it be possible for a health monitoring device to adapt to the human form?
In the last decade, engineers have been developing a new kind of electronics that places millions of nano-circuits on flexible materials. This flexible circuitry technology—known as flexible, soft or wearable electronics—enables the design of applications that were unthinkable years ago. For example, one wireless prototype is a flexible piece of soft electronics that adheres to skin in order to monitor a person's vital signs such as body temperature and heart rate. The prototype also contains a battery and an antenna and is able to transmit data to a central location that relays the information to a smart phone.
(Show students the 3:39-minute Monitor Your Health with Electronic Skin video about existing and future flexible circuitry technology applications: https://www.youtube.com/watch?v=iaRhuWRSBao. See the Additional Multimedia Support section for additional suggestions of resources—articles, photographs, video—to show the class.)
The development of a prototype like this "electronic skin" is a long and complex engineering process, from the initial idea and design, to testing and redesign. This cycle, known as the engineering design process or EDP (see Figure 2), is repeated over and over—as many times as are needed to develop a successful solution. The steps help to guide the design team through the product development process in a logical manner. This process also requires teamwork. Typically, the solving of complex problems requires that specialists in different areas work together.
The EDP is a flexible and cyclical process. For example, engineers may design something and then discover a problem during the testing phase; then they jump back to an earlier design step to make modifications or brainstorm new ideas before moving on through the process. Engineers follow the steps of the design process to guide the development of their ideas and ensure that they create the best possible product (or other design solution) that addresses the requirements and constraints of what they are aiming to achieve.
Let's go through an example. Remember earlier we identified a possible problem: a bulky portable blood monitor is impractical for use by kids. Making the device lighter, smaller and able to conform to kids would be a great solution to the problem. Research shows that nano-electronics may solve the size problem and flexible electronics enables designs to adapt to specific forms. The next step is to design a blood monitor that integrates the two technologies. Different designs and their sub-components are fabricated and tested many times, with changes and revisions incorporated to make improved prototypes. At some point, the best prototype is tested under controlled conditions on humans, with the results evaluated to redesign for improvement. Finally, the prototype is tested again on humans, under normal conditions now, to gain more feedback. The cycle is repeated as necessary to improve device performance before it is produced and sold.
In this activity, you will statistically analyze a problem arising during the fabrication process: the difference in dimensions between the original design and the fabricated circuit. This problem may become critical for the fabrication of increasingly smaller circuits because a severe change in circuit dimensions may affect mechanical properties like strain and stress, or conductivity and resistivity, thus compromising circuit reliability.
Using the GeoGebra geometry software, you will measure the dimensions of nano-circuits on pictures of the original masks and on pictures of the final printed circuits. Then using Excel®, you will graph and analyze the data to determine if the differences in these measurements are statistically significant. To conclude, you will report and present in class your analysis, results and conclusions.
alternate hypothesis: Denoted as H1, a statement that directly contradicts a null hypothesis by stating that the actual value of a population parameter is less than, greater than or not equal to the value stated in the null hypothesis.
coefficient of determination: A measure of how well the regression line represents the data; labeled as r^2. If the regression line passes exactly through every point on the scatter plot, it explains all of the variation and r^2 = 1. The further the line is away from the points, the less it is able to explain the variation of the data, and the r^2 value approaches 0.
conductivity: The degree to which a specified material conducts electricity or heat. It is the reciprocal of resistivity.
correlation coefficient: A statistical measure of the degree to which changes to the value of one variable predict change to the value of another; labeled as r. In positively correlated variables, the value increases or decreases in tandem. In negatively correlated variables, the value of one increases as the value of the other decreases.
dependent samples: A subset of a population whose elements are measured "before and after" a situation. Also called paired or matched samples.
diffraction: A deviation in the direction of a wave at the edge of an obstacle in its path. The spreading of waves around obstacles.
electrical circuit: A closed loop through which charges can continuously move. A network consisting of a closed loop, giving a return path for the current.
empirical: Based on testing, experience or observation.
fabrication process: In electronics, a sequence of well-defined procedures to manufacture a circuit. The process begins with the circuit design, continues with the preparation of the raw materials to be used in the process, then all the manufacturing steps, and ends with the final product or circuit.
flexible circuit: A pattern of conductive traces bonded on a flexible substrate. A circuit printed on a flexible dielectric substrate.
hypothesis testing: A process to evaluate the credibility of a hypothesis about a population; uses sample data to infer truths about the entire population.
independent samples: Two or more samples that have no effect on one another.
interface: (computing) Something that enables separate and sometimes incompatible elements to coordinate or communicate. In computer science, hardware or software designed to communicate information between hardware devices, between software programs, between devices and programs, or between computer and users.
least-squares method: A statistical method used to determine a line of best fit by minimizing the sum of squares created by a mathematical function. A "square" is determined by squaring the distance between a data point and the regression line.
linear regression: In statistics, an approach for modeling the relationship or dependence between a scalar dependent or response variable y and one or more explanatory variables denoted x, by fitting a linear equation y = a + bx.
log-log graph: A two-dimensional graph of numerical data that uses logarithmic scales on both the horizontal and vertical axes. Also called log-log plot.
mean: The average value of the data; used to derive the central tendency of the data. Also called expected value.
measurement uncertainty: The doubt that exists about the result of any measurement because of the precision of the measuring instrument. A quantification of the doubt about the measurement result, usually defined as: measurement taken ± half of the smallest measurement scale of the instrument.
nanotechnology: A branch of technology that deals with dimensions and tolerances of less than 100 nanometers in size.
null hypothesis: Denoted as H0, the initial statement or assumption about a population parameter, such as the population mean, that is assumed to be true.
photolithography: The process of transferring a pattern or design on a mask to the surface of a silicon wafer or plate, using light, and light sensitive materials on the wafer or plate.
power regression: Taking the explanatory variable x and fitting for the response variable y a function of the form y = a·x^b, where a, b are constants. The function is based on the linear regression of x and y, with both axes scaled logarithmically. Also known as log-log regression.
printed circuit: An electrical device in which the wiring and certain components consist of a thin coat of electrically conductive material applied in a pattern on an insulating substrate.
resistivity: A property that quantifies how strongly a given material opposes the flow of electric current.
significance level: A criterion for judging a decision regarding a null hypothesis. The criterion is based on the probability of obtaining a statistic measured in a sample if the value stated in the null hypothesis were true. The criterion or level of significance is typically set at 5%. When the probability of obtaining the sample mean is less than 5%, assuming the null hypothesis H0 is true, then the evidence does not support the null hypothesis, and this is rejected (but not taken as false), and consequently the alternate hypothesis is accepted (but not taken as true).
simple random sample: An unbiased representation of a group; a subset of a statistical population in which each member of the subset has an equal probability of being chosen.
simulation: In science and engineering, the representation of the behavior or characteristics of a physical system through the use of another system, usually a computer running programs based on a mathematical representation of the physical system.
standard deviation: A numerical value used to indicate how widely individuals in a group vary.
statistical analysis: The mathematics of the collection, organization and interpretation of numerical data, in accordance with probability theory and the application of methods such as hypothesis testing to them.
strain: The relative change in shape or size of an object due to externally applied forces (strain is dimensionless): change in length/original length (dL/L).
stress: The force per unit area applied to an object.
type I error: The probability of rejecting a null hypothesis that is actually true. The largest probability of committing a type I error is the significance level value.
type II error: The probability of retaining a null hypothesis that is actually false.
ultraviolet light: A type of electromagnetic radiation with wavelengths shorter than visible light but longer than x-rays; in the range 0.4 × 10^-6 to 1 × 10^-8 meters.
wavelength: The distance between two points of the same phase in consecutive cycles of a wave. The distance between one peak or crest of a wave and the next peak or crest.
This project was designed and developed in the Wearable Electronics Laboratory at the University of Houston. The purpose of this lesson/activity set is to apply the concepts learned in an AP Statistics course in the context of a real-world, state-of-the-art research problem: verifying whether the fabrication process significantly affects the circuit dimensions defined during the design process. Project guidelines:
- Because of the project workload, it is accomplished by teams of three students each.
- Students work on two nano-circuit picture sets: circuit original masks (designs) and fabricated printed circuits.
- Students select samples from these pictures and open them with GeoGebra to perform measurements on the circuit elements using a special interface designed specifically for this. Students may work with paired samples or independent samples.
- The collected measurements are exported to Excel®, and using its graphical and mathematical capabilities, the data is graphed, compared and statistically analyzed.
- Students create a PowerPoint® slide show or a video (mp4, wma, mpeg) to present project results.
- Teams present to the rest of the class their results, analyses and conclusions.
- Make help available for the math and final presentation preparation, during afterschool tutorial time.
- All the project activities are part of the final grade. If two or all three team members justifiably miss any of these activities, have students arrange for a makeup session before the project deadline.
A basic knowledge of GeoGebra geometry software and Microsoft® Excel® is required. GeoGebra is used to make measurements on nano-circuit images, and Excel® is used for graphing and statistical analysis of the results. For more information, see the Statistical Analysis of Flexible Circuits associated lesson, as well as the resources listed in the Additional Multimedia Support section.
Before the Activity
- Due to the activity complexity, instructors must understand many little details in order to teach it and support students throughout the activity. Thus, it is highly recommended that instructors try the activity by themselves first, carefully performing every step, including the creation of a results presentation that can be used as an example for students. Be sure to read the teacher tips and notes included in the Lines and Circles Guided Practice: Teacher Instructions. If necessary, request help from the author at email@example.com.
- Prepare the computers so they have Internet access as well as Microsoft Office® and GeoGebra installed; also load the GeoGebra Measuring Interface, a ggb file. Prepare on each computer a subdirectory containing the paired nano circuit picture files from the Nano-Circuit Picture Sets zip file. Determine a method to randomly assign a circuit to each team; given the limited number of circuit pictures, more than one team may work with the same circuit set. Optionally, provide for teams an example results slide presentation or video (pptx or mp4 file) created by the teacher.
- Prepare a teacher's computer the same as the student computers. On Day 1, connect it to a projector so as to be able to show the entire class the teacher's monitor.
- Make copies of the Pre-Activity Test, GeoGebra Measuring Interface Manual, GeoGebra Basics Practice, Graphing Data and Statistical Analysis with Excel Practice and Project Rubric, one each per student. Personalize the project rubric with a project due date before printing copies. Note that two versions of the project checklist are provided; decide which to use, or modify one as desired.
- Optionally, make available graphing calculators with statistical functions, which students can use to verify their statistical results with the data sets.
With the Students—Day 1
Topics: Introduction to GeoGebra and the measuring interface
Estimated time: 50 minutes in the classroom/computer lab
- Administer the pre-activity test as described in the Assessment section.
- Present the Introduction/Motivation section content, including information about flexible electronics and the steps of the engineering design process.
- Begin with a brief presentation of the GeoGebra geometry software. Lead the class through the Lines and Circles Guided Practice, as detailed in the teacher's instructions.
- Introduce the GeoGebra Measuring Interface. Give every student a measuring interface manual. Perform a couple of examples showing how to use this interface, using the sample nano-circuits pictures. Note: Because the measuring interface is ad-hoc for measuring the circuits and very simple to use, expect students to have no trouble with the measuring process after this short practice.
- Then have teams conduct on their own one or both of the assignments on the GeoGebra Basics Practice handout: Spheres Student Independent Practice, Rapa-Nui Student Independent Practice. The purpose of doing these practices is for students to gain an understanding of how a scale factor is used in the GeoGebra measuring interface.
With the Students—Day 2
Topics: Project scope definition, first circuit measurements
Estimated time: 50 minutes in the classroom/computer lab
- Give every student a rubric and make sure they understand the scope and grade points for each of the project sections.
- Direct the students to organize themselves into groups of three.
- Randomly assign the circuit picture sets, one set per team. From their assigned sets, have students select their dependent samples (same circuit parts before and after printing) and independent samples (different circuit parts before and after printing). Note: Every set is paired. For example, c1-03m and c1-03p correspond to the same portion of circuit 1, in which m stands for the mask, and p for the printed circuit.
- Have groups begin the circuit measuring and data recording processes, with team members working together. At this grade level, expect AP students to be able to self-assign the necessary roles to make these processes efficient. In this case, one method is for two students to use the GeoGebra interface to measure circuits while the third student records data and circuit names in an Excel® spreadsheet.
With the Students—Day 3
Topics: Measuring samples, data collection
Estimated time: 50 minutes in the classroom/computer lab
Students continue the measuring and data collection process. For statistical analysis purposes, a data set of no less than 20 before/after measurement pairs is sufficient.
With the Students—Days 4-6
Topics: Data graphing and analysis
Estimated time: 150 minutes in the classroom/computer lab
- Pass out the Excel® practice handout. As a class, go through the guided practice (using the average faculty salaries data), explaining to students how to use Excel® to graph data, compute means, standard deviations, mins–maxes and T-test for data sets on spreadsheets.
- Students complete the independent practice (using the unemployment data) on the Excel® practice handout.
- Students perform the statistical analysis of the measured differences in the circuits' dimensions for their teams' dependent and independent samples (an optional scripted cross-check of these tests is sketched after this list). Expect students to conclude that there is a statistically significant difference in the circuits' widths before and after printing. (For more information, refer to the Lesson Background & Concepts for Teachers section in the associated lesson.)
- Have students share their circuit measurements. On the classroom board, draw a two-column table with a row for each team. Ask each team to write in the first column their sample circuits' average width before printing (masks) and in the second column their sample circuits' average width after printing (printed circuits). Direct students to copy down this table for their own reference.
- Using the combined class data in the table, direct students to create a third column on the table for the circuit average width relative change, computing for each entry the quotient (refer to the Lesson Background & Concepts for Teachers in the associated lesson) using the following equation. The teacher completes the table on the board with these values.
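  (The equation referenced above is not reproduced in this text; assuming the standard definition of relative change, it is: relative width change = (average printed-circuit width − average mask width) / average mask width.)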
- Students graph the circuit's relative width change values (third column, on y-axis) versus original circuit's sample average width (first column, on x-axis). Expect these points to fit a non-linear correlation, not a linear correlation. For these points, students must obtain a correlation.
- Students may first attempt the Add Trend Line option in Excel® (click on graphed points > then click the right mouse button). Doing this, expect them to discover that the points are linked with a power relationship. Because power correlations are beyond the scope of the AP Statistics course in which students only learn about linear correlations, give students this new challenge: Since all you know (by now) is how to determine linear correlations, how could you transform the non-linear trend in your data into a linear trend?
- Give students this hint: In pre-calculus, you learned how to get rid of powers using logarithmic functions. Then write on the board the logarithm power property: log(x^n) = n log(x).
- Next, ask the entire class to brainstorm ideas about how to use this property to address this new challenge. The answer has to be: Take the log of both the first and third column entries. Once students find the answer, require them to add two more columns to their tables and fill them with the corresponding log-values.
- The next step is to graph log(relative width change) vs. log(original circuit's average width). Expect the new log-data graph shape to now fit a linear correlation that students are able to compute, obtaining something like this: log(y) = a + b log(x).
- To find the power correlation, ask the teams to use their knowledge about logarithmic and exponential functions to get rid of logs in the previous equation. Consider giving a prize to the first team that obtains the correct answer: apply the corresponding exponential function to both sides of the equation, e^x for ln(x) or 10^x for log10(x), obtaining something like y = e^a · x^b.
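For reference, the algebra behind this back-transformation (standard logarithm rules, not specific to this activity): if the data follow a power model y = a·x^b, taking logarithms of both sides gives log(y) = log(a) + b·log(x), which is linear in log(x) with slope b and intercept log(a). After fitting log(y) = A + B·log(x) by linear regression, exponentiating both sides recovers the power correlation y = 10^A · x^B (or y = e^A · x^B when natural logarithms are used).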
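Optional scripted cross-check of the statistical tests (referenced from the analysis step above): the short Python sketch below is not part of the original activity; it assumes Python with NumPy and SciPy installed, and the width values are made-up placeholders that teams would replace with their own measurements.

  import numpy as np
  from scipy import stats

  # Hypothetical width measurements in micrometers (replace with team data).
  mask_widths    = np.array([12.1, 11.8, 12.4, 12.0, 11.9, 12.3])  # before printing
  printed_widths = np.array([11.2, 11.0, 11.6, 11.1, 11.3, 11.5])  # after printing

  # Paired (dependent) samples: the same circuit feature measured before and after printing.
  t_paired, p_paired = stats.ttest_rel(mask_widths, printed_widths)

  # Independent samples: different circuit features before vs. after printing (Welch's t-test).
  t_indep, p_indep = stats.ttest_ind(mask_widths, printed_widths, equal_var=False)

  print(f"paired t = {t_paired:.2f}, p = {p_paired:.4f}")
  print(f"independent t = {t_indep:.2f}, p = {p_indep:.4f}")
  # Reject H0 (no change in width) at the 5% significance level when p < 0.05.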
With the Students—Days 7-8
Topic: Creating report presentations
Estimated time: 100 minutes in the classroom/computer lab PLUS extra class time to present final team presentations, and out-of-class time for presentation preparation
Require student teams to work after school to complete their results presentations in the form of 10-minute slide or video presentations. Use the in-class sessions for the following activities.
- As a class, review the Project Results Report and Presentations requirements in the rubric to make expectations clear.
- Give students time to work on their teams' reports and presentations—the latter in the form of a slide show (pptx) or video (mp4). Supervise and answer questions, providing guidance and feedback on how to create professional presentations.
- Select a day for teams to present their results and conclusions to the rest of the class, allowing some time for audience questions and answers.
Worksheets and Attachments
Because of the complexity of this activity and the necessity to understand many little details in order to teach it and support students, you are welcome to request assistance from the author, Miguel R. Ramirez, at firstname.lastname@example.org.
Statistical Inference/Hypothesis Testing Pre-Test: Administer the Pre-Activity Test as a way to gauge students' base knowledge and review concepts of hypothesis testing of dependent and independent samples.
Activity Embedded Assessment
Partial Data Graphing and Analysis: Students make graphs of the measured circuits' dimension changes. Make sure students include all data, important values and quantities used in the statistical analysis.
Project Report: Teams each prepare a summary activity written report, referring to the Project Rubric for guidance. Report content includes: measurement procedure, collected measurement data, measurement graphs, statistical analyses, correlations and conclusions. Review student reports against the rubric to assess student comprehension and competence in the activity subject matter.
Project Results Presentation: Teams create either slide shows or videos that include project background, development, obtained results and corresponding statistical analyses, as detailed in the Project Rubric. For slide shows, students make in-person presentations of their results. Give teams each 10 minutes to make the presentation to the class, plus a few extra minutes for Q&A sessions. Review team presentations against the rubric to assess student mastery of the concepts.
For students with good Excel® skills (that is, they know how to use formulas), challenge them to obtain the scale factor by themselves using the GeoGebra spreadsheet to write the corresponding equation to transform the on-screen units into real-world circuit units, perform the necessary computations and record their measurements.
Additional Multimedia Support
During the Introduction/Motivation, show students the following 3:39-minute video: Monitor Your Health with Electronic Skin about existing and future flexible circuitry technology applications at: https://www.youtube.com/watch?v=iaRhuWRSBao.
Additional suggestions for resources to show students during the Introduction/Motivation:
- Super Thin and Flexible Circuits Clear the Way for Truly Wearable Computers article with photographs at: http://www.businessinsider.com/flexible-thin-electronics-breakthrough-2013-7
- Imperceptible Electronics video (3:07 minutes): https://www.youtube.com/watch?v=k7-Hs7e3t5Q
- Biostamp Temporary Tattoo Electronic Circuits by MC 10 article with photographs at: http://www.dezeen.com/2013/03/28/biostamp-temporary-tattoo-wearable-electronic-circuits-john-rogers-mc10/
- [Wearable] MC10s Stretchable Circuits – Tiny Computers for Your Skin article with photographs at http://nxtinsight.com/wearable-mc10s-stretchable-circuits-tiny-computers-skin/
GeoGebra channel at YouTube: https://www.youtube.com/user/GeoGebraChannel
GeoGebra website at https://www.geogebra.org/
Free tutorials and materials related to this lesson/activity set may be found at Sophia Learning at http://www.sophia.org/. Tutorial topics include Excel® statistical functions and graphing data, GeoGebra basics, photolithography, wearable electronics, nanotechnology, statistics (hypothesis testing and linear correlation), and logarithmic and exponential functions. Access requires students to set up user accounts.
Bell, Stephanie. A Beginner's Guide to Uncertainty of Measurement. Published August 1999. Measurement Good Practice Guide No. 11, Issue 2, National Physical Laboratory, Teddington, Middlesex, UK, 2001. Accessed July 2014. (41-page PDF) https://www.wmo.int/pages/prog/gcos/documents/gruanmanuals/UK_NPL/mgpg11.pdf
Benson, Harris. University Physics. San Francisco, CA: John Wiley & Sons, 1995.
Brase-Brase. Understandable Statistics 8th Edition, Boston, MA: Houghton Mifflin, 2006.
GeoGebra freeware geometry software. Last updated spring 2014. International GeoGebra Institute. Accessed June 22, 2014. (a graphic calculator for geometry, algebra, calculus, statistics and 3D math) http://www.geogebra.org/cms/en/
Hamselou, Jessica. Electronic "Tattoos" to Monitor Vital Signs. Published August 11, 2011. Daily News, New Scientist. Accessed July 2014. http://www.newscientist.com/article/dn20787-electronic-tattoos-to-monitor-vital-signs.html#.VWkU29JViko
How to Find the Power of a Statistical Test. Last updated January 2014. Stat Trek (Teach yourself statistics). Accessed July 1, 2014. http://stattrek.com/hypothesis-test/statistical-power.aspx
Klein, Stacy S. and Alene H. Harris. (2007) "A User's Guide to the Legacy Cycle." Journal of Education and Human Development. Volume 1, Issue 1, ISSN 1934-7200. Accessed July 2014. http://www.scientificjournals.org/journals2007/articles/1088.pdf
Macleod, Peter. (June 2002) A Review of Flexible Circuit Technology and Its Applications. Prime Faraday Technology Watch, PRIME Faraday Partnership, Pera Knowledge, Wolfson School of Mechanical and Manufacturing Engineering, Loughborough University, Loughborough, Leics, UK. ISBN1-84402-023-1. Accessed July 2014. http://www.lboro.ac.uk/microsites/mechman/research/ipm-ktn/pdf/Technology_review/flexible-circuit-technology-and-its-applications.pdf
Welsh, Jennifer. Electronic Tattoo Monitors Brain, Heart and Muscles. Published January 30, 2012. Livescience. Accessed June 2014. http://www.livescience.com/18208-electronic-tattoo-brain-heart.html
Willis, Mike. Propagation Tutorial. Last updated December 26, 2006. Propagation via Diffraction. Accessed June 24, 2014. http://www.mike-willis.com/Tutorial/diffraction.htm
Other Related Information
AP Statistics topics and timing note: This activity and its associated lesson are intended to be taught during the last six weeks of the school year to address some of the last topics covered in the AP Statistics course: hypothesis testing and linear correlation. The time span recommended for this lesson AND its associated activity is three weeks (second semester, last three weeks); if students are unfamiliar with GeoGebra or Excel®, provide additional class periods to complete the activity. Use the grades that students obtain in the activity as their six-week test grades and the project results presentations as their final test grades.
ContributorsMiguel R. Ramirez, Cunjiang Yu, Minwei Xu, Song Chen
Copyright© 2015 by Regents of the University of Colorado; original © 2015 University of Houston
Supporting ProgramNational Science Foundation GK-12 and Research Experience for Teachers (RET) Programs, University of Houston
This digital library content was developed by the University of Houston's College of Engineering under National Science Foundation GK-12 grant number DGE-0840889. However, these contents do not necessarily represent the policies of the NSF and you should not assume endorsement by the federal government.
The authors also thank RET program director Fritz Claydon, RET academic advisors Stuart Long and Debora Rodrigues, RET advisors Mila Taylor and Marjorie Hernandez, as well as the National Science Foundation for its funding of the RET program.
Special thanks to the Wearable Electronics Laboratory (Flexible/Stretchable Electronics) of the Mechanical Engineering Department at the University of Houston's Cullen College of Engineering, and the Lithography Lab in the Texas Center for Superconductivity at the University of Houston Applied Research Hub in the Energy Research Park.
Last modified: September 14, 2018
Rotation in mathematics is a concept originating in geometry. Any rotation is a motion of a certain space that preserves at least one point. It can describe, for example, the motion of a rigid body around a fixed point. A rotation is different from other types of motions: translations, which have no fixed points, and (hyperplane) reflections, each of them having an entire (n − 1)-dimensional flat of fixed points in an n-dimensional space. A clockwise rotation has a negative magnitude, so a counterclockwise turn has a positive magnitude.
Mathematically, a rotation is a map. All rotations about a fixed point form a group under composition called the rotation group (of a particular space). But in mechanics and, more generally, in physics, this concept is frequently understood as a coordinate transformation (importantly, a transformation of an orthonormal basis), because for any motion of a body there is an inverse transformation which if applied to the frame of reference results in the body being at the same coordinates. For example, in two dimensions rotating a body clockwise about a point keeping the axes fixed is equivalent to rotating the axes counterclockwise about the same point while the body is kept fixed. These two types of rotation are called active and passive transformations.
Related definitions and terminology
The rotation group is a Lie group of rotations about a fixed point. This (common) fixed point is called the center of rotation and is usually identified with the origin. The rotation group is a point stabilizer in a broader group of (orientation-preserving) motions.
For a particular rotation:
- The axis of rotation is a line of its fixed points. They exist only in n > 2.
- The plane of rotation is a plane that is invariant under the rotation. Unlike the axis, its points are not fixed themselves. The axis (where present) and the plane of a rotation are orthogonal.
A representation of rotations is a particular formalism, either algebraic or geometric, used to parametrize a rotation map. This meaning is somewhat the inverse of the meaning in group theory.
Rotations of (affine) spaces of points and of respective vector spaces are not always clearly distinguished. The former are sometimes referred to as affine rotations (although the term is misleading), whereas the latter are vector rotations. See the article below for details.
Definitions and representations
In Euclidean geometry
A motion of a Euclidean space is the same as its isometry: it leaves the distance between any two points unchanged after the transformation. But a (proper) rotation also has to preserve the orientation structure. The "improper rotation" term refers to isometries that reverse (flip) the orientation. In the language of group theory the distinction is expressed as direct vs indirect isometries in the Euclidean group, where the former comprise the identity component. Any direct Euclidean motion can be represented as a composition of a rotation about the fixed point and a translation.
There are no non-trivial rotations in one dimension. In two dimensions, only a single angle is needed to specify a rotation about the origin – the angle of rotation that specifies an element of the circle group (also known as U(1)). The rotation is acting to rotate an object counterclockwise through an angle θ about the origin; see below for details. Composition of rotations sums their angles modulo 1 turn, which implies that all two-dimensional rotations about the same point commute. Rotations about different points, in general, do not commute. Any two-dimensional direct motion is either a translation or a rotation; see Euclidean plane isometry for details.
Rotations in three-dimensional space differ from those in two dimensions in a number of important ways. Rotations in three dimensions are generally not commutative, so the order in which rotations are applied is important even about the same point. Also, unlike the two-dimensional case, a three-dimensional direct motion, in general position, is not a rotation but a screw operation. Rotations about the origin have three degrees of freedom (see rotation formalisms in three dimensions for details), the same as the number of dimensions.
A three-dimensional rotation can be specified in a number of ways. The most usual methods are:
- Euler angles (pictured at the left). Any rotation about the origin can be represented as the composition of three rotations defined as the motion obtained by changing one of the Euler angles while leaving the other two constant. They constitute a mixed axes of rotation system because angles are measured with respect to a mix of different reference frames, rather than a single frame that is purely external or purely intrinsic. Specifically, the first angle moves the line of nodes around the external axis z, the second rotates around the line of nodes and the third is an intrinsic rotation (a spin) around an axis fixed in the body that moves. Euler angles are typically denoted as α, β, γ, or φ, θ, ψ. This presentation is convenient only for rotations about a fixed point.
- Axis–angle representation (pictured at the right) specifies an angle with the axis about which the rotation takes place. It can be easily visualised. There are two variants to represent it:
- Matrices, versors (quaternions), and other algebraic things: see the section Linear and Multilinear Algebra Formalism for details.
A general rotation in four dimensions has only one fixed point, the centre of rotation, and no axis of rotation; see rotations in 4-dimensional Euclidean space for details. Instead the rotation has two mutually orthogonal planes of rotation, each of which is fixed in the sense that points in each plane stay within the planes. The rotation has two angles of rotation, one for each plane of rotation, through which points in the planes rotate. If these are ω1 and ω2 then all points not in the planes rotate through an angle between ω1 and ω2. Rotations in four dimensions about a fixed point have six degrees of freedom. A four-dimensional direct motion in general position is a rotation about certain point (as in all even Euclidean dimensions), but screw operations exist also.
Linear and multilinear algebra formalism
When one considers motions of the Euclidean space that preserve the origin, the distinction between points and vectors, important in pure mathematics, can be erased because there is a canonical one-to-one correspondence between points and position vectors. The same is true for geometries other than Euclidean, but whose space is an affine space with a supplementary structure; see an example below. Alternatively, the vector description of rotations can be understood as a parametrization of geometric rotations up to their composition with translations. In other words, one vector rotation presents many equivalent rotations about all points in the space.
A motion that preserves the origin is the same as a linear operator on vectors that preserves the same geometric structure, but expressed in terms of vectors. For Euclidean vectors, this expression is their magnitude (Euclidean norm). In components, such an operator is expressed as an n × n orthogonal matrix that multiplies column vectors.
As it was already stated, a (proper) rotation is different from an arbitrary fixed-point motion in its preservation of the orientation of the vector space. Thus, the determinant of a rotation orthogonal matrix must be 1. The only other possibility for the determinant of an orthogonal matrix is −1, and this result means the transformation is a hyperplane reflection, a point reflection (for odd n), or another kind of improper rotation. Matrices of all proper rotations form the special orthogonal group.
In two dimensions, to carry out a rotation using a matrix, the point (x, y) to be rotated counterclockwise is written as a column vector, then multiplied by a rotation matrix calculated from the angle θ:
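(The matrix display is omitted in this text; the standard counterclockwise rotation matrix implied by the surrounding description is:)

R(θ) = [ cos θ   −sin θ ]
       [ sin θ    cos θ ]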
The coordinates of the point after rotation are x′, y′, and the formulae for x′ and y′ are
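(The formulas are likewise omitted in this text; they follow directly from the matrix above:)

x′ = x cos θ − y sin θ
y′ = x sin θ + y cos θ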
The original and rotated vectors have the same magnitude and are separated by an angle θ, as expected.
Points in the R² plane can also be represented as complex numbers: the point (x, y) in the plane is represented by the complex number
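(The expression is omitted in this text; in standard notation it is z = x + iy.)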
This can be rotated through an angle θ by multiplying it by e^(iθ), then expanding the product using Euler's formula as follows:
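(The expansion is omitted in this text; using Euler's formula e^(iθ) = cos θ + i sin θ it reads:)

z · e^(iθ) = (x + iy)(cos θ + i sin θ) = (x cos θ − y sin θ) + i(x sin θ + y cos θ)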
Equating real and imaginary parts gives the same result as the two-dimensional matrix form above.
Since complex numbers form a commutative ring, vector rotations in two dimensions are commutative, unlike in higher dimensions. They have only one degree of freedom, as such rotations are entirely determined by the angle of rotation.
As in two dimensions, a matrix can be used to rotate a point (x, y, z) to a point (x′, y′, z′). The matrix used is a 3×3 matrix,
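(The matrix display is omitted in this text. As a representative example, not the general case shown in the original, the matrix A for a counterclockwise rotation by angle θ about the z-axis is:)

A = [ cos θ   −sin θ   0 ]
    [ sin θ    cos θ   0 ]
    [   0        0     1 ]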
This matrix is multiplied by a column vector representing the point to give the result, x′ = Ax.
The set of all appropriate matrices together with the operation of matrix multiplication is the rotation group SO(3). The matrix A is a member of the three-dimensional special orthogonal group, SO(3), that is it is an orthogonal matrix with determinant 1. That it is an orthogonal matrix means that its rows are a set of orthogonal unit vectors (so they are an orthonormal basis) as are its columns, making it simple to spot and check if a matrix is a valid rotation matrix.
Above-mentioned Euler angles and axis–angle representations can be easily converted to a rotation matrix.
Another possibility to represent a rotation of three-dimensional Euclidean vectors are quaternions described below.
Unit quaternions, or versors, are in some ways the least intuitive representation of three-dimensional rotations. They are not the three-dimensional instance of a general approach. They are more compact than matrices and easier to work with than all other methods, so are often preferred in real-world applications.
A versor (also called a rotation quaternion) consists of four real numbers, constrained so the norm of the quaternion is 1. This constraint limits the degrees of freedom of the quaternion to three, as required. Unlike matrices and complex numbers, two multiplications are needed:
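(The display is omitted in this text; the conjugation formula implied by the following sentence is x′ = q x q⁻¹.)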
where q is the versor, q⁻¹ is its inverse, and x is the vector treated as a quaternion with zero scalar part. The quaternion can be related to the rotation vector form of the axis–angle rotation by the exponential map over the quaternions,
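(The display is omitted in this text; the standard relation is q = e^(v/2), which for v = θ·n̂, with n̂ the unit rotation axis, expands to q = cos(θ/2) + n̂ sin(θ/2).)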
where v is the rotation vector treated as a quaternion.
A single multiplication by a versor, either left or right, is itself a rotation, but in four dimensions. Any four-dimensional rotation about the origin can be represented with two quaternion multiplications: one left and one right, by two different unit quaternions.
More generally, coordinate rotations in any dimension are represented by orthogonal matrices. The set of all orthogonal matrices in n dimensions which describe proper rotations (determinant = +1), together with the operation of matrix multiplication, forms the special orthogonal group SO(n).
Matrices are often used for doing transformations, especially when a large number of points are being transformed, as they are a direct representation of the linear operator. Rotations represented in other ways are often converted to matrices before being used. They can be extended to represent rotations and transformations at the same time using homogeneous coordinates. Projective transformations are represented by 4×4 matrices. They are not rotation matrices, but a transformation that represents a Euclidean rotation has a 3×3 rotation matrix in the upper left corner.
The main disadvantage of matrices is that they are more expensive to calculate and do calculations with. Also, in calculations where numerical instability is a concern, matrices can be more prone to it, so calculations to restore orthonormality, which are expensive to do for matrices, need to be done more often.
More alternatives to the matrix formalism
As was demonstrated above, there exist three multilinear algebra rotation formalisms: one with U(1), or complex numbers, for two dimensions, and two others with versors, or quaternions, for three and four dimensions.
In general (even for vectors equipped with a non-Euclidean Minkowski quadratic form) the rotation of a vector space can be expressed as a bivector. This formalism is used in geometric algebra and, more generally, in the Clifford algebra representation of Lie groups.
In the case of a positive-definite Euclidean quadratic form, the double covering group of the isometry group is known as the Spin group, Spin(n). It can be conveniently described in terms of a Clifford algebra. Unit quaternions give the group Spin(3) ≅ SU(2).
In non-Euclidean geometries
In spherical geometry, a direct motion of the n-sphere (an example of the elliptic geometry) is the same as a rotation of (n + 1)-dimensional Euclidean space about the origin (SO(n + 1)). For odd n, most of these motions do not have fixed points on the n-sphere and, strictly speaking, are not rotations of the sphere; such motions are sometimes referred to as Clifford translations. Rotations about a fixed point in elliptic and hyperbolic geometries are not different from Euclidean ones.
One application of this is special relativity, as it can be considered to operate in a four-dimensional space, spacetime, spanned by three space dimensions and one of time. In special relativity this space is linear and the four-dimensional rotations, called Lorentz transformations, have practical physical interpretations. The Minkowski space is not a metric space, and the term isometry is inapplicable to Lorentz transformation.
If a rotation is only in the three space dimensions, i.e. in a plane that is entirely in space, then this rotation is the same as a spatial rotation in three dimensions. But a rotation in a plane spanned by a space dimension and a time dimension is a hyperbolic rotation, a transformation between two different reference frames, which is sometimes called a "Lorentz boost". These transformations demonstrate the pseudo-Euclidean nature of the Minkowski space. They are sometimes described as squeeze mappings and frequently appear on Minkowski diagrams which visualize (1 + 1)-dimensional pseudo-Euclidean geometry on planar drawings. The study of relativity is concerned with the Lorentz group generated by the space rotations and hyperbolic rotations.
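(For concreteness, an example not present in the original text: in a (1 + 1)-dimensional plane spanned by the time coordinate ct and one space coordinate x, a Lorentz boost with rapidity φ, where tanh φ = v/c, acts as the hyperbolic rotation

ct′ = ct cosh φ − x sinh φ
x′ = −ct sinh φ + x cosh φ,

so the analogy with the circular rotation formulas above is that cos and sin are replaced by cosh and sinh.)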
Whereas SO(3) rotations, in physics and astronomy, correspond to rotations of the celestial sphere as a 2-sphere in Euclidean 3-space, Lorentz transformations from SO(3;1)+ induce conformal transformations of the celestial sphere. This is a broader class of sphere transformations known as Möbius transformations.
Rotations define important classes of symmetry: rotational symmetry is an invariance with respect to a particular rotation. Circular symmetry is an invariance with respect to all rotations about a fixed axis.
As was stated above, Euclidean rotations are applied to rigid body dynamics. Moreover, most of the mathematical formalism in physics (such as vector calculus) is rotation-invariant; see rotation for more physical aspects. Euclidean rotations and, more generally, the Lorentz symmetry described above are thought to be symmetry laws of nature. In contrast, reflectional symmetry is not a precise symmetry law of nature.
The complex-valued matrices analogous to real orthogonal matrices are the unitary matrices U(n), which represent rotations in complex space. The set of all unitary matrices in a given dimension n forms a unitary group of degree n, and its subgroup representing proper rotations (those that preserve the orientation of space) is the special unitary group of degree n, SU(n). These complex rotations are important in the context of spinors. The elements of SU(2) are used to parametrize three-dimensional Euclidean rotations (see above), as well as the corresponding transformations of spin (see representation theory of SU(2)).
- Lounesto 2001, p. 30.
- Hestenes 1999, pp. 580–588.
- Hestenes, David (1999). New Foundations for Classical Mechanics. Dordrecht: Kluwer Academic Publishers. ISBN 0-7923-5514-8.
- Lounesto, Pertti (2001). Clifford algebras and spinors. Cambridge: Cambridge University Press. ISBN 978-0-521-00551-7.
- Brannon, Rebecca M. (2002). "A review of useful theorems involving proper orthogonal matrices referenced to three-dimensional physical space" (PDF). Albuquerque: Sandia National Laboratories. |
11 - 3 Experiment 11: Simple Harmonic Motion
Questions: How are swinging pendulums and masses on springs related? Why are these types of problems so important in Physics? What is a spring’s force constant and how can you measure it? What is linear regression? How do you use graphs to ascertain physical meaning from equations? Again, how do you compare two numbers, which have errors?
Note: This week all students must write a very brief lab report during the lab period. It is due at the end of the period. The explanation of the equations used, the introduction and the conclusion are not necessary this week. The discussion section can be as little as three sentences commenting on whether the two measurements of the spring constant are equivalent given the propagated errors. This mini-lab report will be graded out of 50 points.
Concept: When an object (of mass m) is suspended from the end of a spring, the spring will stretch a distance x and the mass will come to equilibrium when the tension F in the spring balances the weight of the body, when F = -kx = mg. This is known as Hooke's Law. k is the force constant of the spring, and its units are newtons/meter. This is the basis for Part 1.
In Part 2 the object hanging from the spring is allowed to oscillate after being displaced down from its equilibrium position a distance -x. In this situation, Newton's Second Law gives for the acceleration of the mass: F_net = ma, or -kx = ma, so a = -(k/m)x, i.e. d²x/dt² = -(k/m)x. The force of gravity can be omitted from this analysis because it only serves to move the equilibrium position and doesn’t affect the oscillations. Acceleration is the second time-derivative of x, so this last equation is a differential equation. To solve it, we make an educated guess: x(t) = A cos(ωt). Here A and ω are constants yet to be determined. At t = 0 this solution gives x(t=0) = A, which indicates that A is the initial distance the spring stretches before it oscillates. If friction is negligible, the mass will continue to oscillate with amplitude A. Now, does this guess actually solve the (differential) equation? A second time-derivative gives: d²x(t)/dt² = -Aω² cos(ωt) = -ω²x(t). Comparing this equation to the original differential equation, the correct solution was chosen if ω² = k/m. To understand ω, consider the first derivative of the solution. James Gering, Florida Institute of Technology. 11 - 4 Integrating gives… We assume the object completes one oscillation in a certain period of time, T. This helps set the limits of integration. Initially, we pull the object a distance A from equilibrium and release it. So at t = 0, x = A. (one. |
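A short worked step, added for clarity (the original handout is truncated at this point, so this is the standard continuation rather than the author's exact text): the solution x(t) = A cos(ωt) repeats whenever ωt increases by 2π, so the condition ω² = k/m fixes the period of one oscillation:

$$\omega = \sqrt{\frac{k}{m}}, \qquad T = \frac{2\pi}{\omega} = 2\pi\sqrt{\frac{m}{k}}.$$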
What is a short answer type of test?
Short-answer tests are composed of items that are similar to objective items, in that a clearly-defined answer is required. They differ from the latter in that the answer has to be supplied by the person being tested rather than simply chosen from a number of options provided.
What is very short answer type questions?
Short-answer questions are open-ended questions that require students to create an answer. They are commonly used in examinations to assess the basic knowledge and understanding (low cognitive levels) of a topic before more in-depth assessment questions are asked on the topic. Often students may answer in bullet form.
How do I get rid of wrong answers on multiple choice?
Here are some strategies you can use that will help you eliminate multiple-choice answer choices in smart and effective ways.
- Reread to better understand the question.
- Take away any obviously wrong answers.
- Look for absolutes.
- Check for unrelated or extreme information.
- Use information from other questions to help.
Which type of question is the same as two true and false type questions?
Multiple choice questions work just like true/false questions except that there are more than two possible answer choices.
How do you guess true/false questions?
5 Quick Tips for Answering True-or-False Test Questions
- Read the questions carefully. In true-or-false test questions, it’s not uncommon to have just one word make the difference as to whether the statement is correct or not.
- Dissect the statement word-by-word and phrase-by-phrase.
- Look for inflexible words.
- Don’t become confused by negatives.
- When all else fails, guess.
What are true or false questions?
A true or false question consists of a statement that requires a true or false response. There are other variations of the True or False format as well, such as: “yes” or “no”, “correct” or “incorrect”, and “agree” or “disagree” which is often used in surveys.
Is 0 false or true?
Zero is used to represent false, and One is used to represent true. For interpretation, Zero is interpreted as false and anything non-zero is interpreted as true. To make life easier, C Programmers typically define the terms “true” and “false” to have values 1 and 0 respectively.
How do you study true or false questions?
The following strategies will enhance your ability to answer true/false questions correctly:
- Approach each statement as if it were true.
- For a sentence to be true, every part must be “true”.
- Pay attention to “qualifiers”.
- Don’t let “negatives” confuse you.
- Watch for statements with double negatives.
Is true and false true?
True is written: true; False is written: false; Not is written in a variety of ways.
What is the truth value of P ∨ Q?
The disjunction of p and q, denoted by p ∨ q, is the proposition “p or q.” The truth value of p ∨ q is false if both p and q are false. Otherwise, it is true.
What are the 3 logical operators?
What does V mean in truth tables?
logical disjunction operator
What is P and Q in truth table?
They are used to determine the truth or falsity of propositional statements by listing all possible outcomes of the truth-values for the included propositions. Given two propositions, p and q, “p and q” forms a conjunction. The conjunction “p and q” is only true if both p and q are true.
What does P Q mean?
The statement “p implies q” means that if p is true, then q must also be true. The statement “p implies q” is also written “if p then q” or sometimes “q if p.” Statement p is called the premise of the implication and q is called the conclusion. Example 1.
What is the other name of truth table?
Another name for a truth table is a truth function.
What is a boolean truth table?
The table used to represent the boolean expression of a logic gate function is commonly called a Truth Table. A logic gate truth table shows each possible input combination to the gate or circuit with the resultant output depending upon the combination of these input(s).
Who invented truth tables?
What are rules to draw a truth table?
Constructing Truth Tables
- Step 1: Count how many statements you have, and make a column for each statement.
- Step 2: Fill in the different possible truth values for each column.
- Step 3: Add a column for each negated statement, and fill in the truth values.
How many rows are in a truth table with 3 variables?
How many rows should a truth table have?
Since each atomic statement has two possible values (True or False), a truth table will have 2^n rows, where n is the number of atomic statements. So, if there are two atomic statements, the table has four rows; three atomic statements require eight rows; four require 16 rows; and so forth.
How do you find the number of rows in a truth table?
The number of rows that a truth-table needs is determined by the number of basic statement letters involved in the set of formulas that will be involved in the computation. The formula for the rows is 2^n, where n = the number of basic statement letters involved.
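As a small illustration of the 2^n rule (a sketch added here, not part of the original Q&A; the statement letters p, q, r are arbitrary), the rows of a truth table can be enumerated directly:

```python
from itertools import product

letters = ["p", "q", "r"]                      # n = 3 basic statement letters
rows = list(product([True, False], repeat=len(letters)))
print(len(rows))                               # 2**3 = 8 rows
for row in rows:
    print(dict(zip(letters, row)))             # one row of the truth table
```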
How does a truth table work?
A truth table is a way of organizing information to list out all possible scenarios. We title the first column p for proposition. In the second column we apply the operator to p, in this case it’s ~p (read: not p). So as you can see if our premise begins as True and we negate it, we obtain False, and vice versa.
What is the point of a truth table?
A truth table is a logically-based mathematical table that illustrates the possible outcomes of a scenario. The truth table contains the truth values that would occur under the premises of a given scenario. As a result, the table helps visualize whether an argument is logical (true) in the scenario.
What are the 7 logic gates?
The basic logic gates are classified into seven types: AND gate, OR gate, XOR gate, NAND gate, NOR gate, XNOR gate, and NOT gate. The truth table is used to show the logic gate function. All the logic gates have two inputs except the NOT gate, which has only one input.
What makes a truth table valid?
In general, to determine validity, go through every row of the truth-table to find a row where ALL the premises are true AND the conclusion is false. If there is no such row, the argument is valid. If there are one or more such rows, the argument is not valid.
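A minimal sketch of this row-by-row validity check (added here; the argument tested, modus ponens, is an illustration choice and not from the original answer):

```python
from itertools import product

def implies(a, b):
    return (not a) or b

def is_valid(premises, conclusion, n_vars):
    # Look for a counterexample row: all premises true, conclusion false.
    for row in product([True, False], repeat=n_vars):
        if all(p(*row) for p in premises) and not conclusion(*row):
            return False
    return True

premises = [lambda p, q: implies(p, q), lambda p, q: p]   # "p implies q", "p"
conclusion = lambda p, q: q                               # "q"
print(is_valid(premises, conclusion, 2))                  # True: modus ponens is valid
```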
How do you determine if a premise is true?
2. A sound argument must have a true conclusion. TRUE: If an argument is sound, then it is valid and has all true premises. Since it is valid, the argument is such that if all the premises are true, then the conclusion must be true.
Is the symbolic argument valid or invalid?
Symbolic Arguments A symbolic argument consists of a set of premises and a conclusion. It is called a symbolic argument because we generally write it in symbolic form to determine its validity. An argument is valid when its conclusion necessarily follows from a given set of premises.
How do you know if an argument is strong or weak?
Definition: A strong argument is a non-deductive argument that succeeds in providing probable, but not conclusive, logical support for its conclusion. A weak argument is a non-deductive argument that fails to provide probable support for its conclusion. |
A remote procedure call is a method for building distributed systems. It allows a program on one machine to call a subroutine on another machine without knowing that it is remote. RPC is not itself a transport protocol and does not carry data on its own; rather, it uses existing communication facilities in a transparent manner.
Windows processes can communicate remotely using Remote Procedure Call (RPC), either between a client and a server across a network or between processes on a single computer. In RPC, dynamic ports are used for communication between computers, but a static port (TCP port 135) is also used for communication between computers.
What Is Rpc Example?
In CERN experiments, for example, RPC is used for remote monitoring program control, remote FASTBUS access, remote error logging, remote terminal interaction with processors in VMEbus, and the submission of operating system commands from embedded microprocessors.
What Is Rpc Primarily Used For?
A remote procedure call (RPC) is a communication protocol used primarily between a client and a server.
What Is Rpc Over Tcp?
Remote Procedure Call is basically a form of inter-process communication that allows one program to directly communicate with another program on the same machine or another machine on the network. See Wikipedia for more information on remote procedure calls. TCP is used to run RPC.
Why Is Rpc Used?
Authentication is carried out by RPC in order to identify the server and client. In a network, RPC interfaces are typically used to communicate between different workstations. The RPC protocol, however, works just as well for communication between different processes on the same workstation as well.
What Is Rpc And Lpc?
A distributed environment can be accessed by using RPC, which allows client and server software to communicate. Communication between two user-mode processes on the same machine is carried out using LPC (local procedure call).
How Does Rpc Connection Work?
What RPC is and how it works. Function calls are similar to RPCs. In an RPC, the calling arguments are passed to the remote procedure, and the caller waits for the response to be returned from the remote procedure when the call is made. In this case, the client makes a procedure call that sends a request to the server.
What Is Meant By Rpc?
The term “Remote Procedure Call” refers to the call of a remote procedure. Computers use their CPU to run procedures, or instructions, which are the basis for most computer programs. An RPC, for example, may be used to access data from a network file system (NFS) on a computer without a hard drive.
How Do I Connect To An Rpc Server?
You can also use the Ctrl + Shift + Esc hotkey when right-clicking on the Windows Task Bar and selecting Task Manager.
The Services tab should be selected.
Open the window by clicking on the link near the bottom left corner.
The Remote Procedure Call service can be accessed by clicking here…
The DCOM Server Process Launcher can be found on the desktop.
Where Is Rpc Used?
Remote systems can be called using RPC, which is used to call other processes. In addition to a procedure call, a function call or a subroutine call can also be used. Client-server models are used in RPC. Client programs request services, and server programs provide them.
How Does A Rpc Work?
A RPC is a protocol that allows a client to initiate a request message to a known remote server and execute a specified procedure. A response is sent from the remote server to the client, and the application continues its operation.
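To make that request/response flow concrete, here is a minimal sketch using Python's built-in XML-RPC modules; the host, port, and the add procedure are illustration choices, not details taken from the answers above:

```python
# --- server side ---
from xmlrpc.server import SimpleXMLRPCServer

def add(a, b):
    return a + b                     # the remote procedure executed on the server

server = SimpleXMLRPCServer(("localhost", 8000), allow_none=True)
server.register_function(add, "add")
# server.serve_forever()             # uncomment to actually serve requests

# --- client side (run in another process) ---
import xmlrpc.client
proxy = xmlrpc.client.ServerProxy("http://localhost:8000/")
# The call below sends a request, waits for the response, then continues:
# print(proxy.add(2, 3))             # -> 5
```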
What Protocol Does Rpc Use?
In general, RPC applications use UDP when sending data, and only switch back to TCP when the data to be transferred does not fit into a single datagram. Client programs must be able to determine which port a program number maps to.
Is Rpc Used Today?
RPC is used today in applications such as Network File Systems (Sandberg, Goldberg, Kleiman, Walsh, & Lyon, 1985) and Remote Direct Memory Access (Kalia, Kaminsky, & Andersen, n.d.). It has also been used everywhere in developing ecosystems of microservices.
Does Rpc Use Tcp Or Udp?
In general, RPC applications use UDP when sending data, and only switch back to TCP when the data to be transferred does not fit into a single datagram.
Does Rpc Work Over Http?
A client on the Internet can connect securely to a Microsoft Exchange Server without having to log into a virtual private network (VPN) first by using RPC over HTTP (Remote Procedure Call over HTTP).
Is Rpc 135 Tcp Or Udp?
In client/server applications (such as Exchange clients, the recently exploited messenger service, and other Windows NT/2K/XP programs), Remote Procedure Call (RPC) port 135 is used. All of these ports are used by the service: 135/tcp, 135/udp, 137/udp, 138/udp, 139/tcp, 445/tcp. |
Presentation on theme: "By Sasha Fenimore, Max Leal, Will Fyfe. An organ is something that has many kinds of tissue that all function together to perform a specific task in."— Presentation transcript:
An organ is something that has many kinds of tissue that all function together to perform a specific task in an organism. Organs don’t usually function alone. Most of the time there are a group of organs called an organ system working together to carry out an important function.
As we stated earlier, organs are made up of several different tissues. One of these tissues is muscle tissue. Muscle tissue is composed of cells that can contract. ▪ The human body contains three kinds of muscle tissue. ▪ Skeletal muscle- moves bones in your trunk, limbs, and face. ▪ Smooth muscle- handles body functions that you cannot control consciously. ▪ Cardiac muscle- pumps blood through your body. Another of these tissues is nervous tissue. Nervous tissue contains cells that receive and transmit messages in the form of electrical impulses.
▪ These cells called neurons are specialized to send and receive messages from muscles, glands, and other neurons throughout the body. Epithelial tissue consists of layers of cells that line or cover all internal and external body surfaces. ▪ Each epithelial layer is formed from cells that are tightly bound together, providing a protective barrier for these surfaces. Connective tissue binds, supports, and protects structures in the body. ▪ Connective tissues are the most abundant and diverse of the four types of tissue, and include bone, cartilage, tendons, fat, blood, and lymph.
Here is a list of organ systems, their major structures, and their functions. Skeletal system: ▪ Major structures are bones. ▪ Its functions are that it provides structure and supports and protects internal organs. Muscular system: ▪ Major structures are muscles (skeletal, cardiac, and smooth). ▪ Its functions are the they provide structure; supports and moves trunk and limbs; moves substances through the body.
Integumentary system: ▪ Major structures are skin, hair, and nails. ▪ Its functions are to protect against pathogens and help regulate body temperature. Circulatory system: ▪ Major structures include the heart, blood vessels, and blood. ▪ Its function is to transport nutrients and wastes to and from all body tissues.
Respiratory system: ▪ Major structures are air passages and lungs. ▪ Its function is to carry air in and out of the lungs, where gases such as oxygen and carbon dioxide are exchanged. Immune system: ▪ Its major structures are lymph nodes and vessels and white blood cells. ▪ Its function is to provide protection against infection and disease. Digestive system: ▪ Major structures include the mouth, esophagus, stomach, liver, pancreas, small and large intestines. ▪ Its functions are to store and digest food, absorb nutrients, and eliminate waste.
Excretory system: ▪ Major structures include kidneys, ureters, bladder, urethra, skin, and lungs. ▪ Its functions are maintaining water and chemical balances and eliminating waste. Nervous system: ▪ Major structures are the brain, spinal cord, nerves, sense organs, and receptors. ▪ Its purposes are to control and coordinate body movements and senses, control consciousness and creativity, and to help monitor and maintain other body systems.
Endocrine system: ▪ Major structures include the adrenal gland, thyroid gland, pancreas, and hypothalamus. ▪ Its main purposes are to maintain homeostasis, regulate metabolism, regulate water and mineral balance, regulate growth, and regulate sexual development and reproduction. Reproductive system: ▪ Major structures are, in females, the ovaries, uterus, and mammary glands; in males, the testes. ▪ The function of this system is to produce offspring.
Many organs and organ systems in the human body are housed in separate compartments called body cavities. These cavities protect delicate internal organs from injuries and from the daily wear and tear of walking, jumping, and running. They also allow for organs to expand and contract while remaining supported.
The human body has four main cavities that house one or more internal organs. The cranial cavity encases the brain. The spinal cavity surrounds the spinal cord. The thoracic cavity contains the heart, the esophagus, the lungs, trachea, and bronchi. The abdominal cavity contains organs of the digestive, reproductive, and excretory systems.
Each organ system has organs associated with it according to the organ's primary function. It's sometimes hard to tell which organ belongs to which organ system because organs sometimes perform different functions vital to different systems. Each organ carries out its own specific function, but for an organism to survive, the organ systems must work together.
For example, nutrients from the digestive system are distributed throughout the body by the circulatory system. And the efficiency of the circulatory system depends on collecting nutrients from the digestive system and on obtaining oxygen by way of the respiratory system. |
The Command Line
This book uses the term command to refer to both the characters you type on the command line and the program that action invokes.
A command line comprises a simple command (below), a pipeline (page 166), or a list (page 170).
A Simple Command
The shell executes a program when you enter a command in response to its prompt. For example, when you give an ls command, the shell executes the utility program named ls. You can cause the shell to execute other types of programs—such as shell scripts, application programs, and programs you have written—in the same way. The line that contains the command, including any arguments, is called a simple command. The following sections discuss simple commands; see page 155 for a more technical and complete description of a simple command.
Command-line syntax dictates the ordering and separation of the elements on a command line. When you press the RETURN key after entering a command, the shell scans the command line for proper syntax. The syntax for a simple command is
- command [arg1] [arg2] ... [argn] RETURN
Whitespace (any combination of SPACEs and/or TABs) must separate elements on the command line. The command is the name of the command, arg1 through argn are arguments, and RETURN is the keystroke that terminates the command line. The brackets in the command-line syntax indicate that the arguments they enclose are optional. Not all commands require arguments: Some commands do not allow arguments; other commands allow a variable number of arguments; and still others require a specific number of arguments. Options, a special kind of argument, are usually preceded by one or two hyphens (–).
Some useful Linux command lines consist of only the name of the command without any arguments. For example, ls by itself lists the contents of the working directory. Commands that require arguments typically give a short error message, called a usage message, when you use them without arguments, with incorrect arguments, or with the wrong number of arguments.
For example, the mkdir (make directory) utility requires an argument that specifies the name of the directory you want it to create. Without this argument, it displays a usage message (operand is another term for “argument”):
mkdir: missing operand
Try 'mkdir --help' for more information.
On the command line each sequence of nonblank characters is called a token or word. An argument is a token that a command acts on (e.g., a filename, a string of characters, a number). For example, the argument to a vim or emacs command is the name of the file you want to edit.
The following command line uses cp to copy the file named temp to tempcopy:
cp temp tempcopy
Arguments are numbered starting with the command itself, which is argument zero. In this example, cp is argument zero, temp is argument one, and tempcopy is argument two. The cp utility requires at least two arguments on the command line. Argument one is the name of an existing file. In this case, argument two is the name of the file that cp is creating or overwriting. Here the arguments are not optional; both arguments must be present for the command to work. When you do not supply the right number or kind of arguments, cp displays a usage message. Try typing cp and then pressing RETURN.
An option is an argument that modifies the effects of a command. These arguments are called options because they are usually optional. You can frequently specify more than one option, modifying the command in several ways. Options are specific to and interpreted by the program that the command line calls, not the shell.
By convention, options are separate arguments that follow the name of the command and usually precede other arguments, such as filenames. Many utilities require options to be prefixed with a single hyphen. However, this requirement is specific to the utility and not the shell. GNU long (multicharacter) program options are frequently prefixed with two hyphens. For example, – –help generates a (sometimes extensive) usage message.
The first of the following commands shows the output of an ls command without any options. By default, ls lists the contents of the working directory in alphabetical order, vertically sorted in columns. Next the –r (reverse order; because this is a GNU utility, you can also specify – –reverse) option causes the ls utility to display the list of files in reverse alphabetical order, still sorted in columns. The –x option causes ls to display the list of files in horizontally sorted rows.
ls
hold mark names oldstuff temp zach
house max office personal test
ls -r
zach temp oldstuff names mark hold
test personal office max house
ls -x
hold house mark max names office
oldstuff personal temp test zach
When you need to use several options, you can usually group multiple single-letter options into one argument that starts with a single hyphen; do not put SPACEs between the options. You cannot combine options that are preceded by two hyphens in this way. Specific rules for combining options depend on the program you are running. The next example shows both the –r and –x options with the ls utility. Together these options generate a list of filenames in horizontally sorted rows in reverse alphabetical order.
ls -rx
zach test temp personal oldstuff office
names max mark house hold
Most utilities allow you to list options in any order; thus ls –xr produces the same results as ls –rx. The command ls –x –r also generates the same list.
Some utilities have options that require arguments. These arguments are not optional. For example, the gcc utility (C compiler) has a –o (output) option that must be followed by the name you want to give the executable file that gcc generates. Typically an argument to an option is separated from its option letter by a SPACE:
gcc -o prog prog.c
Some utilities sometimes require an equal sign between an option and its argument. For example, you can specify the width of output from diff in two ways:
diff -W 60 filea fileb
diff --width=60 filea fileb
Arguments that start with a hyphen
Another convention allows utilities to work with arguments, such as filenames, that start with a hyphen. If a file named –l is in the working directory, the following command is ambiguous:
ls -l
This command could be a request to display a long listing of all files in the working directory (–l option) or a request for a listing of the file named –l. The ls utility interprets it as the former. Avoid creating a file whose name begins with a hyphen. If you do create such a file, many utilities follow the convention that a –– argument (two consecutive hyphens) indicates the end of the options (and the beginning of the arguments). To disambiguate the preceding command, you can type
ls -- -l
Using two consecutive hyphens to indicate the end of the options is a convention, not a hard-and-fast rule, and a number of utilities do not follow it (e.g., find). Following this convention makes it easier for users to work with a program you write.
For utilities that do not follow this convention, there are other ways to specify a filename that begins with a hyphen. You can use a period to refer to the working directory and a slash to indicate the following filename refers to a file in the working directory:
You can also specify the absolute pathname of the file:
Processing the Command Line
As you enter a command line, the tty device driver (part of the Linux kernel) examines each character to see whether it must take immediate action. When you press CONTROL-H (to erase a character) or CONTROL-U (to kill a line), the device driver immediately adjusts the command line as required; the shell never sees the character(s) you erased or the line you killed. Often a similar adjustment occurs when you press CONTROL-W (to erase a word). When the character you entered does not require immediate action, the device driver stores the character in a buffer and waits for additional characters. When you press RETURN, the device driver passes the command line to the shell for processing.
Parsing the command line
When the shell processes a command line, it looks at the line as a whole and parses (breaks) it into its component parts (Figure 5-1). Next the shell looks for the name of the command. Usually the name of the command is the first item on the command line after the prompt (argument zero). The shell takes the first characters on the command line up to the first blank (TAB or SPACE) and then looks for a command with that name. The command name (the first token) can be specified on the command line either as a simple filename or as a pathname. For example, you can call the ls command in either of the following ways:
Figure 5-1 Processing the command line
Absolute versus relative pathnames
From the command line, there are three ways you can specify the name of a file you want the shell to execute: as an absolute pathname (starts with a slash [/]; page 189), as a relative pathname (includes a slash but does not start with a slash; page 190), or as a simple filename (no slash). When you specify the name of a file for the shell to execute in either of the first two ways (the pathname includes a slash), the shell looks in the specified directory for a file with the specified name that you have permission to execute. When you specify a simple filename (no slash), the shell searches through a list of directories for a filename that matches the specified name and for which you have execute permission. The shell does not look through all directories but only the ones specified by the variable named PATH. Refer to page 365 for more information on PATH. Also refer to the discussion of the which and whereis utilities on page 263.
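A minimal sketch of that PATH search (added for illustration; real shells also consult builtins, aliases, and a command hash before searching PATH):

```python
import os

def which(name):
    # Try each directory listed in PATH, in order, and stop at the first
    # entry that names a file you have permission to execute.
    for directory in os.environ.get("PATH", "").split(os.pathsep):
        candidate = os.path.join(directory, name)
        if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
            return candidate
    return None   # "command not found"

print(which("ls"))   # e.g. /usr/bin/ls on many systems
```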
When it cannot find the file, bash displays the following message:
bash: abc: command not found...
Some systems are set up to suggest where you might be able to find the program you tried to run. One reason the shell might not be able to find the executable file is that it is not in a directory listed in the PATH variable. Under bash the following command temporarily adds the working directory (.) to PATH:
PATH=$PATH:.
For security reasons, it is poor practice to add the working directory to PATH permanently; see the following tip and the one on page 366.
When the shell finds the file but cannot execute it (i.e., because you do not have execute permission for the file), it displays a message similar to
bash: ./def: Permission denied
See “ls –l: Displays Permissions” on page 199 for information on displaying access permissions for a file and “chmod: Changes Access Permissions” on page 201 for instructions on how to change file access permissions.
Executing a Command
If it finds an executable file with the name specified on the command line, the shell starts a new process. A process is the execution of a command by Linux (page 379). The shell makes each command-line argument, including options and the name of the command, available to the called program. While the command is executing, the shell waits for the process to finish. At this point the shell is in an inactive state named sleep. When the program finishes execution, it passes its exit status (page 1051) to the shell. The shell then returns to an active state (wakes up), issues a prompt, and waits for another command.
The shell does not process arguments
Because the shell does not process command-line arguments but merely passes them to the called program, the shell has no way of knowing whether a particular option or other argument is valid for a given program. Any error or usage messages about options or arguments come from the program itself. Some utilities ignore bad options.
Editing the Command Line
You can repeat and edit previous commands and edit the current command line. See pages 131 and 384 for more information. |
Consider the humble pocket calculator and home computer. Between the two, which gadget is more functional? The obvious answer is the home computer, as it can run programs for a multitude of computational problems — this makes it Turing complete. As for the calculator, it’s only equipped to handle a limited range of mathematical operations and is thus considered Turing incomplete.
As the above illustration demonstrates, Turing completeness refers to the extent of a system’s ability to execute any computational task. The more computational functions a system is able to perform, the more “Turing complete” it becomes.
Turing completeness has been applied to the world of blockchain to assess the capabilities of any given blockchain. Between Bitcoin and Ethereum, for example, Ethereum is Turing complete as it enables developers to write multifaceted programs in Solidity and run them on the Ethereum Virtual Machine (EVM).
This is different from Bitcoin, whose underlying Script programming language was intentionally designed as a non-Turing complete system that prevents it from executing complex, conditional smart contracts.
This article will walk you through the concept of Turing completeness and how it directly influences the range and complexity of tasks that can be executed on the blockchain.
The History of Turing Complete Systems
As a concept, Turing completeness is rooted in the early days of theoretical computer science, long before the advent of today’s sophisticated computers.
In 1936, British mathematician and logician Alan Turing published a paper introducing a conceptual framework for a machine capable of executing an arbitrary set of coded instructions. This hypothetical device later came to be known as the Turing machine.
The Turing machine was originally imagined as having an infinite tape, segmented into boxes. Each box contains a simple symbol, such as a one or a zero. The machine reads these symbols one at a time and, following a fixed set of rules, may replace the symbol it read with a new one, also either a one or a zero. The machine then updates its internal state, which represents a snapshot of its computational process at a given moment.
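A minimal sketch of such a step loop (added here; the rule table below is an arbitrary illustration, not a machine described in the post):

```python
def run(tape, rules, state="start", head=0, max_steps=100):
    tape = dict(enumerate(tape))              # sparse stand-in for an infinite tape
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, 0)
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += move                          # -1 = left, +1 = right
    return [tape[i] for i in sorted(tape)]

rules = {
    ("start", 1): (0, +1, "start"),   # read 1: write 0, move right, keep going
    ("start", 0): (0,  0, "halt"),    # read 0: halt
}
print(run([1, 1, 1, 0], rules))       # -> [0, 0, 0, 0]
```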
You can view Turing's hypothetical machine as the prototype of the programmable computer, paving the way for the future development of universal, programmable computers. In today's computing landscape, Turing complete systems are ubiquitous, with the term itself used as a benchmark to gauge a system's computational capabilities.
Ethereum: The First Turing Complete Blockchain
Ethereum was the first blockchain platform to achieve Turing completeness, expanding the realm of possibilities in blockchain technology. This innovative platform allows for the programming of smart contracts and decentralized applications (dApps).
Ethereum’s Turing completeness comprises two key elements:
- Solidity, the programming language used for building smart contracts on Ethereum, is Turing complete. It is a general-purpose language tailored specifically for the Ethereum ecosystem.
- The Ethereum Virtual Machine (EVM), responsible for executing these smart contracts, is also Turing complete.
The EVM's architecture enables it to handle an expansive variety of smart contracts, even those with functionalities yet to be envisioned. This flexibility propelled Ethereum into a revolutionary phase, bringing blockchain technology from a niche utility to a versatile platform with nearly limitless applications.
Practical Limitations to Ethereum’s Turing Completeness
While Ethereum's Turing completeness is revolutionary, it's essential to acknowledge the platform's practical constraints. Theoretically, Turing completeness posits that a system can run any computation, given infinite time and resources. However, in the real-world blockchain environment, each Ethereum transaction consumes "gas," a unit that measures the computational work involved. Should a smart contract enter an infinite loop—a situation theoretically possible in a Turing complete system—it would eventually deplete its gas reserves.
This limitation is intentional. Infinite loops could cripple a public blockchain network, which operates on finite computational resources. Consequently, each transaction on Ethereum mandates a gas limit, specifying the maximum computational effort assignable to that operation. If a transaction fails to complete within this limit, it's automatically terminated.
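A toy sketch of the gas idea (added for illustration; this is a simplification, not Ethereum's actual gas accounting): every step of a computation is charged against a limit, so even a non-terminating program is cut off:

```python
class OutOfGas(Exception):
    pass

def metered_run(steps, gas_limit, cost_per_step=1):
    gas = gas_limit
    for i, step in enumerate(steps):
        if gas < cost_per_step:
            raise OutOfGas(f"aborted at step {i}")   # gas exhausted
        gas -= cost_per_step
        step()                      # perform one unit of work
    return gas                      # gas left over

def infinite_loop():
    while True:                     # endless supply of no-op steps
        yield (lambda: None)

try:
    metered_run(infinite_loop(), gas_limit=1000)
except OutOfGas as e:
    print(e)                        # aborted at step 1000
```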
Moreover, the vast majority of Ethereum smart contracts rarely utilize the full breadth of Turing completeness, including features like recursive loops.
Drawbacks of Turing Completeness in Blockchain
The open-ended programmability of Turing complete systems, while empowering, comes with specific vulnerabilities; this is especially true for public blockchains, where code is visible to anyone. The flexibility to craft any computation creates a broad array of possible outcomes, not all of which can be foreseen. Consequently, this leaves room for disruptions like software bugs and unintended functions that could hamper the protocol's effectiveness.
In centralized systems, unforeseen issues can be quickly rectified by the entity controlling the code through prompt updates. However, the decentralized architecture of blockchain complicates this process. Any modification to the code must undergo a community-driven voting process, delaying the implementation of critical patches.
As an example, consider The DAO, a decentralized venture capital fund set up on Ethereum in 2016. While often termed a 'hack,' the incident was more an exploitation of a previously overlooked vulnerability in the contract's code. This exploit allowed a malevolent actor to siphon over $150 million from an Ethereum smart contract, necessitating the rewind of the Ethereum blockchain to recover the stolen assets. This incident eventually gave way to the Ethereum Classic fork.
This incident was essentially a reentrancy attack, where the attacker exploited a flaw in the smart contract's code to withdraw funds illegally. Since The DAO debacle, the Ethereum community has made strides in improving coding best practices to avert similar vulnerabilities. However, the fluid nature of Turing complete systems—where new code is being constantly created—ensures that the threat of emerging vulnerabilities remains a persistent concern.
DeFiChain’s MetaChain Layer Combines the Best of Both Worlds
The upcoming integration of the MetaChain Layer is a pivotal milestone for DeFiChain, positioning it as the first blockchain to combine the Bitcoin and Ethereum ecosystems.
Deliberately detached from the conventional UTXO (Unspent Transaction Output) system, the MetaChain Layer allows users the flexibility to either engage with the Turing complete partition, akin to Ethereum, or remain in the well-known UTXO realm of Bitcoin.
The choice is entirely up to you, and armed with the insights on the advantages and disadvantages of Turing completeness that you've gleaned from this blog post, you'll be well-equipped to make an informed decision.
For a deeper dive into DeFiChain's groundbreaking MetaChain Layer, don't miss these insightful blog posts: |
Addition and Subtraction Rules ESL
In this ESL addition and subtraction worksheet, students review the terms commutative property and associative property as they use the words to complete 5 sentences.
3rd - 5th Math 3 Views 23 Downloads
Integers: Addition and Subtraction
Young mathematicians construct their own understanding of integers with an inquiry-based math lesson. Using colored chips to represent positive and negative numbers, children model a series of addition and subtraction problems as they...
5th - 8th Math CCSS: Adaptable
Solving Two-Step Word Problems Involving Addition and Subtraction of Whole Number Including Money
Students determine how to write equations to solve two-step word problems using addition or subtraction. In this word problem lesson, students review number facts and problem solving steps. They apply the problem solving steps as they...
2nd - 4th Math
Fraction Equivalence, Ordering, and Operations
Need a unit to teach fractions to fourth graders? Look no further than this well-developed and thorough set of lessons that takes teachers through all steps of planning, implementing, and assessing their lessons. Divided into eight...
3rd - 5th Math CCSS: Designed
Second Grade Adding and Subtracting
Reinforce addition and subtraction in your second grade class with a series of lessons that feature manipulatives, ten frames, spinners, and number lines. The unit is divided into three sections and includes concepts such as counting...
1st - 3rd Math CCSS: Designed
Adding & Subtracting (Combining) Integers
Maintain a positive atmosphere in your math class with this fun lesson on adding and subtracting integers. After first explaining the rules for combining positive and negative numbers, this resource uses a comic strip to guide students...
5th - 7th Math CCSS: Adaptable |
4.1 The Concepts of Force and Mass A force is a push or a pull. Arrows are used to represent forces. The length of the arrow is proportional to the magnitude of the force. (Diagram: arrows representing forces of 15 N and 5 N; the 15 N arrow is drawn three times as long.)
4.1 The Concepts of Force and Mass Mass is a measure of the amount of “stuff” contained in an object.
4.2 Newton’s First Law of Motion An object continues in a state of rest or in a state of motion at a constant speed along a straight line, unless acted on by a net force. The net force is the vector sum of all of the forces acting on an object. Newton’s First Law (Law of Inertia)
4.2 Newton’s First Law of Motion The net force on an object is the vector sum of all forces acting on that object. The SI unit of force is the Newton (N). (Diagram: individual forces of 4 N and 6 N in the same direction combine into a net force of 10 N.)
4.2 Newton’s First Law of Motion (Diagram: perpendicular individual forces of 3 N and 4 N combine into a net force of 5 N.)
4.2 Newton’s First Law of Motion Inertia is the natural tendency of an object to remain at rest or in motion at a constant speed along a straight line. The mass of an object is a quantitative measure of inertia. SI Unit of Mass: kilogram (kg)
4.3 Newton’s Second Law of Motion Mathematically, the net force is written as ΣF, where the Greek letter sigma denotes the vector sum.
4.3 Newton’s Second Law of Motion Newton’s Second Law When a net external force acts on an object of mass m , the acceleration that results is directly proportional to the net force and has a magnitude that is inversely proportional to the mass. The direction of the acceleration is the same as the direction of the net force.
4.3 Newton’s Second Law of Motion SI Unit for Force: 1 N = 1 kg·m/s². This combination of units is called a newton (N).
4.3 Newton’s Second Law of Motion A free-body-diagram is a diagram that represents the object and the forces that act on it.
4.3 Newton’s Second Law of Motion The net force in this case is: 275 N + 395 N – 560 N = +110 N and is directed along the + x axis of the coordinate system.
4.3 Newton’s Second Law of Motion If the mass of the car is 1850 kg then, by Newton’s second law, the acceleration is a = ΣF/m = (+110 N)/(1850 kg) ≈ +0.059 m/s², directed along the +x axis.
4.4 The Vector Nature of Newton’s Second Law The direction of force and acceleration vectors can be taken into account by using x and y components. ΣF = ma is equivalent to the pair of component equations ΣF_x = m a_x and ΣF_y = m a_y.
4.5 Newton’s Third Law of Motion Newton’s Third Law of Motion Whenever one body exerts a force on a second body, the second body exerts an oppositely directed force of equal magnitude on the first body.
4.5 Newton’s Third Law of Motion Suppose that the magnitude of the force is 36 N. If the mass of the spacecraft is 11,000 kg and the mass of the astronaut is 92 kg, what are the accelerations?
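A worked check of the arithmetic (added here; the slide poses the question but the numeric answers are not in the text): each body experiences the same 36 N force, so a_spacecraft = 36 N / 11,000 kg ≈ 3.3 × 10^-3 m/s² and a_astronaut = 36 N / 92 kg ≈ 0.39 m/s², directed opposite to one another.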
4.9 Static and Kinetic Frictional Forces When an object is in contact with a surface there is a force acting on that object. The component of this force that is parallel to the surface is called the frictional force .
4.9 Static and Kinetic Frictional Forces When the two surfaces are not sliding across one another the friction is called static friction .
4.9 Static and Kinetic Frictional Forces The magnitude of the static frictional force can have any value from zero up to a maximum value: f_s ≤ μ_s F_N, where μ_s is called the coefficient of static friction and F_N is the magnitude of the normal force.
4.9 Static and Kinetic Frictional Forces Note that the magnitude of the frictional force does not depend on the contact area of the surfaces. What does it depend on???
4.9 Static and Kinetic Frictional Forces Static friction opposes the impending relative motion between two objects. Kinetic friction opposes the relative sliding motion that actually does occur: f_k = μ_k F_N, where μ_k is called the coefficient of kinetic friction. |
Quasars are supermassive black holes that live at the center of distant massive galaxies. They shine as the most luminous beacons in the sky across the entire electromagnetic spectrum by rapidly accreting matter into their gravitationally inescapable centers. New work from Carnegie’s Hubble Fellow Yue Shen and Luis Ho of the Kavli Institute for Astronomy and Astrophysics (KIAA) at Peking University solves a quasar mystery that astronomers have been puzzling over for 20 years. Their work, published in the September 11 issue of Nature, shows that most observed quasar phenomena can be unified with two simple quantities: one that describes how efficiently the hole is being fed, and the other that reflects the viewing orientation of the astronomer.
Quasars display a broad range of outward appearances when viewed by astronomers, reflecting the diversity in the conditions of the regions close to their centers. But despite this variety, quasars have a surprising amount of regularity in their quantifiable physical properties, which follow well-defined trends (referred to as the “main sequence” of quasars) discovered more than 20 years ago. Shen and Ho solved a two-decade puzzle in quasar research: What unifies these properties into this main sequence?
Using the largest and most-homogeneous sample to date of over 20,000 quasars from the Sloan Digital Sky Survey, combined with several novel statistical tests, Shen and Ho were able to demonstrate that one particular property related to the accretion of the hole, called the Eddington ratio, is the driving force behind the so-called main sequence. The Eddington ratio describes the efficiency of matter fueling the black hole, the competition between the gravitational force pulling matter inward and the luminosity driving radiation outward. This push and pull between gravity and luminosity has long been suspected to be the primary driver behind the so-called main sequence, and their work at long last confirms this hypothesis.
Of additional importance, they found that the orientation of an astronomer’s line-of-sight when looking down into the black hole’s inner region plays a significant role in the observation of the fast-moving gas innermost to the hole, which produces the broad emission lines in quasar spectra. This changes scientists’ understanding of the geometry of the line-emitting region closest to the black hole, a place called the broad-line region: the gas is distributed in a flattened, pancake-like configuration. Going forward, this will help astronomers improve their measurements of black hole masses for quasars.
“Our findings have profound implications for quasar research. This simple unification scheme presents a pathway to better understand how supermassive black holes accrete matter and interplay with their environments,” Shen said.
“And better black hole mass measurements will benefit a variety of applications in understanding the cosmic growth of supermassive black holes and their place in galaxy formation,” Ho added.
SOURCE: The Carnegie Institution Of Washington |
In logic, an argument (Latin argumentum: "proof, evidence, token, subject, contents") is a connected series of statements or propositions, called premises, that are intended to provide support, justification or evidence for the truth of another statement, the conclusion.
Deductive arguments
A deductive argument asserts that the truth of the conclusion is a logical consequence of the premises; an inductive argument asserts that the truth of the conclusion is supported by the premises. Deductive arguments are judged by the properties of validity and soundness. An argument is valid if and only if the conclusion is a logical consequence of the premises. A sound argument is a valid argument with true premises.
For example, the following is a valid argument (because the conclusion follows from the premises) and also sound (because additionally the premises are true):
- Premise 1. All Greeks are human.
- Premise 2. All humans are mortal.
- Conclusion. Therefore, all Greeks are mortal.
Invalid arguments involve fallacies in which the conclusion does not follow logically from the premises. A common example is the non sequitur, where the conclusion is completely disconnected from the premises.
Not all fallacious arguments are invalid. In a circular argument, the conclusion actually is a premise, so the argument is trivially valid. It is completely uninformative, however, and doesn't really prove anything.
In debate or discourse
In everyday practice an argument may be structured into talking points, issues that are supposed to help support said argument. Talking points based on distorted or false reality are often used in propaganda venues and political debates in tandem with loaded language to sway the course of a debate towards a predetermined conclusion. Such tactics turn an argument into emotional manipulation (having an argument) as opposed to logical exercise (making an argument). |
Units of Measure SI units: Système International d’Unités, the standard units of measurement to be understood by all scientists. Base Units: a defined unit of measurement that is based on an object or event in the physical world. There are 7 base units; some familiar quantities are time, length, mass, and temperature.
Time second (s) Many chemical reactions take place in less than a second, so scientists often add prefixes, based on multiples of ten, to the base units. ex. Millisecond Length meter (m) A meter is the distance that light travels through a vacuum in 1/299 792 458 of a second. What is a vacuum? Close in length to a yard. Prefixes also apply…ex. millimeter
Mass mass is a measurement of matter kilogram (kg) about 2.2 pounds Masses measured in most laboratories are much smaller than a kilogram, so scientists use grams (g) or milligrams (mg). How many grams are in a kilogram? – 1000 How many milligrams are in a gram? – 1000
Derived Units Not all quantities are measured in base units A unit that is defined by a combination of base units is called a derived unit. Volume and Density are measured in derived units.
Volume The space occupied by an object. Unit = cm³ = mL. Liters are used to measure the amount of liquid in a container (about the same volume as a quart). Prefixes also apply…ex. milliliter
Density The ratio that compares the mass of an object to its volume is called density. Units are g/cm³. You can calculate density by the following equation: Density = mass/volume. Ex: What is the density of a sample of aluminum that has a mass of 13.5 g and a volume of 5.0 cm³? Density = 13.5 g / 5.0 cm³ = 2.7 g/cm³
Temperature A measurement of how hot or cold an object is relative to other objects. The kelvin (K) scale: water freezes at 273 K, water boils at 373 K. We also use the Celsius (°C) scale: water freezes at 0°C, water boils at 100°C
To Convert Celsius to Kelvin… Add 273!! ex: −39°C + 273 = 234 K To Convert Kelvin to Celsius… Subtract 273!! ex: 234 K − 273 = −39°C
Conversion factor: – A numerical factor used to multiply or divide a quantity when converting from one system of units to another. Conversion factors are always equal to 1 Dimensional analysis: – A fancy way of saying “converting units” by using conversion factors
Dimensional analysis often uses conversion factors Suppose you want to know how many meters are in 48 km. You have to choose a conversion factor that relates kilometers to meters. You know that for every 1 kilometer there is 1000 meters. What will your conversion factor be? 1000m/1km Now that you know your conversion factor, you can multiply it by your known…BUT you want to make sure you set it up so that kilometers cancels out. How would you do this?
48 km × (1000 m / 1 km) = 48,000 m. TIP: Put the units you already have on the bottom of the conversion factor and the units you want on top.
Numbers that are extremely large can be difficult to deal with…sooo Scientists convert these numbers into scientific notation Scientific notation expresses numbers as a multiple of two factors: 1.A number between 1 and 10 (only 1 digit to the left of the decimal!) 2.Ten raised to a power
For example: A proton’s mass = 0.0000000000000000000000000017262 kg If you put it in scientific notation, the mass of a proton is expressed as 1.7262 × 10^-27 kg Remember: When numbers larger than 1 are expressed in scientific notation, the power of ten is positive When numbers smaller than 1 are expressed in scientific notation, the power of ten is negative
Try these: Convert 1,392,000 to scientific notation. = 1.392 × 10^6 Convert 0.000,000,028 to scientific notation. = 2.8 × 10^-8
Adding and Subtracting using Scientific Notation Make sure the exponents are the same!! 7.35 × 10^2 + 2.43 × 10^2 = 9.78 × 10^2 If the exponents are not the same, you have to make them the same!! Tip: if you increase the exponent, you decrease the decimal; if you decrease the exponent, you increase the decimal. Example: Tokyo pop: 2.70 × 10^7; Mexico City pop: 15.6 × 10^6 = 1.56 × 10^7; Sao Paolo pop: 0.165 × 10^8 = 1.65 × 10^7. NOW you can add them together and carry thru the exponent. Total = 5.91 × 10^7
Multiplying and Dividing using Scientific Notation Multiplication: – Multiply decimals and ADD exponents. Ex: (1.2 × 10^6) × (3.0 × 10^4) = 3.6 × 10^10 (6 + 4 = 10) *Ex: (1.2 × 10^6) × (3.0 × 10^-4) = 3.6 × 10^2 (6 + (−4) = 2) Division: – Divide decimals and SUBTRACT exponents. Ex: (5.0 × 10^8) ÷ (2.5 × 10^4) = 2.0 × 10^4 (8 − 4 = 4) *Ex: (5.0 × 10^8) ÷ (2.5 × 10^-4) = 2.0 × 10^12 (8 − (−4) = 12)
Accuracy and Precision Accuracy: How close measurements are to the actual value Precision: How close measurements are to each other
Percent Error An error is the difference between an experimental value and an accepted value. Percent error = |accepted value − experimental value| / accepted value × 100%. A tolerance is a very narrow range of error
Example: The accepted density for copper is 8.96g/mL. Calculate the percent error for each of these measurements. A.8.86g/mL B.8.92g/mL C.9.00g/mL D.8.98g/mL A.[(8.96 – 8.86)/8.96] x 100% = 1.12% B.[(8.96 – 8.92)/8.96] x 100% = 0.45% C.[(9.00 – 8.96)/8.96] x 100% = 0.45% D.[(8.98-8.96)/8.96] x 100% = 0.22%
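A minimal sketch (added, not from the slides) of the same percent-error formula applied to the copper-density measurements above, with the accepted value 8.96 g/mL:

```python
def percent_error(accepted, experimental):
    return abs(accepted - experimental) / accepted * 100.0

for measured in (8.86, 8.92, 9.00, 8.98):
    print(f"{measured} g/mL -> {percent_error(8.96, measured):.2f}%")
# 8.86 -> 1.12%, 8.92 -> 0.45%, 9.00 -> 0.45%, 8.98 -> 0.22%
```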
Significant Figures Significant figures include all known digits plus one estimated digit. Rules: 1. Non-zero numbers are always significant 2. Zeros between non-zero numbers are always significant (“trapped zeros”) 3. All final zeros to the right of the decimal place are significant (“trailing zeros”) (but trailing zeros don’t count if there is no decimal in the number) 4. Zeros that act as place holders are not significant (convert to SN to remove placeholder zeros) (“leading zeros”) 5. Counting numbers and defined constants have an infinite number of sig figs
Rounding numbers An answer should have no more significant figures than the data with the fewest significant figures. Example: Density of a given object = m/V = 22.44 g / 14.2 cm³ = 1.5802817 g/cm³. How should the answer be rounded? 1.58 g/cm³
Addition & Subtraction How do you add or subtract numbers that contain decimal points? The easiest way (which you learned in third grade) is to line up the decimal points, then perform the math. Then round according to the previous rule, rounding to the fewest digits after the decimal! (ex: 5.25 + 10.3 = 15.55 → 15.6)
Multiplication & Division When you multiply or divide, your answer must have the same number of significant figures as the measurement with the fewest significant figures (unlike adding or subtracting, which goes by places after the decimal). Ex: 38736 km ÷ 4784 km = 8.096989967, which rounds to 8.097 (four sig figs, matching 4784 km).
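Rounding a result to a chosen number of significant figures can also be scripted. A hedged Python sketch (the helper name is mine, and deciding how many sig figs to keep still follows the rules above):

```python
from math import floor, log10

def round_sig(value: float, sig_figs: int) -> float:
    """Round a value to the given number of significant figures."""
    if value == 0:
        return 0.0
    return round(value, -int(floor(log10(abs(value)))) + (sig_figs - 1))

print(round_sig(22.44 / 14.2, 3))   # density example  -> 1.58
print(round_sig(38736 / 4784, 4))   # division example -> 8.097
```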
Representing Data A goal of many experiments is to discover whether a pattern exists in a certain situation…when data are listed in a table the patterns may not be obvious. Soooo, scientists often use graphs, which are visual displays of data – X-axis: independent variable – Y-axis: dependent variable |
B1 - - - DEFINITIONS - - - 10/03/2004
When three planes intersect they form a trihedral angle that consists
of three edge angles and three face angles.
An edge angle is the angle between two of the three planes forming the
trihedral angle.
A face angle is the angle between two of the edges.
Though there are many applications where the relationship of three
planes may have to be considered, it is hard to visualize three
planes or three lines in three dimensional space, but if we consider
a sphere surrounding the trihedral angle and centered on the
intersection point (called the vertex of the trihedral angle), then
the three planes will form three great circles on the surface of
the sphere forming a spherical triangle.
PRINCIPLES FROM GEOMETRY
- The section of the surface of a sphere made by a plane is a
great circle if the plane passes through the center of the
sphere and a small circle if the plane does not pass through
the center of the sphere.
- A spherical triangle is the figure on the surface of a sphere
bounded by three arcs of great circles.
- Each side of a spherical triangle is less than the sum of the
other two sides.
- The sum of the three sides of a spherical triangle is less than
360 degrees.
- The sum of the angles of a spherical triangle is greater than
180 and less than 540 degrees (see the numerical check after this
list).
- If two sides of a spherical triangle are equal, the angles opposite
them are equal; and conversely.
- If two sides of a spherical triangle are unequal, the angle opposite
these sides are unequal, and the greater side lies opposite the
greater angle; and conversely.
- The shortest distance on the surface of a sphere between two
points is an arc of the great circle joining them.
- The angle between two intersecting great circle arcs is
measured by the angle between the planes that created the
arcs (edge angle).
- The side of a spherical triangle has the same numeric
value as its central angle at the vertex of the trihedral
angle (face angle).
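As a quick numerical illustration of the principles above (this sketch is mine, not part of the original notes; the three points on the unit sphere are arbitrary choices), the sides (face angles) can be computed from dot products and the angles (edge angles) from the spherical law of cosines; the angle sum then lands between 180 and 540 degrees as stated.

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def arc(u, v):
    """Central angle (radians) between two unit vectors -- a side / face angle."""
    return math.acos(max(-1.0, min(1.0, sum(a * b for a, b in zip(u, v)))))

def vertex_angle(a, b, c):
    """Edge angle opposite side a, from the spherical law of cosines."""
    return math.acos((math.cos(a) - math.cos(b) * math.cos(c)) /
                     (math.sin(b) * math.sin(c)))

# Three arbitrary points on a unit sphere centered on the trihedral-angle vertex.
A = normalize((1.0, 0.2, 0.1))
B = normalize((0.1, 1.0, 0.3))
C = normalize((0.2, 0.1, 1.0))

a, b, c = arc(B, C), arc(A, C), arc(A, B)                 # sides (face angles)
angles = [vertex_angle(a, b, c), vertex_angle(b, c, a), vertex_angle(c, a, b)]
print(f"angle sum = {math.degrees(sum(angles)):.1f} degrees")  # between 180 and 540
```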
Chapter 28 The Evolution and Distribution of Galaxies
By the end of this section, you will be able to:
- Explain how galaxies grow by merging with other galaxies and by consuming smaller galaxies (for lunch)
- Describe the effects that supermassive black holes in the centers of most galaxies have on the fate of their host galaxies
One of the conclusions astronomers have reached from studying distant galaxies is that collisions and mergers of whole galaxies play a crucial role in determining how galaxies acquired the shapes and sizes we see today. Only a few of the nearby galaxies are currently involved in collisions, but detailed studies of those tell us what to look for when we seek evidence of mergers in very distant and very faint galaxies. These in turn give us important clues about the different evolutionary paths galaxies have taken over cosmic time. Let’s examine in more detail what happens when two galaxies collide.
Mergers and Cannibalism
Figure 28.1 shows a dynamic view of two galaxies that are colliding. The stars themselves in this pair of galaxies will not be affected much by this cataclysmic event. (See the Astronomy Basics feature box Why Galaxies Collide but Stars Rarely Do.) Since there is a lot of space between the stars, a direct collision between two stars is very unlikely. However, the orbits of many of the stars will be changed as the two galaxies move through each other, and the change in orbits can totally alter the appearance of the interacting galaxies. A gallery of interesting colliding galaxies is shown in Figure 28.7. Great rings, huge tendrils of stars and gas, and other complex structures can form in such cosmic collisions. Indeed, these strange shapes are the signposts that astronomers use to identify colliding galaxies.
Throughout this book we have emphasized the large distances between objects in space. You might therefore have been surprised to hear about collisions between galaxies. Yet (except at the very cores of galaxies) we have not worried at all about stars inside a galaxy colliding with each other. Let’s see why there is a difference.
The reason is that stars are pitifully small compared to the distances between them. Let’s use our Sun as an example. The Sun is about 1.4 million kilometers wide, but is separated from the closest other star by about 4 light-years, or about 38 trillion kilometers. In other words, the Sun is 27 million of its own diameters from its nearest neighbor. If the Sun were a grapefruit in New York City, the nearest star would be another grapefruit in San Francisco. This is typical of stars that are not in the nuclear bulge of a galaxy or inside star clusters. Let’s contrast this with the separation of galaxies.
The visible disk of the Milky Way is about 100,000 light-years in diameter. We have three satellite galaxies that are just one or two Milky Way diameters away from us (and will probably someday collide with us). The closest major spiral is the Andromeda Galaxy (M31), about 2.4 million light-years away. If the Milky Way were a pancake at one end of a big breakfast table, M31 would be another pancake at the other end of the same table. Our nearest large galaxy neighbor is only 24 of our Galaxy’s diameters from us, and it will begin to crash into the Milky Way in about 3 billion years.
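A quick back-of-the-envelope check of these two ratios (my own sketch, using only the figures quoted in the last two paragraphs) shows why the next paragraph can say that galaxy collisions are far more likely than stellar collisions:

```python
# Separation-to-size ratios, using the numbers quoted above.
sun_diameter_km = 1.4e6
nearest_star_km = 38e12                           # roughly 4 light-years
star_ratio = nearest_star_km / sun_diameter_km    # ~27 million solar diameters

milky_way_diameter_ly = 100_000
andromeda_distance_ly = 2.4e6
galaxy_ratio = andromeda_distance_ly / milky_way_diameter_ly  # ~24 Galaxy diameters

print(f"stars:    {star_ratio:.1e} diameters apart")
print(f"galaxies: {galaxy_ratio:.0f} diameters apart")
```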
Galaxies in rich clusters are even closer together than those in our neighborhood (see The Distribution of Galaxies in Space). Thus, the chances of galaxies colliding are far greater than the chances of stars in the disk of a galaxy colliding. And we should note that the difference between the separation of galaxies and stars also means that when galaxies do collide, their stars almost always pass right by each other like smoke passing through a screen door.
The details of galaxy collisions are complex, and the process can take hundreds of millions of years. Thus, collisions are best simulated on a computer (Figure 28.8), where astronomers can calculate the slow interactions of stars, and clouds of gas and dust, via gravity. These calculations show that if the collision is slow, the colliding galaxies may coalesce to form a single galaxy.
When two galaxies of equal size are involved in a collision, we call such an interaction a merger (the term applied in the business world to two equal companies that join forces). But small galaxies can also be swallowed by larger ones—a process astronomers have called, with some relish, galactic cannibalism (Figure 28.9).
The very large elliptical galaxies we discussed in Galaxies probably form by cannibalizing a variety of smaller galaxies in their clusters. These “monster” galaxies frequently possess more than one nucleus and have probably acquired their unusually high luminosities by swallowing nearby galaxies. The multiple nuclei are the remnants of their victims (Figure 28.9). Many of the large, peculiar galaxies that we observe also owe their chaotic shapes to past interactions. Slow collisions and mergers can even transform two or more spiral galaxies into a single elliptical galaxy.
A change in shape is not all that happens when galaxies collide. If either galaxy contains interstellar matter, the collision can compress the gas and trigger an increase in the rate at which stars are being formed—by as much as a factor of 100. Astronomers call this abrupt increase in the number of stars being formed a starburst, and the galaxies in which the increase occurs are termed starburst galaxies (Figure 28.10). In some interacting galaxies, star formation is so intense that all the available gas is exhausted in only a few million years; the burst of star formation is clearly only a temporary phenomenon. While a starburst is going on, however, the galaxy where it is taking place becomes much brighter and much easier to detect at large distances.
When astronomers finally had the tools to examine a significant number of galaxies that emitted their light 11 to 12 billion years ago, they found that these very young galaxies often resemble nearby starburst galaxies that are involved in mergers: they also have multiple nuclei and peculiar shapes, they are usually clumpier than normal galaxies today, with multiple intense knots and lumps of bright starlight, and they have higher rates of star formation than isolated galaxies. They also contain lots of blue, young, type O and B stars, as do nearby merging galaxies.
Galaxy mergers in today’s universe are rare. Only about five percent of nearby galaxies are currently involved in interactions. Interactions were much more common billions of years ago (Figure 28.11) and helped build up the “more mature” galaxies we see in our time. Clearly, interactions of galaxies have played a crucial role in their evolution.
Active Galactic Nuclei and Galaxy Evolution
While galaxy mergers are huge, splashy events that completely reshape entire galaxies on scales of hundreds of thousands of light-years and can spark massive bursts of star formation, accreting black holes inside galaxies can also disturb and alter the evolution of their host galaxies. You learned in Active Galaxies, Quasars, and Supermassive Black Holes about a family of objects known as active galactic nuclei (AGN), all of them powered by supermassive black holes. If the black hole is surrounded by enough gas, some of the gas can fall into the black hole, getting swept up on the way into an accretion disk, a compact, swirling maelstrom perhaps only 100 AU across (the size of our solar system).
Within the disk the gas heats up until it shines brilliantly even in X-rays, often outshining the rest of the host galaxy with its billions of stars. Supermassive black holes and their accretion disks can be violent and powerful places, with some material getting sucked into the black hole but even more getting shot out along huge jets perpendicular to the disk. These powerful jets can extend far outside the starry edge of the galaxy.
AGN were much more common in the early universe, in part because frequent mergers provided a fresh gas supply for the black hole accretion disks. Examples of AGN in the nearby universe today include the one in galaxy M87 (see Figure 28.7), which sports a jet of material shooting out from its nucleus at speeds close to the speed of light, and the one in the bright galaxy NGC 5128, also known as Centaurus A (see Figure 28.12).
Many highly accelerated particles move with the jets in such galaxies. Along the way, the particles in the jets can plow into gas clouds in the interstellar medium, breaking them apart and scattering them. Since denser clouds of gas and dust are required for material to clump together to make stars, the disruption of the clouds can halt star formation in the host galaxy or cut it off before it even begins.
In this way, quasars and other kinds of AGN can play a crucial role in the evolution of their galaxies. For example, there is growing evidence that the merger of two gas-rich galaxies not only produces a huge burst of star formation, but also triggers AGN activity in the core of the new galaxy. That activity, in turn, could then slow down or shut off the burst of star formation—which could have significant implications for the apparent shape, brightness, chemical content, and stellar components of the entire galaxy. Astronomers refer to that process as AGN feedback, and it is apparently an important factor in the evolution of most galaxies.
Key Concepts and Summary
When galaxies of comparable size collide and coalesce we call it a merger, but when a small galaxy is swallowed by a much larger one, we use the term galactic cannibalism. Collisions play an important role in the evolution of galaxies. If the collision involves at least one galaxy rich in interstellar matter, the resulting compression of the gas will result in a burst of star formation, leading to a starburst galaxy. Mergers were much more common when the universe was young, and many of the most distant galaxies that we see are starburst galaxies that are involved in collisions. Active galactic nuclei powered by supermassive black holes in the centers of most galaxies can have major effects on the host galaxy, including shutting off star formation.
- galactic cannibalism
- a process by which a larger galaxy strips material from or completely swallows a smaller one
- merger
- a collision between galaxies (of roughly comparable size) that combine to form a single new structure
- starburst galaxy
- a galaxy or merger of multiple galaxies that turns gas into stars much faster than usual |
When carrying out any kind of data collection or analysis, it’s essential to understand the nature of the data you’re dealing with.
Within your dataset, you’ll have different variables—and these variables can be recorded to varying degrees of precision. This is what’s known as the level of measurement.
There are four main levels of measurement: nominal, ordinal, interval, and ratio. In this guide, we’ll explain exactly what is meant by levels (also known as types or scales) of measurement within the realm of data and statistics—and why it matters. We’ll then introduce you to the four types of measurements, providing a few examples of each.
If you’d like to get your hands on some datasets already, try this free data analytics short course to get started.
Here’s what we’ll cover:
- What are levels of measurement in data and statistics?
- What are the four levels of measurement?
- Levels of measurement: FAQ
- Key takeaways
Let’s get started!
1. What are levels of measurement in data and statistics?
Level of measurement refers to how precisely a variable has been measured. When gathering data, you collect different types of information, depending on what you hope to investigate or find out.
For example, if you wanted to analyze the spending habits of people living in Tokyo, you might send out a survey to 500 people asking questions about their income, their exact location, their age, and how much they spend on various products and services. These are your variables: data that can be measured and recorded, and whose values will differ from one individual to the next.
When we talk about levels of measurement, we’re talking about how each variable is measured, and the mathematical nature of the values assigned to each variable. This, in turn, determines what type of analysis can be carried out.
Let’s imagine you want to gather data relating to people’s income. There are various levels of measurement you could use for this variable. You could ask people to provide an exact figure, or you could ask them to select their answer from a variety of ranges—for example:
- (a) 10-19k
- (b) 20-29k
- (c) 30-39k
Or, you could ask them to simply categorize their income as “high,” “medium,” or “low.”
Can you see how these levels vary in their precision? If you ask participants for an exact figure, you can calculate just how much the incomes vary across your entire dataset (for example).
However, if you only have classifications of “high,” “medium,” and “low,” you can’t see exactly how much one participant earns compared to another. You also have no concept of what salary counts as “high” and what counts as “low”—these classifications have no numerical value. As a result, the latter is a less precise level of measurement.
Why are levels of measurement important?
Level of measurement is important, as it determines the type of statistical analysis you can carry out. As a result, it affects both the nature and the depth of insights you’re able to glean from your data.
Certain statistical tests can only be performed where more precise levels of measurement have been used, so it’s essential to plan in advance how you’ll gather and measure your data.
3. What are the four levels of measurement? Nominal, ordinal, interval, and ratio scales explained
There are four types of measurement (or scales) to be aware of: nominal, ordinal, interval, and ratio.
Each scale builds on the previous, meaning that each scale not only “ticks the same boxes” as the previous scale, but also adds another level of precision.
Let’s go through each in turn to give you an idea of what they are, and how they interact.
The nominal scale simply categorizes variables according to qualitative labels (or names). These labels and groupings don’t have any order or hierarchy to them, nor do they convey any numerical value.
For example, the variable “hair color” could be measured on a nominal scale according to the following categories: blonde hair, brown hair, gray hair, and so on.
You can learn more in this complete guide to nominal data.
The ordinal scale also categorizes variables into labeled groups, and these categories have an order or hierarchy to them.
For example, you could measure the variable “income” on an ordinal scale as follows:
- low income
- medium income
- high income.
Another example could be level of education, classified as follows:
- high school
- bachelor’s degree
- master’s degree
These are still qualitative labels (as with the nominal scale), but you can see that they follow a hierarchical order.
Learn more in our guide to ordinal data.
The interval scale is a numerical scale which labels and orders variables, with a known, evenly spaced interval between each of the values.
A commonly-cited example of interval data is temperature in Fahrenheit, where the difference between 10 and 20 degrees Fahrenheit is exactly the same as the difference between, say, 50 and 60 degrees Fahrenheit.
Find out more about interval data in our full guide.
The ratio scale is exactly the same as the interval scale, with one key difference: The ratio scale has what’s known as a “true zero.”
A good example of ratio data is weight in kilograms. If something weighs zero kilograms, it truly weighs nothing—compared to temperature (interval data), where a value of zero degrees doesn’t mean there is “no temperature,” it simply means it’s extremely cold!
You’ll find we’ve made a full guide to ratio data if you want to dive deeper.
4. Another way of thinking about the levels of measurement
Another way to think about levels of measurement is in terms of the relationship between the values assigned to a given variable.
With the nominal scale, there’s no relationship between the values; there’s no relationship between the categories “blonde hair” and “black hair” when looking at hair color, for example. The ratio scale, on the other hand, is very telling about the relationship between variable values.
For example, if your variable is “number of clients” (which constitutes ratio data), you know that a value of four clients is double the value of two clients. As such, you can get a much more accurate and precise understanding of the relationship between the values in mathematical terms.
In that sense, there’s an implied hierarchy to the four levels of measurement. Analysis of nominal and ordinal data tends to be less sensitive, while interval and ratio scales lend themselves to more complex statistical analysis. With that in mind, it’s generally preferable to work with interval and ratio data.
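To make that hierarchy concrete, here is a small, hedged Python sketch (the sample values and variable names are mine, not from the article) showing which summary statistic is commonly treated as meaningful at each level:

```python
from statistics import mean, median, mode

# Nominal: unordered labels -- only the mode is meaningful.
hair_color = ["blonde", "brown", "brown", "gray"]
print("mode:", mode(hair_color))

# Ordinal: ordered labels -- encode the order as ranks, then the median makes sense.
income_rank = {"low": 1, "medium": 2, "high": 3}
incomes = ["low", "medium", "medium", "high"]
print("median rank:", median(income_rank[x] for x in incomes))

# Interval: evenly spaced values with no true zero -- means and differences make sense.
temps_f = [10, 20, 50, 60]
print("mean temperature (F):", mean(temps_f))

# Ratio: true zero -- ratios are meaningful too.
clients = [2, 4, 4, 8]
print("mean clients:", mean(clients), "| 4 clients is", 4 / 2, "times 2 clients")
```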
5. Levels of measurement: FAQ
What are the 4 levels of measurement?
The 4 levels of measurement, also known as measurement scales, are nominal, ordinal, interval, and ratio. These levels are used to categorize and describe data based on their characteristics and properties.
What is level of measurement in statistics?
Level of measurement, also known as scale of measurement, refers to the process of categorizing data based on the characteristics and properties of the data. It’s important in statistics because it helps determine the appropriate statistical methods and tests that can be used to analyze the data.
Is age an interval or ratio?
Age is typically considered to be measured on a ratio scale. This is because age has a true zero point, which means that a value of zero represents the absence of age. In addition, it’s possible to perform mathematical operations such as addition, subtraction, multiplication, and division on age values.
Is gender nominal or ordinal?
Gender is typically considered to be measured on a nominal scale. This is because gender is a categorical variable that has no inherent order or ranking. It’s not possible to perform mathematical operations on gender values.
6. Key takeaways
So there you have it: the four levels of data measurement and how they’re analyzed. In this article, we’ve learned the difference between the various levels of measurement, and introduced some of the different descriptive statistics and analyses that can be applied to each.
If you’re looking to pursue a career in data analytics, or even just dabbling in statistics, this fundamental knowledge of the types of measurement will stand you in good stead.
If you enjoyed learning about the different levels of measurement, why not get a hands-on introduction to data analytics with this free, 5-day short course?
At the same time, keep building on your knowledge with these guides: |
Inflation is a term that defines the rise in the general price level of products and services within an economy over a period of time. Although inflation affects every individual within an economy, it has especially harsh consequences for the poor. Unfortunately, the poor are the individuals most affected by inflation, even though they have little or no control over it. This essay will highlight how inflation is a rich problem paid for by the poor.

Inflation is a tricky phenomenon that affects people from all walks of life. However, some groups are more affected than others, and this is often the case for the poor. This is because inflation has a way of enriching the already rich while leaving the poor in a state of perpetual struggle. According to recent figures, the cost of living has risen exponentially over the past decade, while wages have remained stagnant for most low-income earners. This essay delves into the subject of inflation, discussing its causes and how it affects the lives of the poor.

To start with, inflation occurs for various reasons. One of them is an increase in demand for goods and services. As such, a society with an increasing population will undoubtedly experience inflation, as more goods will be needed to satisfy the surging demand. Inflation also occurs when there is a disruption in supply. Additionally, inflation happens when there is an increase in the money supply, leading to a surge in purchasing power that causes suppliers to hike their prices as more money chases fewer goods. The outcome is an increase in the price of goods and services, and those earning lower wages are quickly priced out of the market.

Firstly, inflation has a clear impact on the poor because they spend a significant portion of their income on basic commodities such as food, housing, transportation and the like. For instance, when the price of gasoline increases, it directly affects the poor, who cannot reduce their demand for transportation. This, in turn, increases their overall expenses. In contrast, the rich can adjust to these changes by reducing their demand for such commodities, since they can afford to look for alternatives or use more efficient modes of transport. Therefore, inflation disproportionately affects the poor by reducing their purchasing power.

Secondly, inflation affects the poor more severely because they do not have the right tools to fight it. Governments and central banks worldwide use monetary policy to curb inflation. However, these policies deepen social inequalities, as they principally cater to the rich. For instance, raising interest rates or reducing government spending is a common way to tackle inflation but can also be disastrous for the poor: it limits access to credit for the middle- and low-income segments of the population, thus compounding their financial woes.

When prices rise due to inflation, low-income earners are the most affected. This is because, unlike their rich counterparts, they do not have a safety cushion to protect them from the rising cost of goods. For instance, a loaf of bread that costs $1.00 today might rise to $2.00 in the wake of inflation. While a $1.00 increment may be minimal for the rich, it can prove disastrous for the economically disadvantaged, who may have to sacrifice other essential needs like healthcare to purchase such an item. Additionally, inflation causes the prices of oil and other commodities used to produce goods to rise, ultimately resulting in a rise in the price of finished products.
Thirdly, inflation has a ripple effect on the poor because they bear the brunt of limited access to relevant information. The rich utilize the latest technology and analytical tools to detect inflationary pressures early on, thereby minimizing their losses. In contrast, the poor work in a market environment with limited access to relevant information. This disadvantage makes the poor more vulnerable to the consequences of inflation: they face worsening financial conditions when prices suddenly increase because they have been unable to anticipate and plan for the event.

Studies suggest that inflation affects the poor in different ways. For instance, low-income households spend a higher percentage of their income on food, utilities, and transportation, all of which are products whose prices rise in the wake of inflation. As a result, inflation erodes their purchasing power, making it more challenging to make ends meet. According to the consumer price index, which tracks the cost of living, inflation has risen by 5.4% over the past few months, making it increasingly hard for the low-income population to make meaningful financial progress.
Lastly, it is essential to highlight that inflation not only affects the poor’s standard of living but is also detrimental to their health and development. Uncontrolled inflation leads individuals to opt for low-cost alternatives, compromising the quality of the products and services they consume. This, in turn, can cause health hazards: poor people are more vulnerable to diseases because of the low quality of the products they consume. Furthermore, inflation also negatively impacts the country’s economic growth, leading to fewer job-creation and development opportunities.

In conclusion, inflation is a rich problem paid for by the poor. The problem affects the poor disproportionately and leads to multiple adverse effects, including reduced purchasing power, lack of access to the necessary tools, lack of information, adverse health effects, and weaker economic growth. Therefore, policymakers must devise measures that cater to the poor and minimize the inflationary consequences for them. These measures can include policies aimed at reducing inequality and improving access to information for the poor.

Inflation is a phenomenon that affects everyone, but it has a more significant impact on the economically disadvantaged. The rising cost of goods and services, coupled with stagnant wages, has left the poor in a state of perpetual struggle. However, policymakers can help by introducing measures to alleviate their suffering. Steps like offering subsidies on essential goods, raising the minimum wage, or providing direct cash transfers can provide a buffer for vulnerable populations, protecting them from the worst effects of inflation. As it stands, inflation remains a significant problem for most low-income earners worldwide.
(While Dr Firdous Ahmad Malik is a Research Fellow at National Institute of Public Finance and Policy–NIPFP, Aamir Ahmad Teeli Research Fellow, Central university of Tamil Nadu. The views, opinions, facts, assumptions, presumptions and conclusions expressed in this article are those of the authors and aren’t necessarily in accord with the views of “Kashmir Horizon”.)
A Brief History of Cache
Imagine you are in the middle of a Math exam where, to your surprise, you come across a really familiar question, which you think you have seen in the practice test. “Can’t be that easy,” you think; checking the question data and numbers, you find out that it’s exactly the question you have practiced, not even a word different. You go ahead and put the answer you memorize into the answer sheet without having to do the problem all over again.
This scenario illustrates the mechanism by which caching works: storing known data somewhere for faster access.
I. Historical Context
CPUs have always been faster than memories. The imbalance has persisted even as both CPUs and memories have improved over time. As it becomes possible to place more and more circuits on a chip, CPUs get even faster, since engineers use these new capabilities for pipelining and superscalar operation. Memory designers, on the other hand, tend to use new technology to increase the capacity of chips rather than their speed, making the gap larger and larger. What effect does this hardware phenomenon have in practice?
The problem is, when a CPU issues a command that requests memory access, it does not get the memory unit it wants right away. Instead, it has to wait for some number of CPU cycles for the memory to serve the request. The slower the memory, the longer the CPU has to wait.
II. Possible Solutions and their Drawbacks
There are two possible techniques for solving the problem of CPU — Memory disparity:
- Issue the memory read when the CPU requests it, but keep executing and stall the CPU only if it tries to use the requested memory words before they have arrived
- Design computer systems that do not stall, but instead require the compiler not to generate code that uses memory words which haven't arrived yet
Obviously, neither of these approaches actually tackles the problem effectively, as each carries some kind of penalty. The longer the memory access time, the heavier the penalty.
Modern processors place overwhelming demands on the memory system, in terms of both latency (the delay in supplying an operand) and bandwidth (the amount of data supplied per unit of time). However, these two aspects are rarely seen in harmony. Many approaches increase bandwidth only at the cost of increasing latency, and vice versa.
Storing data in main memory makes fetching inevitably slow. Even if engineers knew how to make memories as fast as CPUs, those memories would have to be located on the CPU chip to run at full speed.
However, this raises a question of economics. Putting a large memory on the CPU chip makes the chip bigger and more expensive, and thus less marketable, not to mention that there are limits to how big a CPU can be. A large on-chip memory makes the CPU expensive, whereas having no on-chip memory at all makes it underperform. How about combining these aspects into a solution that takes the best of both worlds?
The most effective choice is to have a small amount of fast (on-CPU) memory AND a large amount of slow (off-CPU) memory to get the speed of the fast memory and the capacity of the large one at an affordable and market-friendly price. In this case, the small, fast memory is called cache (pronounced “cash”).
IV. Cache mechanism
The whole Caching idea is based on the locality principle, which observes that memory references made in a short period of time tend to only use a small proportion of the total memory . Therefore, we can store this small chunk of memory somewhere outside the main memory for faster access and query.
The mechanism and process of caching is simple: the most frequently accessed memory words (a word is a group of bits making up a unit of data) are kept in the cache. When the CPU needs a certain word, it first looks in the cache. If the queried word is available in the cache, the CPU grabs it and saves itself a trip to the main memory. The situation where the CPU finds what it needs in the cache is called a cache hit. The hit ratio is the fraction of references that can be served with a cache hit.
Otherwise, when the CPU can't find what it needs in the cache, it goes to the main memory, gets the word, and also inserts it into the cache for possible future use. This case is called a cache miss.
If we denote the cache access time as c, the main memory access time as m, and the hit ratio as h, the mean access time t can be calculated by:
t = c + (1 - h)m
1 — h being the miss ratio.
It’s easy to see that if a significant fraction of the words are in the cache, the average access time can be substantially reduced. If a word is read n times in a short time interval, the CPU only has to reference the slow memory once and the fast memory n − 1 times.
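A tiny Python sketch of the formula above (the 1 ns cache and 100 ns memory figures are illustrative values of mine, not from the article) shows how strongly the hit ratio drives the mean access time:

```python
def mean_access_time(c: float, m: float, h: float) -> float:
    """t = c + (1 - h) * m, with c = cache time, m = memory time, h = hit ratio."""
    return c + (1 - h) * m

# Illustrative numbers: a 1 ns cache in front of a 100 ns main memory.
for h in (0.0, 0.5, 0.9, 0.99):
    print(f"hit ratio {h:.2f} -> mean access time {mean_access_time(1, 100, h):.1f} ns")
```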
In computer systems, it has long been recognized that memory is not accessed totally at random. If the CPU references a memory word at address N, it's likely that the next memory reference will be one of N's neighbors. Therefore, it's common that when the CPU makes a reference to a word A, that word and some of its neighbors are brought from the large, slow memory into the cache. This group of A and its neighbors is called a cache line, as they are located together as a block in memory. When a cache miss occurs, the whole cache line is loaded from the main memory into the cache, not just the word being referenced.
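To make the hit/miss and cache-line behavior concrete, here is a hedged Python sketch of a tiny fully associative cache with least-recently-used eviction (the line size, capacity, and access pattern are all illustrative choices of mine):

```python
from collections import OrderedDict

LINE_SIZE = 4      # words per cache line (illustrative)
NUM_LINES = 8      # cache capacity, in lines (illustrative)

cache = OrderedDict()   # line number -> True, ordered by recency of use
hits = misses = 0

def access(word_address: int) -> None:
    """Look a word up in the cache, loading its whole line on a miss."""
    global hits, misses
    line = word_address // LINE_SIZE          # neighboring words share a line
    if line in cache:
        hits += 1
        cache.move_to_end(line)               # mark as most recently used
    else:
        misses += 1
        cache[line] = True                    # load the whole cache line
        if len(cache) > NUM_LINES:
            cache.popitem(last=False)         # evict the least recently used line

# A sequential sweep shows spatial locality: one miss per line, then hits.
for addr in range(64):
    access(addr)

print(f"hits={hits}, misses={misses}, hit ratio={hits / (hits + misses):.2f}")
# With LINE_SIZE = 4: 16 misses (one per line) and 48 hits -> hit ratio 0.75.
```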
Technology has always developed hand-in-hand with economics, and the story of balancing fast-slow memory sections to optimize pricing serves as a great example. Thanks for reading and see you in the next articles!
Tanenbaum, Andrew S. Structured Computer Organization. Englewood Cliffs, N.J: Prentice-Hall, 1976. Print.
Let’s Get In Touch:
- Personal website: https://billtrn.com
- LinkedIn: https://www.linkedin.com/in/billtrn/
- Email: firstname.lastname@example.org
- GitHub: https://github.com/billtrn |
Limits
Limits are the basis of everything we do in calculus, and the AP test always asks you a few. There's always going to be one or two of them in the multiple choice section, usually in the part where you don't have to use your calculator. Well, we are going to start off by reviewing some of the basic limits. And then we are going to move on to do some of the more complex types of problems that you might see on the AP test. Let's get started.
Basic limits. Let's have a look at few. Basic limits come in 4 fruit flavours. You've got finite numbers divided by infinity, you've got finite numbers divided by 0, you've got infinity divided by infinities, and 0 divided by 0. These are all impossible situations.
How do you divide by infinity? Well, you can't. That's where limits come in. How do you divide by 0? You really can't. So that's why you need limits. Limits are ways of finding numbers that would otherwise be impossible to compute: you can get close, but never quite there, until you're infinitely close. Here is the first kind, finite divided by infinity. This will be kind of tough: 100,000 divided into an infinite number of parts. If you divide 100,000 into one part, you get 100,000. If you divide it into 10 parts, you get 10,000. If you divide it into 100,000 parts, you get 1. So the bigger the number on the bottom gets, the smaller the result gets. But if you divide 100,000 into an infinite number of parts, each of them has to be infinitely small, in other words, 0. So anything that's finite divided by infinity is always going to be 0, no matter how big it is. This could be 62 billion up here, and if you divide it by infinity, it'll still be zero.
Then you've got the opposite: taking something finite and dividing it by 0. So if you start off by saying we've got x equal to 1, you divide -1000 by 1 and you get -1000. If you make x smaller, like 0.5, -1000 divided by 0.5 is -2000. It gets bigger in size. I'm going to do a quick graph of this one. So, you might recognize this as being a basic hyperbola function. Looks something like that and it's asymptotic to the y axis.
Here, we've gotten into a little bit of a problem. What happens is you are coming from the right side, dividing by smaller and smaller positive numbers, the result gets more and more negative. If you come around this way though, dividing by numbers that are negative but they became closer and closer to 0, this one grows more and more positive. That's what the little plus is about. It's about handling that little detail. This says we are going to approach 0, the numbers here are going to get closer and closer to 0 but they are going to come in from the larger side. So you have -1000 divided by smaller and smaller positive numbers. Look where it's going, negative infinity.
We'll look at this one again. Now, we are going to have the thing just approach 0, not going to worry about that positive or negative part. This time we've got a problem if we come in from the right side with a plus there, the result goes towards negative infinity. If we come in from the left side, if there was a negative there, we'd come in and approach positive infinity.
That last example is called a one sided limit. This is a two sided limit which is the normal kind of limits. Problem is, how do you decide you want to go towards negative infinity or positive infinity? You don't get to decide. If the limit isn't the same coming in from both directions, you just can't say that there is a limit at all. You have to say that it does not exist.
Next flavour. Infinity divided by infinity. That's weird. Something divided by itself. That's usually 1. But weird things can happen. Watch this. There is a neat little trick for doing this limit. You probably saw this one first in pre-calc and you probably reviewed it again at the beginning of this year. If you have something that's going towards infinity, and you have powers of x on the top and bottom, look for the largest power of x in the denominator. That's x³. I'm going to divide the top by x³, which I can do legally as long as I divided the bottom by x³. I'm going to do that. Divide the numerator by x³. Divide the denominator by x³. And now, I'll do a little bit of simplifying. See that x³ can divide with the x³, that x can divide with the x³.
Let me write out that simplification. So we've got our limit as x approaches infinity in the numerator. x³ divided by x³ is 1. X divided by x³ is 1/x². Down here, 2x³ divided by x³ is 2 and x² divided by x³ is 1/x. We're in business now. I can actually put the infinity in now because recognize that?
1/x²? Finite divided by infinite. Finite divided by infinite is 0. Doesn't matter any more. And down here, a finite number divided by an infinite number, that's 0. So that pesky dividing by infinity, it's gone. What's left? 1/2. Crazy, huh? This whole thing just simplifies to just 1/2. If you were to put this out on a graph, that formula, what you'd find is that it levels off at a height of 1/2. You've probably done this before. Horizontal asymptotes. We'll handle those in another episode.
Last type; 0 divided by 0. You can't divide anything by 0, including itself. So we are going to have to do a limit again because limits let you divide things that can't normally be divided. In this one, the way to get this ready is to factor. Factoring is going to make this come out nice and neat. On the top, factor x²+x-6 into (x+3)(x-2), just plain old binomial factoring like you've been doing since algebra 1. Nice and simple. If you were to graph this, one of the things you'd be expected to spot is that you have the x - 2 factor on the top and the bottom. That means it can divide out. But on the graph, when you divide it out, you wind up having a little gap in the graph. No need to worry about that right now though.
All we have to worry about is this. At the beginning we put the 2 in there because that would force us to do 2 - 2 which is 0. You can't divide by 0. But now all of a sudden, you're not dividing any more. All you have to do is put the 2 in for the x and your result is 2 + 3 or 5.
If you graph this formula, as you got closer and closer to 2, you'd have a spot right on the graph at 5, an open circle.
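A quick numerical check of that 0/0 example (not part of the original lesson; just plugging in values on either side of 2 with Python):

```python
# (x^2 + x - 6) / (x - 2) as x approaches 2: the values close in on 5 from both sides.
f = lambda x: (x**2 + x - 6) / (x - 2)
for x in (1.9, 1.99, 1.999, 2.001, 2.01, 2.1):
    print(x, round(f(x), 6))   # 4.9, 4.99, 4.999, 5.001, 5.01, 5.1
```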
Let's tackle some sample problems. Now, the problems I'm going to show you in this section are ones along the lines of what you are going to see on the AP test. They'll rarely ask any of the basics. The ones I'm going to show you, now are ones that come out very commonly. First off, we've got two really important limits that you need to memorize. One of the limits is sine x/x as x approaches 0, only as x approaches 0 but the result is 1. Kind of surprising because the sine of 0 is 0 and dividing 0 by 0, how does that give you a 1? Well, it does in this case.
There is a proof for why this works. It can't be done with any of the basic techniques I just showed you. The other one that you might see is 1-cosine x/x. The limit of this one as you approach 0 is 0. Well, sorry to tell you this but memorize them, get over it. They are on the AP test. You don't have time to prove them, you don't have time to come up with them from basics. Just memorize them. Especially this one. If you are going to memorize any one limit, this is it. Let's use it in a problem. This is a really common variation that you can see on this. You might think, sine x/x, the limit is going to be 1. There is a little problem, this is 5x and that's 2x. It's not really the same variable.
The key to this one is realizing you have to make it be the same variable. Over here I started doing that. The first thing I did was to factor the 2 out of the denominator; that clears the way for what I need. The variable that the sine is acting on is 5x.
And I'll be okay if the bottom's got the exact same variable on it, 5x. This can be really confusing for students, because you know well, that's not one variable, it's 5x. X is a variable, 5x isn't. Well, variables can have multiple components. You can treat 5x as if it's a single variable, and that's going to let this part of the limit become 1. Of course, I can't just multiply the bottom by 5 and say it's the same thing, no way. But I can balance it out. I multiplied the bottom by 5, so if I multiply the top by 5 as well, then, business, I've rebalanced it. We're almost done. I'm going to do this limit.
The 1/2 stays there; the limit of the sine of some variable divided by the same variable as you're approaching 0 is 1. Times the factor of 5 and we're going to town. The answer is 5/2. Let's try another one. This one is tempting. You might say, Oh my gosh, look at this. The square there, the square there, they balance each other out. We can go to town, that limit is just 0. No, sorry. The limit for cosines is 1 minus just a plain cosine over just a plain x. But I can do some things with this. The reason I wrote this down here, the Pythagorean identity, is because it's the key to doing this one. I'm going to rearrange this. I need 1 - cosine squared, so I can take this and subtract cosine squared from both sides. So I get sine squared equals 1 - cosine squared. I hope you'll forgive me for being lazy, I really shouldn't leave the variable off but I want to save time. 1 - cosine squared can be replaced with sine squared and I'm going to do that now.
1 - cosine squared is sine squared. I'm putting the variable back in because it really does matter now. The variable have to match up. Wait a minute, it's not that cosine one after all. It's really the sine one. It's still not quite like what we need it to be though. Because it's squared on top and squared on bottom, but that's easy enough to handle. Sine squared x is the same thing as sine x times sine x. So I can split this up into two different limits. The limit as x approaches 0 of sine x/x times another sine x/x. And now I've got two of those limits that we just did.
This limit is one, that one's 1. 1 times 1 is 1. Here is another one of a type that you maybe are a little less likely to see but it's been known to show up on the test and it's a really surprising thing. Take a look at this right here. We've got a square root on the top and a number on the top, no square root on the bottom. Remember back in Algebra 1 and especially Algebra 2 and a lot in Pre-calculus, when you were told over and over again, never leave square roots on the bottom. Guess what? We are going to make this square root be on the bottom. It's a neat little trick that's going to make this one work out. Now, what I'm going to do is multiply this by the conjugate of this, top and bottom. So that is square root of 9+x+3 over square root of 9-x+3.
Now if I distribute this, this is what I get. The square root of 9 + x times itself is 9 + x and this is the difference of squares. Those two multiplied and those two multiplied they add up to 0 and -3 times 3 is -9. Down at the bottom, I didn't multiply this x through and there is a reason for that. Look what we got on the top here. 9 plus -9. 9 plus -9 is 0, it's gone. Look at this little beauty here, x divided by x. X divided by x is 1.
Now let's see what happened. Before, if we put 0 in here we'd be dividing by 0, which is impossible, but watch what happens now when you put that 0 in. I'm actually just going to substitute it in and see what happens. So we've got 1 on the top, and in the denominator we've got the square root of 9 plus x, plus 3; I'm going to put a 0 in for the x. The square root of 9 + 0 is 3, so it's 1 over 3 plus 3. You can do that. The answer is 1/6. We've got just a little bit more to cover in this segment. This is one to pay particular attention to. As scary as it looks, this is actually an easy problem once you know what to do. In one of our episodes we've gone over the manual derivatives. You might want to go back and review that. That's where this definition comes from.
With a derivative, the definition of the derivative is that it's the limit, as the change in x approaches 0, of all of that: f(x + Δx) minus f(x), over Δx. You wind up dividing by 0 but it still produces a slope. Compare these two. That's the trick. It's just recognizing, hey wait a minute, this is just the definition with some steps substituted into it. It looks like f(x) is x³. And just to verify, look at this: the change in x up top and the h down here play the same role, so they're the same variable.
This right here is the equivalent of that: substituting x + h into the function x³. The function is x³. That's not going to be our final answer though, because we're not actually evaluating the function, we're finding the limit of that difference quotient. The limit of the difference quotient is the derivative, so all I need to do is use the shortcut for the derivative. Well, you've been doing this in your sleep, probably since the beginning or middle of first quarter. The derivative of that is 3x². That's it, that's the whole problem. It's hardly anything to do once you recognize, oh yeah, that's a derivative.
Let's do one more of these because there is another variation on this that takes one step. That's got the limit approaching 0 so it's got that part of the manual derivative. It's got that on the bottom. It looks like the function is cosine but there is no x in there right now. But if there were an x there, there'd have to be an x here as well.
This is the derivative of cosine. So let's see, our function is, cosine of x. We'll worry about the pi/2 in a minute. Since we are doing the limit of that whole mess, you know that the result has to be the derivative of cosine x. With this one don't forget, the derivative of cosine is negative of sine. We're almost done. If I replace the x with what it was replaced with in the original problem pi/2, I get f'(pi/2) equals -sine pi/2. Pi/2 is in radians of course. And the sine of pi/2, that's 1. So the result for this is -1. The negative came from the derivative.
If you ever run across a limit that you can't do and you're starting to feel sick, go to the hospital. It's actually pronounced L'Hopital's Rule and it's named after the mathematician who invented it more than three hundred years ago. It is really neat. The reason you didn't do it at the very beginning of the year, when you were first investigating limits, is that doing it requires derivatives, and derivatives are based on limits. But it's a really cool thing. It can save you a ton of time.
What L'Hopital's Rule basically says is, if you're doing a limit of a fraction that comes out to 0 divided by 0 or infinity divided by infinity, you can do the derivative of the top and the derivative of the bottom separately, and the limit of that result is going to be the same as the limit of this. Don't make the mistake of using the quotient rule on it. L'Hopital's Rule doesn't use the quotient rule. So, we have the limit going to infinity.
The derivative of the top, the natural log of x to the 5th. That's going to take the chain rule; I threw that in to see if I could catch you. The derivative of a natural logarithm is 1 over whatever the logarithm's acting on. That's 1 over x to the 5th, but then the chain rule says we have to multiply by the derivative of x to the 5th, so that's times 5x to the 4th. Down at the bottom, the derivative of x to the 4th, done separately, is 4x³. Let's take care of a little bit of simplifying. We couldn't substitute infinity in here because we would have had infinity divided by infinity, and we still do. But let's simplify a little bit, see what happens.
You've got the limit as x approaches infinity. 5x to the 4th divided by x to the 5th is 5/x and here I have 4x³, still a little more simplifying to do. So we got the limit. I'm going to be lazy and not write in the x approaching infinity. 5/x divided by 4x³ turns out to be 5/4x to the 4th. Now, we don't have infinity divided by infinity. We've got finite divided by infinite. The result, is 0.
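If you want to double-check any of these answers, a computer algebra system will evaluate the limits symbolically. A hedged sketch using SymPy (assuming the library is installed; this is my own check, not part of the lesson):

```python
import sympy as sp

x = sp.symbols('x')

print(sp.limit(sp.sin(5 * x) / (2 * x), x, 0))    # 5/2
print(sp.limit((1 - sp.cos(x)**2) / x**2, x, 0))  # 1
print(sp.limit((sp.sqrt(9 + x) - 3) / x, x, 0))   # 1/6
print(sp.limit(sp.ln(x**5) / x**4, x, sp.oo))     # 0
```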
This lesson started off with an introduction to the basic limits, moving on to some other more complex limits that you might see on the AP test, followed by a few of the problems that involve recognizing a derivative, which you're very likely to see on the AP test.
And finally finishing up with L'Hopital's Rule, which is a great shortcut when you can use it. I'd really recommend, though, that you download the supplementary materials from the website. Try some of the trickier limits just to get this cemented down in your mind. |
In astronomy, stellar classification is the classification of stars based on their spectral characteristics. Electromagnetic radiation from the star is analyzed by splitting it with a prism or diffraction grating into a spectrum exhibiting the rainbow of colors interspersed with spectral lines. Each line indicates a particular chemical element or molecule, with the line strength indicating the abundance of that element. The strengths of the different spectral lines vary mainly due to the temperature of the photosphere, although in some cases there are true abundance differences. The spectral class of a star is a short code primarily summarizing the ionization state, giving an objective measure of the photosphere's temperature.
Most stars are currently classified under the Morgan–Keenan (MK) system using the letters O, B, A, F, G, K, and M, a sequence from the hottest (O type) to the coolest (M type). Each letter class is then subdivided using a numeric digit with 0 being hottest and 9 being coolest (e.g., A8, A9, F0, and F1 form a sequence from hotter to cooler). The sequence has been expanded with classes for other stars and star-like objects that do not fit in the classical system, such as class D for white dwarfs and classes S and C for carbon stars.
In the MK system, a luminosity class is added to the spectral class using Roman numerals. This is based on the width of certain absorption lines in the star's spectrum, which vary with the density of the atmosphere and so distinguish giant stars from dwarfs. Luminosity class 0 or Ia+ is used for hypergiants, class I for supergiants, class II for bright giants, class III for regular giants, class IV for sub-giants, class V for main-sequence stars, class sd (or VI) for sub-dwarfs, and class D (or VII) for white dwarfs. The full spectral class for the Sun is then G2V, indicating a main-sequence star with a surface temperature around 5,800 K.
Conventional color description
The conventional color description takes into account only the peak of the stellar spectrum. In actuality, however, stars radiate in all parts of the spectrum. Because all spectral colors combined appear white, the actual apparent colors the human eye would observe are far lighter than the conventional color descriptions would suggest. This characteristic of 'lightness' indicates that the simplified assignment of colors within the spectrum can be misleading. Excluding color-contrast illusions in dim light, there are no green, indigo, or violet stars. Red dwarfs are a deep shade of orange, and brown dwarfs do not literally appear brown, but hypothetically would appear dim grey to a nearby observer.
The modern classification system is known as the Morgan–Keenan (MK) classification. Each star is assigned a spectral class from the older Harvard spectral classification and a luminosity class using Roman numerals as explained below, forming the star's spectral type.
Other modern stellar classification systems, such as the UBV system, are based on color indexes—the measured differences in three or more color magnitudes. Those numbers are given labels such as "U−V" or "B−V", which represent the colors passed by two standard filters (e.g. Ultraviolet, Blue and Visual).
Harvard spectral classification
The Harvard system is a one-dimensional classification scheme by astronomer Annie Jump Cannon, who re-ordered and simplified the prior alphabetical system by Draper (see next paragraph). Stars are grouped according to their spectral characteristics by single letters of the alphabet, optionally with numeric subdivisions. Main-sequence stars vary in surface temperature from approximately 2,000 to 50,000 K, whereas more-evolved stars can have temperatures above 100,000 K. Physically, the classes indicate the temperature of the star's atmosphere and are normally listed from hottest to coldest.
| Class | Effective temperature | Vega-relative chromaticity[a] | Chromaticity (D65)[b] | Main-sequence mass | Main-sequence radius | Main-sequence luminosity | Hydrogen lines | Fraction of all main-sequence stars |
|---|---|---|---|---|---|---|---|---|
| O | ≥ 30,000 K | blue | blue | ≥ 16 M☉ | ≥ 6.6 R☉ | ≥ 30,000 L☉ | Weak | ~0.00003% |
| B | 10,000–30,000 K | blue white | deep blue white | 2.1–16 M☉ | 1.8–6.6 R☉ | 25–30,000 L☉ | Medium | 0.13% |
| A | 7,500–10,000 K | white | blue white | 1.4–2.1 M☉ | 1.4–1.8 R☉ | 5–25 L☉ | Strong | 0.6% |
| F | 6,000–7,500 K | yellow white | white | 1.04–1.4 M☉ | 1.15–1.4 R☉ | 1.5–5 L☉ | Medium | 3% |
| G | 5,200–6,000 K | yellow | yellowish white | 0.8–1.04 M☉ | 0.96–1.15 R☉ | 0.6–1.5 L☉ | Weak | 7.6% |
| K | 3,700–5,200 K | light orange | pale yellow orange | 0.45–0.8 M☉ | 0.7–0.96 R☉ | 0.08–0.6 L☉ | Very weak | 12.1% |
| M | 2,400–3,700 K | orange red | light orange red | 0.08–0.45 M☉ | ≤ 0.7 R☉ | ≤ 0.08 L☉ | Very weak | 76.45% |
The spectral classes O through M, as well as other more specialized classes discussed later, are subdivided by Arabic numerals (0–9), where 0 denotes the hottest stars of a given class. For example, A0 denotes the hottest stars in class A and A9 denotes the coolest ones. Fractional numbers are allowed; for example, the star Mu Normae is classified as O9.7. The Sun is classified as G2.
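The class letter, numeric subdivision, and (when present) luminosity class are easy to pull apart mechanically. Here is a hedged Python sketch of a parser for simple MK spectral-type strings (my own illustrative code; real catalog entries include ranges, prefixes, and peculiarity suffixes that this does not handle):

```python
import re

# Matches a temperature class, an optional numeric subdivision (e.g. 9.7),
# and an optional luminosity class such as V or III.
PATTERN = re.compile(r"^([OBAFGKM])(\d(?:\.\d)?)?(0|Ia\+|Iab|Ia|Ib|II|III|IV|V|VI|VII)?$")

def parse_spectral_type(spec: str) -> dict:
    match = PATTERN.match(spec)
    if not match:
        raise ValueError(f"unrecognized spectral type: {spec!r}")
    letter, subclass, luminosity = match.groups()
    return {
        "class": letter,                                    # O (hottest) through M (coolest)
        "subclass": float(subclass) if subclass else None,  # 0 (hottest) to 9 (coolest)
        "luminosity": luminosity,                           # Roman-numeral luminosity class
    }

print(parse_spectral_type("G2V"))    # the Sun
print(parse_spectral_type("O9.7"))   # Mu Normae (no luminosity class given)
print(parse_spectral_type("K0III"))  # Arcturus
```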
Conventional color descriptions are traditional in astronomy, and represent colors relative to the mean color of an A class star, which is considered to be white. The apparent color descriptions are what the observer would see if trying to describe the stars under a dark sky without aid to the eye, or with binoculars. However, most stars in the sky, except the brightest ones, appear white or bluish white to the unaided eye because they are too dim for color vision to work. Red supergiants are cooler and redder than dwarfs of the same spectral type, and stars with particular spectral features such as carbon stars may be far redder than any black body.
The fact that the Harvard classification of a star indicated its surface or photospheric temperature (or more precisely, its effective temperature) was not fully understood until after its development, though by the time the first Hertzsprung–Russell diagram was formulated (by 1914), this was generally suspected to be true. In the 1920s, the Indian physicist Meghnad Saha derived a theory of ionization by extending well-known ideas in physical chemistry pertaining to the dissociation of molecules to the ionization of atoms. First he applied it to the solar chromosphere, then to stellar spectra.
Harvard astronomer Cecilia Payne then demonstrated that the O-B-A-F-G-K-M spectral sequence is actually a sequence in temperature. Because the classification sequence predates our understanding that it is a temperature sequence, the placement of a spectrum into a given subtype, such as B3 or A7, depends upon (largely subjective) estimates of the strengths of absorption features in stellar spectra. As a result, these subtypes are not evenly divided into any sort of mathematically representable intervals.
Yerkes spectral classification
The Yerkes spectral classification, also called the MKK system from the authors' initials, is a system of stellar spectral classification introduced in 1943 by William Wilson Morgan, Philip C. Keenan, and Edith Kellman from Yerkes Observatory. This two-dimensional (temperature and luminosity) classification scheme is based on spectral lines sensitive to stellar temperature and surface gravity, which is related to luminosity (whilst the Harvard classification is based on just surface temperature). Later, in 1953, after some revisions to the list of standard stars and classification criteria, the scheme was named the Morgan–Keenan classification, or MK, and this system remains in use.
Denser stars with higher surface gravity exhibit greater pressure broadening of spectral lines. The gravity, and hence the pressure, on the surface of a giant star is much lower than for a dwarf star because the radius of the giant is much greater than that of a dwarf of similar mass. Therefore, differences in the spectrum can be interpreted as luminosity effects, and a luminosity class can be assigned purely from examination of the spectrum.
A number of different luminosity classes are distinguished, as listed in the table below.
|Luminosity class||Description||Examples|
|0 or Ia+||hypergiants or extremely luminous supergiants||Cygnus OB2#12 – B3-4Ia+ |
|Ia||luminous supergiants||Eta Canis Majoris – B5Ia |
|Iab||intermediate-size luminous supergiants||Gamma Cygni – F8Iab |
|Ib||less luminous supergiants||Zeta Persei – B1Ib |
|II||bright giants||Beta Leporis – G0II |
|III||normal giants||Arcturus – K0III |
|IV||subgiants||Gamma Cassiopeiae – B0.5IVpe |
|V||main-sequence stars (dwarfs)||Achernar – B6Vep |
|sd (prefix) or VI||subdwarfs||HD 149382 – sdB5 or B5VI |
|D (prefix) or VII||white dwarfs [c]||van Maanen 2 – DZ8 |
Marginal cases are allowed; for example, a star may be either a supergiant or a bright giant, or may be in between the subgiant and main-sequence classifications. In these cases, two special symbols are used:
- A slash (/) means that a star is either one class or the other.
- A dash (-) means that the star is in between the two classes.
For example, a star classified as A3-4III/IV would be in between spectral types A3 and A4, while being either a giant star or a subgiant.
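The notation described above (a temperature letter, an optional numeric subclass that may be fractional or a dash-separated range, and a Roman-numeral luminosity class with an optional slash alternative) is regular enough to illustrate with a small parser. The Python sketch below is purely illustrative: the regular expression and field names are assumptions made for this example, and it deliberately ignores prefixes (such as sd or D) and the peculiarity suffixes discussed later.

```python
# Illustrative parser for simple MK spectral type strings such as
# "G2V", "O9.7", or "A3-4III/IV". Not a complete catalogue-grade parser.
import re

MK_PATTERN = re.compile(
    r"^(?P<letter>[OBAFGKM])"                          # temperature class letter
    r"(?P<subclass>\d+(?:\.\d+)?(?:-\d+)?)?"           # subclass: integer, fractional, or range
    r"(?P<luminosity>0|I[ab+]*|II|III|IV|V|VI|VII)?"   # luminosity class (Roman numerals)
    r"(?:/(?P<alternative>0|I[ab+]*|II|III|IV|V|VI|VII))?$"  # slash: "either one class or the other"
)

def parse_mk(spectral_type: str) -> dict:
    """Split a simple MK spectral type into its named components."""
    match = MK_PATTERN.match(spectral_type)
    if match is None:
        raise ValueError(f"Unrecognized spectral type: {spectral_type}")
    return match.groupdict()

print(parse_mk("G2V"))        # the Sun: class G, subclass 2, main sequence (V)
print(parse_mk("O9.7"))       # fractional subclass, no luminosity class given
print(parse_mk("A3-4III/IV")) # between A3 and A4; either a giant (III) or a subgiant (IV)
```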
Sub-dwarf classes have also been used: VI for sub-dwarfs (stars slightly less luminous than the main sequence).
Nominal luminosity class VII (and sometimes higher numerals) is now rarely used for white dwarf or "hot sub-dwarf" classes, since the temperature-letters of the main sequence and giant stars no longer apply to white dwarfs.
Occasionally, letters a and b are applied to luminosity classes other than supergiants; for example, a giant star slightly more luminous than typical may be given a luminosity class of IIIb.
Additional nomenclature, in the form of lower-case letters, can follow the spectral type to indicate peculiar features of the spectrum.
|Code||Spectral peculiarities for stars|
|:||uncertain spectral value|
|...||Undescribed spectral peculiarities exist|
|e||Emission lines present|
|[e]||"Forbidden" emission lines present|
|er||"Reversed" center of emission lines weaker than edges|
|eq||Emission lines with P Cygni profile|
|f||N III and He II emission|
|f*||N IV λ4058Å is stronger than the N III λ4634Å, λ4640Å, & λ4642Å lines|
|f+||Si IV λ4089Å & λ4116Å are emitted, in addition to the N III line|
|(f)||N III emission, absence or weak absorption of He II|
|((f))||Displays strong He II absorption accompanied by weak N III emissions|
|h||WR stars with hydrogen emission lines.|
|ha||WR stars with hydrogen seen in both absorption and emission.|
|He wk||Weak Helium lines|
|k||Spectra with interstellar absorption features|
|m||Enhanced metal features|
|n||Broad ("nebulous") absorption due to spinning|
|nn||Very broad absorption features|
|neb||A nebula's spectrum mixed in|
|p||Unspecified peculiarity, peculiar star.[d]|
|pq||Peculiar spectrum, similar to the spectra of novae|
|q||P Cygni profiles|
|s||Narrow ("sharp") absorption lines|
|ss||Very narrow lines|
|sh||Shell star features|
|var||Variable spectral feature (sometimes abbreviated to "v")|
|wl||Weak lines (also "w" & "wk")|
|Element symbol(s)||Abnormally strong spectral lines of the specified element(s)|
The reason for the odd arrangement of letters in the Harvard classification is historical, having evolved from the earlier Secchi classes and been progressively modified as understanding improved.
During the 1860s and 1870s, pioneering stellar spectroscopist Angelo Secchi created the Secchi classes in order to classify observed spectra. By 1866, he had developed three classes of stellar spectra, shown in the table below.
|Class number||Secchi class description|
|Secchi class I||White and blue stars with broad heavy hydrogen lines, such as Vega and Altair. This includes the modern class A and early class F.|
|Secchi class I (Orion subtype)||A subtype of Secchi class I with narrow lines in place of wide bands, such as Rigel and Bellatrix. In modern terms, this corresponds to early B-type stars.|
|Secchi class II||Yellow stars – hydrogen less strong, but evident metallic lines, such as the Sun, Arcturus, and Capella. This includes the modern classes G and K as well as late class F.|
|Secchi class III||Orange to red stars with complex band spectra, such as Betelgeuse and Antares. This corresponds to the modern class M.|
|Secchi class IV||In 1868, he discovered carbon stars, which he put into a distinct group: red stars with significant carbon bands and lines, corresponding to modern classes C and S.|
|Secchi class V||In 1877, he added a fifth class: emission-line stars, such as Gamma Cassiopeiae and Sheliak, which are in modern class Be. In 1891, Edward Charles Pickering proposed that class V should correspond to the modern class O (which then included Wolf–Rayet stars) and stars within planetary nebulae.|
The Roman numerals used for Secchi classes should not be confused with the completely unrelated Roman numerals used for Yerkes luminosity classes and the proposed neutron star classes.
|Secchi class||Draper classes||Comment|
|I||A, B, C, D||Hydrogen lines dominant.|
|II||E, F, G, H, I, K, L|
|IV||N||Did not appear in the catalogue.|
|V||O||Included Wolf–Rayet spectra with bright lines.|
|Classes carried through into the MK system are in bold.|
In the 1880s, the astronomer Edward C. Pickering began to make a survey of stellar spectra at the Harvard College Observatory, using the objective-prism method. A first result of this work was the Draper Catalogue of Stellar Spectra, published in 1890. Williamina Fleming classified most of the spectra in this catalogue and was credited with classifying over 10,000 featured stars and discovering 10 novae and more than 200 variable stars. With the help of the Harvard computers, especially Williamina Fleming, the first iteration of the Henry Draper catalogue was devised to replace the Roman-numeral scheme established by Angelo Secchi.
The catalogue used a scheme in which the previously used Secchi classes (I to V) were subdivided into more specific classes, given letters from A to P. Also, the letter Q was used for stars not fitting into any other class. Fleming worked with Pickering to differentiate 17 different classes based on the intensity of hydrogen spectral lines, which causes variation in the wavelengths emanated from stars and results in variation in color appearance. The spectra in class A tended to produce the strongest hydrogen absorption lines while spectra in class O produced virtually no visible lines. The lettering system displayed the gradual decrease in hydrogen absorption in the spectral classes when moving down the alphabet. This classification system was later modified by Annie Jump Cannon and Antonia Maury to produce the Harvard spectral classification scheme.
In 1897, another computer at Harvard, Antonia Maury, placed the Orion subtype of Secchi class I ahead of the remainder of Secchi class I, thus placing the modern type B ahead of the modern type A. She was the first to do so, although she did not use lettered spectral types, but rather a series of twenty-two types numbered from I to XXII. Because the 22 Roman-numeral groupings did not account for additional variations in spectra, three additional divisions were made to further specify differences. Groups I through V included Orion type stars that displayed an increasing strength in hydrogen absorption lines from group I to group V. Groups VII to XI were Secchi type I stars with decreasing strength in hydrogen absorption lines from groups VII to XI. Group VI acted as an intermediate between the Orion type and Secchi type I group, while groups XIII to XVI included Secchi type II stars with decreasing hydrogen absorption lines and increasing solar-type metallic lines. Groups XVII to XX included Secchi type III stars with increasing spectral lines. Group XXI included Secchi type IV stars, and group XXII included Wolf–Rayet stars. An additional categorization using lowercase letters was added to differentiate relative line appearance in spectra. The lines were defined as a) average width, b) hazy, or c) sharp.
Antonia Maury published her own stellar classification catalogue in 1897 called "Spectra of Bright Stars Photographed with the 11-inch Draper Telescope as Part of the Henry Draper Memorial", which included 4,800 photographs and Maury’s analyses of 681 bright northern stars. This was the first instance in which a woman was credited for an observatory publication.
In 1901, Annie Jump Cannon returned to the lettered types, but dropped all letters except O, B, A, F, G, K, M, and N used in that order, as well as P for planetary nebulae and Q for some peculiar spectra. She also used types such as B5A for stars halfway between types B and A, F2G for stars one-fifth of the way from F to G, and so on. Finally, by 1912, Cannon had changed the types B, A, B5A, F2G, etc. to B0, A0, B5, F2, etc. This is essentially the modern form of the Harvard classification system. This system was developed through the analysis of spectra on photographic plates, which could convert the light emanating from stars into a readable spectrum.
Mount Wilson classes
A luminosity classification known as the Mount Wilson system was used to distinguish between stars of different luminosities. This notation system is still sometimes seen on modern spectra.
The stellar classification system is taxonomic, based on type specimens, similar to classification of species in biology: The categories are defined by one or more standard stars for each category and sub-category, with an associated description of the distinguishing features.
"Early" and "late" nomenclatureEdit
Stars are often referred to as early or late types. "Early" is a synonym for hotter, while "late" is a synonym for cooler.
Depending on the context, "early" and "late" may be absolute or relative terms. "Early" as an absolute term would therefore refer to O or B, and possibly A stars. As a relative reference it relates to stars hotter than others, such as "early K" being perhaps K0, K1, K2, and K3.
"Late" is used in the same way, with an unqualified use of the term indicating stars with spectral types such as K and M, but it can also be used for stars that are cool relative to other stars, as in using "late G" to refer to G7, G8, and G9.
In the relative sense, "early" means a lower Arabic numeral following the class letter, and "late" means a higher number.
This obscure terminology is a holdover from an early 20th-century model of stellar evolution, which supposed that stars were powered by gravitational contraction via the Kelvin–Helmholtz mechanism, now known not to apply to main-sequence stars. If that were true, then stars would start their lives as very hot "early-type" stars and then gradually cool down into "late-type" stars. This mechanism gave an age for the Sun much smaller than the age implied by the geologic record, and was rendered obsolete by the discovery that stars are powered by nuclear fusion. The terms "early" and "late" were carried over, beyond the demise of the model they were based on.
O-type stars are very hot and extremely luminous, with most of their radiated output in the ultraviolet range. These are the rarest of all main-sequence stars. About 1 in 3,000,000 (0.00003%) of the main-sequence stars in the solar neighborhood are O-type stars.[e] Some of the most massive stars lie within this spectral class. O-type stars frequently have complicated surroundings that make measurement of their spectra difficult.
O-type spectra formerly were defined by the ratio of the strength of the He II λ4541 relative to that of He I λ4471, where λ is the radiation wavelength. Spectral type O7 was defined to be the point at which the two intensities are equal, with the He I line weakening towards earlier types. Type O3 was, by definition, the point at which said line disappears altogether, although it can be seen very faintly with modern technology. Due to this, the modern definition uses the ratio of the nitrogen line N IV λ4058 to N III λλ4634-40-42.
O-type stars have dominant lines of absorption and sometimes emission for He II lines, prominent ionized (Si IV, O III, N III, and C III) and neutral helium lines, strengthening from O5 to O9, and prominent hydrogen Balmer lines, although not as strong as in later types. Because they are so massive, O-type stars have very hot cores and burn through their hydrogen fuel very quickly, so they are the first stars to leave the main sequence.
When the MKK classification scheme was first described in 1943, the only subtypes of class O used were O5 to O9.5. The MKK scheme was extended to O9.7 in 1971 and O4 in 1978, and new classification schemes that add types O2, O3, and O3.5 have subsequently been introduced.
B-type stars are very luminous and blue. Their spectra have neutral helium lines, which are most prominent at the B2 subclass, and moderate hydrogen lines. As O- and B-type stars are so energetic, they only live for a relatively short time. Thus, due to the low probability of kinematic interaction during their lifetime, they are unable to stray far from the area in which they formed, apart from runaway stars.
The transition from class O to class B was originally defined to be the point at which the He II λ4541 disappears. However, with modern equipment, the line is still apparent in the early B-type stars. Today for main-sequence stars, the B-class is instead defined by the intensity of the He I violet spectrum, with the maximum intensity corresponding to class B2. For supergiants, lines of silicon are used instead; the Si IV λ4089 and Si III λ4552 lines are indicative of early B. At mid B, the intensity of the latter relative to that of Si II λλ4128-30 is the defining characteristic, while for late B, it is the intensity of Mg II λ4481 relative to that of He I λ4471.
These stars tend to be found in their originating OB associations, which are associated with giant molecular clouds. The Orion OB1 association occupies a large portion of a spiral arm of the Milky Way and contains many of the brighter stars of the constellation Orion. About 1 in 800 (0.125%) of the main-sequence stars in the solar neighborhood are B-type main-sequence stars.[e]
Massive yet non-supergiant entities known as "Be stars" are main-sequence stars that notably have, or had at some time, one or more Balmer lines in emission, with the hydrogen-related emission series being of particular interest. Be stars are generally thought to feature unusually strong stellar winds, high surface temperatures, and significant attrition of stellar mass as the objects rotate at unusually rapid rates. Objects known as "B(e)" or "B[e]" stars possess distinctive neutral or low-ionisation emission lines arising from "forbidden" transitions, which are highly improbable under ordinary laboratory conditions but occur in the very low-density gas surrounding these stars.
- B0V – Upsilon Orionis
- B0Ia – Alnilam
- B2Ia – Chi2 Orionis
- B2Ib – 9 Cephei
- B3V – Eta Ursae Majoris
- B3V – Eta Aurigae
- B3Ia – Omicron2 Canis Majoris
- B5Ia – Eta Canis Majoris
- B8Ia – Rigel
A-type stars are among the more common naked-eye stars, and are white or bluish-white. They have strong hydrogen lines, at a maximum by A0, and also lines of ionized metals (Fe II, Mg II, Si II) at a maximum at A5. Ca II lines strengthen notably by this point. About 1 in 160 (0.625%) of the main-sequence stars in the solar neighborhood are A-type stars.[e]
- A0Van – Gamma Ursae Majoris
- A0Va – Vega
- A0Ib – Eta Leonis
- A0Ia – HD 21389
- A1V – Sirius A
- A2Ia – Deneb
- A3Va – Fomalhaut
F-type stars have strengthening H and K spectral lines of Ca II. Neutral metals (Fe I, Cr I) begin to gain on ionized metal lines by late F. Their spectra are characterized by weaker hydrogen lines and ionized metals, and their color is white. About 1 in 33 (3.03%) of the main-sequence stars in the solar neighborhood are F-type stars.[e]
G-type stars, including the Sun, have prominent spectral lines H and K of Ca II, which are most pronounced at G2. They have even weaker hydrogen lines than F, but along with the ionized metals, they have neutral metals. There is a prominent spike in the G band of CH molecules. Class G main-sequence stars make up about 7.5%, nearly one in thirteen, of the main-sequence stars in the solar neighborhood.[e]
Class G contains the "Yellow Evolutionary Void". Supergiant stars often swing between O or B (blue) and K or M (red). While they do this, they do not stay for long in the yellow supergiant G class, as this is an extremely unstable place for a supergiant to be.
- G0V – Beta Canum Venaticorum
- G0IV – Eta Boötis
- G0Ib – Beta Aquarii
- G2V – Sun
- G5V – Kappa Ceti
- G5IV – Mu Herculis
- G5Ib – 9 Pegasi
- G8V – 61 Ursae Majoris
- G8IV – Beta Aquilae
- G8IIIa – Kappa Geminorum
- G8IIIab – Epsilon Virginis
- G8Ib – Epsilon Geminorum
K-type stars are orangish stars that are slightly cooler than the Sun. They make up about 12% of the main-sequence stars in the solar neighborhood.[e] There are also giant K-type stars, ranging from hypergiants like RW Cephei to giants and supergiants such as Arcturus, whereas orange dwarfs, like Alpha Centauri B, are main-sequence stars.
They have extremely weak hydrogen lines, if they are present at all, and mostly neutral metals (Mn I, Fe I, Si I). By late K, molecular bands of titanium oxide become present. Mainstream theories (those emphasizing lower levels of harmful radiation and long stellar lifetimes) suggest that such stars have the optimal chances of heavily evolved life developing on orbiting planets (if such life is directly analogous to Earth's), since they offer a broad habitable zone yet much lower harmful emission than the stars with the broadest such zones.
- K0V – Sigma Draconis
- K0III – Pollux
- K0III – Epsilon Cygni
- K2V – Epsilon Eridani
- K2III – Kappa Ophiuchi
- K3III – Rho Boötis
- K5V – 61 Cygni A
- K5III – Gamma Draconis
Class M stars are by far the most common. About 76% of the main-sequence stars in the solar neighborhood are class M stars.[e][f] However, class M main-sequence stars (red dwarfs) have such low luminosities that none are bright enough to be seen with the unaided eye, unless under exceptional conditions. The brightest known M-class main-sequence star is M0V Lacaille 8760, with magnitude 6.6 (the limiting magnitude for naked-eye visibility under good conditions is typically quoted as 6.5), and it is extremely unlikely that any brighter examples will be found.
Although most class M stars are red dwarfs, some of the largest supergiant stars known in the Milky Way, such as VV Cephei, Antares, and Betelgeuse, are also class M. Furthermore, the larger, hotter brown dwarfs are late class M, usually in the range of M6.5 to M9.5.
The spectrum of a class M star contains lines from oxide molecules (in the visible spectrum, especially TiO) and all neutral metals, but absorption lines of hydrogen are usually absent. TiO bands can be strong in class M stars, usually dominating their visible spectrum by about M5. Vanadium(II) oxide bands become present by late M.
Extended spectral types
A number of new spectral types have been introduced for newly discovered types of stars.
Hot blue emission star classes
Spectra of some very hot and bluish stars exhibit marked emission lines from carbon or nitrogen, or sometimes oxygen.
Class W: Wolf–Rayet
Once included as type O stars, the Wolf-Rayet stars of class W or WR are notable for spectra lacking hydrogen lines. Instead their spectra are dominated by broad emission lines of highly ionized helium, nitrogen, carbon, and sometimes oxygen. They are thought to mostly be dying supergiants with their hydrogen layers blown away by stellar winds, thereby directly exposing their hot helium shells. Class W is further divided into subclasses according to the relative strength of nitrogen and carbon emission lines in their spectra (and outer layers).
- WN – spectrum dominated by N III-V and He I-II lines
- WNE (WN2 to WN5 with some WN6) – hotter or "early"
- WNL (WN7 to WN9 with some WN6) – cooler or "late"
- Extended WN classes WN10 and WN11 sometimes used for the Ofpe/WN9 stars
- h tag used (e.g. WN9h) for WR with hydrogen emission and ha (e.g. WN6ha) for both hydrogen emission and absorption
- WN/C – WN stars plus strong C IV lines, intermediate between WN and WC stars
- WC – spectrum with strong C II-IV lines
- WCE (WC4 to WC6) – hotter or "early"
- WCL (WC7 to WC9) – cooler or "late"
- WO (WO1 to WO4) – strong O VI lines, extremely rare
Although the central stars of most planetary nebulae (CSPNe) show O type spectra, around 10% are hydrogen-deficient and show WR spectra. These are low-mass stars and to distinguish them from the massive Wolf-Rayet stars, their spectra are enclosed in square brackets: e.g. [WC]. Most of these show [WC] spectra, some [WO], and very rarely [WN].
The "Slash" starsEdit
The slash stars are O-type stars with WN-like lines in their spectra. The name "slash" comes from their printed spectral type having a slash in it (e.g. "Of/WNL").
There is a secondary group found with these spectra: a cooler, "intermediate" group designated "Ofpe/WN9". These stars have also been referred to as WN10 or WN11, but that has become less popular with the realisation of the evolutionary difference from other Wolf–Rayet stars. Recent discoveries of even rarer stars have extended the range of slash stars as far as O2-3.5If*/WN5-7, which are even hotter than the original "slash" stars.
The magnetic O stars
These are O-type stars with strong magnetic fields; their designation is Of?p.
Cool red and brown dwarf classes
Brown dwarfs, objects that do not sustain hydrogen fusion, cool as they age and so progress to later spectral types. Brown dwarfs start their lives with M-type spectra and cool through the L, T, and Y spectral classes, doing so faster the less massive they are; the highest-mass brown dwarfs cannot have cooled to Y or even T dwarfs within the age of the universe. Because this leads to an unresolvable overlap between the effective temperatures and luminosities of objects with different masses and ages across the L-T-Y types, no distinct temperature or luminosity values can be assigned to these classes.
Class L
Class L dwarfs get their designation because they are cooler than M stars and L is the remaining letter alphabetically closest to M. Some of these objects have masses large enough to support hydrogen fusion and are therefore stars, but most are of substellar mass and are therefore brown dwarfs. They are a very dark red in color and brightest in infrared. Their atmosphere is cool enough to allow metal hydrides and alkali metals to be prominent in their spectra.
Due to low surface gravity in giant stars, TiO- and VO-bearing condensates never form. Thus, L-type stars larger than dwarfs can never form in an isolated environment. However, it may be possible for these L-type supergiants to form through stellar collisions, an example of which is V838 Monocerotis while at the height of its luminous red nova eruption.
Class T: methane dwarfs
Class T dwarfs are cool brown dwarfs with surface temperatures between approximately 550 and 1,300 K (277 and 1,027 °C; 530 and 1,880 °F). Their emission peaks in the infrared. Methane is prominent in their spectra.
Classes T and L could be more common than all the other classes combined if recent research is accurate. Because brown dwarfs persist for so long—a few times the age of the universe—in the absence of catastrophic collisions these smaller bodies can only increase in number.
Study of the number of proplyds (protoplanetary disks, clumps of gas in nebulae from which stars and planetary systems are formed) indicates that the number of stars in the galaxy should be several orders of magnitude higher than what was previously conjectured. It is theorized that these proplyds are in a race with each other. The first one to form will become a protostar, a very violent object that will disrupt other proplyds in the vicinity, stripping them of their gas. The victim proplyds will then probably go on to become main-sequence stars or brown dwarfs of the L and T classes, which are effectively invisible to us.
Class Y
Brown dwarfs of spectral class Y are cooler than those of spectral class T and have qualitatively different spectra from them. A total of 17 objects have been placed in class Y as of August 2013. Although such dwarfs have been modelled and detected within forty light-years by the Wide-field Infrared Survey Explorer (WISE), there is no well-defined spectral sequence yet and no prototypes. Nevertheless, several objects have been proposed as spectral classes Y0, Y1, and Y2.
The spectra of these prospective Y objects display absorption around 1.55 micrometers. Delorme et al. have suggested that this feature is due to absorption from ammonia, and that this should be taken as the indicative feature for the T-Y transition. In fact, this ammonia-absorption feature is the main criterion that has been adopted to define this class. However, this feature is difficult to distinguish from absorption by water and methane, and other authors have stated that the assignment of class Y0 is premature.
The latest brown dwarf proposed for the Y spectral type, WISE 1828+2650, is a > Y2 dwarf with an effective temperature originally estimated around 300 K, the temperature of the human body. Parallax measurements have, however, since shown that its luminosity is inconsistent with it being colder than ~400 K. The coolest Y dwarf currently known is WISE 0855−0714 with an approximate temperature of 250 K.
The mass range for Y dwarfs is 9–25 Jupiter masses, but young objects might reach below one Jupiter mass, which means that Y class objects straddle the 13 Jupiter mass deuterium-fusion limit that marks the current IAU division between brown dwarfs and planets.
Peculiar brown dwarfs
|Symbols used for peculiar brown dwarfs|
|pec||This suffix (e.g. L2pec) stands for "peculiar".|
|sd||This prefix (e.g. sdL0) stands for subdwarf and indicates a low metallicity and blue color|
|β||Objects with the beta (β) suffix (e.g. L4β) have an intermediate surface gravity.|
|γ||Objects with the gamma (γ) suffix (e.g. L5γ) have a low surface gravity.|
|red||The red suffix (e.g. L0red) indicates objects without signs of youth, but high dust content|
|blue||The blue suffix (e.g. L3blue) indicates unusual blue near-infrared colors for L-dwarfs without obvious low metallicity|
Young brown dwarfs have low surface gravities because they have larger radii and lower masses compared to field stars of similar spectral type. These sources are marked by the letter beta (β) for intermediate surface gravity and gamma (γ) for low surface gravity. Indications of low surface gravity are weak CaH, K I, and Na I lines, as well as strong VO lines. Alpha (α) stands for normal surface gravity and is usually dropped. Sometimes an extremely low surface gravity is denoted by a delta (δ). The suffix "pec" stands for peculiar; it is also used for other unusual features and summarizes different properties indicative of low surface gravity, subdwarfs, and unresolved binaries. The prefix sd stands for subdwarf and only includes cool subdwarfs; it indicates a low metallicity and kinematic properties more similar to halo stars than to disk stars. Subdwarfs appear bluer than disk objects. The red suffix describes objects with a red color but an older age; this is interpreted not as low surface gravity but as a high dust content. The blue suffix describes objects with blue near-infrared colors that cannot be explained by low metallicity; some are explained as L+T binaries, while others, such as 2MASS J11263991−5003550, are not binaries and are explained by thin and/or large-grained clouds.
Late giant carbon-star classes
Carbon stars are stars whose spectra indicate the production of carbon, a byproduct of triple-alpha helium fusion. With increased carbon abundance, and some parallel s-process heavy-element production, the spectra of these stars become increasingly deviant from the usual late spectral classes G, K, and M. Equivalent classes for carbon-rich stars are S and C.
The giants among those stars are presumed to produce this carbon themselves, but some stars in this class are double stars whose unusual atmospheres are suspected of having been transferred from a companion that is now a white dwarf but was a carbon star when the transfer took place.
Class C: carbon stars
Originally classified as R and N stars, these are also known as carbon stars. These are red giants, near the end of their lives, in which there is an excess of carbon in the atmosphere. The old R and N classes ran parallel to the normal classification system from roughly mid G to late M. These have more recently been remapped into a unified carbon classifier C with N0 starting at roughly C6. Another subset of cool carbon stars are the C-J type stars, which are characterized by the strong presence of molecules of 13CN in addition to those of 12CN. A few main-sequence carbon stars are known, but the overwhelming majority of known carbon stars are giants or supergiants. There are several subclasses:
- C-R – Formerly its own class (R) representing the carbon star equivalent of late G to early K-type stars.
- C-N – Formerly its own class representing the carbon star equivalent of late K to M-type stars.
- C-J – A subtype of cool C stars with a high content of 13C.
- C-H – Population II analogues of the C-R stars.
- C-Hd – Hydrogen-deficient carbon stars, similar to late G supergiants with CH and C2 bands added.
Class S
Class S stars form a continuum between class M stars and carbon stars. Those most similar to class M stars have strong ZrO absorption bands analogous to the TiO bands of class M stars, whereas those most similar to carbon stars have strong sodium D lines and weak C2 bands. Class S stars have excess amounts of zirconium and other elements produced by the s-process, and have more similar carbon and oxygen abundances than class M or carbon stars. Like carbon stars, nearly all known class S stars are asymptotic-giant-branch stars.
The spectral type is formed by the letter S and a number between zero and ten. This number corresponds to the temperature of the star and approximately follows the temperature scale used for class M giants. The most common types are S3 to S5. The non-standard designation S10 has only been used for the star Chi Cygni when at an extreme minimum.
The basic classification is usually followed by an abundance indication, following one of several schemes: S2,5; S2/5; S2 Zr4 Ti2; or S2*5. A number following a comma is a scale between 1 and 9 based on the ratio of ZrO and TiO. A number following a slash is a more recent but less common scheme designed to represent the ratio of carbon to oxygen on a scale of 1 to 10, where a 0 would be an MS star. Intensities of zirconium and titanium may be indicated explicitly. Also occasionally seen is a number following an asterisk, which represents the strength of the ZrO bands on a scale from 1 to 5.
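Because the four abundance-indication schemes just described are distinguished purely by their separator character, a trivial dispatcher suffices to illustrate how they can be told apart. The following Python sketch is an illustration under that single assumption; the function name is invented for this example.

```python
# Illustrative only: report which abundance-indication scheme an S-type
# designation appears to use, based on the separator character described above.

def s_type_abundance_scheme(designation: str) -> str:
    if "," in designation:
        return "comma: ZrO/TiO ratio on a scale of 1 to 9"
    if "/" in designation:
        return "slash: carbon-to-oxygen ratio on a scale of 1 to 10 (0 would be an MS star)"
    if "*" in designation:
        return "asterisk: ZrO band strength on a scale of 1 to 5"
    if "Zr" in designation or "Ti" in designation:
        return "explicit zirconium and titanium intensities"
    return "basic type only, no abundance indication"

for example in ("S2,5", "S2/5", "S2 Zr4 Ti2", "S2*5", "S3"):
    print(example, "->", s_type_abundance_scheme(example))
```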
In between the M and S classes, border cases are named MS stars. In a similar way, border cases between the S and C-N classes are named SC or CS. The sequence M → MS → S → SC → C-N is hypothesized to be a sequence of increased carbon abundance with age for carbon stars in the asymptotic giant branch.
White dwarf classifications
The class D (for Degenerate) is the modern classification used for white dwarfs – low-mass stars that are no longer undergoing nuclear fusion and have shrunk to planetary size, slowly cooling down. Class D is further divided into spectral types DA, DB, DC, DO, DQ, DX, and DZ. The letters are not related to the letters used in the classification of other stars, but instead indicate the composition of the white dwarf's visible outer layer or atmosphere.
- DA – a hydrogen-rich atmosphere or outer layer, indicated by strong Balmer hydrogen spectral lines.
- DB – a helium-rich atmosphere, indicated by neutral helium, He I, spectral lines.
- DO – a helium-rich atmosphere, indicated by ionized helium, He II, spectral lines.
- DQ – a carbon-rich atmosphere, indicated by atomic or molecular carbon lines.
- DZ – a metal-rich atmosphere, indicated by metal spectral lines (a merger of the obsolete white dwarf spectral types, DG, DK, and DM).
- DC – no strong spectral lines indicating one of the above categories.
- DX – spectral lines are insufficiently clear to classify into one of the above categories.
The type is followed by a number giving the white dwarf's surface temperature. This number is a rounded form of 50400/Teff, where Teff is the effective surface temperature, measured in kelvins. Originally, this number was rounded to one of the digits 1 through 9, but more recently fractional values have started to be used, as well as values below 1 and above 9.
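As a brief worked example of this convention (the temperature is chosen for illustration and is close to the value commonly quoted for Sirius B, a hydrogen-atmosphere white dwarf): for $T_\mathrm{eff} \approx 25{,}200$ K,

$$\frac{50400}{T_\mathrm{eff}} = \frac{50400}{25200} = 2,$$

so the star would be written DA2, while a similar white dwarf at about 10,000 K would be a DA5 (50400/10000 ≈ 5).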
Two or more of the type letters may be used to indicate a white dwarf that displays more than one of the spectral features above.
Extended white dwarf spectral types
- DAB – a hydrogen- and helium-rich white dwarf displaying neutral helium lines.
- DAO – a hydrogen- and helium-rich white dwarf displaying ionized helium lines.
- DAZ – a hydrogen-rich metallic white dwarf.
- DBZ – a helium-rich metallic white dwarf.
A different set of spectral peculiarity symbols is used for white dwarfs than for other types of stars:
|Code||Spectral peculiarities for white dwarfs|
|P||Magnetic white dwarf with detectable polarization|
|E||Emission lines present|
|H||Magnetic white dwarf without detectable polarization|
|PEC||Spectral peculiarities exist|
Non-stellar spectral types: Classes P and Q
Finally, the classes P and Q, retained by Cannon from the Draper system, are occasionally used for certain non-stellar objects. Type P objects are stars within planetary nebulae and type Q objects are novae.
Stellar remnants are objects associated with the death of stars. Included in the category are white dwarfs, and as can be seen from the radically different classification scheme for class D, non-stellar objects are difficult to fit into the MK system.
The Hertzsprung–Russell diagram, on which the MK system is based, is observational in nature, so these remnants cannot easily be plotted on the diagram, or cannot be placed on it at all. Old neutron stars are relatively small and cold, and would fall on the far right side of the diagram. Planetary nebulae are dynamic and tend to fade quickly in brightness as the progenitor star transitions to the white dwarf branch. If shown, a planetary nebula would be plotted to the right of the diagram's upper right quadrant. A black hole emits no visible light of its own, and therefore would not appear on the diagram.
A classification system for neutron stars using Roman numerals has been proposed: type I for less massive neutron stars with low cooling rates, type II for more massive neutron stars with higher cooling rates, and a proposed type III for more massive neutron stars (possible exotic star candidates) with higher cooling rates. The more massive a neutron star is, the higher the neutrino flux it produces. These neutrinos carry away so much heat energy that after only a few years the temperature of an isolated neutron star falls from the order of billions to only around a million kelvin. This proposed neutron star classification system is not to be confused with the earlier Secchi spectral classes and the Yerkes luminosity classes.
Replaced spectral classes
Several spectral types, all previously used for non-standard stars in the mid-20th century, have been replaced during revisions of the stellar classification system. They may still be found in old editions of star catalogs: R and N have been subsumed into the new C class as C-R and C-N.
Stellar classification, habitability, and the search for life
While humans may eventually be able to colonize any kind of stellar habitat, this section addresses the probability of life arising around other stars.
Stability, luminosity, and lifespan are all factors in stellar habitability. Only one star is known to host life: our own Sun, a G-class star with an abundance of heavy elements and low variability in brightness. The Solar System is also unlike many stellar systems in that it contains only one star (see Planetary habitability, under the binary systems section).
Working from these constraints and the problems of having an empirical sample set of only one, the range of stars that are predicted to be able to support life as we know it is limited by a few factors. Of the main-sequence star types, stars more massive than 1.5 times that of the Sun (spectral types O, B, and A) age too quickly for advanced life to develop (using Earth as a guideline). On the other extreme, dwarfs of less than half the mass of our Sun (spectral type M) are likely to tidally lock planets within their habitable zone, along with other problems (see Habitability of red dwarf systems). While there are many problems facing life on red dwarfs, due to their sheer numbers and longevity, many astronomers continue to model these systems.
For these reasons, NASA's Kepler Mission is searching for habitable planets around nearby main-sequence stars that are less massive than spectral type A but more massive than type M, making the most probable stars to host life dwarf stars of types F, G, and K.
See also
- Guest star – Ancient Chinese name for cataclysmic variable stars
- Spectral signature – The variation of reflectance or emittance of a material with respect to wavelengths
- Star count, survey of stars
- Stellar dynamics
- Stellar evolution – Changes to a star over its lifespan
- UBV photometric system
- This is the relative color of the star if Vega, generally considered a bluish star, is used as a standard for "white".
- Chromaticity can vary significantly within a class; for example, the Sun (a G2 star) is white, while a G9 star is yellow.
- Technically, white dwarfs are no longer “live” stars, but rather the “dead” remains of extinguished stars. Their classification uses a different set of spectral types from element-burning “live” stars.
- When used with A-type stars, this instead refers to abnormally strong metallic spectral lines
- These proportions are fractions of stars brighter than absolute magnitude 16; lowering this limit will render earlier types even rarer, while generally adding only to the M class.
- This rises to 78.6% if we include all stars. (See the above note.)
- Kirkpatrick, J. Davy; Looper, Dagny L.; Burgasser, Adam J.; Schurr, Steven D.; Cutri, Roc M.; Cushing, Michael C.; Cruz, Kelle L.; Sweet, Anne C.; Knapp, Gillian R.; Barman, Travis S.; Bochanski, John J. (September 2010). "Discoveries from a Near-infrared Proper Motion Survey Using Multi-epoch Two Micron All-Sky Survey Data". Astrophysical Journal Supplement Series. 190 (1): 100–146. arXiv:1008.3591. Bibcode:2010ApJS..190..100K. doi:10.1088/0067-0049/190/1/100. ISSN 0067-0049.
- Faherty, Jacqueline K.; Riedel, Adric R.; Cruz, Kelle L.; Gagne, Jonathan; Filippazzo, Joseph C.; Lambrides, Erini; Fica, Haley; Weinberger, Alycia; Thorstensen, John R.; Tinney, C. G.; Baldassare, Vivienne (July 2016). "Population Properties of Brown Dwarf Analogs to Exoplanets". Astrophysical Journal Supplement Series. 225 (1): 10. arXiv:1605.07927. Bibcode:2016ApJS..225...10F. doi:10.3847/0067-0049/225/1/10. ISSN 0067-0049.
- "Colour-magnitude data". www.stsci.edu. Retrieved 6 March 2020.
- Bouigue, R. (1954). Annales d'Astrophysique, Vol. 17, p. 104
- Keenan, P. C. (1954). Astrophysical Journal, vol. 120, p. 484
- Sion, E. M.; Greenstein, J. L.; Landstreet, J. D.; Liebert, J.; Shipman, H. L.; Wegner, G. A. (1983). "A proposed new white dwarf spectral classification system". Astrophysical Journal. 269: 253. Bibcode:1983ApJ...269..253S. doi:10.1086/161036.
- Córsico, A. H.; Althaus, L. G. (2004). "The rate of period change in pulsating DB-white dwarf stars". Astronomy and Astrophysics. 428: 159–170. arXiv:astro-ph/0408237. Bibcode:2004A&A...428..159C. doi:10.1051/0004-6361:20041372.
- McCook, George P.; Sion, Edward M. (1999). "A Catalog of Spectroscopically Identified White Dwarfs". The Astrophysical Journal Supplement Series. 121 (1): 1–130. Bibcode:1999ApJS..121....1M. CiteSeerX 10.1.1.565.5507. doi:10.1086/313186.
- "Pulsating Variable Stars and the Hertzsprung-Russell (H-R) Diagram". Harvard-Smithsonian Center for Astrophysics. 9 March 2015. Retrieved 23 July 2016.
- Yakovlev, D. G.; Kaminker, A. D.; Haensel, P.; Gnedin, O. Y. (2002). "The cooling neutron star in 3C 58". Astronomy & Astrophysics. 389: L24–L27. arXiv:astro-ph/0204233. Bibcode:2002A&A...389L..24Y. doi:10.1051/0004-6361:20020699.
- "Stars and Habitable Planets". www.solstation.com.
|Look up late-type star or early-type star in Wiktionary, the free dictionary.|
- Libraries of stellar spectra by D. Montes, UCM
- Spectral Types for Hipparcos Catalogue Entries
- Stellar Spectral Classification by Richard O. Gray and Christopher J. Corbally
- Spectral models of stars by P. Coelho
- Merrifield, Michael; Bauer, Amanda; Häußler, Boris (2010). "Star Classification". Sixty Symbols. Brady Haran for the University of Nottingham.
- Stellar classification table |
What is the Number data type and type conversion in Python?
To start with the Python number data type and type conversion, let's first look at the Number data type.
The Python number data type is used to store numeric data, and you can perform arithmetic operations on it. Numeric values are immutable: as soon as you assign a new value to a numeric variable, a new object is created rather than the existing object being changed.
In Python, there are three sub-types of the number data type.
- Integers are whole numbers, positive or negative, without a decimal point. Starting from Python 3.0 there is no length limit on the integer data type.
- A few numbers that belong to the integer data type: 40, 65, 89, etc.
- Float is numeric data with a decimal point in it. The float data type in Python stores real numbers; 2.35 and 3.68 are examples of floats.
- The basic difference between an integer and a float is that integers have no decimal point while floats do.
- A float is an integer part together with a fractional part. In the value 8.31, the number 8 is the integer part and 8.31 as a whole is a float (see the short sketch below).
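A minimal sketch of these two sub-types; the variable names and values are only for illustration:

big_int = 40 ** 65              # integers have unlimited precision in Python 3
print(type(big_int))            # <class 'int'>

price = 2.35                    # floats carry a decimal point and store real numbers
print(type(price))              # <class 'float'>

print(int(8.31), 8.31)          # 8 8.31 - the integer part versus the full float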
Python Complex numbers:
- This data type has two parts, a real part and an imaginary part. For example, 5+8j is a complex number where 5 is the real part and 8j is the imaginary part.
- To get the real and imaginary parts of any complex number, the following commands can be used (shown in the sketch below).
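A small sketch of those commands, reusing the variable name from the original example:

complex_variable = 5 + 8j
print(type(complex_variable))   # <class 'complex'>
print(complex_variable.real)    # 5.0
print(complex_variable.imag)    # 8.0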
In the example above we have stored the complex number 5+8j in a variable named "complex_variable".
- print(type(Variable_Name)): This line prints the data type of the variable.
- print(Variable_Name.real): This line prints the real part of the complex number variable.
- print(Variable_Name.imag): This line prints the imaginary part of the complex number.
Type conversion in Python:
Converting the value of one data type into another is known as type conversion. This conversion is also called coercion.
Type conversion in Python works in two ways. In the first, the interpreter itself recognizes that a conversion is needed and performs it automatically. In the second, you explicitly specify the conversion and the interpreter then does it for you.
- Implicit type conversion
- Explicit type conversion
Implicit type conversion:
Implicit conversion is the kind of type conversion in which the Python interpreter automatically converts one data type into another; the user does not need to specify anything.
To understand implicit type conversion better, let's look at one example. Say we have two numbers, x = 5.3 and y = 8. Here x is a float since it has a decimal point and y is an integer. Now we'll add the two numbers and check the data type of the result.
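A minimal sketch of that example:

x = 5.3
y = 8
result = x + y
print(result)         # 13.3
print(type(result))   # <class 'float'> - the integer was promoted automatically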
In the example above we added two numbers of different data types, one an integer and the other a float. The data type of the result is automatically float, even though we did nothing explicit to convert it. This automatic conversion is known as implicit type conversion.
Explicit type conversion:
In explicit type conversion the user has to specify the conversion explicitly, and the interpreter then performs it.
When specifying an explicit conversion you have to state the target type clearly. Because you are forcing the interpreter to convert the value, there is also a chance of data loss.
The syntax for explicit data conversion in Python is as follows:
required_data_type(expression), for example int(x) or list(x)
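A minimal sketch of the float-to-integer example discussed below; the two float values are assumed, since the original screenshot is not available:

first_float_variable = 5.325
second_float_variable = 5.77
Add = first_float_variable + second_float_variable
print(Add)         # about 11.095 (may display as 11.094999... due to float rounding)
Add = int(Add)     # explicit conversion: the fractional part is discarded
print(Add)         # 11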
In the example above, we have two float variables named "first_float_variable" and "second_float_variable". "Add" is the variable created to store the sum of the two floats. We print the value of "Add" before the type conversion and again after it.
Before the conversion, the value of "Add" (11.0949999~) is printed with its decimal point and fractional part. After the conversion, "Add" has been converted to the integer 11, and that is what appears on screen.
Let's try explicitly converting the string data type into the list data type, as in the sketch below.
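A minimal sketch of that conversion:

string_variable = "Cake"
print(string_variable)            # Cake
string_variable = list(string_variable)
print(string_variable)            # ['C', 'a', 'k', 'e']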
In the example above we converted the string "Cake" into the list data type. We printed the value of "string_variable" before and after the conversion; the second result is its value after it has been converted to a list. |
The majority of what we know about the solar system and the greater universe comes from telescope images taken from Earth. The New Horizons mission was set to change this. New Horizons is an interplanetary space probe that was engineered at Johns Hopkins University and launched in 2006 at 36,000 mph with its destination set as Pluto. It took nine years, but the space probe finally reached Pluto in 2015 and performed a flyby study, giving us some of the best and closest images of Pluto.
The general goals of New Horizons as stated by Wikipedia are to “understand the formation of the Pluto system, the Kuiper belt, and the transformation of the early Solar System.” The data that we receive from the spacecraft have helped us learn about the surface of Pluto, along with its mass and mass distribution. We also learned about Pluto’s atmosphere, specifically its density and composition.
In addition, this space probe did give us information along its way to the outer reaches of our solar system. It encountered an asteroid and also passed by Jupiter in 2007. Its closest approach was some 1.4 million miles away from Jupiter. Flying by Jupiter also allowed for a gravity assist that increased the speed of the space probe. New Horizons sent us information about Jupiter’s atmosphere, moons, and magnetosphere.
After passing by Jupiter, the space probe spent the majority of its time in hibernation to preserve the systems for when it would reach Pluto. At the end of 2014, scientists brought New Horizons back online in preparation for the Pluto flyby. At the start of 2015, the spacecraft began to approach Pluto. On July 14 of 2015, New Horizons flew 7,800 miles above Pluto’s surface, which made it the first spacecraft to ever explore Pluto, a dwarf planet whose 4.7 billion miles from Earth make explorations difficult. New Horizons sent data back to Earth for a few months, and on October 25, we received the last data from the Pluto flyby mission.
As a secondary mission, New Horizons is going to study Kuiper belt objects in the following decade. It is now on course for a flyby of one specific object and is expected to arrive on the first day of the year 2019. At this point, New Horizons will have reached an astonishing 43.4 AU from the Sun, where 1 AU is the distance from the Earth to the Sun. So far, the New Horizons spacecraft has cost around $700 million over a fifteen-year period. This spacecraft is letting us travel farther than ever before, and it is key to learning about objects about which we have no easy way of getting information. The kind of data that we receive from New Horizons is helping us discover more about the outer reaches of our solar system, something that is a notable advancement in the field of astronomy. |
- The Class Hierarchy
- Creating a New Class
- Declaration and Instantiation
- Constructors and Destructors
- Garbage Collection
- Object Operators
- Adding and Overriding Methods
- Calling the Overridden Method
- Access Scope: Public, Private, Protected
- Setting Properties with Methods
- Default and Optional Parameters
- Declaring Variables Static and Const
- Revisiting the StringParser Module
- Example: Creating a Properties Class
- Data-Oriented Classes and Visual Basic Data Types
- Advanced Techniques
When you subclass a class and the subclass overloads a method, the superclass doesn't know about the new overload. So if you instantiate the subclass but assign it to a variable declared as the superclass, the overloaded method can't be reached through that variable (this contrasts with how overridden methods are handled, because you can call an overridden method in the same way and still get the subclass implementation of it).
I'll provide an example of one case that really confused me for a few days until I realized what I was doing wrong.
An overloaded method is one that takes different arguments (but has the same name). An overridden method has the same signature (takes the same arguments), but is implemented differently. This applies to inheritance between a class and a subclass: an overloaded method is overloaded in the same class, and an overridden method is overridden in a subclass.
Start with two classes, Parent and Child. Set the parent of the Child class to Parent. For both classes, implement a method called WhoAmI and one called getName() as follows:
Sub WhoAmI(aParent as Parent)
  MsgBox "Parent.WhoAmI(aParent as Parent) /" + _
    "Parameter: " + aParent.getName()
End Sub

Function getName() as String
  Return "Parent"
End Function
Sub WhoAmI(aChild as Child)
  MsgBox "Child.WhoAmI(aChild as Child) /" + _
    "Parameter: " + aChild.getName()
End Sub

Function getName() as String
  Return "Child"
End Function
The MsgBox subroutine will cause a small Window to appear, displaying the string passed to it.
Next, take a look at the following code. The text that would appear in the MsgBox is listed in a comment next to the method.
Dim p as Parent
Dim c as Child
Dim o as Parent

p = New Parent
c = New Child
o = New Child

p.WhoAmI p // "Parent.WhoAmI : Parent.getName"
p.WhoAmI c // "Parent.WhoAmI : Child.getName"
c.WhoAmI p // "Parent.WhoAmI : Parent.getName"
c.WhoAmI c // "Child.WhoAmI : Child.getName"
o.WhoAmI p // "Parent.WhoAmI : Parent.getName"
o.WhoAmI c // "Parent.WhoAmI : Child.getName"
o.WhoAmI o // "Parent.WhoAmI : Child.getName"
Child(o).WhoAmI p // "Parent.WhoAmI : Parent.getName"
Child(o).WhoAmI c // "Child.WhoAmI : Child.getName"
Child(o).WhoAmI o // "Parent.WhoAmI : Child.getName"
The left side of the response shows which class implemented the WhoAmI method that was just called, and the right side of the response shows which class implemented the getName() that was called on the object passed as an argument to the WhoAmI method.
The first two examples show the Parent object p calling the WhoAmI method. In the first example, a Parent object is passed as the argument for WhoAmI and in the second example, a Child instance is passed as the argument. Both examples act as you would expect them to. The p object always uses the methods implemented in the Parent class and the c object uses a method implemented in the Child class.
The next group of examples are the same as the first two, with one important difference: A Child instance is calling WhoAmI instead of a Parent instance. There's something strange about the results, however:
c.WhoAmI p // "Parent.WhoAmI : Parent.getName"
c.WhoAmI c // "Child.WhoAmI : Child.getName"
If c is a Child object, why do the results show that c called the Parent class implementation of the WhoAmI method? Why does it call the Child class implementation of WhoAmI in the second example?
The answer is that Child.WhoAmI() does not override Parent.WhoAmI(). It overloads it. Remember that to override a method, the signatures have to be the same. When Child implements WhoAmI, the parameter is defined as a Child object, but when Parent implements it, the parameter is defined as a Parent. REALbasic decides which version of the overloaded method to call based on the signature. What is tricky here is that it is easy to forget that WhoAmI is overloaded only in the Child class, not in the Parent class, so when a Parent object is passed as an argument, REALbasic uses the Parent class implementation of WhoAmI. However, the c object has access to the methods of the Parent class because it is a subclass of the Parent class.
The rest of the examples work on the object o, which is in a unique situation. It was declared to be a variable in the Parent class, but when it was instantiated, it was instantiated as a Child and this makes for some interesting results. The first three sets of responses all show that o is calling the Parent class version of the WhoAmI method:
o.WhoAmI p // "Parent.WhoAmI : Parent.getName"
o.WhoAmI c // "Parent.WhoAmI : Child.getName"
o.WhoAmI o // "Parent.WhoAmI : Child.getName"
The getName() method is where things get interesting. As expected, when the argument is an instance of the Parent class, the Parent.getName() method is executed; likewise, if it is a member of the Child class. But when you pass o as the argument to WhoAmI, o calls the Child implementation of getName(), rather than the Parent implementation.
This seems odd because when o calls WhoAmI, it calls the Parent class method, but when it calls getName(), it calls the Child class method. Because o was declared as a member of the Parent class, it is cast as a member of the Parent class, even though you used the Child class Constructor. In fact, when you cast o as a Child, you get an entirely different set of results:
Child(o).WhoAmI p // "Parent.WhoAmI : Parent.getName"
Child(o).WhoAmI c // "Child.WhoAmI : Child.getName"
Child(o).WhoAmI o // "Parent.WhoAmI : Child.getName"
Child(o).WhoAmI child(o) // "Child.WhoAmI : Child.getName"
In the first of this quartet, you get the expected answer because p is a Parent instance, and that means that regardless of whether o is a Child or a Parent, REALbasic will call the Parent class implementation of WhoAmI. In the second example of this group, after o is cast as a Child, it behaves as expected, calling the Child class version of WhoAmI.
However, when o is passed as the argument to WhoAmI, it doesn't matter that o has been cast as a Child, it again calls the Parent.WhoAmI method. On the other hand, if you also cast the o object passed in the argument as a Child as well, things again are as they should be. So the question is why does this happen:
Child(o).WhoAmI o // "Parent.WhoAmI : Child.getName"
The implementation of WhoAmI that is used is determined by the type of the object passed in the parameter. The implementation of getName() is not determined by any parameter because it doesn't require a parameter. You can also do the following and get the same result:
Child(o).WhoAmI parent(o) // "Parent.WhoAmI : Child.getName"
The getName method is overridden, not overloaded. Because o was instantiated as a Child object, it will always call the Child implementation of getName. WhoAmI is overloaded, not overridden, and it is overloaded only in the Child class and not the Parent class. |
The search for extraterrestrial life is undoubtedly one of the most profound scientific undertakings of our time. If biological life is found on another world around another star, we will finally know that life outside of our Solar System is possible. Finding traces of extraterrestrial biology on distant worlds is extremely difficult, but astronomers are developing a new technique that will use the powerful telescopes of the next generation and allow them to accurately measure the substances in the atmospheres of exoplanets. The hope, of course, is to find evidence of extraterrestrial life.
In recent years, the search for exoplanets has attracted a lot of attention, thanks in part to the discovery of seven small alien worlds orbiting a tiny star, the red dwarf TRAPPIST-1. Three of these extrasolar planets orbit in the star's potentially habitable zone, that is, the region around a star that is neither too hot nor too cold for water to exist in liquid form.
Everywhere on Earth where there is liquid water there is life, so if at least one of the potentially habitable TRAPPIST-1 worlds has water, it may also have life.
But the life-bearing potential of TRAPPIST-1 remains pure speculation. Even though this amazing star system is located in our galactic back yard, we have no idea whether there is water in the atmosphere of even one of these worlds. We don't even know if they have atmospheres at all. All we know is how long the exoplanets take to orbit and what their physical dimensions are.
"The first detection of a biosignature on another world could be one of the most significant scientific discoveries of our lives," says Garrett Rouen, an astronomer at the California Institute of Technology. "It will be a major step toward answering one of humanity's biggest questions: are we alone?"
Rouen works in Caltech's Exoplanet Technology Laboratory, the ET Lab, which is developing new strategies for searching exoplanets for biosignatures such as oxygen and methane molecules. Molecules like these normally react with other chemicals and decay quickly in a planetary atmosphere, so if astronomers can find the spectroscopic "fingerprint" of methane in the atmosphere of an extrasolar planet, it could mean that alien biological processes are responsible for producing it.
Unfortunately, we can't just take the world's most powerful telescope, point it at TRAPPIST-1, and see whether the atmospheres of those planets contain methane.
"To detect molecules in the atmospheres of exoplanets, astronomers must be able to analyze the light of the planet without being completely blinded by the light of the neighboring star," says Rouen.
Fortunately, red dwarf stars (or M dwarfs) like TRAPPIST-1 are cool and dim, so the glare problem is less severe. And since these are the most common type of star in our galaxy, scientists are turning to red dwarfs first in their search for discoveries.
Astronomers use a tool known as a coronagraph to isolate the starlight reflected from an exoplanet. Once the coronagraph catches the planet's dim light, a low-resolution spectrometer analyzes the chemical "fingerprints" of that world. Unfortunately, this technology is limited to studying only the largest exoplanets, orbiting far from their stars.
The ET Lab's new method makes a coronagraph, optical fibers, and a high-resolution spectrometer work together, suppressing the glare of the star and catching the detailed chemical fingerprint of any world in orbit around it. The method is known as high-dispersion coronagraphy (HDC) and could transform our understanding of the diversity of exoplanet atmospheres. Work on this subject was published in The Astronomical Journal.
"What makes the HDC method powerful is that it is possible to identify the spectral signature of the planet even when it is buried in the bright light of the star," says Rouen. "This allows us to detect molecules in the atmospheres of planets that are extremely difficult to image."
"The trick is to split the light into many channels and create what astronomers call a high-resolution spectrum, which helps to distinguish the signature of the planet from the starlight."
All that is needed now is a powerful telescope to connect to the system.
At the end of the 2020s the Thirty Meter Telescope will be the world's largest ground-based optical telescope, and when it is used in conjunction with HDC, astronomers will be able to explore the atmospheres of potentially habitable worlds orbiting red dwarf stars.
"Discovering oxygen and methane in the atmospheres of Earth-like planets orbiting M dwarfs, similar to Proxima Centauri b, with the power of the Thirty Meter Telescope would be extremely exciting," says Rouen. "We still have much to learn about the potential habitability of these planets, but it may well be that some of these planets would be like Earth."
It is estimated that there are 58 billion red dwarfs in our galaxy, and we know that most of them will have planets, so when the Thirty Meter Telescope comes into operation, astronomers will be able to find a great deal that was previously out of reach.
In 2016, astronomers discovered an exoplanet the size of Earth circling the M dwarf nearest to Earth, Proxima Centauri. Proxima b also orbits within the potentially habitable zone of its star, making it a prime target in the search for alien life. At a distance of just four light-years, Proxima b is practically teasing us with the opportunity to visit it sometime in the future. |
An experiment on Newton's third law
To find the Newton's third law partner force, we can simply reverse the order of the objects in the force description. Pairs built this way all form Newton's third law force pairs; for example, the partner force for the downward force of gravity on the box exerted by Earth would be the upward force of gravity on Earth exerted by the box.
Newton’s third law e-lab balloon experiment during the actual e-lab, students will conduct an experiment following the directions of the lab director please have all materials ready to distribute to each group prior to the beginning of the e-lab this will provide more time for conducting the. Our teacher gave us a 0 for this assignment and we were almost suspended, enjoy :. Objects in motion tend to stay in motion this is the simplest of newton's laws, and is usually referred to as inertia inertia means that once an object starts in a certain direction, it requires an equal or greater force to stop it from moving. The electric force between a current charge and a discontinuity charge obeys newton's third law as in coulomb’s law the forces exerted on current charges allow the charges to produce either a non-zero or zero net force on the containing infinitesimal current element.
Forces science experiment: air powered rocket – newton’s third law this is newton’s third law – for every action there is an equal and opposite reaction so the force of the air coming out of the balloon is equal to the force pushing the balloon rocket along the string • experiment description air a13– one copy per student. The two forces in newton's third law are of the same type (eg, if the road exerts a forward frictional force on an accelerating car's tires, then it is also a frictional force that newton's third law predicts for the tires pushing backward on the road) newton's laws were verified by experiment and observation for over 200. Newton’s third law of motion stated simply, newton’s third law of motion says that ‘for every action, there is an equal and opposite reaction’ use a pair of roller skates and a ball to show how this works. Introduction: isaac newton’s third law of physics states that for every action there is an equal and opposite reaction this principle describes interactions between bodies, and an experiment has been conducted to study these relations. Newton’s third law states that for every action, there is an equal and opposite reaction if you push against a wall, the wall pushes back against you with the same amount of force when you’re sitting on a skateboard and you throw a ball, you move in the opposite direction.
Experiment 2: newton’s third law and force pairs data tables and post-lab assessment table 3: forces on stationary spring force on stationary 10 n spring scale (n) 560 n force on stationary 5n spring scale (n) 55 5n table 4: spring scale force data suspension set up force (n) on 10 n spring scale force (n) on 5 n spring scale 05 kg mass on. Researchers break newton’s third law — with lasers researchers break newton’s third law — with lasers by graham templeton on october 17, 2013 at 11:10 am in the experiment, the team. I) coversheet/worksheet/report as a single pdf file ii) aim – a brief explanation of the goals of experiment 1 iii) theory – this should be a concise ½ page description beginning with a free body diagram and newtons 2nd law to describe the forces on the object.
Newton's third law of motion builds further on the first and second laws of motion. The third law states that for every action, there is an equal and opposite reaction, and this can be observed both in objects at rest and in those that are accelerating.
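As a small illustration of the skateboard-and-ball example above, here is a minimal momentum-conservation sketch; the masses and throw speed are made-up values, not measurements from any of the experiments described here:

person_mass = 60.0   # kg, person plus skateboard (assumed)
ball_mass = 4.0      # kg, medicine ball (assumed)
throw_speed = 3.0    # m/s, ball speed relative to the ground after the throw

ball_momentum = ball_mass * throw_speed
person_speed = -ball_momentum / person_mass   # recoil in the opposite direction
print(f"Ball momentum: {ball_momentum:.1f} kg*m/s")
print(f"Person recoil speed: {person_speed:.2f} m/s")
# during the throw the forces on ball and person are equal and opposite,
# so the total momentum of the pair stays zero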
- Class content: Newton's laws as foothold principles; prerequisites: Newton's laws, reciprocity, Newton's 3rd law. A force probe basically has a spring inside with some electronics to detect how much it is bent; an experiment to test the third law would use two force probes pulling against each other.
- Experiment 2: Newton's third law and force pairs. The materials needed for this experiment are a 5 N spring scale, a 10 N spring scale, two 30 cm pieces of string, a 0.5 kg mass, and a pulley. After calibrating the spring scales, you hook the 5 N spring scale to the 10 N spring scale.
Newton's third law happens in everyday life; for example, when fighting over the remote control with your brother, you each apply an equal and opposite force. Overall this lab was fun and a learning experience. The activity provides an engaging experiment which illustrates the essence of Newton's third law: for every action there is an equal and opposite reaction. However, it is important to understand that the action and the reaction act on different objects. Unlike Newton's first two laws, which deal with the effects of forces on a single object, the third law describes the interaction between two objects; in this experiment, you will examine this interaction in a variety of situations so that you might develop a better understanding of the forces involved when two objects interact. If you do an experiment and it shows Newton's third law to be true, then repeating the same experiment with different-sized masses will still show that the third law holds; similarly, an experiment showing the conservation of momentum will not give a contradictory result if you use different-sized masses. |
- Scientists have managed to calculate the mass of our galaxy more accurately than ever before, according to a report published by the European Space Agency.
- Until relatively recently, it was thought that our neighboring galaxy, Andromeda, was far larger than the Milky Way.
- But the Milky Way is, in fact, considerably larger than Andromeda.
New scientific findings are much more intriguing when they turn widely accepted facts on their heads.
Until recently, it was thought that our neighboring galaxy, Andromeda, was far larger than the Milky Way — more than twice the size of our home galaxy. Later research showed that Andromeda and the Milky Way were actually around the same size. But now new research has surfaced showing that the Milky Way is much larger than previously assumed, and in fact tops Andromeda.
According to a report published by the European Space Agency (ESA), scientists have calculated the mass of our galaxy with new precision.
By combining data from the ESA's Gaia space probe and data from the Hubble space telescope — which is jointly operated by the ESA and NASA — astronomers were able to determine that the mass of the Milky Way is 1.5 trillion solar masses. The mass of Andromeda, on the other hand, is roughly 800 billion solar masses, according to current estimates.
The reason that previous calculations weren't quite accurate is due to "dark matter", a theoretical type of matter that could account for a sizeable quantity of all matter in the universe.
Dark matter can't be detected or measured in the typical ways.
"That's what leads to the present uncertainty in the Milky Way's mass," Laura Watkins of the European Southern Observatory explained in the new report, "you can't measure accurately what you can't see."
To estimate the mass of dark matter in the Milky Way, astronomers measured the velocity of globular clusters — dense star clusters that orbit the spiral disc of the galaxy.
"The more massive a galaxy, the faster its clusters move under the pull of its gravity," explained astrophysicist N. Wyn Evans from the University of Cambridge.
The orbital velocity of the globular clusters around the Milky Way was far greater than what the visible matter in the galaxy would suggest. So the scientists were able to deduce from this anomaly that dark matter was responsible for the additional gravitation.
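As a rough, order-of-magnitude illustration of that logic (not the actual analysis, which models the full 3D velocities of dozens of clusters), here is a minimal sketch that estimates the mass enclosed within a cluster's orbit from its circular speed using M ≈ v²r/G; the speed and radius below are placeholder values, not the measured ones:

G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
SOLAR_MASS = 1.989e30   # kg
LIGHT_YEAR = 9.461e15   # m

orbital_speed = 200e3                    # m/s, placeholder cluster speed
orbital_radius = 100_000 * LIGHT_YEAR    # placeholder orbital radius

enclosed_mass = orbital_speed**2 * orbital_radius / G
print(f"Enclosed mass: {enclosed_mass / SOLAR_MASS:.2e} solar masses")
# a higher speed at a given radius implies more enclosed (including dark) mass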
Using Gaia — a space probe designed to produce a precise 3D map of the Milky Way — the scientists were able for the first time to evaluate three-dimensional measurements of cluster velocities.
"Most previous measurements have found the speed at which a cluster is approaching or receding from Earth, that is the velocity along our line of sight," said Wyn Evans. "However, we were able to also measure the sideways motion of the clusters, from which the total velocity, and consequently the galactic mass, can be calculated."
Observations from Hubble allowed for distant globular clusters — some as far as 130 000 light-years from Earth — to be added to the study.
"We were lucky to have such a great combination of data," explained Roeland P. van der Marel of NASA's Hubble Institute. "By combining Gaia's measurements of 34 globular clusters with measurements of 12 more distant clusters from Hubble, we could pin down the Milky Way's mass in a way that would be impossible without these two space telescopes."
With a mass of 1.5 trillion solar masses, the Milky Way could even be the largest galaxy in the vicinity.
This also means a potential future collision and fusion of the two galaxies would look quite different than what researchers had previously pictured. Until now, it was thought that Andromeda would devour the Milky Way but, according to these recent findings, the opposite may be true. |
Virtual work arises in the application of the principle of least action to the study of forces and movement of a mechanical system. The work of a force acting on a particle as it moves along a displacement is different for different displacements. Among all the possible displacements that a particle may follow, called virtual displacements, one will minimize the action. This displacement is therefore the displacement followed by the particle according to the principle of least action. The work of a force on a particle along a virtual displacement is known as the virtual work.
Historically, virtual work and the associated calculus of variations were formulated to analyze systems of rigid bodies, but they have also been developed for the study of the mechanics of deformable bodies.
- 1 History
- 2 Overview
- 3 Introduction
- 4 Static equilibrium
- 5 Law of the lever
- 6 Gear train
- 7 Dynamic equilibrium for rigid bodies
- 8 Virtual work principle for a deformable body
- 9 Alternative forms
- 10 See also
- 11 External links
- 12 References
- 13 Bibliography
The principle of virtual work had always been used in some form since antiquity in the study of statics. It was used by the Greeks, medieval Arabs and Latins, and Renaissance Italians as "the law of the lever". The idea of virtual work was invoked by many notable physicists of the 17th century, such as Galileo, Descartes, Torricelli, Wallis, and Huygens, in varying degrees of generality, when solving problems in statics. Working with Leibnizian concepts, Johann Bernoulli systematized the virtual work principle and made explicit the concept of infinitesimal displacement. He was able to solve problems for both rigid bodies as well as fluids. Bernoulli's version of virtual work law appeared in his letter to Pierre Varignon in 1715, which was later published in Varignon's second volume of Nouvelle mécanique ou Statique in 1725. This formulation of the principle is today known as the principle of virtual velocities and is commonly considered as the prototype of the contemporary virtual work principles. In 1743 D'Alembert published his Traité de Dynamique where he applied the principle of virtual work, based on Bernoulli's work, to solve various problems in dynamics. His idea was to convert a dynamical problem into a static problem by introducing inertial force. In 1768, Lagrange presented the virtual work principle in a more efficient form by introducing generalized coordinates and presented it as an alternative principle of mechanics by which all problems of equilibrium could be solved. A systematic exposition of Lagrange's program of applying this approach to all of mechanics, both static and dynamic, essentially D'Alembert's principle, was given in his Mécanique Analytique of 1788. Although Lagrange had presented his version of least action principle prior to this work, he recognized the virtual work principle to be more fundamental mainly because it could be assumed alone as the foundation for all mechanics, unlike the modern understanding that least action does not account for non-conservative forces.
If a force acts on a particle as it moves from point A to point B, then, for each possible trajectory that the particle may take, it is possible to compute the total work done by the force along the path. The principle of virtual work, which is the form of the principle of least action applied to these systems, states that the path actually followed by the particle is the one for which the difference between the work along this path and other nearby paths is zero (to first order). The formal procedure for computing the difference of functions evaluated on nearby paths is a generalization of the derivative known from differential calculus, and is termed the calculus of variations.
Consider a point particle that moves along a path described by a function r(t) from point A, where r(t0) = A, to point B, where r(t1) = B. It is possible that the particle moves from A to B along a nearby path described by r(t) + δr(t), where δr(t) is called the variation of r(t). The variation satisfies the requirement δr(t0) = δr(t1) = 0. The scalar components of the variation, δr1(t), δr2(t) and δr3(t), are called virtual displacements. This can be generalized to an arbitrary mechanical system defined by the generalized coordinates qi(t), i = 1, 2, ..., n. In which case, the variation of the trajectory qi(t) is defined by the virtual displacements δqi, i = 1, 2, ..., n.
Virtual work is the total work done by the applied forces and the inertial forces of a mechanical system as it moves through a set of virtual displacements. When considering forces applied to a body in static equilibrium, the principle of least action requires the virtual work of these forces to be zero.
Consider a particle P that moves from a point A to a point B along a trajectory r(t), while a force F(r(t)) is applied to it. The work done by the force F is given by the integral

W = ∫ (from A to B) F · dr = ∫ (from t0 to t1) F · v dt,

where dr is the differential element along the curve that is the trajectory of P, and v = dr/dt is its velocity. It is important to notice that the value of the work W depends on the trajectory r(t).
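A small numerical sketch of this line integral for a made-up force field and a straight-line path; none of these values come from the text:

import numpy as np

A = np.array([0.0, 0.0, 0.0])                # start point of the trajectory
B = np.array([1.0, 2.0, 0.0])                # end point
t = np.linspace(0.0, 1.0, 1001)              # parameter along the path
path = A + np.outer(t, B - A)                # r(t), a straight line from A to B
F = np.column_stack([path[:, 1], path[:, 0]**2, np.zeros_like(t)])   # F = (y, x^2, 0)

velocity = B - A                             # dr/dt is constant on a straight path
integrand = F @ velocity                     # F . v at each sample
W = np.trapz(integrand, t)                   # W = integral of F . v dt
print(f"Work along this path: {W:.4f}")      # about 1.6667 (exact value 5/3)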
Now consider particle P that moves from point A to point B again, but this time it moves along the nearby trajectory that differs from r(t) by the variation δr(t) = εh(t), where ε is a scaling constant that can be made as small as desired and h(t) is an arbitrary function that satisfies h(t0) = h(t1) = 0. Suppose the force F(r(t) + εh(t)) is the same as F(r(t)). The work done by the force is given by the integral

W′ = ∫ (from t0 to t1) F · (v + ε dh/dt) dt.

The variation of the work δW associated with this nearby path, known as the virtual work, can be computed to be

δW = W′ − W = ∫ (from t0 to t1) F · ε (dh/dt) dt.
If there are no constraints on the motion of P, then 6 parameters are needed to completely describe P's position at any time t. If there are k (k ≤ 6) constraint forces, then n = (6 - k) parameters are needed. Hence, we can define n generalized coordinates qi(t) (i = 1, 2, ..., n), and express r(t) and δr = εh(t) in terms of the generalized coordinates. That is,

r(t) = r(q1(t), q2(t), ..., qn(t)),   δr = (∂r/∂q1) δq1 + (∂r/∂q2) δq2 + ... + (∂r/∂qn) δqn.

Then, the derivative of the variation δr = εh(t) is given by

d(δr)/dt = ε dh(t)/dt,

then we have

δW = Q1 δq1 + Q2 δq2 + ... + Qn δqn,   where Qi = F · (∂r/∂qi).

The requirement that the virtual work be zero for an arbitrary variation δr(t) = εh(t) is equivalent to the set of requirements

Qi = 0,   i = 1, 2, ..., n.

The terms Qi are called the generalized forces associated with the virtual displacement δr.
Static equilibrium is a state in which the net force and net torque acting upon the system are zero. In other words, both linear momentum and angular momentum of the system are conserved. The principle of virtual work states that the virtual work of the applied forces is zero for all virtual movements of the system from static equilibrium. This principle can be generalized such that three dimensional rotations are included: the virtual work of the applied forces and applied moments is zero for all virtual movements of the system from static equilibrium. That is

δW = F1 · δr1 + F2 · δr2 + ... + Fm · δrm + M1 · δφ1 + M2 · δφ2 + ... + Mn · δφn = 0,

where Fi , i = 1, 2, ..., m and Mj , j = 1, 2, ..., n are the applied forces and applied moments, respectively, and δri , i = 1, 2, ..., m and δφj , j = 1, 2, ..., n are the virtual displacements and virtual rotations, respectively.
Suppose the system consists of N particles, and it has f (f ≤ 6N) degrees of freedom. It is sufficient to use only f coordinates to give a complete description of the motion of the system, so f generalized coordinates qk , k = 1, 2, ..., f are defined such that the virtual movements can be expressed in terms of these generalized coordinates. That is,

δri = (∂ri/∂q1) δq1 + ... + (∂ri/∂qf) δqf,   i = 1, 2, ..., m,
δφj = (∂φj/∂q1) δq1 + ... + (∂φj/∂qf) δqf,   j = 1, 2, ..., n.

The virtual work can then be rewritten as

δW = Q1 δq1 + Q2 δq2 + ... + Qf δqf,

where the generalized forces Qk are defined as

Qk = Σi Fi · (∂ri/∂qk) + Σj Mj · (∂φj/∂qk),   k = 1, 2, ..., f.

The principle of virtual work requires that the virtual work done on a system by the forces Fi and moments Mj vanishes if it is in equilibrium. Therefore, the generalized forces Qk are zero, that is

Qk = 0,   k = 1, 2, ..., f.
An important benefit of the principle of virtual work is that only forces that do work as the system moves through a virtual displacement are needed to determine the mechanics of the system. There are many forces in a mechanical system that do no work during a virtual displacement, which means that they need not be considered in this analysis. The two important examples are (i) the internal forces in a rigid body, and (ii) the constraint forces at an ideal joint.
Lanczos presents this as the postulate: "The virtual work of the forces of reaction is always zero for any virtual displacement which is in harmony with the given kinematic constraints." The argument is as follows. The principle of virtual work states that in equilibrium the virtual work of the forces applied to a system is zero. Newton's laws state that at equilibrium the applied forces are equal and opposite to the reaction, or constraint forces. This means the virtual work of the constraint forces must be zero as well.
Law of the lever
A lever is modeled as a rigid bar connected to a ground frame by a hinged joint called a fulcrum. The lever is operated by applying an input force FA at a point A located by the coordinate vector rA on the bar. The lever then exerts an output force FB at the point B located by rB. The rotation of the lever about the fulcrum P is defined by the rotation angle θ.
Let the coordinate vector of the point P that defines the fulcrum be rP, and introduce the lengths

a = |rA − rP|,   b = |rB − rP|,

which are the distances from the fulcrum to the input point A and to the output point B, respectively.
Now introduce the unit vectors eA and eB from the fulcrum to the point A and B, so

rA − rP = a eA,   rB − rP = b eB.

This notation allows us to define the velocity of the points A and B as

vA = a (dθ/dt) eA⊥,   vB = b (dθ/dt) eB⊥,

where eA⊥ and eB⊥ are unit vectors perpendicular to eA and eB, respectively.
The angle θ is the generalized coordinate that defines the configuration of the lever, therefore using the formula above for forces applied to a one degree-of-freedom mechanism, the generalized force is given by

Q = FA · (∂vA/∂(dθ/dt)) − FB · (∂vB/∂(dθ/dt)) = a (FA · eA⊥) − b (FB · eB⊥).

Now, denote as FA and FB the components of the forces that are perpendicular to the radial segments PA and PB. These forces are given by

FA = FA · eA⊥,   FB = FB · eB⊥.

This notation and the principle of virtual work yield the formula for the generalized force as

Q = a FA − b FB.

The ratio of the output force FB to the input force FA is the mechanical advantage of the lever, and is obtained from the principle of virtual work as

MA = FB / FA = a / b.
This equation shows that if the distance a from the fulcrum to the point A where the input force is applied is greater than the distance b from fulcrum to the point B where the output force is applied, then the lever amplifies the input force. If the opposite is true that the distance from the fulcrum to the input point A is less than from the fulcrum to the output point B, then the lever reduces the magnitude of the input force.
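A minimal numerical sketch of the law of the lever; the lengths and input force below are arbitrary illustration values:

a = 0.8     # m, distance from the fulcrum to the input point A (assumed)
b = 0.2     # m, distance from the fulcrum to the output point B (assumed)
F_A = 50.0  # N, input force perpendicular to the bar (assumed)

mechanical_advantage = a / b          # MA = F_B / F_A = a / b
F_B = mechanical_advantage * F_A
print(f"Mechanical advantage: {mechanical_advantage:.1f}")   # 4.0
print(f"Output force: {F_B:.1f} N")                          # 200.0 N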
A gear train is formed by mounting gears on a frame so that the teeth of the gears engage. Gear teeth are designed to ensure the pitch circles of engaging gears roll on each other without slipping; this provides a smooth transmission of rotation from one gear to the next. For this analysis, we consider a gear train that has one degree-of-freedom, which means the angular rotation of all the gears in the gear train is defined by the angle of the input gear.
The size of the gears and the sequence in which they engage define the ratio of the angular velocity ωA of the input gear to the angular velocity ωB of the output gear, known as the speed ratio, or gear ratio, of the gear train. Let R be the speed ratio, then

ωA / ωB = R.
The input torque TA acting on the input gear GA is transformed by the gear train into the output torque TB exerted by the output gear GB. If we assume that the gears are rigid and that there are no losses in the engagement of the gear teeth, then the principle of virtual work can be used to analyze the static equilibrium of the gear train.
Let the angle θ of the input gear be the generalized coordinate of the gear train, then the speed ratio R of the gear train defines the angular velocity of the output gear in terms of the input gear, that is

ωB = ωA / R.

The formula above for the principle of virtual work with applied torques yields the generalized force

Q = TA − TB / R = 0.

The mechanical advantage of the gear train is the ratio of the output torque TB to the input torque TA, and the above equation yields

MA = TB / TA = R.
Thus, the speed ratio of a gear train also defines its mechanical advantage. This shows that if the input gear rotates faster than the output gear, then the gear train amplifies the input torque. And, if the input gear rotates slower than the output gear, then the gear train reduces the input torque.
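A minimal numerical sketch of the gear-train relations above; the ratio, torque and speed are made-up values:

R = 3.0          # speed ratio: input gear turns 3 times per output turn (assumed)
T_A = 10.0       # N*m, input torque (assumed)
omega_A = 30.0   # rad/s, input angular velocity (assumed)

omega_B = omega_A / R      # the output gear turns more slowly
T_B = R * T_A              # mechanical advantage equals the speed ratio
print(f"Output speed: {omega_B:.1f} rad/s, output torque: {T_B:.1f} N*m")
# in the ideal (lossless) case power is conserved: T_A*omega_A == T_B*omega_B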
Dynamic equilibrium for rigid bodies
If the principle of virtual work for applied forces is used on individual particles of a rigid body, the principle can be generalized for a rigid body: When a rigid body that is in equilibrium is subject to virtual compatible displacements, the total virtual work of all external forces is zero; and conversely, if the total virtual work of all external forces acting on a rigid body is zero then the body is in equilibrium.
If a system is not in static equilibrium, D'Alembert showed that by introducing the acceleration terms of Newton's laws as inertia forces, this approach is generalized to define dynamic equilibrium. The result is D'Alembert's form of the principle of virtual work, which is used to derive the equations of motion for a mechanical system of rigid bodies.
The expression compatible displacements means that the particles remain in contact and displace together so that the work done by pairs of action/reaction inter-particle forces cancel out. Various forms of this principle have been credited to Johann (Jean) Bernoulli (1667–1748) and Daniel Bernoulli (1700–1782).
Generalized inertia forces
Let a mechanical system be constructed from n rigid bodies, Bi, i=1,...,n, and let the resultant of the applied forces on each body be the force-torque pairs, Fi and Ti, i=1,...,n. Notice that these applied forces do not include the reaction forces where the bodies are connected. Finally, assume that the velocity Vi and angular velocities ωi, i=,1...,n, for each rigid body, are defined by a single generalized coordinate q. Such a system of rigid bodies is said to have one degree of freedom.
Consider a single rigid body which moves under the action of a resultant force F and torque T, with one degree of freedom defined by the generalized coordinate q. Assume the reference point for the resultant force and torque is the center of mass of the body, then the generalized inertia force Q* associated with the generalized coordinate q is given by
This inertia force can be computed from the kinetic energy of the rigid body,

T = (1/2) M V · V + (1/2) ω · [I] ω,

where [I] is the inertia matrix of the body relative to its center of mass, by using the formula

Q* = −( d/dt (∂T/∂(dq/dt)) − ∂T/∂q ).

A system of n rigid bodies with m generalized coordinates has the kinetic energy

T = Σ (i = 1 to n) [ (1/2) Mi Vi · Vi + (1/2) ωi · [Ii] ωi ],

which can be used to calculate the m generalized inertia forces

Q*j = −( d/dt (∂T/∂(dqj/dt)) − ∂T/∂qj ),   j = 1, 2, ..., m.
D'Alembert's form of the principle of virtual work
D'Alembert's form of the principle of virtual work states that a system of rigid bodies is in dynamic equilibrium when the virtual work of the sum of the applied forces and the inertial forces is zero for any virtual displacement of the system. Thus, dynamic equilibrium of a system of n rigid bodies with m generalized coordinates requires that

δW = (Q1 + Q*1) δq1 + (Q2 + Q*2) δq2 + ... + (Qm + Q*m) δqm = 0,

for any set of virtual displacements δqj. This condition yields m equations,

Qj + Q*j = 0,   j = 1, 2, ..., m,

which can also be written as

d/dt (∂T/∂(dqj/dt)) − ∂T/∂qj = Qj,   j = 1, 2, ..., m.
The result is a set of m equations of motion that define the dynamics of the rigid body system.
If the generalized forces Qj are derivable from a potential energy V(q1, ..., qm), then these equations of motion take the form

d/dt (∂T/∂(dqj/dt)) − ∂T/∂qj = −∂V/∂qj,   j = 1, 2, ..., m.

In this case, introduce the Lagrangian, L = T − V, so these equations of motion become

d/dt (∂L/∂(dqj/dt)) − ∂L/∂qj = 0,   j = 1, 2, ..., m.
These are known as Lagrange's equations of motion.
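As an illustration of how these equations are used in practice, here is a minimal sympy sketch that derives the equation of motion of a simple pendulum from its Lagrangian; the pendulum itself is an assumed example, not a system discussed in the text:

import sympy as sp

t = sp.symbols('t')
m, l, g = sp.symbols('m l g', positive=True)
theta = sp.Function('theta')(t)              # generalized coordinate q = theta
theta_dot = theta.diff(t)

T = sp.Rational(1, 2) * m * l**2 * theta_dot**2    # kinetic energy
V = -m * g * l * sp.cos(theta)                     # potential energy (zero at the pivot)
L = T - V                                          # Lagrangian L = T - V

# Lagrange's equation: d/dt(dL/d(theta_dot)) - dL/d(theta) = 0
equation = sp.Eq(sp.diff(sp.diff(L, theta_dot), t) - sp.diff(L, theta), 0)
print(sp.simplify(equation))   # m*l**2*theta'' + m*g*l*sin(theta) = 0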
Virtual work principle for a deformable body
- The τ-State: This shows external surface forces T, body forces f, and internal stresses τ in equilibrium.
- The u*-State: This shows continuous displacements u* and consistent strains ε*.
The superscript * emphasizes that the two states are unrelated. Other than the above stated conditions, there is no need to specify if any of the states are real or virtual.
Imagine now that the forces and stresses in the τ-State undergo the displacements and deformations in the u*-State: We can compute the total virtual (imaginary) work done by all forces acting on the faces of all cubes in two different ways:
- First, by summing the work done by the forces which act on individual common faces (Fig.c): Since the material experiences compatible displacements, such work cancels out, leaving only the virtual work done by the surface forces T (which are equal to stresses on the cubes' faces, by equilibrium).
- Second, by computing the net work done by the stresses or forces which act on an individual cube, e.g. for the one-dimensional case in Fig.(c):
- where the equilibrium relation has been used and the second order term has been neglected.
- Integrating over the whole body gives:
- – Work done by the body forces f.
Equating the two results leads to the principle of virtual work for a deformable body:

Total external virtual work = ∫V τ ε* dV   (d)

where the total external virtual work is done by T and f. Thus,

∫S T · u* dS + ∫V f · u* dV = ∫V τ ε* dV   (e)
The right-hand-side of (d,e) is often called the internal virtual work. The principle of virtual work then states: External virtual work is equal to internal virtual work when equilibrated forces and stresses undergo unrelated but consistent displacements and strains. It includes the principle of virtual work for rigid bodies as a special case where the internal virtual work is zero.
Proof of equivalence between the principle of virtual work and the equilibrium equation
We start by looking at the total work done by surface traction on the body going through the specified deformation:
Applying divergence theorem to the right hand side yields:
Now switch to indicial notation for the ease of derivation.
To continue our derivation, we substitute in the equilibrium equation ∂σij/∂xj + fi = 0. Then
The first term on the right hand side needs to be broken into a symmetric part and a skew part as follows:
where ε*ij is the strain that is consistent with the specified displacement field. The 2nd to last equality comes from the fact that the stress matrix is symmetric and that the product of a skew matrix and a symmetric matrix is zero.
Now recap. We have shown through the above derivation that
Move the 2nd term on the right hand side of the equation to the left:
The physical interpretation of the above equation is that the external virtual work is equal to the internal virtual work when equilibrated forces and stresses undergo unrelated but consistent displacements and strains.
For practical applications:
- To impose equilibrium on real stresses and forces, we use consistent virtual displacements and strains in the virtual work equation.
- To impose consistency on real displacements and strains, we use equilibrated virtual stresses and forces in the virtual work equation.
These two general scenarios give rise to two often stated variational principles. They are valid irrespective of material behaviour.
Principle of virtual displacements
Depending on the purpose, we may specialize the virtual work equation. For example, to derive the principle of virtual displacements in variational notations for supported bodies, we specify:
- Virtual displacements and strains as variations of the real displacements and strains, written in variational notation (e.g. δu for the virtual displacements and δε for the virtual strains).
- Virtual displacements are zero on the part of the surface with prescribed displacements, so the work done by the reactions there is zero. Only the external surface forces acting on the remaining part of the surface do work.
The virtual work equation then becomes the principle of virtual displacements:
This relation is equivalent to the set of equilibrium equations written for a differential element in the deformable body, as well as to the stress boundary conditions on the loaded part of the surface. Conversely, (f) can be reached, albeit in a non-trivial manner, by starting from the differential equilibrium equations and the stress boundary conditions and proceeding in a manner similar to (a) and (b).
Since virtual displacements are automatically compatible when they are expressed in terms of continuous, single-valued functions, we often mention only the need for consistency between strains and displacements. The virtual work principle is also valid for large real displacements; however, Eq.(f) would then be written using more complex measures of stresses and strains.
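As a concrete illustration of the principle of virtual displacements, the following sketch checks the statement symbolically for an assumed one-dimensional example (a bar fixed at one end and loaded by an end traction). The bar, its loading, and the chosen virtual displacement field are assumptions added here, not taken from the text.

```python
# Minimal sketch (assumed example): principle of virtual displacements for a
# 1D bar of length Lb and area A, fixed at x = 0, loaded by an end traction T0,
# so the equilibrium stress is sigma = T0 and the body force is zero.
import sympy as sp

x, Lb, A, T0 = sp.symbols('x Lb A T0', positive=True)

# Any compatible virtual displacement that vanishes at the support will do,
# e.g. du(x) = c1*x + c2*x**2 with arbitrary coefficients.
c1, c2 = sp.symbols('c1 c2')
du = c1 * x + c2 * x**2
d_eps = sp.diff(du, x)          # virtual strain consistent with du

sigma = T0                      # equilibrium stress for an end load, no body force

external_vw = T0 * A * du.subs(x, Lb)                      # traction x area x du(Lb)
internal_vw = sp.integrate(sigma * d_eps * A, (x, 0, Lb))  # integral of sigma*d_eps dV

print(sp.simplify(external_vw - internal_vw))   # 0 for any c1, c2
```

The difference prints as zero for every choice of c1 and c2, illustrating that the equilibrated stress field does equal external and internal virtual work for all compatible virtual displacements.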
Principle of virtual forces
Here, we specify:
- Virtual forces and stresses as variations of the real forces and stresses.
- Virtual forces are zero on the part of the surface with prescribed forces, so only the surface (reaction) forces on the part where displacements are prescribed do work.
The virtual work equation becomes the principle of virtual forces:
This relation is equivalent to the set of strain-compatibility equations, as well as to the displacement boundary conditions on the part of the surface where displacements are prescribed. It is also known as the principle of complementary virtual work.
A specialization of the principle of virtual forces is the unit dummy force method, which is very useful for computing displacements in structural systems. According to D'Alembert's principle, inclusion of inertial forces as additional body forces will give the virtual work equation applicable to dynamical systems. More generalized principles can be derived by:
- allowing variations of all quantities.
- using Lagrange multipliers to impose boundary conditions and/or to relax the conditions specified in the two states.
These are described in some of the references.
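To illustrate the unit dummy force method mentioned above, here is a minimal sketch for an assumed textbook case, a cantilever carrying a point load at its free end. The beam, its loading, and the symbols Lb, P, EI are illustrative assumptions, not taken from the text.

```python
# Minimal sketch (assumed cantilever example): unit dummy force method for the
# tip deflection of a cantilever of length Lb and bending stiffness EI,
# carrying a point load P at the free end.
import sympy as sp

x, Lb, P, EI = sp.symbols('x Lb P EI', positive=True)

# Real bending moment at distance x from the free end, and the moment produced
# by a unit dummy load applied at the tip in the direction of the deflection.
M_real = -P * x
m_unit = -1 * x

# Complementary virtual work: deflection = integral of M*m/(EI) over the beam.
delta = sp.integrate(M_real * m_unit / EI, (x, 0, Lb))
print(sp.simplify(delta))   # P*Lb**3/(3*EI), the classical cantilever tip deflection
```

The printed result, P·Lb³/(3·EI), is the well-known cantilever tip deflection, recovered here purely from the virtual (complementary) work integral.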
Among the many energy principles in structural mechanics, the virtual work principle deserves a special place due to its generality, which leads to powerful applications in structural analysis, solid mechanics, and the finite element method in structural mechanics.
- C. Lánczos, The Variational Principles of Mechanics, 4th Ed., General Publishing Co., Canada, 1970
- Dym, C. L. and I. H. Shames, Solid Mechanics: A Variational Approach, McGraw-Hill, 1973.
- Capecchi, Danilo (2012). History of Virtual Work Laws. Milano: Springer Milan. doi:10.1007/978-88-470-2056-6. ISBN 978-88-470-2055-9.
- René Dugas, A History of Mechanics, Courier Corporation, 2012
- T. R. Kane and D. A. Levinson, Dynamics: theory and applications, McGraw-Hill, New York, 1985
- Usher, A. P. (1929). A History of Mechanical Inventions. Harvard University Press (reprinted by Dover Publications 1988). p. 94. ISBN 978-0-486-14359-0. OCLC 514178. Retrieved 7 April 2013.
- T. R. Kane and D. A. Levinson, Dynamics, Theory and Applications, McGraw-Hill, NY, 2005.
- Bathe, K.J. "Finite Element Procedures", Prentice Hall, 1996. ISBN 0-13-301458-4
- Charlton, T.M. Energy Principles in Theory of Structures, Oxford University Press, 1973. ISBN 0-19-714102-1
- Greenwood, Donald T. Classical Dynamics, Dover Publications Inc., 1977, ISBN 0-486-69690-1
- Hu, H. Variational Principles of Theory of Elasticity With Applications, Taylor & Francis, 1984. ISBN 0-677-31330-6
- Langhaar, H. L. Energy Methods in Applied Mechanics, Krieger, 1989.
- Reddy, J.N. Energy Principles and Variational Methods in Applied Mechanics, John Wiley, 2002. ISBN 0-471-17985-X
- Shames, I. H. and Dym, C. L. Energy and Finite Element Methods in Structural Mechanics, Taylor & Francis, 1995, ISBN 0-89116-942-3
- Tauchert, T.R. Energy Principles in Structural Mechanics, McGraw-Hill, 1974. ISBN 0-07-062925-0
- Washizu, K. Variational Methods in Elasticity and Plasticity, Pergamon Pr, 1982. ISBN 0-08-026723-8
- Wunderlich, W. Mechanics of Structures: Variational and Computational Methods, CRC, 2002. ISBN 0-8493-0700-7 |
Multiply by 12 #2
In this multiplication activity, students complete 40 problems, multiplying by 12. Worksheet includes links to answers and additional worksheets.
Divide by a Two-Digit Divisor
Through a chapter from a textbook, upper-elementary learners explore how to divide by two-digit divisors accurately. To begin, they use rounding and compatible numbers to estimate quotients when dividing by two-digit divisors. The...
5th - 6th Math CCSS: Adaptable
Smiling at Two Digit Multiplication!
How do I solve a two-digit multiplication problem? Your class tackles this question by walking through problem-solving methods. They first investigate and apply traditional multiplication methods, and then compare those with...
3rd - 4th Math CCSS: Adaptable
Add, Subtract and Multiply Fractions
Your future chefs will appreciate this comprehensive lesson where learners practice operations on fractions using pizza and soup analogies. Learners begin with a pizza analogy that requires them to multiply a whole number by a...
4th - 6th Math CCSS: Designed
Number & Operations: Multi-Digit Multiplication
A set of 14 lessons on multiplication would make a great learning experience for your fourth grade learners. After completing a pre-assessment, kids work through lessons that focus on multiples of 10, double-digit multiplication, and...
4th Math CCSS: Adaptable
Multiplying Fractions Times Whole Numbers and Fractions
Fifth graders make the connections using models between multiplying a fraction times a whole number and multiplying a fraction times a fraction. After completing three teacher-led activities, pupils access a website on their own to play...
5th - 6th Math CCSS: Designed |
Presentation on theme: "< < < > > > . There are two kinds of notation for graphs of inequalities: open circle or filled in circle notation and interval notation."— Presentation transcript:
< < < > > >
There are two kinds of notation for graphs of inequalities: open or filled-in circle notation and interval notation with brackets. You should be familiar with both. Both of these number lines show the inequality above; they are just using two different notations. Because the inequality is "greater than or equal to", the solution can equal the endpoint. That is why the circle is filled in. With interval notation, a square bracket means the solution can equal the endpoint. So a filled-in circle and a square end bracket mean the same thing, just in two different notations.
Let's look at the two different notations with a different inequality sign. Since this says "less than", we make the arrow go the other way. Since it doesn't say "or equal to", the solution cannot equal the endpoint. That is why the circle is not filled in. With interval notation, a rounded bracket means the solution cannot equal the endpoint. So an open (not filled-in) circle and a rounded end bracket mean the same thing, just in two different notations.
Compound Inequalities: Let's consider a "double inequality" (one having two inequality signs). I think of these as the "in-betweeners": x is in between the two numbers. This is an "and" inequality, which means both parts must be true. It says that x is greater than –2 and x is less than or equal to 3, which in interval notation is (–2, 3].
Compound Inequalities: Now let's look at another form of a "double inequality" (having two inequality signs). Instead of "and", these are "or" problems. One part or the other part must be true (but not necessarily both). Either x is less than –2 or x is greater than or equal to 3. In this case both parts cannot be true at the same time, since a number can't be less than –2 and also greater than or equal to 3.
Just as there are two different notations graphically, when you write your answers you can use inequality notation or interval notation. Again, you should be familiar with both: the inequality notation and the interval notation for the graphs shown above.
Let's have a look at interval notation. For interval notation you list the smallest value x can be, then a comma, and then the largest value x can be, so the solutions are everything that falls between the smallest and largest. If x is unbounded above, the interval is written with the infinity sign, e.g. [–1, ∞). The bracket before the –1 is square because the inequality is "greater than or equal to" (the solution can equal the endpoint). The bracket after the infinity sign is rounded because the interval goes on forever (it is unbounded), and since infinity is not a number, it cannot be an endpoint.
Let's try another one: (–2, 4]. The rounded bracket means the solution cannot equal –2; the square bracket means it can equal 4. The brackets used in the interval notation are the same ones used when you graph this. The interval means everything between –2 and 4, but not including –2.
Let's look at another one: here x is unbounded below, and the largest x can be is 4 but it can't equal 4, so the interval is (–∞, 4). Notice how the bracket notation for graphing corresponds to the brackets in interval notation. Remember that square means "or equal to" and round means up to but not equal. Next to the infinity sign the bracket will always be round, because the solution can't equal infinity (that is not a number).
Now let's look at an "or" compound inequality. There are two intervals to list when you write the answer in interval notation, for example (–∞, –2) and [3, ∞). When the solution consists of more than one interval, we join them with a union sign: (–∞, –2) ∪ [3, ∞).
Properties of Inequalities: essentially, all of the properties that you learned for solving linear equations apply to solving linear inequalities, with the exception that if you multiply or divide by a negative number you must reverse the inequality sign. So to solve an inequality, just do the same steps as with an equation to get the variable alone, but if in the process you multiply or divide by a negative, let it ring an alarm in your brain that says "Oh yeah, I have to turn the sign the other way to keep it true".
Example: - 4x Ring the alarm! We divided by a negative! We turned the sign!
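Because the slide's actual numbers did not survive extraction, the following short sketch uses an assumed inequality, -4x > 8, to show the sign flip and the resulting interval notation; the specific inequality and the use of sympy are assumptions added here.

```python
# Assumed example (the slide's numbers were lost): solving -4x > 8 requires
# dividing both sides by -4, which flips the sign, giving x < -2.
import sympy as sp

x = sp.symbols('x', real=True)
solution = sp.solveset(-4 * x > 8, x, domain=sp.S.Reals)
print(solution)   # Interval.open(-oo, -2), i.e. (-oo, -2) in interval notation
```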
Acknowledgement: I wish to thank Shawna Haider from Salt Lake Community College, Utah, USA for her hard work in creating this PowerPoint. Shawna has kindly given permission for this resource to be downloaded from www.mathxtc.com and for it to be modified to suit the Western Australian Mathematics Curriculum. Stephen Corcoran, Head of Mathematics, St Stephen’s School – Carramar
In geometry, the incenter of a triangle is a triangle center, a point defined for any triangle in a way that is independent of the triangle's placement or scale. The incenter may be equivalently defined as the point where the internal angle bisectors of the triangle cross, as the point equidistant from the triangle's sides, as the junction point of the medial axis and innermost point of the grassfire transform of the triangle, and as the center point of the inscribed circle of the triangle.
Together with the centroid, circumcenter, and orthocenter, it is one of the four triangle centers known to the ancient Greeks, and the only one of the four that does not in general lie on the Euler line. It is the first listed center, X(1), in Clark Kimberling's Encyclopedia of Triangle Centers, and the identity element of the multiplicative group of triangle centers.
For polygons with more than three sides, the incenter only exists for tangential polygons - those that have an incircle that is tangent to each side of the polygon. In this case the incenter is the center of this circle and is equally distant from all sides.
It is a theorem in Euclidean geometry that the three interior angle bisectors of a triangle meet in a single point. In Euclid's Elements, Proposition 4 of Book IV proves that this point is also the center of the inscribed circle of the triangle. The incircle itself may be constructed by dropping a perpendicular from the incenter to one of the sides of the triangle and drawing a circle with that segment as its radius.
The incenter lies at equal distances from the three line segments forming the sides of the triangle, and also from the three lines containing those segments. It is the only point equally distant from the line segments, but there are three more points equally distant from the lines, the excenters, which form the centers of the excircles of the given triangle. The incenter and excenters together form an orthocentric system.
The medial axis of a polygon is the set of points whose nearest neighbor on the polygon is not unique: these points are equidistant from two or more sides of the polygon. One method for computing medial axes is using the grassfire transform, in which one forms a continuous sequence of offset curves, each at some fixed distance from the polygon; the medial axis is traced out by the vertices of these curves. In the case of a triangle, the medial axis consists of three segments of the angle bisectors, connecting the vertices of the triangle to the incenter, which is the unique point on the innermost offset curve. The straight skeleton, defined in a similar way from a different type of offset curve, coincides with the medial axis for convex polygons and so also has its junction at the incenter.
Let the bisection of
Then we have to prove that
A line that is an angle bisector is equidistant from both of its lines when measuring by the perpendicular. At the point where two bisectors intersect, this point is perpendicularly equidistant from the final angle's forming lines (because they are the same distance from this angles opposite edge), and therefore lies on its angle bisector line.
The trilinear coordinates for a point in the triangle give the ratio of distances to the triangle sides. Trilinear coordinates for the incenter are given by 1 : 1 : 1, reflecting the fact that it is equally distant from all three sides.
The collection of triangle centers may be given the structure of a group under coordinatewise multiplication of trilinear coordinates; in this group, the incenter forms the identity element.
The barycentric coordinates for a point in a triangle give weights such that the point is the weighted average of the triangle vertex positions. Barycentric coordinates for the incenter are given by a : b : c, where a, b, and c are the lengths of the sides opposite the respective vertices.
The Cartesian coordinates of the incenter are a weighted average of the coordinates of the three vertices using the side lengths of the triangle relative to the perimeter—i.e., using the barycentric coordinates given above, normalized to sum to unity—as weights. (The weights are positive so the incenter lies inside the triangle as stated above.) If the three vertices are located at (x_A, y_A), (x_B, y_B), and (x_C, y_C), and the sides opposite them have lengths a, b, and c, then the incenter is at ((a·x_A + b·x_B + c·x_C)/(a + b + c), (a·y_A + b·y_B + c·y_C)/(a + b + c)).
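A small numeric sketch of the weighted-average formula above, using an assumed 3-4-5 right triangle as the example (the triangle and the code are illustrative additions, not from the text):

```python
# Assumed 3-4-5 right triangle: the incenter as the side-length-weighted
# average of the vertices, following the formula described above.
from math import dist

A, B, C = (0.0, 0.0), (4.0, 0.0), (0.0, 3.0)

# Side lengths opposite each vertex: a = |BC|, b = |CA|, c = |AB|.
a, b, c = dist(B, C), dist(C, A), dist(A, B)

p = a + b + c
incenter = ((a * A[0] + b * B[0] + c * C[0]) / p,
            (a * A[1] + b * B[1] + c * C[1]) / p)
print(incenter)   # (1.0, 1.0): for a 3-4-5 right triangle the inradius is 1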
Denoting the incenter of triangle ABC as I, the distances from the incenter to the vertices combined with the lengths of the triangle sides obey the equation
(IA ⋅ IA)/(CA ⋅ AB) + (IB ⋅ IB)/(AB ⋅ BC) + (IC ⋅ IC)/(BC ⋅ CA) = 1.
In addition,
IA ⋅ IB ⋅ IC = 4Rr²,
where R is the circumradius and r is the inradius of the triangle.
The distance from the incenter to the centroid is less than one third the length of the longest median of the triangle.
By Euler's theorem in geometry, the squared distance from the incenter I to the circumcenter O is given by OI² = R(R − 2r),
where R and r are the circumradius and the inradius respectively; thus the circumradius is at least twice the inradius, with equality only in the equilateral case.
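A quick numeric check of Euler's theorem for an assumed triangle with sides 5, 6, and 7; the triangle, the coordinate placement, and the standard formulas R = abc/(4K) and r = K/s used below are chosen purely for illustration.

```python
# Assumed triangle with sides 5, 6, 7: numeric check of OI^2 = R(R - 2r),
# using Heron's formula for the area, R = abc/(4K) and r = K/s.
from math import sqrt

a, b, c = 5.0, 6.0, 7.0
s = (a + b + c) / 2
K = sqrt(s * (s - a) * (s - b) * (s - c))   # Heron's formula for the area

R = a * b * c / (4 * K)                     # circumradius
r = K / s                                   # inradius

# Place the triangle with A at the origin and B on the x-axis, then locate
# the incenter I (side-length-weighted vertex average) and circumcenter O.
Cx = (b**2 + c**2 - a**2) / (2 * c)
Cy = sqrt(b**2 - Cx**2)
A, B, C = (0.0, 0.0), (c, 0.0), (Cx, Cy)

I = ((a * A[0] + b * B[0] + c * C[0]) / (2 * s),
     (a * A[1] + b * B[1] + c * C[1]) / (2 * s))
Ox = c / 2                                  # equidistant from A and B
Oy = (Cx**2 + Cy**2 - c * Cx) / (2 * Cy)    # equidistant from A and C

OI2 = (Ox - I[0])**2 + (Oy - I[1])**2
print(OI2, R * (R - 2 * r))                 # the two numbers agree
```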
The distance from the incenter to the center N of the nine-point circle is IN = (R − 2r)/2, which is less than R/2.
The squared distance from the incenter to the orthocenter H is
Inequalities involving these distances include IG < HG, IH < HG, IG < IO, and 2IN < IO.
The incenter is the Nagel point of the medial triangle (the triangle whose vertices are the midpoints of the sides) and therefore lies inside this triangle. Conversely the Nagel point of any triangle is the incenter of its anticomplementary triangle.
The incenter must lie in the interior of a disk whose diameter connects the centroid G and the orthocenter H (the orthocentroidal disk), but it cannot coincide with the nine-point center, whose position is fixed 1/4 of the way along the diameter (closer to G). Any other point within the orthocentroidal disk is the incenter of a unique triangle.
The Euler line of a triangle is a line passing through its circumcenter, centroid, and orthocenter, among other points. The incenter generally does not lie on the Euler line; it is on the Euler line only for isosceles triangles, for which the Euler line coincides with the symmetry axis of the triangle and contains all triangle centers.
Denoting the distance from the incenter to the Euler line as d, the length of the longest median as v, the length of the longest side as u, the circumradius as R, the length of the Euler line segment from the orthocenter to the circumcenter as e, and the semiperimeter as s, the following inequalities hold:
Any line through a triangle that splits both the triangle's area and its perimeter in half goes through the triangle's incenter; every line through the incenter that splits the area in half also splits the perimeter in half. There are either one, two, or three of these lines for any given triangle.
Let X be a variable point on the internal angle bisector of A. Then X = I (the incenter) maximizes or minimizes the ratio |
It is often said that the battle of Marathon was one of the few really decisive battles in history. The truth, however, is that we cannot establish this with certainty. Still, the fight had important consequences: it gave rise to the idea that East and West were opposites, an idea that has survived until the present day, even though "Marathon" has become the standard example of why historians would do better to refrain from such bold statements.
Presenting Marathon – Then and Now
The Spartans were the first to commemorate the battle of Marathon. Although they arrived too late for the fight, they visited the battlefield, inspected the dead, and praised the Athenians. The story is told by Herodotus, note[Herodotus, Histories 6.120.] the author of our main source for the fight. The very first question we ought to ask is why he chose to tell it. After all, his ambition was to record "great and marvelous deeds", and the late arrival of the reinforcements was neither great nor marvelous. The Spartan presence at Marathon, however, served to present the battle as one that had been, or ought to have been, a fight by all Greeks.
That "Marathon" had been more than a normal battle was hardly a new idea. Prior to Herodotus’ writing, monuments had already been erected, which presented the warriors as the equals of the heroes of the Trojan War. Other monuments, like the one mentioned by Pausanias, presented the dead as defenders of democracy: Pausanias mentions an Athenian "grave in the plain with stones on it, carved with the names of the dead in their voting districts".note[Pausanias, Guide for Greece 1.32.3. Three fragments of the inscriptions of Pausanias’ monument were recently excavated in Astros on the Peloponnese, where the Athenian billionaire Herodes Atticus (second century CE) owned a villa. He apparently removed the inscriptions from the monument seen by Pausanias. The names were indeed arranged by voting districts, which means that the original tomb was a monument of the Athenian democracy. Greek archaeologist G. Spyropoulos has suggested that the famous funeral mound in the plain was erected at the same time. This would explain why the dead were buried in a tumulus: a very aristocratic type of burial that had come to an end after the reforms of Solon (594 BCE), had no parallels in classical Athens, cannot have been used in 490, but may well have been deemed suitable in the Roman age, when the aristocratic associations were no longer remembered.] A monument erected in Delphi presented the ten tribes and lauded the democratically elected Miltiades, but conspicuously ignored the polemarch Callimachus.
Framing the Battle
Herodotus chose not to present the battle in the same way. Knowing that the Persians had returned in 480 and had tried to conquer Greece, he interpreted the battle as a first attempt to do the same, which made the fight important for all of Greece. This is unlikely to be a correct judgment: the Persian army was too small for conquest and occupation, and most historians have rejected this.
What they did not reject, was the context in which Herodotus presented the violent actions. His Histories presuppose an elaborate model of action and reaction, which is Herodotus’ way to express historical causality: Cyrus conquered the Greek towns in Asia (action), they revolted (reaction), a war broke out in which Athens and Eretria supported the rebels (action), Persia restored order and decided to subdue the allies (reaction), the Persians came to Attica (action), where the Athenians defeated them at Marathon (reaction), so the Persians returned with a bigger army to avenge themselves.
This pattern of action and reaction is unlikely to correspond to historical fact. After all, the first action and the first reaction are separated by a considerable period, and the campaign of 490 was not aimed at the conquest of Greece. So, while Herodotus’ sequence of the events between 500 and 479 is probably correct, we may have some doubt about the causal connections. The Halicarnassian may in the end turn out to be right, but that is not now at issue: what needs to be stressed is that the framework in which we place the battle of Marathon, was created by Herodotus.
This framework also presents the struggle between the Greeks and the Asians as going back to times immemorial. The very first part of the Histories is a slightly ironic account of some ancient legends about women being carried away, but Herodotus continues by pointing at “the man who to the best of my knowledge was the first to commit wrong against the Greeks”, king Croesus of Lydia. The restriction “to the best of my knowledge” suggests that Herodotus believed that the conflict had started earlier. Herodotus is not just the father of history, he is also the father of the idea that East and West are eternal opposites.
Even more importantly, he is the first author to make this antagonism something more than a geographical opposition. The Asians were the slaves of the great king, and they went to war because the ruler ordered them to, while the Greeks were citizens of free cities, who obeyed the law and went to defend their liberty. This is borne out by the words of the Spartan exile Demaratus to Xerxes:
“Over the Greeks is set Law as a master, whom they fear much more even than your people fear you”.note[Herodotus, Histories 7.103.]
This speech is, of course, one of Herodotus’ own compositions: not only are “tragic warners” in the Histories invariably speaking on behalf of the author, but the topic under discussion, the tension between the rule of a leader and the rule of the law, is typical for the political debate in democratic Athens.note[Compare the famous, equally fictitious constitutional debate in Herodotus, Histories 3.80-82.]
Herodotus’ framing of the Persian Wars as a struggle between a monarchical Asia and a free Greece explains his authorial choices. He might have mentioned the Spartan visit to the battlefield very briefly, but inserted a long digression, because the incident, although completely irrelevant for the battle, was useful to convert Marathon into a panhellenic event.
Greece versus Asia: although popular in the classical age, this theme lost relevance in the Hellenistic age. Once Rome had seized power, the main opposition was that between the barbarians outside the Empire and the civilized Mediterranean city dwellers. When Christianity became popular, the main antagonism was between pagans and orthodox believers. In the Early Middle Ages, new self-identifications and oppositions arose: the scholars of Constantinople believed that Islam was the archenemy of the Byzantine Empire, whereas in the Carolingian Empire, scribes believed in an antagonism between Islam and those who were called “Europenses”. The first reference to Europeans as a cultural unity is the Mozarabic Chronicle of 754.
For centuries, the inhabitants of western Europe associated their culture with Rome and Christianity. In the eighteenth century, however, the famous German art historian Johann Joachim Winckelmann created the modern paradigm that Rome had merely continued Greek culture, and that Athens was the real origin of western civilization.
This new idea was successful, and in the early nineteenth century, the belief that Athens was the cradle of a freedom-loving, rational European civilization, was fully accepted. It was freedom, philosophers argued, that had at Marathon been defended by the Athenians. Because their victory had inspired other Greeks to resist Xerxes, Marathon had been an important battle: in Marathon, the foundations of western civilization had been laid. The British philosopher John Stuart Mill judged that “the battle of Marathon, even as an event in English history, is more important than the battle of Hastings”.
That bold, often repeated statement is based on three assumptions. The first is that the Athenians were fighting for the independence of Greece. The pre-Herodotean monuments prove that this was not the perspective of the participants, who saw themselves as Athenian democrats fighting against a Persian army that wanted to bring back the tyrant (sole ruler) Hippias. As indicated above, it was Herodotus who introduced the panhellenic element.
The second assumption is that the political independence of Greece guaranteed the freedom of its culture. In 1901, the great German historian Eduard Meyer wrote in his Geschichte des Altertums (“History of Antiquity”) that the consequences of a Persian victory in 490 or 480 would have been serious.
The end result would have been that some kind of religion … would have put Greek thought under a yoke, and any free spiritual life would have been bound in chains. The new Greek culture would, just like oriental culture, have been of a theocratic-religious nature.
The argument is, more or less, that the great king would have replaced democracy with tyranny, so that the free Athenian civilization would have vanished in a maelstrom of oriental despotism, irrationality, and cruelty. Without democracy, no Greek philosophy, no innovative Greek literature, no arts, no rationalism. In this sense, the Greek victory in the Persian Wars was decisive for Greek culture.
The third assumption is that there is continuity from ancient Greece to nineteenth-century Europe. This sociological statement has never been properly tested, even though there is an obvious counterargument: after the fall of Rome, people did not recognize this continuity. The “Europeans” were not recognized as a cultural unity until 754, and when they were, they were Frankish Christians fighting Iberian Muslims, not Greeks fighting Asians. Some scholars (e.g., Anthony Pagden) have tried to solve this problem by arguing that, in spite of the fact that nobody had noticed it, the spirit of freedom had always been there, just like the spirit of monarchism had always remained alive in the East, influencing individual behavior. This type of argument is called “ontological holism”, and is better known from Marx’ idea that history was forged by the struggle between classes, or the notorious idea that history was a war between races. Class struggle, race war, or the clash between free Europe and tyrannical Asia are abstractions that do not really exist.
A more sophisticated way to refute the counterargument is the idea, best known from Jacob Burckhardt’s famous Die Kultur der Renaissance in Italien ("The Civilization of the Renaissance in Italy", 1860), that the Renaissance was a rebirth of Roman civilization and that Winckelmann was the first scholar who understood that Roman civilization had been a continuation of Athenian civilization. This cannot be discarded out of hand, because social scientists have never developed the tools to test such bold statements about continuity.
Meyer’s View Assessed
Today, the German scholar Max Weber is best known as the father of sociology, but he started his career as an ancient historian. In 1904/1905, he published the two “Critical Studies in the Logic of Cultural Sciences”, in which he investigated the epistemological foundations of the study of the past. The second essay deals with “Objective Possibility and Adequate Causation in Historical Explanation”, and has become rightly famous. As it happens, one of Weber’s examples is Meyer’s analysis of the meaning of Marathon, which is shown to be the result of a counterfactual argument: if the Persians had won, the preconditions would not have been met for the rise of Athenian civilization. But, Weber argued, this was nothing but speculation. Counterfactual arguments are usually fallacious.
For example, how did Meyer know that the Persians, after a victory in the Persian Wars, would have put an end to democracy? We must pause for thought when we read that Herodotus explicitly states that the Persian commander Mardonius supported Greek democracy.note[Herodotus, Histories 6.43.] Another point is that very few historians, right now, will accept that the ancient Near East was “of a theocratic-religious nature”: it was in Persian Babylonia that astronomers developed the scientific method. Plato and Aristotle might have lived in a Persian Athens. Likewise, Eric Dodds’ The Greeks and the Irrational (1951) meant the end of the idea that Greek culture represented a more rational view of life.
So, Meyer’s reading of the Persian War has been decisively challenged. We cannot make bold statements about the meaning of Marathon. Unfortunately, not everybody is aware that there are limits to what we can understand about the past: in recent years, several books have appeared that claim that there is a direct continuity from Marathon to our own age. Historians and social scientists have something really important to discuss.
[Originally published in the Marathon Special of Ancient Warfare (2011).] |
Presentation on theme: "Warm Up Write the equivalent percent. 1. 2.3. Find each value. 4. 20% of 360 5. 75% of 360 6."— Presentation transcript:
Warm Up: Write the equivalent percent. Then find each value: 20% of 360 and 75% of 360.
Learning Targets: Organize data in tables and graphs. Choose a table or graph to display data.
A bar graph displays data with vertical or horizontal bars. Bar graphs are a good way to display data that can be organized into categories. Using a bar graph, you can quickly compare the categories.
Reading and Interpreting Bar Graphs Use the graph to answer each question. A. Which casserole was ordered the most? B. About how many total orders were placed? C. About how many more tuna noodle casseroles were ordered than king ranch casseroles? D. About what percent of the total orders were for baked ziti?
Check It Out! Use the graph to answer each question. a. Which ingredient contains the least amount of fat? b. Which ingredients contain at least 8 grams of fat?
A double-bar graph can be used to compare two data sets. A double-bar graph has a key to distinguish between the two sets of data.
Reading and Interpreting Double Bar Graphs Use the graph to answer each question. A. Which feature received the same satisfaction rating for each SUV? Find the two bars that are the same. B. Which SUV received a better rating for mileage? Find the longest mileage bar.
Check It Out! Use the graph to determine which years had the same average basketball attendance. What was the average attendance for those years?
A line graph displays data using line segments. Line graphs are a good way to display data that changes over a period of time.
Reading and Interpreting Line Graphs Use the graph to answer each question. A. At what time was the humidity the lowest? Identify the lowest point: 4 A.M. B. During which 4-hour time period did the humidity increase the most? Look for the segment with the greatest positive slope: 12 to 4 P.M.
Check It Out! Use the graph to estimate the difference in temperature between 4:00 A.M. and noon. Compare the temperatures at the two times: about 18°F.
A double-line graph can be used to compare how two related data sets change over time. A double- line graph has a key to distinguish between the two sets of data.
Reading and Interpreting Double-Line Graphs Use the graph to answer each question. A. In which month did station A charge more than station B? Look for the point where the station A line is above the station B line: May. B. During which month(s) did the stations charge the same for gasoline? See where the data points overlap: April and July.
Check It Out! Use the graph to describe the general trend of the data. Prices increased from Jan through Jul or Aug, and then prices decreased through Nov.
A circle graph shows parts of a whole. The entire circle represents 100% of the data and each sector represents a percent of the total. Circle graphs are good for comparing each category of data to the whole set.
Reading and Interpreting Circle Graphs Use the graph to answer the question. Which ingredients are present in equal amounts? Look for same-sized sectors: lemon sherbet and pineapple juice. (The graph's sectors are labeled 12.5%, 25%, and 50%.)
Check It Out! Use the graph to determine what percent of the fruit salad is cantaloupe. Find the cups of cantaloupe and divide that into total cups of fruit.
The sections of a circle graph are called sectors. Reading Math
Choosing and Creating an Appropriate Display Use the given data to make a graph. Explain why you chose that type of graph. A bar graph is good for displaying categories that do not make up a whole. Step 1 Choose an appropriate scale and interval. The scale must include all of the data values. The scale is separated into equal parts called intervals. Flowers in an Arrangement
Step 2 Use the data to determine the lengths of the bars. Draw bars of equal width. The bars should not touch. Step 3 Title the graph and label the horizontal and vertical scales.
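A minimal plotting sketch of the three steps above; since the slide's "Flowers in an Arrangement" data table did not survive extraction, the flower counts below are assumed placeholders, and the use of matplotlib is an illustrative choice.

```python
# Sketch of the bar-graph steps; the flower counts are assumed placeholder data.
import matplotlib.pyplot as plt

flowers = {"Roses": 8, "Lilies": 5, "Daisies": 12, "Tulips": 6}   # assumed data

fig, ax = plt.subplots()
ax.bar(list(flowers.keys()), list(flowers.values()), width=0.6)   # Step 2: bars of equal width
ax.set_yticks(range(0, 14, 2))                                    # Step 1: scale and interval
ax.set_title("Flowers in an Arrangement")                         # Step 3: title the graph
ax.set_xlabel("Type of flower")
ax.set_ylabel("Number of flowers")                                # label both scales
plt.show()
```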
Use the given data to make a graph. Explain why you chose that type of graph. A circle graph is good for displaying categories that make up a whole. Degrees Held by Faculty: Bachelor's, Master's, PhD. Step 1: Calculate the percent of the total represented by each category.
Step 2: Find the angle measure for each sector of the graph. Since there are 360° in a circle, multiply each percent by 360°. PhD: 0.10 × 360° = 36°; Master's: 0.39 × 360° = 140.4°; Bachelor's: 0.51 × 360° = 183.6°. Step 3: Use a compass to draw a circle. Mark the center and use a straightedge to draw one radius. Then use a protractor to draw each central angle.
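The same Step 1-2 arithmetic can be written as a short script; the percentages are the ones shown on the slide, while the code itself is an added illustration.

```python
# Convert each category's share of the whole into a circle-graph sector angle
# by multiplying the share by 360 degrees (Steps 1-2 from the slide).
shares = {"Bachelor's": 0.51, "Master's": 0.39, "PhD": 0.10}

for degree, share in shares.items():
    angle = share * 360
    print(f"{degree}: {share:.0%} of 360° = {angle:.1f}°")
# Bachelor's: 51% -> 183.6°, Master's: 39% -> 140.4°, PhD: 10% -> 36.0°
```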
Use the given data to make a graph. Explain why you chose that type of graph. A line graph is appropriate for this data because it will show the change over time. Step 1 Determine the scale and interval for each set of data. Time should be plotted on the horizontal axis because it is independent. County Farms 248
Step 2 Plot a point for each pair of values. Connect the points using line segments. Step 3 Title the graph and label the horizontal and vertical scales. |