Developmental psychology is the scientific study of how and why humans grow, change, and adapt across the course of their lives. Originally concerned with infants and children, the field has expanded to include adolescence, adult development, aging, and the entire lifespan.[1] Developmental psychologists aim to explain how thinking, feeling, and behavior change throughout life. The field examines change[2] across three major dimensions: physical development, cognitive development, and social-emotional development.[3][4] Within these three dimensions lie a broad range of topics, including motor skills, executive functions, moral understanding, language acquisition, social change, personality, emotional development, self-concept, and identity formation.
Developmental psychology examines the influences of nature and nurture on the process of human development, as well as processes of change in context across time. Many researchers are interested in the interactions among personal characteristics, the individual's behavior, and environmental factors, including the social context and the built environment. Ongoing debates in developmental psychology include biological essentialism versus neuroplasticity, and stages of development versus dynamic systems of development. Although research in developmental psychology has limitations, researchers are working to understand how transitions through stages of life and biological factors may influence behavior and development.[5]
Developmental psychology involves a range of fields,[2] such as educational psychology, child psychopathology, forensic developmental psychology, child development, cognitive psychology, ecological psychology, and cultural psychology. Influential developmental psychologists from the 20th century include Urie Bronfenbrenner, Erik Erikson, Sigmund Freud, Anna Freud, Jean Piaget, Barbara Rogoff, Esther Thelen, and Lev Vygotsky.[6]
Jean-Jacques Rousseau and John B. Watson are typically cited as providing the foundation for modern developmental psychology.[7] In the mid-18th century, Rousseau described three stages of development, infans (infancy), puer (childhood), and adolescence, in Emile: Or, On Education. Rousseau's ideas were adopted and supported by educators of the time.
Developmental psychology generally focuses on how and why certain changes (cognitive, social, intellectual, personality) occur over time in the course of a human life. Many theorists have made profound contributions to this area of psychology. One of them is Erik Erikson,[8] who created a model of eight phases of psychosocial development.[8] According to his theory, people pass through different phases in their lives, each of which has its own developmental crisis that shapes personality and behavior.[9]
In the late 19th century, psychologists familiar with the evolutionary theory of Darwin began seeking an evolutionary description of psychological development;[7] prominent here was the pioneering psychologist G. Stanley Hall,[7] who attempted to correlate ages of childhood with previous ages of humanity. James Mark Baldwin, who wrote essays on topics including Imitation: A Chapter in the Natural History of Consciousness and Mental Development in the Child and the Race: Methods and Processes, was significantly involved in the theory of developmental psychology.[7] Sigmund Freud, whose concepts were developmental, significantly affected public perceptions.[7]
Sigmund Freud developed a theory suggesting that humans behave as they do because they are constantly seeking pleasure. This process of seeking pleasure changes through stages as people develop. Each period of seeking pleasure that a person experiences is represented by a stage of psychosexual development. These stages symbolize the process of maturing into an adult.[10]
The first is the oral stage, which begins at birth and ends around a year and a half of age. During the oral stage, the child finds pleasure in sucking and other behaviors involving the mouth. The second is the anal stage, from about a year or a year and a half to three years of age. During the anal stage, the child is often fascinated with defecation. This period of development often coincides with the time when the child is being toilet trained. The child becomes interested in feces and urine. Children begin to see themselves as independent from their parents and begin to desire assertiveness and autonomy.
The third is the phallic stage, which occurs from three to five years of age (most of a person's personality forms by this age). During the phallic stage, the child becomes aware of its sexual organs. Pleasure comes from finding acceptance and love from the opposite sex. The fourth is the latency stage, which occurs from age five until puberty. During the latency stage, the child's sexual interests are repressed.
Stage five is the genital stage, which takes place from puberty until adulthood.[11] During the genital stage, children have now matured and begin to think about other people instead of just themselves. Pleasure comes from feelings of affection from other people.
Freud believed there is tension between the conscious and unconscious because the conscious tries to hold back what the unconscious tries to express. To explain this, he developed three personality structures: the id, ego, and superego. The id, the most primitive of the three, functions according to the pleasure principle: seek pleasure and avoid pain.[12] The superego plays the critical and moralizing role, while the ego is the organized, realistic part that mediates between the desires of the id and the superego.[13]
Jean Piaget, a Swiss theorist, posited that children learn by actively constructing knowledge through their interactions with their physical and social environments.[14] He suggested that the adult's role in helping the child learn was to provide appropriate materials. In the interview techniques with children that formed an empirical basis for his theories, he used something similar to Socratic questioning to get children to reveal their thinking. He argued that a principal source of development was the child's inevitable generation of contradictions through their interactions with their physical and social worlds. The child's resolution of these contradictions led to more integrated and advanced forms of interaction, a developmental process he called "equilibration".
Piaget argued that intellectual development takes place through a series of stages generated through the equilibration process. Each stage consists of steps the child must master before moving to the next. He believed that these stages are not separate from one another, but rather that each stage builds on the previous one in a continuous learning process. He proposed four stages: sensorimotor, pre-operational, concrete operational, and formal operational. Though he did not tie these stages to exact ages, many studies have estimated when these cognitive abilities typically emerge.[15]
Piaget claimed that logic and morality develop through constructive stages.[16] Expanding on Piaget's work, Lawrence Kohlberg determined that the process of moral development was principally concerned with justice, and that it continued throughout the individual's lifetime.[17]
He suggested three levels of moral reasoning: pre-conventional, conventional, and post-conventional. Pre-conventional moral reasoning is typical of children and is characterized by reasoning based on the rewards and punishments associated with different courses of action. Conventional moral reasoning occurs during late childhood and early adolescence and is characterized by reasoning based on the rules and conventions of society. Lastly, post-conventional moral reasoning is a stage during which the individual sees society's rules and conventions as relative and subjective rather than authoritative.[18]
Kohlberg used the Heinz dilemma to illustrate his stages of moral development. In the dilemma, Heinz's wife is dying of cancer, and Heinz must decide whether to save his wife by stealing a drug. Reasoning at the pre-conventional, conventional, and post-conventional levels can each be applied to Heinz's situation.[19]
German-American psychologist Erik Erikson and his collaborator and wife, Joan Erikson, posited eight stages of individual human development influenced by biological, psychological, and social factors throughout the lifespan.[8] At each stage the person must resolve a challenge, or existential dilemma. Successful resolution of the dilemma results in the person acquiring a positive virtue; failure to resolve the fundamental challenge of that stage reinforces negative perceptions of the person or the world around them, and the person's development is unable to progress.[8]
The first stage, "Trust vs. Mistrust", takes place in infancy. The positive virtue for the first stage is hope, in the infant learning whom to trust and having hope for a supportive group of people to be there for him/her. The second stage is "Autonomy vs. Shame and Doubt" with the positive virtue being will. This takes place in early childhood when the child learns to become more independent by discovering what they are capable of whereas if the child is overly controlled, feelings of inadequacy are reinforced, which can lead to low self-esteem and doubt.
The third stage is "Initiative vs. Guilt". The virtue of being gained is a sense of purpose. This takes place primarily via play. This is the stage where the child will be curious and have many interactions with other kids. They will ask many questions as their curiosity grows. If too much guilt is present, the child may have a slower and harder time interacting with their world and other children in it.
The fourth stage is "Industry (competence) vs. Inferiority". The virtue for this stage is competency and is the result of the child's early experiences in school. This stage is when the child will try to win the approval of others and understand the value of their accomplishments.
The fifth stage is "Identity vs. Role Confusion". The virtue gained is fidelity and it takes place in adolescence. This is when the child ideally starts to identify their place in society, particularly in terms of their gender role.
The sixth stage is "Intimacy vs. Isolation", which happens in young adults and the virtue gained is love. This is when the person starts to share his/her life with someone else intimately and emotionally. Not doing so can reinforce feelings of isolation.
The seventh stage is "Generativity vs. Stagnation". This happens in adulthood and the virtue gained is care. A person becomes stable and starts to give back by raising a family and becoming involved in the community.
The eighth stage is "Ego Integrity vs. Despair". When one grows old, they look back on their life and contemplate their successes and failures. If they resolve this positively, the virtue of wisdom is gained. This is also the stage when one can gain a sense of closure and accept death without regret or fear.[20]
Michael Commons enhanced and simplified Bärbel Inhelder and Piaget's developmental theory and offered a standard method of examining the universal pattern of development. The Model of Hierarchical Complexity (MHC) is not based on the assessment of domain-specific information; it separates the order of hierarchical complexity of the tasks to be addressed from the stage of performance on those tasks. A stage is the order of hierarchical complexity of the tasks the participant successfully addresses. Commons expanded Piaget's original eight stages (counting the half stages) to seventeen stages. The stages are:
The order of hierarchical complexity of tasks predicts how difficult the performance is, with R ranging from 0.9 to 0.98.
In the MHC, there are three main axioms that an order must meet in order for the higher-order task to coordinate the next-lower-order task. Axioms are rules that are followed to determine how the MHC orders actions to form a hierarchy. A higher-order task action is: (a) defined in terms of task actions at the next lower order of hierarchical complexity; (b) defined as the action that organizes two or more less complex actions, that is, the more complex action specifies the way in which the less complex actions combine; and (c) defined such that the lower-order task actions are carried out non-arbitrarily.[citation needed]
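To make the coordination idea concrete, the sketch below expresses a classic MHC-style illustration, counting, addition, multiplication, and distributivity as successively higher-order coordinations of lower-order actions. The function names and the mapping onto code are illustrative assumptions, not a formal part of the model.

```python
# Loose illustration of hierarchical task complexity: each higher-order
# action is defined in terms of, and non-arbitrarily coordinates,
# lower-order actions. The arithmetic mapping is illustrative only.

def count_up(n):                   # lower-order action: counting
    return list(range(1, n + 1))

def add(a, b):                     # coordinates two counting actions
    return len(count_up(a) + count_up(b))

def multiply(a, b):                # coordinates repeated addition
    total = 0
    for _ in range(a):
        total = add(total, b)
    return total

def distribute(a, b, c):           # a still-higher-order task that
    return multiply(a, add(b, c))  # coordinates addition and multiplication

print(distribute(2, 3, 4))  # 2 * (3 + 4) = 14
```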
Ecological systems theory, originally formulated by Urie Bronfenbrenner, specifies four types of nested environmental systems with bi-directional influences within and between the systems. The four systems are the microsystem, mesosystem, exosystem, and macrosystem. Each system contains roles, norms, and rules that can powerfully shape development. The microsystem is the direct environment in our lives, such as the home and school. The mesosystem is how relationships connect to the microsystem. The exosystem is a larger social system in which the child plays no direct role. The macrosystem refers to the cultural values, customs, and laws of society.[21]
The microsystem is the immediate environment surrounding and influencing the individual (example: school or the home setting). The mesosystem is the combination of two microsystems and how they influence each other (example: sibling relationships at home vs. peer relationships at school). The exosystem is the interaction among two or more settings that are indirectly linked (example: a father's job requiring more overtime ends up influencing his daughter's performance in school because he can no longer help with her homework). The macrosystem is broader, taking into account socioeconomic status, culture, beliefs, customs, and morals (example: a child from a wealthier family sees a peer from a less wealthy family as inferior for that reason). Lastly, the chronosystem refers to the chronological nature of life events and how they interact with and change the individual and their circumstances through transitions (example: a mother losing her own mother to illness and no longer having that support in her life).[15]
Since its publication in 1979, Bronfenbrenner's major statement of this theory, The Ecology of Human Development,[22] has had widespread influence on the way psychologists and others approach the study of human beings and their environments. As a result of this conceptualization of development, these environments, from the family to economic and political structures, have come to be viewed as part of the life course from childhood through adulthood.[23]
Lev Vygotsky was a Russian theorist of the Soviet era who posited that children learn through hands-on experience and social interactions with members of their culture.[24] Vygotsky believed that a child's development should be examined during problem-solving activities.[25] Unlike Piaget, he claimed that timely and sensitive intervention by adults when a child is on the edge of learning a new task (called the "zone of proximal development") could help children learn new tasks. The zone of proximal development is a tool used to explain children's learning through collaborative problem-solving activities with an adult or peer.[25] The adult's role is often referred to as that of the skilled "master", whereas the child is considered the learning apprentice in an educational process often termed "cognitive apprenticeship". The technique of building on knowledge children already have with new knowledge that adults can help the child learn is called "scaffolding".[26] Vygotsky was strongly focused on the role of culture in determining the child's pattern of development, arguing that development moves from the social level to the individual level.[26] In other words, Vygotsky claimed that psychology should focus on the progress of human consciousness through the relationship of an individual and their environment.[27] He felt that if scholars continued to disregard this connection, this disregard would inhibit the full comprehension of the human consciousness.[27]
Constructivism is a paradigm in psychology that characterizes learning as a process of actively constructing knowledge. Individuals create meaning for themselves, or make sense of new information, by selecting, organizing, and integrating information with other knowledge, often in the context of social interactions. Constructivism can occur in two ways: individual and social. Individual constructivism occurs when a person constructs knowledge through cognitive processes of their own experiences rather than by memorizing facts provided by others. Social constructivism occurs when individuals construct knowledge through an interaction between the knowledge they bring to a situation and social or cultural exchanges within that context.[15] A foundational concept of constructivism is that the purpose of cognition is to organize one's experiential world, rather than the ontological world around them.[28]
Jean Piaget, a Swiss developmental psychologist, proposed that learning is an active process in which children learn through experience, make mistakes, and solve problems. Piaget proposed that learning should be whole, helping students understand that meaning is constructed.[29]
Evolutionary developmental psychology is a research paradigm that applies the basic principles of Darwinian evolution, particularly natural selection, to understand the development of human behavior and cognition. It involves the study of both the genetic and environmental mechanisms that underlie the development of social and cognitive competencies, as well as the epigenetic (gene-environment interaction) processes that adapt these competencies to local conditions.[30]
EDP considers both the reliably developing, species-typical features of ontogeny (developmental adaptations) and individual differences in behavior from an evolutionary perspective. While evolutionary views tend to regard most individual differences as the result of either random genetic noise (evolutionary byproducts)[31] and/or idiosyncrasies (for example, peer groups, education, neighborhoods, and chance encounters)[32] rather than products of natural selection, EDP asserts that natural selection can favor the emergence of individual differences via "adaptive developmental plasticity".[30][33] From this perspective, human development follows alternative life-history strategies in response to environmental variability, rather than one species-typical pattern of development.[30]
EDP is closely linked to the theoretical framework of evolutionary psychology (EP), but is also distinct from EP in several domains, including research emphasis (EDP focuses on adaptations of ontogeny, as opposed to adaptations of adulthood) and consideration of proximate ontogenetic and environmental factors (i.e., how development happens) in addition to the more ultimate factors (i.e., why development happens) that are the focus of mainstream evolutionary psychology.[34]
Attachment theory, originally developed by John Bowlby, focuses on the importance of open, intimate, emotionally meaningful relationships.[35] Attachment is described as a biological system or powerful survival impulse that evolved to ensure the survival of the infant. A threatened or stressed child will move toward caregivers who create a sense of physical, emotional, and psychological safety. Attachment feeds on body contact and familiarity. Later, Mary Ainsworth developed the Strange Situation protocol and the concept of the secure base. Tools such as the Strange Situation Test and the Adult Attachment Interview have been found to help in understanding attachment and in determining the factors behind particular attachment styles. The Strange Situation Test helps find "disturbances in attachment" and whether certain attributes contribute to a particular attachment issue.[36] The Adult Attachment Interview is a similar tool that instead focuses on attachment issues found in adults.[36] Both tests have helped researchers gain more information on the risks and how to identify them.[36]
Theorists have proposed four types of attachment styles:[37] secure, anxious-avoidant, anxious-resistant,[18] and disorganized.[37] Secure attachment is a healthy attachment between the infant and the caregiver, characterized by trust. Anxious-avoidant is an insecure attachment between an infant and a caregiver, characterized by the infant's indifference toward the caregiver. Anxious-resistant is an insecure attachment between the infant and the caregiver, characterized by distress from the infant when separated and anger when reunited.[18] Disorganized is an attachment style without a consistent pattern of responses upon the parent's return.[37]
A child's innate propensity to develop bonds can be thwarted. Some infants are kept in isolation or subjected to severe neglect or abuse, or are raised without the stimulation and care of a regular caregiver. This deprivation may cause short-term consequences such as separation, rage, despair, and a brief lag in cerebral growth. Long-term consequences include increased aggression, clinging behavior, alienation, psychosomatic illnesses, and an elevated risk of adult depression.[38][page needed][39][page needed]
According to attachment theory, people's capacity to develop healthy social and emotional ties later in life is greatly shaped by their early relationships with their primary caregivers, especially during infancy. The theory holds that humans have an inbuilt need to develop strong bonds with caregivers in order to survive and be healthy, and childhood attachment styles can affect how people behave in adult social situations, including romantic partnerships.[40]
A significant concern of developmental psychology is the relationship between innateness and environmental influence on development. This is often referred to as "nature and nurture", or nativism versus empiricism. A nativist account of development would argue that the processes in question are innate, that is, specified by the organism's genes.[41] What makes a person who they are: their environment or their genetics? This is the debate of nature versus nurture.[42]
According to an empiricist viewpoint, those processes are learned through interaction with the environment. Today, developmental psychologists rarely take such polarized positions with regard to most aspects of development; rather, they investigate, among many other things, the relationship between innate and environmental influences. One of the ways this relationship has been explored in recent years is through the emerging field of evolutionary developmental psychology.
The dispute over innateness has been well represented in the field of language acquisition studies. A major question in this area is whether certain properties of human language are specified genetically or can be acquired through learning. The empiricist position suggests that the language input provides the information required for learning the structure of language and that infants acquire language through a process of statistical learning. From this perspective, language can be acquired via general learning methods that also apply to other aspects of development, such as perceptual learning.[43]
The nativist position argues that the input from language is too impoverished for infants and children to acquire the structure of language. Linguist Noam Chomsky asserts that, evidenced by the lack of sufficient information in the language input, there is a universal grammar that applies to all human languages and is pre-specified. This has led to the idea that there is a special cognitive module suited for learning language, often called the language acquisition device. Chomsky's critique of the behaviorist model of language acquisition is regarded by many as a key turning point in the decline in the prominence of the theory of behaviorism generally.[44] But Skinner's conception of "Verbal Behavior" has not died, perhaps in part because it has generated successful practical applications.[44]
It may also be that development reflects "strong interactions of both nature and nurture".[45]
One of the major discussions in developmental psychology is whether development is discontinuous or continuous.
Continuous development is quantifiable and quantitative, whereas discontinuous development is qualitative. Quantitative measures of development include measuring a child's height, memory, or attention span. "Particularly dramatic examples of qualitative changes are metamorphoses, such as the emergence of a caterpillar into a butterfly."[46]
Psychologists who support the continuous view of development propose that development involves gradual and ongoing changes throughout the lifespan, with behavior in earlier stages of development providing the basis for the skills and abilities required in later stages. "To many, the concept of continuous, quantifiable measurement seems to be the essence of science".[46]
However, not all psychologists agree that development is a continuous process. Some view development as discontinuous, involving distinct and separate stages with different kinds of behavior occurring in each stage. This suggests that the development of certain capacities in each stage, such as particular emotions or ways of thinking, has a definite starting and ending point. Nevertheless, there is no exact moment when a capacity suddenly appears or disappears. Although some kinds of thinking, feeling, or behaving may seem to appear abruptly, it is more than likely that they have been developing gradually for some time.[47]
Stage theories of development rest on the assumption that development is a discontinuous process involving distinct stages characterized by qualitative differences in behavior. They also assume that the structure of the stages does not vary from person to person, although the timing of each stage may vary individually. Stage theories can be contrasted with continuous theories, which posit that development is an incremental process.[48]
This issue concerns the degree to which one becomes an older rendition of one's early experience or develops into something different from who one was at an earlier point in development.[49] It considers the extent to which early experiences (especially in infancy) or later experiences are the key determinants of a person's development. Stability is defined as the consistent ordering of individual differences with respect to some attribute;[50] change refers to alterations in that ordering over time.
Most lifespan developmentalists recognize that extreme positions are unwise. Therefore, the key to a comprehensive understanding of development at any stage is the interaction of multiple factors, not just one.[51]
Theory of mind is the ability to attribute mental states to ourselves and others.[52] It is a complex but vital process through which children begin to understand the emotions, motives, and feelings not only of themselves but of others as well. Theory of mind allows individuals to understand that others have unique beliefs and desires different from their own, enabling successful social interactions through the recognition and interpretation of others' mental states. If a child does not fully develop theory of mind within the crucial first five years of life, they can suffer from communication barriers that follow them into adolescence and adulthood.[53] Exposure to more people, and the availability of stimuli that encourage social-cognitive growth, are factors that rely heavily on the family.[54]
Developmental psychology is concerned not only with describing the characteristics of psychological change over time but also seeks to explain the principles and internal workings underlying these changes. Psychologists have attempted to better understand these factors by using models. A model must simply account for the means by which a process takes place. This is sometimes done in reference to changes in the brain that may correspond to changes in behavior over the course of development.
Mathematical modeling is useful in developmental psychology for implementing theory in a precise and easy-to-study manner, allowing generation, explanation, integration, and prediction of diverse phenomena. Several modeling techniques are applied to development: symbolic, connectionist (neural network), and dynamical systems models.
Dynamic systems models illustrate how many different features of a complex system may interact to yield emergent behaviors and abilities. Nonlinear dynamics has been applied to human systems specifically to address issues that require attention to temporality, such as life transitions, human development, and behavioral or emotional change over time. Nonlinear dynamic systems theory is currently being explored as a way to explain discrete phenomena of human development such as affect,[55] second language acquisition,[56] and locomotion.[57]
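For illustration, the sketch below implements one of the simplest dynamic-systems models used in developmental research, a logistic growth equation for a single developing skill (in the spirit of van Geert's growth models). The parameter values and variable names are illustrative assumptions rather than estimates from any particular study.

```python
# Minimal sketch of a dynamical-systems growth model of development.
# Assumes a logistic growth curve for one skill level L(t); the
# parameter values below are illustrative only.

def simulate_growth(rate=0.3, carrying_capacity=1.0, initial=0.05, steps=40):
    """Iterate L(t+1) = L(t) + rate * L(t) * (1 - L(t) / carrying_capacity)."""
    levels = [initial]
    for _ in range(steps):
        current = levels[-1]
        growth = rate * current * (1 - current / carrying_capacity)
        levels.append(current + growth)
    return levels

if __name__ == "__main__":
    trajectory = simulate_growth()
    for i, level in enumerate(trajectory[::5]):
        print(f"step {i * 5:3d}: skill level = {level:.3f}")
```

Even this one-equation system produces the S-shaped, nonlinear trajectories (slow start, rapid spurt, plateau) that dynamic-systems accounts emphasize in development.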
One critical aspect of developmental psychology is the study of neural development, which investigates how the brain changes and develops during different stages of life. Studies have shown that the human brain undergoes rapid changes during the prenatal and early postnatal periods. These changes include the formation of neurons, the development of neural networks, and the establishment of synaptic connections.[58] The formation of neurons and the establishment of basic neural circuits in the developing brain are crucial for laying the foundation of the brain's structure and function, and disruptions during this period can have long-term effects on cognitive and emotional development.[59]
Experiences and environmental factors play a crucial role in shaping neural development. Early sensory experiences, such as exposure to language and visual stimuli, can influence the development of neural pathways related to perception and language processing.[60]
Genetic factors also play a major role in neural development. They can influence the timing and pattern of neural development, as well as susceptibility to certain developmental disorders, such as autism spectrum disorder and attention-deficit/hyperactivity disorder.[61]
Research finds that the adolescent brain undergoes significant changes in neural connectivity and plasticity. During this period, there is a pruning process where certain neural connections are strengthened while others are eliminated, resulting in more efficient neural networks and increased cognitive abilities, such as decision-making and impulse control.[62]
The study of neural development provides crucial insights into the complex interplay between genetics, environment, and experiences in shaping the developing brain. By understanding the neural processes underlying developmental changes, researchers gain a better understanding of cognitive, emotional, and social development in humans.
Cognitive development is primarily concerned with how infants and children acquire, develop, and use internal mental capabilities such as problem-solving, memory, and language. Major topics in cognitive development are the study of language acquisition and the development of perceptual and motor skills. Piaget was one of the influential early psychologists to study the development of cognitive abilities. His theory suggests that development proceeds through a set of stages from infancy to adulthood and that there is an end point or goal.
Other accounts, such as that of Lev Vygotsky, have suggested that development does not progress through stages, but rather that the developmental process beginning at birth and continuing until death is too complex for such structure and finality. Rather, from this viewpoint, developmental processes proceed more continuously, and development should thus be analyzed as an ongoing process rather than treated as a product to obtain.
K. Warner Schaie has expanded the study of cognitive development into adulthood. Rather than seeing cognition as stable from adolescence, Schaie sees adults as progressing in the application of their cognitive abilities.[63]
Modern cognitive development has integrated the considerations of cognitive psychology and the psychology of individual differences into the interpretation and modeling of development.[64] Specifically, the neo-Piagetian theories of cognitive development showed that the successive levels or stages of cognitive development are associated with increasing processing efficiency and working memory capacity. These increases explain differences between stages, progression to higher stages, and individual differences among children of the same age and grade level. However, other theories have moved away from Piagetian stage theories, and are influenced by accounts of domain-specific information processing, which posit that development is guided by innate, evolutionarily specified, content-specific information-processing mechanisms.
Developmental psychologists who are interested in social development examine how individuals develop social and emotional competencies. For example, they study how children form friendships, how they understand and deal with emotions, and how identity develops. Research in this area may involve study of the relationship between cognition or cognitive development and social behavior.
Emotional regulation (ER) refers to an individual's ability to modulate emotional responses across a variety of contexts. In young children, this modulation is in part controlled externally, by parents and other authority figures. As children develop, they take on more and more responsibility for their internal state. Studies have shown that the development of ER is affected by the emotional regulation children observe in parents and caretakers, the emotional climate in the home, and the reaction of parents and caretakers to the child's emotions.[65]
Music also stimulates and enhances a child's senses through self-expression.[66]
A child's social and emotional development can be disrupted by motor coordination problems, as evidenced by the environmental stress hypothesis. The environmental stress hypothesis explains how children with coordination problems and developmental coordination disorder are exposed to several psychosocial consequences that act as secondary stressors, leading to an increase in internalizing symptoms such as depression and anxiety.[67] Motor coordination problems affect fine and gross motor movement as well as perceptual-motor skills. Secondary stressors commonly identified include the tendency for children with poor motor skills to be less likely to participate in organized play with other children and more likely to feel socially isolated.[67]
Social and emotional development focuses on five key areas: self-awareness, self-management, social awareness, relationship skills, and responsible decision-making.[68]
Physical development concerns the physical maturation of an individual's body until it reaches adult stature. Although physical growth is a highly regular process, all children differ tremendously in the timing of their growth spurts.[69] Studies are being done to analyze how differences in these timings affect and relate to other variables of developmental psychology, such as information-processing speed. Traditional measures of physical maturity using X-rays are now less common than simple measurements of body parts such as height, weight, head circumference, and arm span.[69]
Other topics in physical developmental psychology include the phonological abilities of mature 5- to 11-year-olds and the controversial hypothesis that left-handers are maturationally delayed compared to right-handers. A 1996 study by Eaton, Chipperfield, Ritchot, and Kostiuk found no difference between right- and left-handers in three different samples.[69]
Researchers interested in memory development look at the way memory develops from childhood onward. According to fuzzy-trace theory, a theory of cognition originally proposed by Valerie F. Reyna and Charles Brainerd, people have two separate memory processes: verbatim and gist. These two traces begin to develop at different times and at different paces. Children as young as four years old have verbatim memory, memory for surface information, which increases up to early adulthood, at which point it begins to decline. By contrast, the capacity for gist memory, memory for semantic information, increases up to early adulthood and then remains consistent through old age. Furthermore, reliance on gist memory traces increases with age.[70]
Developmental psychology employs many of the research methods used in other areas of psychology. However, infants and children cannot be tested in the same ways as adults, so different methods are often used to study their development.
Developmental psychologists have a number of methods to study changes in individuals over time. Common research methods include systematic observation, including naturalistic observation or structured observation; self-reports, which could be clinical interviews or structured interviews; the clinical or case-study method; and ethnography or participant observation.[71] These methods differ in the extent of control researchers impose on study conditions, and in how they construct ideas about which variables to study.[72] Every developmental investigation can be characterized in terms of whether its underlying strategy involves the experimental, correlational, or case-study approach.[73][74]

The experimental method involves "actual manipulation of various treatments, circumstances, or events to which the participant or subject is exposed";[74] the experimental design points to cause-and-effect relationships.[75] This method allows for strong inferences about causal relationships between the manipulation of one or more independent variables and subsequent behavior, as measured by the dependent variable.[74] The advantage of this research method is that it permits determination of cause-and-effect relationships among variables;[75] the limitation is that data obtained in an artificial environment may lack generalizability.[75]

The correlational method explores the relationship between two or more events by gathering information about these variables without researcher intervention.[74][75] The advantage of a correlational design is that it estimates the strength and direction of relationships among variables in the natural environment;[75] the limitation is that it does not permit determination of cause-and-effect relationships among variables.[75] The case-study approach allows investigators to obtain an in-depth understanding of an individual participant by collecting data based on interviews, structured questionnaires, observations, and test scores.[75]

Each of these methods has its strengths and weaknesses, but the experimental method, when appropriate, is the preferred method of developmental scientists because it provides a controlled situation from which conclusions about cause-and-effect relationships can be drawn.[74]
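As a concrete illustration of the correlational method described above, the short sketch below computes a Pearson correlation on made-up data. The variables (weekly shared-reading hours and a vocabulary score) and all values are hypothetical, and, as the text notes, no causal conclusion follows from the result.

```python
# Minimal sketch of the correlational method on hypothetical data:
# hours of shared reading per week vs. a vocabulary score for ten children.
# Nothing is manipulated, so the design cannot establish causation.
from statistics import correlation  # available in Python 3.10+

reading_hours = [1, 2, 2, 3, 4, 5, 5, 6, 7, 8]
vocab_scores = [52, 55, 60, 58, 66, 70, 68, 75, 78, 83]

r = correlation(reading_hours, vocab_scores)
print(f"Pearson r = {r:.2f}")  # positive r: the two variables rise together
```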
Most developmental studies, regardless of whether they employ the experimental, correlational, or case study method, can also be constructed using research designs.[72] Research designs are logical frameworks used to make key comparisons within research studies, such as:
In a longitudinal study, a researcher observes many individuals born at or around the same time (a cohort) and carries out new observations as members of the cohort age. This method can be used to draw conclusions about which types of development are universal (or normative) and occur in most members of a cohort. As an example, a longitudinal study of early literacy development examined in detail the early literacy experiences of one child in each of 30 families.[76]
Researchers may also observe ways that development varies between individuals, and hypothesize about the causes of variation in their data. Longitudinal studies often require large amounts of time and funding, making them unfeasible in some situations. Also, because members of a cohort all experience historical events unique to their generation, apparently normative developmental trends may, in fact, be universal only to their cohort.[77]
In a cross-sectional study, a researcher observes differences between individuals of different ages at the same time. This generally requires fewer resources than the longitudinal method, and because the individuals come from different cohorts, shared historical events are not so much of a confounding factor. By the same token, however, cross-sectional research may not be the most effective way to study differences between participants, as these differences may result not from their different ages but from their exposure to different historical events.[78]
A third study design, the sequential design, combines both methodologies. Here, a researcher observes members of different birth cohorts at the same time, and then tracks all participants over time, charting changes in the groups. While much more resource-intensive, the format helps distinguish changes that can be attributed to the individual or historical environment from those that are truly universal.[79]
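The logic of a sequential design can be sketched in code: hypothetical mean scores are indexed by birth cohort and measurement year, so the same age can be compared across cohorts measured in different historical years. The cohorts, years, and scores below are invented purely for illustration.

```python
# Minimal sketch of a sequential design: two birth cohorts, each measured
# repeatedly, so the same age is observed in different historical years.
# All scores are hypothetical.

scores = {
    # (birth_cohort, measurement_year): mean score on some task
    (1990, 2000): 41, (1990, 2005): 55, (1990, 2010): 63,
    (1995, 2005): 44, (1995, 2010): 57, (1995, 2015): 64,
}

def score_at_age(cohort, age):
    """Look up a cohort's score at a given age (measured in year cohort+age)."""
    return scores.get((cohort, cohort + age))

for age in (10, 15, 20):
    by_cohort = {c: score_at_age(c, age) for c in (1990, 1995)}
    print(f"age {age}: {by_cohort}")
    # Similar scores across cohorts at the same age suggest a true age
    # effect; systematic gaps would point to cohort (historical) effects.
```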
Because every method has some weaknesses, developmental psychologists rarely rely on one study or even one method to reach conclusions; instead, they seek consistent evidence from as many converging sources as possible.[74]
Prenatal development is of interest to psychologists investigating the context of early psychological development. Prenatal development involves three main stages: the germinal stage, the embryonic stage, and the fetal stage. The germinal stage lasts from conception until 2 weeks; the embryonic stage spans development from 2 to 8 weeks; and the fetal stage lasts from 9 weeks until birth.[80] The senses develop in the womb itself: a fetus can both see and hear by the second trimester (13 to 24 weeks of age). The sense of touch develops in the embryonic stage (5 to 8 weeks).[81] Most of the brain's billions of neurons also have developed by the second trimester.[82] Babies are hence born with some odor, taste, and sound preferences, largely related to the mother's environment.[83]
Some primitive reflexes, too, arise before birth and are still present in newborns. One hypothesis is that these reflexes are vestigial and have limited use in early human life. Piaget's theory of cognitive development suggested that some early reflexes are building blocks for infant sensorimotor development. For example, the tonic neck reflex may help development by bringing objects into the infant's field of view.[84]
Other reflexes, such as the walking reflex, appear to be replaced by more sophisticated voluntary control later in infancy. This may be because the infant gains too much weight after birth to be strong enough to use the reflex, or because the reflex and subsequent development are functionally different.[85] It has also been suggested that some reflexes (for example, the Moro and walking reflexes) are predominantly adaptations to life in the womb with little connection to early infant development.[84] Primitive reflexes reappear in adults under certain conditions, such as neurological conditions like dementia or traumatic lesions.
Ultrasounds have shown that infants are capable of a range of movements in the womb, many of which appear to be more than simple reflexes.[85] By the time they are born, infants can recognize and have a preference for their mother's voice, suggesting some prenatal development of auditory perception.[85] Prenatal development and birth complications may also be connected to neurodevelopmental disorders, for example in schizophrenia. With the advent of cognitive neuroscience, embryology and the neuroscience of prenatal development are of increasing interest to developmental psychology research.
Several environmental agents, known as teratogens, can cause damage during the prenatal period. These include prescription and nonprescription drugs, illegal drugs, tobacco, alcohol, environmental pollutants, infectious disease agents such as the rubella virus and the toxoplasmosis parasite, maternal malnutrition, maternal emotional stress, and Rh factor blood incompatibility between mother and child.[86] Statistics illustrate the effects of such substances: for example, at least 100,000 "cocaine babies" were estimated to be born in the United States annually in the late 1980s. Such children can show severe and lasting difficulties that persist throughout infancy and childhood, as well as behavioral problems and defects of various vital organs.[87]
From birth until the first year, children are referred to as infants. As they grow, children respond to their environment in unique ways.[88] Developmental psychologists vary widely in their assessment of infant psychology and the influence the outside world has upon it.
The majority of a newborn infant's time is spent sleeping.[89] At first, their sleep cycles are evenly spread throughout the day and night, but after a couple of months, infants generally become diurnal.[90] In human and rodent infants, a diurnal cortisol rhythm is consistently observed, sometimes entrained with a maternal substance.[91] The circadian rhythm takes shape gradually, and a 24-hour rhythm is observed within just a few months after birth.[90][91]
Infants can be seen to have six states, grouped into pairs:
Infant perception is what a newborn can see, hear, smell, taste, and touch. These five features are considered the "five senses".[94] Because of these different senses, infants respond to stimuli differently.[85]
Babies are born with the ability to discriminate virtually all sounds of all human languages.[102] Infants of around six months can differentiate between phonemes in their own language, but not between similar phonemes in another language. Notably, infants can differentiate between various durations and sound levels and can distinguish among the languages they have encountered, which makes it easier for an infant to acquire a language than for an adult.[103]
At this stage infants also start to babble, whereby they begin making vowel-consonant sounds as they try to understand the true meaning of language, copying whatever they hear in their surroundings and producing their own phonemes.
In various cultures, a distinct form of speech called "babytalk" is used when communicating with newborns and young children. This register consists of simplified terms for common topics such as family members, food, hygiene, and familiar animals. It also exhibits specific phonological patterns, such as substituting alveolar sounds with initial velar sounds, especially in languages like English. Furthermore, babytalk often involves morphological simplifications, such as regularizing verb conjugations (for instance, saying "corned" instead of "cornered" or "goed" instead of "went"). This language is typically taught to children and is perceived as their natural way of communication. Interestingly, in mythology and popular culture, certain characters, such as the "Hausa trickster" or the Warner Bros cartoon character "Tweety Pie", are portrayed as speaking in a babytalk-like manner.[104]
Piaget suggested that an infant's perception and understanding of the world depended on their motor development, which was required for the infant to link visual, tactile, and motor representations of objects.[105] The concept of object permanence refers to the knowledge that an object exists even when it is not directly perceived or visible; in other words, something is still there even if it cannot be seen. This is a crucial developmental milestone for infants, who learn that something is not necessarily lost forever just because it is hidden. A child who displays object permanence will look for a toy that is hidden, showing awareness that the item is still there even when it is covered by a blanket. Most babies start to show signs of object permanence around the age of eight months. According to Piaget's theory, infants develop object permanence through touching and handling objects.[85]
Piaget's sensorimotor stage comprised six sub-stages (see sensorimotor stages for more detail). In the early stages, development arises out of movements caused by primitive reflexes.[106] Discovery of new behaviors results from classical and operant conditioning and the formation of habits.[106] From eight months, the infant is able to uncover a hidden object but will perseverate when the object is moved.
Piaget concluded that infants lacked object permanence before 18 months because infants before this age failed to look for an object where it had last been seen. Instead, they continued to look for the object where it was first seen, committing the "A-not-B error". Some researchers have suggested that before the age of 8–9 months, infants' inability to understand object permanence extends to people, which explains why infants at this age do not cry when their mothers are gone ("out of sight, out of mind").
In the 1980s and 1990s, researchers developed new methods of assessing infants' understanding of the world with far more precision and subtlety than Piaget was able to do in his time. Since then, many studies based on these methods suggest that young infants understand far more about the world than first thought.
Based on recent findings, some researchers (such as Elizabeth Spelke and Renee Baillargeon) have proposed that an understanding of object permanence is not learned at all, but rather comprises part of the innate cognitive capacities of our species.
According to Jean Piaget's developmental psychology, object permanence, or the awareness that objects exist even when they are no longer visible, was thought to emerge gradually between the ages of 8 and 12 months. However, experts such as Elizabeth Spelke and Renee Baillargeon have questioned this notion. They studied infants' comprehension of object permanence at a young age using novel experimental approaches such as violation-of-expectation paradigms. These findings imply that children as young as 3 to 4 months old may have an innate awareness of object permanence. Baillargeon's "drawbridge" experiment, for example, showed that infants were surprised when they saw occurrences that contradicted object permanence expectations. This proposition has important consequences for our understanding of infant cognition, implying that infants may be born with core cognitive abilities rather than developing them via experience and learning.[107]
Other research has suggested that young infants in their first six months of life may possess an understanding of numerous aspects of the world around them, including:
There are critical periods in infancy and childhood during which the development of certain perceptual, sensorimotor, social, and language systems depends crucially on environmental stimulation.[111] Feral children such as Genie, deprived of adequate stimulation, fail to acquire important skills and are unable to learn in later childhood. Genie represents the case of a feral child: socially neglected and abused from a young age, with little human contact and no one to care for her, she developed abnormally and experienced lasting linguistic problems. The concept of critical periods is also well established in neurophysiology, from the work of Hubel and Wiesel among others. Infant neurophysiology provides correlations between neurophysiological findings and clinical features, and offers vital information on rare and common neurological disorders that affect infants.
Studies have compared children with developmental delays (DD) to children with typical development (TD). Normally, when the two are compared, mental age (MA) is not taken into consideration, yet DD children may still differ from TD children in behavioral, emotional, and other mental disorders. When DD children are compared with chronological-age peers, the difference from normal developmental behaviors is larger overall. Because developmental delays can lower MA, comparing DD children with TD children of the same chronological age may not be accurate; pairing DD children with TD children of similar MA can be more accurate. Certain levels of behavioral difference are considered normal at certain ages, so when evaluating DD children and MA, researchers consider whether those with DDs show a larger amount of behavior that is atypical for their MA group. Developmental delays tend to contribute to other disorders or difficulties more than in their TD counterparts.[112]
Between the ages of one and two, infants shift to a developmental stage known as toddlerhood. In this stage, the infant's transition into toddlerhood is highlighted through self-awareness, developing maturity in language use, and the presence of memory and imagination.
During toddlerhood, babies begin learning how to walk, talk, and make decisions for themselves. An important characteristic of this age period is the development of language, where children are learning how to communicate and express their emotions and desires through the use of vocal sounds, babbling, and eventually words.[113] Self-control also begins to develop. At this age, children take the initiative to explore, experiment, and learn from making mistakes. Caretakers who encourage toddlers to try new things and test their limits help the child become autonomous, self-reliant, and confident.[114] If the caretaker is overprotective or disapproving of independent actions, the toddler may begin to doubt their abilities and feel ashamed of the desire for independence. The child's autonomous development is inhibited, leaving them less prepared to deal with the world in the future. Toddlers also begin to identify themselves in gender roles, acting according to their perception of what a man or woman should do.[115]
Socially, the period of toddlerhood is commonly called the "terrible twos".[116] Toddlers often use their new-found language abilities to voice their desires but are often misunderstood by parents because their language skills are just beginning to develop. A person at this stage testing their independence is another reason behind the stage's infamous label. Tantrums in a fit of frustration are also common.
Erik Erikson divides childhood into four stages, each with its distinct social crisis:[117]
As stated, the psychosocial crisis for Erikson is Trust versus Mistrust. Needs are the foundation for gaining or losing trust in the infant. If the needs are met, trust in the guardian and the world forms. If the needs are not met, or the infant is neglected, mistrust forms alongside feelings of anxiety and fear.[119]
Autonomy versus shame follows trust in infancy. In this stage, the child begins to explore their world and discovers their preferences. If autonomy is allowed, the child grows in independence and ability. If freedom of exploration is hindered, feelings of shame and low self-esteem result.[119]
In the earliest years, children are "completely dependent on the care of others". Therefore, they develop a "social relationship" with their care givers and, later, with family members. During their preschool years (3–5), they "enlarge their social horizons" to include people outside the family.[120]
Preoperational and then operational thinking develops, which means actions are reversible and egocentric thought diminishes.[121]
The motor skills of preschoolers increase so they can do more things for themselves. They become more independent. No longer completely dependent on the care of others, the world of this age group expands. More people have a role in shaping their individual personalities. Preschoolers explore and question their world.[122] For Jean Piaget, the child is "a little scientist exploring and reflecting on these explorations to increase competence", and this is done in "a very independent way".[123]
Play is a major activity for ages 3–5. For Piaget, through play "a child reaches higher levels of cognitive development."[124]
In their expanded world, children in the 3–5 age group attempt to find their own way. If this is done in a socially acceptable way, the child develops initiative; if not, the child develops guilt.[125] Children who develop guilt rather than initiative have failed Erikson's psychosocial crisis for this age group.
For Erik Erikson, the psychosocial crisis during middle childhood is Industry vs. Inferiority which, if successfully met, instills a sense of Competency in the child.[117]
In all cultures, middle childhood is a time for developing "skills that will be needed in their society."[126] School offers an arena in which children can gain a view of themselves as "industrious (and worthy)"; they are "graded for their school work and often for their industry". They can also develop industry outside of school in sports, games, and volunteer work.[127] Children who achieve "success in school or games might develop a feeling of competence."
The "peril during this period is that feelings of inadequacy and inferiority will develop.[126]Parents and teachers can "undermine" a child's development by failing to recognize accomplishments or being overly critical of a child's efforts.[127]Children who are "encouraged and praised" develop a belief in their competence. Lack of encouragement or ability to excel lead to "feelings of inadequacy and inferiority".[128]
The Centers for Disease Control (CDC) divides middle childhood into two stages, 6–8 years and 9–11 years, and gives "developmental milestones for each stage".[129][130]
Entering elementary school, children in this age group begin to think about the future and their "place in the world". Working with other students and wanting their friendship and acceptance become more important. This leads to "more independence from parents and family". As students, they develop the mental and verbal skills "to describe experiences and talk about thoughts and feelings". They become less self-centered and show "more concern for others".[129]
For children ages 9–11 "friendships and peer relationships" increase in strength, complexity, and importance. This results in greater "peer pressure". They grow even less dependent on their families and they are challenged academically. To meet this challenge, they increase their attention span and learn to see other points of view.[130]
Adolescence is the period of life between the onset of puberty and the full commitment to an adult social role, such as worker, parent, and/or citizen. It is the period known for the formation of personal and social identity (see Erik Erikson) and the discovery of moral purpose (see William Damon). Intelligence is demonstrated through the logical use of symbols related to abstract concepts and formal reasoning. A return to egocentric thought often occurs early in the period. Only 35% develop the capacity to reason formally during adolescence or adulthood (Huitt, W. and Hummel, J., January 1998).[131]
Erik Erikson labels this stage identity versus role confusion. Erikson emphasizes the importance of developing a sense of identity in adolescence because it affects the individual throughout life. Identity formation is a lifelong process and is related to curiosity and active engagement. Role confusion describes the unresolved state of an individual's identity; identity exploration is the process of moving from role confusion to resolution.[132]
During Erik Erikson's identity versus role confusion stage, which occurs in adolescence, people struggle to form a cohesive sense of self while exploring many social roles and prospective life paths. This time is characterized by deep introspection, self-examination, and the pursuit of self-understanding. Adolescents are confronted with questions regarding their identity, beliefs, and future goals. The central problem is building a strong sense of identity in the face of societal standards, peer pressure, and personal preferences. Adolescents engage in identity exploration, commitment, and synthesis, actively seeking out new experiences, embracing values and aspirations, and merging their changing sense of self into a coherent identity. Successfully navigating this stage builds the groundwork for healthy psychological development in adulthood, allowing people to pursue meaningful relationships, make positive contributions to society, and handle life's adversities with perseverance and purpose.[9]
It is divided into three parts, namely: early adolescence, middle adolescence, and late adolescence.
The adolescent unconsciously explores questions such as "Who am I? Who do I want to be?" Like toddlers, adolescents must explore, test limits, become autonomous, and commit to an identity, or sense of self. Different roles, behaviors, and ideologies must be tried out to select an identity. A failure to achieve a sense of identity, through friends for example, can result in role confusion and an inability to choose a vocation.[133]
Early adulthood generally refers to the period between ages 18 to 39,[134] and according to theorists such as Erik Erikson, is a stage where development is mainly focused on maintaining relationships.[135] Erikson shows the importance of relationships by labeling this stage intimacy versus isolation. Intimacy suggests a process of becoming part of something larger than oneself by sacrificing in romantic relationships and working toward both life and career goals.[136] Other examples include creating bonds of intimacy, sustaining friendships, and starting a family. Some theorists state that the development of intimacy skills relies on the resolution of previous developmental stages; a sense of identity gained in those stages is also necessary for intimacy to develop. If this skill is not learned, the alternative is alienation, isolation, a fear of commitment, and the inability to depend on others.
Isolation, on the other hand, suggests something different than most might expect. Erikson defined it as a delay of commitment in order to maintain freedom. Yet this decision does not come without consequences. Erikson explained that choosing isolation may affect one's chances of getting married, progressing in a career, and overall development.[136]
A related framework for studying this part of the lifespan is that of emerging adulthood. Scholars of emerging adulthood, such as Jeffrey Arnett, are not necessarily interested in relationship development. Instead, this concept suggests that people transition after their teenage years into a period characterized not by relationship building and an overall sense of constancy with life, but by years of living with parents, phases of self-discovery, and experimentation.[137]
Middle adulthood generally refers to the period between ages 40 to 64. During this period, middle-aged adults experience a conflict between generativity and stagnation. Generativity is the sense of contributing to society, the next generation, or one's immediate community; stagnation, on the other hand, results in a lack of purpose.[138] The adult's identity continues to develop in middle adulthood. Middle-aged adults often adopt characteristics of the opposite gender. The adult realizes they are halfway through their life and often reevaluates vocational and social roles. Life circumstances can also cause a reexamination of identity.[139]
Physically, the middle-aged experience a decline in muscular strength, reaction time, sensory keenness, and cardiac output. Also, women experience menopause at an average age of 48.8 and a sharp drop in the hormone estrogen.[140] Men experience an equivalent endocrine system event. Andropause in males is a hormone fluctuation with physical and psychological effects that can be similar to those seen in menopausal females. As men age, lowered testosterone levels can contribute to mood swings and a decline in sperm count. Sexual responsiveness can also be affected, including delays in erection and longer periods of penile stimulation required to achieve ejaculation.
The important influence of the biological and social changes experienced by women and men in middle adulthood is reflected in the fact that, around the world, depression peaks at about age 48.5.[141]
The World Health Organization finds "no general agreement on the age at which a person becomes old." Most "developed countries" set the age as 65 or 70. However, in developing countries the inability to make an "active contribution" to society, not chronological age, marks the beginning of old age.[142][143] According to Erikson's stages of psychosocial development, old age is the stage in which individuals assess the quality of their lives.[144]
Erikson labels this stage integrity versus despair. For integrated persons, there is a sense of fulfillment in life; they have become self-aware and optimistic through life's commitments and connection to others. While reflecting on life, people in this stage develop feelings of contentment with their experiences. If a person falls into despair, they are often disappointed about failures or missed chances in life and may feel that the time left is too short to turn things around.[145]
Physically, older people experience a decline in muscular strength, reaction time, stamina, hearing, distance perception, and the sense of smell.[146]They also are more susceptible to diseases such as cancer and pneumonia due to a weakened immune system.[147]Programs aimed at balance, muscle strength, and mobility have been shown to reduce disability among mildly (but not more severely) disabled elderly.[148]
Sexual expression depends in large part upon the emotional and physical health of the individual. Many older adults continue to be sexually active and satisfied with their sexual activity.[149]
Mental disintegration may also occur, leading to dementia or ailments such as Alzheimer's disease. The average age of onset for dementia is 78.8 in males and 81.9 in females.[150] It is generally believed that crystallized intelligence increases up to old age, while fluid intelligence decreases with age.[151] Whether normal intelligence increases or decreases with age depends on the measure and study. Longitudinal studies show that perceptual speed, inductive reasoning, and spatial orientation decline.[152] An article on adult cognitive development reports that cross-sectional studies show that "some abilities remained stable into early old age".[152]
Parenting variables alone have typically accounted for 20 to 50 percent of the variance in child outcomes.[153]
All parents have their own parenting styles. Parenting styles, according to Kimberly Kopko, are "based upon two aspects of parenting behavior; control and warmth. Parental control refers to the degree to which parents manage their children's behavior. Parental warmth refers to the degree to which parents are accepting and responsive to their children's behavior."[154]
The following parenting styles, defined by combinations of warmth and control, have been described in the child development literature: authoritative (high warmth with firm, reasoned control), authoritarian (high control with low warmth), permissive or indulgent (high warmth with little control), and uninvolved or neglectful (low warmth and low control).
Parenting research has traditionally focused on mothers, but recent studies highlight the important role of fathers in child development. Children as young as 15 months benefit significantly from substantial engagement with their father.[158][159] In particular, a study in the U.S. and New Zealand found the presence of the natural father was the most significant factor in reducing rates of early sexual activity and rates of teenage pregnancy in girls.[160] However, neither a mother nor a father is actually essential to successful parenting: single parents as well as homosexual couples can support positive child outcomes.[161] Children need at least one consistently responsible adult with whom they can form a positive emotional bond; having multiple such figures further increases the likelihood of positive outcomes.[161]
Another parental factor often debated in terms of its effects on child development is divorce. Divorce in itself is not a determining factor of negative child outcomes. In fact, the majority of children from divorcing families fall into the normal range on measures of psychological and cognitive functioning.[162]A number of mediating factors play a role in determining the effects divorce has on a child, for example, divorcing families with young children often face harsher consequences in terms of demographic, social, and economic changes than do families with older children.[162]Positive coparenting after divorce is part of a pattern associated with positive child coping, while hostile parenting behaviors lead to a destructive pattern leaving children at risk.[162]Additionally, direct parental relationship with the child also affects the development of a child after a divorce. Overall, protective factors facilitating positive child development after a divorce are maternal warmth, positive father-child relationship, and cooperation between parents.[162]
One way to improve developmental psychology is better representation of cross-cultural studies. The field of psychology generally assumes that "basic" human developments are represented in any population, particularly the Western, Educated, Industrialized, Rich, and Democratic (W.E.I.R.D.) subjects relied on for the majority of studies. Previous research has generalized findings from W.E.I.R.D. samples because many in the field assume certain aspects of development are exempt from, or unaffected by, life experiences. Many of these assumptions, however, have been proven incorrect or are not supported by empirical research. For example, according to Kohlberg, moral reasoning is dependent on cognitive abilities. While both analytical and holistic cognitive systems can develop in any adult, the West sits at the extreme analytical end, while non-Western populations tend to use holistic processes. Furthermore, moral reasoning in the West considers chiefly aspects that support autonomy and the individual, whereas non-Western adults emphasize moral behaviors that support the community and maintain an image of holiness or divinity. Not all aspects of human development are universal, and much can be learned from observing different regions and subjects.[163]
An example of a non-Western model of developmental stages is the Indian model, which focuses much of its psychological research on morality and interpersonal progress. The developmental stages in Indian models are grounded in Hinduism, which teaches stages of life in the process of discovering one's fate or Dharma.[164] This cross-cultural model can add another perspective to psychological development, one in which Western behavioral sciences have not emphasized kinship, ethnicity, or religion.[163]
Indian psychologists study the relevance of attentive families during the early stages of life. The early life stages reflect a different parenting style from the West because the family does not try to rush children out of dependency; it is meant to help the child grow into the next developmental stage at the appropriate age. This way, when children finally integrate into society, they are interconnected with those around them and reach renunciation when they are older. Children are raised in joint families so that in early childhood (ages 6 months to 2 years) the other family members help gradually wean the child from its mother. During ages 2 to 5, the parents do not rush toilet training; instead of training the child to perform this behavior, the child learns to do it at their own pace as they mature.
This model of early human development encourages dependency, unlike Western models that value autonomy and independence. Because caregivers are attentive and do not force the child to become independent, children are confident and have a sense of belonging by late childhood and adolescence. This stage of life (5–15 years) is also when children start education and increase their knowledge of Dharma.[165] It is within early and middle adulthood that moral development progresses. Early, middle, and late adulthood are all concerned with caring for others and fulfilling Dharma; the main distinction among them is how far a person's influence reaches. Early adulthood emphasizes fulfilling immediate family needs, while later adulthood broadens those responsibilities to the general public. The old-age life stage reaches renunciation, or a complete understanding of Dharma.[164]
Current mainstream views in the psychological field are critical of the Indian model of human development. The criticism is that its parenting style is overly protective and encourages too much dependency, and that it focuses on interpersonal rather than individual goals. There are overlaps and similarities between Erikson's stages of human development and the Indian model, but the two still differ in major ways. The West prefers Erikson's framework over the Indian model because it is supported by scientific studies; the life cycles based on Hinduism are less favored because they are not supported by research and describe an idealized course of human development.[164]
| https://en.wikipedia.org/wiki/Developmental_psychology |
Finger-counting, also known as dactylonomy, is the act of counting using one's fingers. There are multiple different systems used across time and between cultures, though many of these have seen a decline in use because of the spread of Arabic numerals.
Finger-counting can serve as a form of manual communication, particularly in marketplace trading – including hand signaling during open outcry in floor trading – and also in hand games, such as morra.
Finger-counting is known to go back to ancient Egypt at least, and probably even further back.[Note 1]
Complex systems of dactylonomy were used in the ancient world.[1] The Greco-Roman author Plutarch, in his Lives, mentions finger counting as being used in Persia in the first centuries CE, so the practice may have originated in Iran. It was later used widely in medieval Islamic lands. The earliest reference to this method of using the hands to refer to the natural numbers may be in some Prophetic traditions going back to the early days of Islam during the early 600s. In one tradition, as reported by Yusayra, Muhammad enjoined his female companions to express praise to God and to count using their fingers (واعقدن بالأنامل, "and count with the fingertips"; Sunan al-Tirmidhi).
In Arabic, dactylonomy is known as "number reckoning by finger folding" (حساب العقود). The practice was well known in the Arabic-speaking world and quite commonly used, as evidenced by the numerous references to it in Classical Arabic literature. Poets could allude to a miser by saying that his hand made "ninety-three", i.e. a closed fist, the sign of avarice. When an old man was asked how old he was, he could answer by showing a closed fist, meaning 93. The gesture for 50 was used by some poets (for example Ibn Al-Moutaz) to describe the beak of the goshawk.
Some of the gestures used to refer to numbers were even known in Arabic by special technical terms, such as kas' (القصع) for the gesture signifying 29, dabth (الـضَـبْـث) for 63, and daff (الـضَـفّ) for 99 (as recorded in Fiqh al-Lugha, فقه اللغة).
The polymath Al-Jahiz advised schoolmasters in his book Al-Bayan (البيان والتبيين) to teach finger counting, which he placed among the five methods of human expression. Similarly, Al-Suli, in his Handbook for Secretaries, wrote that scribes preferred dactylonomy to any other system because it required neither materials nor an instrument, apart from a limb. Furthermore, it ensured secrecy and was thus in keeping with the dignity of the scribe's profession.
Books dealing with dactylonomy, such as a treatise by the mathematician Abu'l-Wafa al-Buzajani, gave rules for performing complex operations, including the approximate determination of square roots. Several pedagogical poems dealt exclusively with finger counting, some of which were translated into European languages, including a short poem by Shamsuddeen Al-Mawsili (translated into French by Aristide Marre) and one by Abul-Hasan Al-Maghribi (translated into German by Julius Ruska[2]).
A very similar form is presented by the English monk and historian Bede in the first chapter of his De temporum ratione (725), entitled "Tractatus de computo, vel loquela per gestum digitorum",[3][1] which allowed counting up to 9,999 on two hands, though it was apparently little-used for numbers of 100 or more. This system remained in use through the European Middle Ages, being presented in slightly modified form by Luca Pacioli in his seminal Summa de arithmetica (1494).
Finger-counting varies between cultures and over time, and is studied by ethnomathematics. Cultural differences in counting are sometimes used as a shibboleth, particularly to distinguish nationalities in wartime. These form a plot point in the film Inglourious Basterds, by Quentin Tarantino, and in the book Pi in the Sky, by John D. Barrow.[4][3]
Finger-counting systems in use in many regions of Asia allow for counting to 12 by using a single hand. The thumb acts as a pointer touching the three finger bones of each finger in turn, starting with the outermost bone of the little finger. One hand is used to count numbers up to 12, while the other hand displays the number of completed base-12s. This continues until twelve dozens are reached, so 144 can be counted.[5][6][Note 2]
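As a rough illustration of the arithmetic involved, here is a minimal Python sketch (the function name and interface are mine, not from the source) that splits a count of 1 to 144 into the completed dozens shown on one hand and the bone position touched on the counting hand:

# A sketch of the base-12 finger-bone system described above: the counting
# hand cycles through positions 1..12 (three bones on each of four fingers),
# while the other hand records how many complete dozens have passed.

def dozens_representation(n: int) -> tuple[int, int]:
    """Split a count 1..144 into (completed dozens, bone position 1..12)."""
    if not 1 <= n <= 144:
        raise ValueError("the two-hand system described covers 1..144")
    dozens, position = divmod(n - 1, 12)
    return dozens, position + 1

print(dozens_representation(25))   # (2, 1): two full dozens, first bone again
print(dozens_representation(144))  # (11, 12): twelve dozen reached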
Chinese number gestures count up to 10 but can exhibit some regional differences.
In Japan, counting for oneself begins with the palm of one hand open. As in East Slavic countries, the thumb represents number 1 and the little finger number 5. Digits are folded inwards while counting, starting with the thumb.[7] A closed palm indicates number 5. By reversing the action, number 6 is indicated by extending the little finger,[8] and a return to an open palm signals the number 10. However, to indicate numerals to others, the hand is used in the same manner as by an English speaker: the index finger becomes number 1 and the thumb now represents number 5. For numbers above five, the appropriate number of fingers from the other hand are placed against the palm. For example, number 7 is represented by the index and middle finger pressed against the palm of the open hand.[9] Number 10 is displayed by presenting both hands open with palms outward.
In Korea, Chisanbop allows for signing any number between 0 and 99.
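In the usual description of Chisanbop, each finger of the units hand is worth 1 and its thumb 5, while each finger of the tens hand is worth 10 and its thumb 50. A small Python sketch of that decomposition (the dictionary representation is illustrative, not an official notation):

# A sketch (assumptions as stated above) of how Chisanbop signs 0..99:
# tens on one hand, units on the other, with each thumb standing for five
# of that hand's unit.

def chisanbop(n: int) -> dict:
    if not 0 <= n <= 99:
        raise ValueError("Chisanbop signs 0..99")
    tens, units = divmod(n, 10)
    return {
        "tens_thumb": tens >= 5, "tens_fingers": tens % 5,
        "units_thumb": units >= 5, "units_fingers": units % 5,
    }

print(chisanbop(73))
# {'tens_thumb': True, 'tens_fingers': 2, 'units_thumb': False, 'units_fingers': 3}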
In the Western world a finger is raised for each unit. While there are extensive differences between and even within countries, there are, generally speaking, two systems. The main difference between the two systems is that the "German" or "French" system starts counting with the thumb, while the "American" system starts counting with the index finger.[12]
In the system used, for example, in Germany and France, the thumb represents 1, the thumb plus the index finger represents 2, and so on, until the thumb plus the index, middle, ring, and little fingers represents 5. This continues on the other hand, where the entire first hand plus the thumb of the other hand means 6, and so on.
In the system used in the Americas, the index finger represents 1; the index and middle fingers represent 2; the index, middle, and ring fingers represent 3; the index, middle, ring, and little fingers represent 4; and the four fingers plus the thumb represent 5. This continues on the other hand, where the entire first hand plus the index finger of the other hand means 6, and so on.
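A tiny Python sketch contrasting the two conventions just described (the list names are mine):

# The "German/French" count opens with the thumb; the "American" count opens
# with the index finger, the thumb arriving only at five.

GERMAN_ORDER = ["thumb", "index", "middle", "ring", "little"]
AMERICAN_ORDER = ["index", "middle", "ring", "little", "thumb"]

def fingers_for(n: int, order: list[str]) -> list[str]:
    """Fingers raised on the first hand for a count of n (1..5)."""
    return order[:n]

print(fingers_for(2, GERMAN_ORDER))    # ['thumb', 'index']
print(fingers_for(2, AMERICAN_ORDER))  # ['index', 'middle']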
In finger binary (base 2), each finger represents a different bit: for example, thumb for 1, index for 2, middle for 4, ring for 8, and pinky for 16. This allows counting from zero to 31 using the fingers of one hand, or to 1023 using both.
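A minimal Python sketch of this scheme (the names are mine):

# Finger binary as described: each raised finger contributes its bit value,
# so one hand spans 0..31.

FINGER_VALUES = {"thumb": 1, "index": 2, "middle": 4, "ring": 8, "pinky": 16}

def finger_binary(raised: set[str]) -> int:
    """Sum the bit values of the raised fingers."""
    return sum(FINGER_VALUES[f] for f in raised)

print(finger_binary({"thumb", "middle"}))  # 5
print(finger_binary(set(FINGER_VALUES)))   # 31, all fingers raised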
In senary finger counting (base 6), one hand represents the units (0 to 5) and the other hand represents multiples of 6. It counts up to 55 senary (35 decimal). Two related representations can also be expressed: wholes and sixths (counting up to 5.5 by sixths), or sixths and thirty-sixths (counting up to 0.55 by thirty-sixths). For example, "12" (left 1, right 2) can represent eight (12 senary), four-thirds (1.2 senary), or two-ninths (0.12 senary).
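A correspondingly small sketch of the whole-number senary reading; the interface is illustrative:

# One hand shows multiples of six (0..5), the other the units (0..5),
# so "left 1 right 2" is 1*6 + 2 = 8, matching the example in the text.

def senary_value(left: int, right: int) -> int:
    if not (0 <= left <= 5 and 0 <= right <= 5):
        raise ValueError("each hand shows 0..5")
    return 6 * left + right

print(senary_value(1, 2))  # 8, written 12 in base 6
print(senary_value(5, 5))  # 35, written 55 in base 6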
Undoubtedly the decimal (base-10) counting system came to prominence due to the widespread use of finger counting, but many other counting systems have been used throughout the world. Likewise, base-20 counting systems, such as that used by the Pre-Columbian Maya, are likely due to counting on fingers and toes. This is suggested in the languages of Central Brazilian tribes, where the word for twenty often incorporates the word for "feet".[13] Other languages using a base-20 system often refer to twenty in terms of "men", that is, 1 "man" = 20 "fingers and toes". For instance, the Dene-Dinje tribe of North America refer to 5 as "my hand dies", 10 as "my hands have died", 15 as "my hands are dead and one foot is dead", and 20 as "a man dies".[14]
Even the French language today shows remnants of a Gaulish base-20 system in the names of the numbers from 60 through 99. For example, sixty-five is soixante-cinq (literally, "sixty [and] five"), while seventy-five is soixante-quinze (literally, "sixty [and] fifteen").
The Yuki language in California and the Pamean languages[15] in Mexico have octal (base-8) systems because the speakers count using the spaces between their fingers rather than the fingers themselves.[16]
In languages of New Guinea and Australia, such as the Telefol language of Papua New Guinea, body counting is used to give higher-base counting systems, up to base-27. On Muralug Island, the counting system works as follows: starting with the little finger of the left hand, count each finger; then, for six through ten, successively touch and name the left wrist, left elbow, left shoulder, left breast, and sternum. Then, for eleven through nineteen, count the body parts in reverse order on the right side of the body (with the right little finger signifying nineteen). A variant among the Papuans of New Guinea uses, on the left, the fingers, then the wrist, elbow, shoulder, left ear, and left eye; then, on the right, the eye, nose, mouth, right ear, shoulder, wrist, and finally the fingers of the right hand, adding up to 22, anusi, which means "little finger".[18] | https://en.wikipedia.org/wiki/Finger_counting |
The history of mathematics deals with the origin of discoveries in mathematics and the mathematical methods and notation of the past. Before the modern age and the worldwide spread of knowledge, written examples of new mathematical developments have come to light only in a few locales. From 3000 BC the Mesopotamian states of Sumer, Akkad and Assyria, followed closely by Ancient Egypt and the Levantine state of Ebla, began using arithmetic, algebra and geometry for purposes of taxation, commerce, and trade, and also in the field of astronomy, to record time and formulate calendars.
The earliest mathematical texts available are from Mesopotamia and Egypt – Plimpton 322 (Babylonian, c. 2000–1900 BC),[2] the Rhind Mathematical Papyrus (Egyptian, c. 1800 BC)[3] and the Moscow Mathematical Papyrus (Egyptian, c. 1890 BC). All of these texts mention the so-called Pythagorean triples, so, by inference, the Pythagorean theorem seems to be the most ancient and widespread mathematical development after basic arithmetic and geometry.
The study of mathematics as a "demonstrative discipline" began in the 6th century BC with the Pythagoreans, who coined the term "mathematics" from the ancient Greek μάθημα (mathema), meaning "subject of instruction".[4] Greek mathematics greatly refined the methods (especially through the introduction of deductive reasoning and mathematical rigor in proofs) and expanded the subject matter of mathematics.[5] The ancient Romans used applied mathematics in surveying, structural engineering, mechanical engineering, bookkeeping, creation of lunar and solar calendars, and even arts and crafts. Chinese mathematics made early contributions, including a place value system and the first use of negative numbers.[6][7] The Hindu–Arabic numeral system and the rules for the use of its operations, in use throughout the world today, evolved over the course of the first millennium AD in India and were transmitted to the Western world via Islamic mathematics through the work of Muḥammad ibn Mūsā al-Khwārizmī.[8][9] Islamic mathematics, in turn, developed and expanded the mathematics known to these civilizations.[10] Contemporaneous with but independent of these traditions were the mathematics developed by the Maya civilization of Mexico and Central America, where the concept of zero was given a standard symbol in Maya numerals.
Many Greek and Arabic texts on mathematics were translated into Latin from the 12th century onward, leading to further development of mathematics in Medieval Europe. From ancient times through the Middle Ages, periods of mathematical discovery were often followed by centuries of stagnation.[11] Beginning in Renaissance Italy in the 15th century, new mathematical developments, interacting with new scientific discoveries, were made at an increasing pace that continues through the present day. This includes the groundbreaking work of both Isaac Newton and Gottfried Wilhelm Leibniz in the development of infinitesimal calculus during the course of the 17th century and the following discoveries of German mathematicians like Carl Friedrich Gauss and David Hilbert.
The origins of mathematical thought lie in the concepts of number, patterns in nature, magnitude, and form.[12] Modern studies of animal cognition have shown that these concepts are not unique to humans. Such concepts would have been part of everyday life in hunter-gatherer societies. The idea of the "number" concept evolving gradually over time is supported by the existence of languages which preserve the distinction between "one", "two", and "many", but not of numbers larger than two.[12]
The use of yarn by Neanderthals some 40,000 years ago at a site in Abri du Maras in the south of France suggests they knew basic concepts in mathematics.[13][14] The Ishango bone, found near the headwaters of the Nile river (northeastern Congo), may be more than 20,000 years old and consists of a series of marks carved in three columns running the length of the bone. Common interpretations are that the Ishango bone shows either a tally of the earliest known demonstration of sequences of prime numbers[15] or a six-month lunar calendar.[16] Peter Rudman argues that the development of the concept of prime numbers could only have come about after the concept of division, which he dates to after 10,000 BC, with prime numbers probably not being understood until about 500 BC. He also writes that "no attempt has been made to explain why a tally of something should exhibit multiples of two, prime numbers between 10 and 20, and some numbers that are almost multiples of 10."[17] The Ishango bone, according to scholar Alexander Marshack, may have influenced the later development of mathematics in Egypt as, like some entries on the Ishango bone, Egyptian arithmetic also made use of multiplication by 2; this, however, is disputed.[18]
Predynastic Egyptians of the 5th millennium BC pictorially represented geometric designs. It has been claimed that megalithic monuments in England and Scotland, dating from the 3rd millennium BC, incorporate geometric ideas such as circles, ellipses, and Pythagorean triples in their design.[19] All of the above are disputed, however, and the currently oldest undisputed mathematical documents are from Babylonian and dynastic Egyptian sources.[20]
Babylonian mathematics refers to any mathematics of the peoples of Mesopotamia (modern Iraq) from the days of the early Sumerians through the Hellenistic period almost to the dawn of Christianity.[21] The majority of Babylonian mathematical work comes from two widely separated periods: the first few hundred years of the second millennium BC (Old Babylonian period), and the last few centuries of the first millennium BC (Seleucid period).[22] It is named Babylonian mathematics due to the central role of Babylon as a place of study. Later under the Arab Empire, Mesopotamia, especially Baghdad, once again became an important center of study for Islamic mathematics.
In contrast to the sparsity of sources in Egyptian mathematics, knowledge of Babylonian mathematics is derived from more than 400 clay tablets unearthed since the 1850s.[23] Written in cuneiform script, tablets were inscribed whilst the clay was moist, and baked hard in an oven or by the heat of the sun. Some of these appear to be graded homework.[24]
The earliest evidence of written mathematics dates back to the ancient Sumerians, who built the earliest civilization in Mesopotamia. They developed a complex system of metrology from 3000 BC that was chiefly concerned with administrative/financial counting, such as grain allotments, workers, weights of silver, or even liquids, among other things.[25] From around 2500 BC onward, the Sumerians wrote multiplication tables on clay tablets and dealt with geometrical exercises and division problems. The earliest traces of the Babylonian numerals also date back to this period.[26]
Babylonian mathematics was written using a sexagesimal (base-60) numeral system.[23] From this derives the modern-day usage of 60 seconds in a minute, 60 minutes in an hour, and 360 (60 × 6) degrees in a circle, as well as the use of seconds and minutes of arc to denote fractions of a degree. It is thought the sexagesimal system was initially used by Sumerian scribes because 60 can be evenly divided by 2, 3, 4, 5, 6, 10, 12, 15, 20 and 30,[23] and for scribes (doling out the aforementioned grain allotments, recording weights of silver, etc.) being able to calculate easily by hand was essential, so a sexagesimal system is pragmatically easier to calculate by hand with; however, there is the possibility that using a sexagesimal system was an ethno-linguistic phenomenon (that might not ever be known), and not a mathematical/practical decision.[27] Also, unlike the Egyptians, Greeks, and Romans, the Babylonians had a place-value system, where digits written in the left column represented larger values, much as in the decimal system. The power of the Babylonian notational system lay in that it could be used to represent fractions as easily as whole numbers; thus multiplying two numbers that contained fractions was no different from multiplying integers, similar to modern notation. The notational system of the Babylonians was the best of any civilization until the Renaissance, and its power allowed it to achieve remarkable computational accuracy; for example, the Babylonian tablet YBC 7289 gives an approximation of √2 accurate to five decimal places.[28] The Babylonians lacked, however, an equivalent of the decimal point, and so the place value of a symbol often had to be inferred from the context.[22] By the Seleucid period, the Babylonians had developed a zero symbol as a placeholder for empty positions; however, it was only used for intermediate positions.[22] This zero sign does not appear in terminal positions, thus the Babylonians came close but did not develop a true place value system.[22]
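To make the place-value idea concrete, here is a short Python sketch that evaluates a sexagesimal fraction. The digits 1;24,51,10 are the standard modern reading of YBC 7289; the helper function itself is mine:

# Each digit to the right of the "sexagesimal point" is weighted by a
# successive power of 1/60, just as decimal digits are weighted by 1/10.

def from_sexagesimal(whole: int, fraction_digits: list[int]) -> float:
    value = float(whole)
    for i, d in enumerate(fraction_digits, start=1):
        value += d / 60**i
    return value

approx = from_sexagesimal(1, [24, 51, 10])
print(approx)  # 1.41421296..., vs sqrt(2) = 1.41421356...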
Other topics covered by Babylonian mathematics include fractions, algebra, quadratic and cubic equations, and the calculation of regular numbers and their reciprocal pairs.[29] The tablets also include multiplication tables and methods for solving linear, quadratic, and cubic equations, a remarkable achievement for the time.[30] Tablets from the Old Babylonian period also contain the earliest known statement of the Pythagorean theorem.[31] However, as with Egyptian mathematics, Babylonian mathematics shows no awareness of the difference between exact and approximate solutions, or the solvability of a problem, and most importantly, no explicit statement of the need for proofs or logical principles.[24]
Egyptian mathematics refers to mathematics written in the Egyptian language. From the Hellenistic period, Greek replaced Egyptian as the written language of Egyptian scholars. Mathematical study in Egypt later continued under the Arab Empire as part of Islamic mathematics, when Arabic became the written language of Egyptian scholars. Archaeological evidence has suggested that the Ancient Egyptian counting system had origins in Sub-Saharan Africa.[32] Also, fractal geometry designs which are widespread among Sub-Saharan African cultures are found in Egyptian architecture and cosmological signs as well.[33]
The most extensive Egyptian mathematical text is the Rhind papyrus (sometimes also called the Ahmes Papyrus after its author), dated to c. 1650 BC but likely a copy of an older document from the Middle Kingdom of about 2000–1800 BC.[34] It is an instruction manual for students in arithmetic and geometry. In addition to giving area formulas and methods for multiplication, division and working with unit fractions, it also contains evidence of other mathematical knowledge,[35] including composite and prime numbers; arithmetic, geometric and harmonic means; and simplistic understandings of both the Sieve of Eratosthenes and perfect number theory (namely, that of the number 6).[36] It also shows how to solve first-order linear equations[37] as well as arithmetic and geometric series.[38]
Another significant Egyptian mathematical text is the Moscow papyrus, also from the Middle Kingdom period, dated to c. 1890 BC.[39] It consists of what are today called word problems or story problems, which were apparently intended as entertainment. One problem is considered to be of particular importance because it gives a method for finding the volume of a frustum (truncated pyramid).
Finally, the Berlin Papyrus 6619 (c. 1800 BC) shows that ancient Egyptians could solve a second-order algebraic equation.[40]
Greek mathematics refers to the mathematics written in the Greek language from the time of Thales of Miletus (~600 BC) to the closure of the Academy of Athens in 529 AD.[41] Greek mathematicians lived in cities spread over the entire Eastern Mediterranean, from Italy to North Africa, but were united by culture and language. Greek mathematics of the period following Alexander the Great is sometimes called Hellenistic mathematics.[42]
Greek mathematics was much more sophisticated than the mathematics that had been developed by earlier cultures. All surviving records of pre-Greek mathematics show the use of inductive reasoning, that is, repeated observations used to establish rules of thumb. Greek mathematicians, by contrast, used deductive reasoning. The Greeks used logic to derive conclusions from definitions and axioms, and used mathematical rigor to prove them.[43]
Greek mathematics is thought to have begun with Thales of Miletus (c. 624–c. 546 BC) and Pythagoras of Samos (c. 582–c. 507 BC). Although the extent of the influence is disputed, they were probably inspired by Egyptian and Babylonian mathematics. According to legend, Pythagoras traveled to Egypt to learn mathematics, geometry, and astronomy from Egyptian priests.
Thales used geometry to solve problems such as calculating the height of pyramids and the distance of ships from the shore. He is credited with the first use of deductive reasoning applied to geometry, by deriving four corollaries to Thales' theorem. As a result, he has been hailed as the first true mathematician and the first known individual to whom a mathematical discovery has been attributed.[44] Pythagoras established the Pythagorean School, whose doctrine it was that mathematics ruled the universe and whose motto was "All is number".[45] It was the Pythagoreans who coined the term "mathematics", and with whom the study of mathematics for its own sake begins. The Pythagoreans are credited with the first proof of the Pythagorean theorem,[46] though the statement of the theorem has a long history, and with the proof of the existence of irrational numbers.[47][48] Although he was preceded by the Babylonians, Indians and the Chinese,[49] the Neopythagorean mathematician Nicomachus (60–120 AD) provided one of the earliest Greco-Roman multiplication tables, whereas the oldest extant Greek multiplication table is found on a wax tablet dated to the 1st century AD (now in the British Museum).[50] The association of the Neopythagoreans with the Western invention of the multiplication table is evident in its later Medieval name: the mensa Pythagorica.[51]
Plato (428/427 BC – 348/347 BC) is important in the history of mathematics for inspiring and guiding others.[52] His Platonic Academy, in Athens, became the mathematical center of the world in the 4th century BC, and it was from this school that the leading mathematicians of the day, such as Eudoxus of Cnidus (c. 390–c. 340 BC), came.[53] Plato also discussed the foundations of mathematics[54] and clarified some of the definitions (e.g. that of a line as "breadthless length").
Eudoxus developed the method of exhaustion, a precursor of modern integration,[55] and a theory of ratios that avoided the problem of incommensurable magnitudes.[56] The former allowed the calculation of areas and volumes of curvilinear figures,[57] while the latter enabled subsequent geometers to make significant advances in geometry. Though he made no specific technical mathematical discoveries, Aristotle (384–c. 322 BC) contributed significantly to the development of mathematics by laying the foundations of logic.[58]
In the 3rd century BC, the premier center of mathematical education and research was the Musaeum of Alexandria.[60] It was there that Euclid (c. 300 BC) taught, and wrote the Elements, widely considered the most successful and influential textbook of all time.[1] The Elements introduced mathematical rigor through the axiomatic method and is the earliest example of the format still used in mathematics today, that of definition, axiom, theorem, and proof. Although most of the contents of the Elements were already known, Euclid arranged them into a single, coherent logical framework.[61] The Elements was known to all educated people in the West up through the middle of the 20th century and its contents are still taught in geometry classes today.[62] In addition to the familiar theorems of Euclidean geometry, the Elements was meant as an introductory textbook to all mathematical subjects of the time, such as number theory, algebra and solid geometry,[61] including proofs that the square root of two is irrational and that there are infinitely many prime numbers. Euclid also wrote extensively on other subjects, such as conic sections, optics, spherical geometry, and mechanics, but only half of his writings survive.[63]
Archimedes (c. 287–212 BC) of Syracuse, widely considered the greatest mathematician of antiquity,[64] used the method of exhaustion to calculate the area under the arc of a parabola with the summation of an infinite series, in a manner not too dissimilar from modern calculus.[65] He also showed one could use the method of exhaustion to calculate the value of π with as much precision as desired, and obtained the most accurate value of π then known, 3 + 10/71 < π < 3 + 10/70.[66] He also studied the spiral bearing his name, obtained formulas for the volumes of surfaces of revolution (paraboloid, ellipsoid, hyperboloid),[65] and an ingenious method of exponentiation for expressing very large numbers.[67] While he is also known for his contributions to physics and several advanced mechanical devices, Archimedes himself placed far greater value on the products of his thought and general mathematical principles.[68] He regarded as his greatest achievement his finding of the surface area and volume of a sphere, which he obtained by proving these are 2/3 the surface area and volume of a cylinder circumscribing the sphere.[69]
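A quick check, in Python, that the quoted bounds really do bracket π (this is only a numerical verification, not Archimedes' polygon argument):

# Archimedes' bounds as exact fractions: 3 + 10/71 = 223/71 and
# 3 + 10/70 = 22/7, the latter being the familiar approximation.

from fractions import Fraction
import math

lower = 3 + Fraction(10, 71)  # ~3.14085
upper = 3 + Fraction(10, 70)  # ~3.14286
print(float(lower) < math.pi < float(upper))  # True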
Apollonius of Perga (c. 262–190 BC) made significant advances to the study of conic sections, showing that one can obtain all three varieties of conic section by varying the angle of the plane that cuts a double-napped cone.[70] He also coined the terminology in use today for conic sections, namely parabola ("place beside" or "comparison"), "ellipse" ("deficiency"), and "hyperbola" ("a throw beyond").[71] His work Conics is one of the best known and preserved mathematical works from antiquity, and in it he derives many theorems concerning conic sections that would prove invaluable to later mathematicians and astronomers studying planetary motion, such as Isaac Newton.[72] While neither Apollonius nor any other Greek mathematicians made the leap to coordinate geometry, Apollonius' treatment of curves is in some ways similar to the modern treatment, and some of his work seems to anticipate the development of analytical geometry by Descartes some 1800 years later.[73]
Around the same time, Eratosthenes of Cyrene (c. 276–194 BC) devised the Sieve of Eratosthenes for finding prime numbers.[74] The 3rd century BC is generally regarded as the "Golden Age" of Greek mathematics, with advances in pure mathematics henceforth in relative decline.[75] Nevertheless, in the centuries that followed significant advances were made in applied mathematics, most notably trigonometry, largely to address the needs of astronomers.[75] Hipparchus of Nicaea (c. 190–120 BC) is considered the founder of trigonometry for compiling the first known trigonometric table, and to him is also due the systematic use of the 360 degree circle.[76] Heron of Alexandria (c. 10–70 AD) is credited with Heron's formula for finding the area of a scalene triangle and with being the first to recognize the possibility of negative numbers possessing square roots.[77] Menelaus of Alexandria (c. 100 AD) pioneered spherical trigonometry through Menelaus' theorem.[78] The most complete and influential trigonometric work of antiquity is the Almagest of Ptolemy (c. AD 90–168), a landmark astronomical treatise whose trigonometric tables would be used by astronomers for the next thousand years.[79] Ptolemy is also credited with Ptolemy's theorem for deriving trigonometric quantities, and the most accurate value of π outside of China until the medieval period, 3.1416.[80]
Following a period of stagnation after Ptolemy, the period between 250 and 350 AD is sometimes referred to as the "Silver Age" of Greek mathematics.[81] During this period, Diophantus made significant advances in algebra, particularly indeterminate analysis, which is also known as "Diophantine analysis".[82] The study of Diophantine equations and Diophantine approximations is a significant area of research to this day. His main work was the Arithmetica, a collection of 150 algebraic problems dealing with exact solutions to determinate and indeterminate equations.[83] The Arithmetica had a significant influence on later mathematicians, such as Pierre de Fermat, who arrived at his famous Last Theorem after trying to generalize a problem he had read in the Arithmetica (that of dividing a square into two squares).[84] Diophantus also made significant advances in notation, the Arithmetica being the first instance of algebraic symbolism and syncopation.[83]
Among the last great Greek mathematicians is Pappus of Alexandria (4th century AD). He is known for his hexagon theorem and centroid theorem, as well as the Pappus configuration and Pappus graph. His Collection is a major source of knowledge on Greek mathematics as most of it has survived.[85] Pappus is considered the last major innovator in Greek mathematics, with subsequent work consisting mostly of commentaries on earlier work.
The first woman mathematician recorded by history was Hypatia of Alexandria (AD 350–415). She succeeded her father (Theon of Alexandria) as Librarian at the Great Library and wrote many works on applied mathematics. Because of a political dispute, the Christian community in Alexandria had her stripped publicly and executed.[86] Her death is sometimes taken as the end of the era of Alexandrian Greek mathematics, although work did continue in Athens for another century with figures such as Proclus, Simplicius and Eutocius.[87] Although Proclus and Simplicius were more philosophers than mathematicians, their commentaries on earlier works are valuable sources on Greek mathematics. The closure of the neo-Platonic Academy of Athens by the emperor Justinian in 529 AD is traditionally held as marking the end of the era of Greek mathematics, although the Greek tradition continued unbroken in the Byzantine empire with mathematicians such as Anthemius of Tralles and Isidore of Miletus, the architects of the Hagia Sophia.[88] Nevertheless, Byzantine mathematics consisted mostly of commentaries, with little in the way of innovation, and the centers of mathematical innovation were to be found elsewhere by this time.[89]
Although ethnic Greek mathematicians continued under the rule of the late Roman Republic and subsequent Roman Empire, there were no noteworthy native Latin mathematicians in comparison.[90][91] Ancient Romans such as Cicero (106–43 BC), an influential Roman statesman who studied mathematics in Greece, believed that Roman surveyors and calculators were far more interested in applied mathematics than in the theoretical mathematics and geometry that were prized by the Greeks.[92] It is unclear if the Romans first derived their numerical system directly from the Greek precedent or from Etruscan numerals used by the Etruscan civilization centered in what is now Tuscany, central Italy.[93]
Using calculation, Romans were adept at both instigating and detecting financial fraud, as well as managing taxes for the treasury.[94] Siculus Flaccus, one of the Roman gromatici (i.e., land surveyors), wrote the Categories of Fields, which aided Roman surveyors in measuring the surface areas of allotted lands and territories.[95] Aside from managing trade and taxes, the Romans also regularly applied mathematics to solve problems in engineering, including the erection of architecture such as bridges, road-building, and preparation for military campaigns.[96] Arts and crafts such as Roman mosaics, inspired by previous Greek designs, created illusionist geometric patterns and rich, detailed scenes that required precise measurements for each tessera tile, the opus tessellatum pieces on average measuring eight millimeters square and the finer opus vermiculatum pieces having an average surface of four millimeters square.[97][98]
The creation of the Roman calendar also necessitated basic mathematics. The first calendar allegedly dates back to the 8th century BC during the Roman Kingdom and included 356 days plus a leap year every other year.[99] In contrast, the lunar calendar of the Republican era contained 355 days, roughly ten-and-one-fourth days shorter than the solar year, a discrepancy that was solved by adding an extra month into the calendar after the 23rd of February.[100] This calendar was supplanted by the Julian calendar, a solar calendar organized by Julius Caesar (100–44 BC) and devised by Sosigenes of Alexandria to include a leap day every four years in a 365-day cycle.[101] This calendar, which contained an error of 11 minutes and 14 seconds, was later corrected by the Gregorian calendar organized by Pope Gregory XIII (r. 1572–1585), virtually the same solar calendar used in modern times as the international standard calendar.[102]
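A back-of-the-envelope computation (mine, not from the source) shows what the quoted 11-minute-14-second annual error amounts to: roughly one day of drift every 128 years, which is why a correction eventually became necessary:

# The Julian year overshoots the solar year by 11 min 14 s = 674 s;
# dividing a full day by that error gives the years per day of drift.

error_seconds_per_year = 11 * 60 + 14  # 674 s
years_per_day_of_drift = 24 * 60 * 60 / error_seconds_per_year
print(round(years_per_day_of_drift))   # ~128 years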
At roughly the same time, the Han Chinese and the Romans both invented the wheeled odometer device for measuring distances traveled, the Roman model first described by the Roman civil engineer and architect Vitruvius (c. 80 BC – c. 15 BC).[103] The device was used at least until the reign of emperor Commodus (r. 177–192 AD), but its design seems to have been lost until experiments were made during the 15th century in Western Europe.[104] Perhaps relying on similar gear-work and technology found in the Antikythera mechanism, the odometer of Vitruvius featured chariot wheels measuring 4 feet (1.2 m) in diameter turning four hundred times in one Roman mile (roughly 4590 ft/1400 m). With each revolution, a pin-and-axle device engaged a 400-tooth cogwheel that turned a second gear responsible for dropping pebbles into a box, each pebble representing one mile traversed.[105]
An analysis of early Chinese mathematics has demonstrated its unique development compared to other parts of the world, leading scholars to assume an entirely independent development.[106] The oldest extant mathematical text from China is the Zhoubi Suanjing (周髀算經), variously dated to between 1200 BC and 100 BC, though a date of about 300 BC during the Warring States Period appears reasonable.[107] However, the Tsinghua Bamboo Slips, containing the earliest known decimal multiplication table (although ancient Babylonians had ones with a base of 60), are dated around 305 BC and are perhaps the oldest surviving mathematical text of China.[49]
Of particular note is the use in Chinese mathematics of a decimal positional notation system, the so-called "rod numerals", in which distinct ciphers were used for numbers between 1 and 10, and additional ciphers for powers of ten.[108] Thus, the number 123 would be written using the symbol for "1", followed by the symbol for "100", then the symbol for "2" followed by the symbol for "10", followed by the symbol for "3". This was the most advanced number system in the world at the time, apparently in use several centuries before the common era and well before the development of the Indian numeral system.[109] Rod numerals allowed the representation of numbers as large as desired and allowed calculations to be carried out on the suan pan, or Chinese abacus. The date of the invention of the suan pan is not certain, but the earliest written mention dates from AD 190, in Xu Yue's Supplementary Notes on the Art of Figures.
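The decomposition described for 123 can be sketched in a few lines of Python (the function is illustrative, not a rendering of actual rod-numeral glyphs):

# Pair each nonzero digit with its power of ten, so 123 becomes the
# sequence (1, 100), (2, 10), (3, 1), matching "1" "100" "2" "10" "3".

def digit_power_pairs(n: int) -> list[tuple[int, int]]:
    pairs, power = [], 1
    while n > 0:
        n, digit = divmod(n, 10)
        if digit:
            pairs.append((digit, power))
        power *= 10
    return list(reversed(pairs))

print(digit_power_pairs(123))  # [(1, 100), (2, 10), (3, 1)]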
The oldest extant work on geometry in China comes from the philosophical Mohist canon c. 330 BC, compiled by the followers of Mozi (470–390 BC). The Mo Jing described various aspects of many fields associated with physical science, and provided a small number of geometrical theorems as well.[110] It also defined the concepts of circumference, diameter, radius, and volume.[111]
In 212 BC, the Emperor Qin Shi Huang commanded that all books in the Qin Empire other than officially sanctioned ones be burned. This decree was not universally obeyed, but as a consequence of this order little is known about ancient Chinese mathematics before this date. After the book burning of 212 BC, the Han dynasty (202 BC–220 AD) produced works of mathematics which presumably expanded on works that are now lost. The most important of these is The Nine Chapters on the Mathematical Art, the full title of which appeared by AD 179, but which existed in part under other titles beforehand. It consists of 246 word problems involving agriculture, business, employment of geometry to figure height spans and dimension ratios for Chinese pagoda towers, engineering, and surveying, and includes material on right triangles.[107] It created mathematical proof for the Pythagorean theorem[112] and a mathematical formula for Gaussian elimination.[113] The treatise also provides values of π,[107] which Chinese mathematicians originally approximated as 3, until Liu Xin (d. 23 AD) provided a figure of 3.1457 and subsequently Zhang Heng (78–139) approximated π as 3.1724,[114] as well as 3.162 by taking the square root of 10.[115][116] Liu Hui commented on the Nine Chapters in the 3rd century AD and gave a value of π accurate to 5 decimal places (i.e. 3.14159).[117][118] Though more a matter of computational stamina than theoretical insight, in the 5th century AD Zu Chongzhi computed the value of π to seven decimal places (between 3.1415926 and 3.1415927), which remained the most accurate value of π for almost the next 1000 years.[117][119] He also established a method which would later be called Cavalieri's principle to find the volume of a sphere.[120]
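The elimination procedure of the Nine Chapters amounts to what is now called Gaussian elimination. The following Python sketch is a modern restatement, not the original rod-based algorithm; the test system is equivalent to the well-known grain problem that opens the treatise's chapter on rectangular arrays:

# Solve a x = b by forward elimination and back substitution, the modern
# form of the procedure; no pivoting, so it assumes nonzero pivots.

def solve_linear(a, b):
    n = len(b)
    m = [row[:] + [rhs] for row, rhs in zip(a, b)]  # augmented matrix
    for col in range(n):
        for r in range(col + 1, n):
            factor = m[r][col] / m[col][col]
            m[r] = [x - factor * y for x, y in zip(m[r], m[col])]
    x = [0.0] * n
    for r in reversed(range(n)):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

# Three grades of grain: 3x + 2y + z = 39, 2x + 3y + z = 34, x + 2y + 3z = 26.
print(solve_linear([[3, 2, 1], [2, 3, 1], [1, 2, 3]], [39, 34, 26]))
# ~[9.25, 4.25, 2.75]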
The high-water mark of Chinese mathematics occurred in the 13th century during the latter half of the Song dynasty (960–1279), with the development of Chinese algebra. The most important text from that period is the Precious Mirror of the Four Elements by Zhu Shijie (1249–1314), dealing with the solution of simultaneous higher-order algebraic equations using a method similar to Horner's method.[117] The Precious Mirror also contains a diagram of Pascal's triangle with coefficients of binomial expansions through the eighth power, though both appear in Chinese works as early as 1100.[121] The Chinese also made use of the complex combinatorial diagram known as the magic square and magic circles, described in ancient times and perfected by Yang Hui (AD 1238–1298).[121]
Even after European mathematics began to flourish during the Renaissance, European and Chinese mathematics were separate traditions, with significant Chinese mathematical output in decline from the 13th century onwards. Jesuit missionaries such as Matteo Ricci carried mathematical ideas back and forth between the two cultures from the 16th to 18th centuries, though at this point far more mathematical ideas were entering China than leaving.[121]
Japanese mathematics, Korean mathematics, and Vietnamese mathematics are traditionally viewed as stemming from Chinese mathematics and belonging to the Confucian-based East Asian cultural sphere.[122] Korean and Japanese mathematics were heavily influenced by the algebraic works produced during China's Song dynasty, whereas Vietnamese mathematics was heavily indebted to popular works of China's Ming dynasty (1368–1644).[123] For instance, although Vietnamese mathematical treatises were written in either Chinese or the native Vietnamese Chữ Nôm script, all of them followed the Chinese format of presenting a collection of problems with algorithms for solving them, followed by numerical answers.[124] Mathematics in Vietnam and Korea was mostly associated with the professional court bureaucracy of mathematicians and astronomers, whereas in Japan it was more prevalent in the realm of private schools.[125]
The earliest civilization on the Indian subcontinent is the Indus Valley civilization (mature second phase: 2600 to 1900 BC) that flourished in the Indus river basin. Their cities were laid out with geometric regularity, but no known mathematical documents survive from this civilization.[127]
The oldest extant mathematical records from India are the Sulba Sutras (dated variously between the 8th century BC and the 2nd century AD),[128] appendices to religious texts which give simple rules for constructing altars of various shapes, such as squares, rectangles, parallelograms, and others.[129] As with Egypt, the preoccupation with temple functions points to an origin of mathematics in religious ritual.[128] The Sulba Sutras give methods for constructing a circle with approximately the same area as a given square, which imply several different approximations of the value of π.[130][131][a] In addition, they compute the square root of 2 to several decimal places, list Pythagorean triples, and give a statement of the Pythagorean theorem.[131] All of these results are present in Babylonian mathematics, indicating Mesopotamian influence.[128] It is not known to what extent the Sulba Sutras influenced later Indian mathematicians. As in China, there is a lack of continuity in Indian mathematics; significant advances are separated by long periods of inactivity.[128]
Pāṇini (c. 5th century BC) formulated the rules for Sanskrit grammar.[132] His notation was similar to modern mathematical notation, and used metarules, transformations, and recursion.[133] Pingala (roughly 3rd–1st centuries BC) in his treatise on prosody uses a device corresponding to a binary numeral system.[134][135] His discussion of the combinatorics of meters corresponds to an elementary version of the binomial theorem. Pingala's work also contains the basic ideas of Fibonacci numbers (called mātrāmeru).[136]
The next significant mathematical documents from India after the Sulba Sutras are the Siddhantas, astronomical treatises from the 4th and 5th centuries AD (Gupta period) showing strong Hellenistic influence.[137] They are significant in that they contain the first instance of trigonometric relations based on the half-chord, as is the case in modern trigonometry, rather than the full chord, as was the case in Ptolemaic trigonometry.[138] Through a series of translation errors, the words "sine" and "cosine" derive from the Sanskrit "jiya" and "kojiya".[138]
Around 500 AD, Aryabhata wrote the Aryabhatiya, a slim volume, written in verse, intended to supplement the rules of calculation used in astronomy and mathematical mensuration, though with no feeling for logic or deductive methodology.[139] It is in the Aryabhatiya that the decimal place-value system first appears. Several centuries later, the Muslim mathematician Abu Rayhan Biruni described the Aryabhatiya as a "mix of common pebbles and costly crystals".[140]
In the 7th century, Brahmagupta identified the Brahmagupta theorem, Brahmagupta's identity and Brahmagupta's formula, and for the first time, in Brahma-sphuta-siddhanta, he lucidly explained the use of zero as both a placeholder and decimal digit, and explained the Hindu–Arabic numeral system.[141] It was from a translation of this Indian text on mathematics (c. 770) that Islamic mathematicians were introduced to this numeral system, which they adapted as Arabic numerals. Islamic scholars carried knowledge of this number system to Europe by the 12th century, and it has now displaced all older number systems throughout the world. Various symbol sets are used to represent numbers in the Hindu–Arabic numeral system, all of which evolved from the Brahmi numerals. Each of the roughly dozen major scripts of India has its own numeral glyphs. In the 10th century, Halayudha's commentary on Pingala's work contains a study of the Fibonacci sequence[142] and Pascal's triangle,[143] and describes the formation of a matrix.[citation needed]
In the 12th century, Bhāskara II,[144] who lived in southern India, wrote extensively on all branches of mathematics then known. His work contains mathematical objects equivalent or approximately equivalent to infinitesimals, the mean value theorem and the derivative of the sine function, although he did not develop the notion of a derivative.[145][146] In the 14th century, Narayana Pandita completed his Ganita Kaumudi.[147]
Also in the 14th century, Madhava of Sangamagrama, the founder of the Kerala School of Mathematics, found the Madhava–Leibniz series and obtained from it a transformed series, whose first 21 terms he used to compute the value of π as 3.14159265359. Madhava also found the Madhava–Gregory series to determine the arctangent, the Madhava–Newton power series to determine sine and cosine, and the Taylor approximation for sine and cosine functions.[148] In the 16th century, Jyesthadeva consolidated many of the Kerala School's developments and theorems in the Yukti-bhāṣā.[149][150] It has been argued that certain ideas of calculus, such as infinite series and Taylor series of some trigonometric functions, were transmitted to Europe in the 16th century[6] via Jesuit missionaries and traders who were active around the ancient port of Muziris at the time and, as a result, directly influenced later European developments in analysis and calculus.[151] However, other scholars argue that the Kerala School did not formulate a systematic theory of differentiation and integration, and that there is no direct evidence of their results being transmitted outside Kerala.[152][153][154][155]
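For illustration, the untransformed Madhava–Leibniz series π/4 = 1 − 1/3 + 1/5 − … converges very slowly, which is why the transformed series mentioned above was needed to reach eleven correct digits from only 21 terms. A sketch of the raw series' partial sums in Python:

```python
from math import pi

def leibniz_pi(n):
    # Partial sum of 4 * (1 - 1/3 + 1/5 - 1/7 + ...) over n terms
    return 4 * sum((-1) ** k / (2 * k + 1) for k in range(n))

for n in (21, 1000, 100000):
    approx = leibniz_pi(n)
    print(f"{n:>6} terms: {approx:.10f}  (error {abs(approx - pi):.1e})")
```

With 21 raw terms the error is still about 0.05, which highlights how much Madhava's transformed series accelerates convergence.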
The Islamic Empire established across the Middle East, Central Asia, North Africa, Iberia, and in parts of India in the 8th century made significant contributions towards mathematics. Although most Islamic texts on mathematics were written in Arabic, they were not all written by Arabs, since much like the status of Greek in the Hellenistic world, Arabic was used as the written language of non-Arab scholars throughout the Islamic world at the time.[156]
In the 9th century, the Persian mathematician Muḥammad ibn Mūsā al-Khwārizmī wrote an important book on the Hindu–Arabic numerals and one on methods for solving equations. His book On the Calculation with Hindu Numerals, written about 825, along with the work of Al-Kindi, was instrumental in spreading Indian mathematics and Indian numerals to the West. The word algorithm is derived from the Latinization of his name, Algoritmi, and the word algebra from the title of one of his works, Al-Kitāb al-mukhtaṣar fī hīsāb al-ğabr wa’l-muqābala (The Compendious Book on Calculation by Completion and Balancing). He gave an exhaustive explanation of the algebraic solution of quadratic equations with positive roots,[157] and he was the first to teach algebra in an elementary form and for its own sake.[158] He also discussed the fundamental method of "reduction" and "balancing", referring to the transposition of subtracted terms to the other side of an equation, that is, the cancellation of like terms on opposite sides of the equation. This is the operation which al-Khwārizmī originally described as al-jabr.[159] His algebra was also no longer concerned "with a series of problems to be resolved, but an exposition which starts with primitive terms in which the combinations must give all possible prototypes for equations, which henceforward explicitly constitute the true object of study." He also studied an equation for its own sake and "in a generic manner, insofar as it does not simply emerge in the course of solving a problem, but is specifically called on to define an infinite class of problems."[160]
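Al-Khwārizmī's procedure for the case "squares and roots equal numbers" (x² + bx = c, with b and c positive) amounts to completing the square. A small sketch in modern notation, using his classic worked example x² + 10x = 39:

```python
from math import sqrt

def roots_and_squares(b, c):
    # Solve x^2 + b*x = c for the positive root by completing the square:
    # x = sqrt((b/2)^2 + c) - b/2
    return sqrt((b / 2) ** 2 + c) - b / 2

print(roots_and_squares(10, 39))  # 3.0, the root of al-Khwarizmi's own example
```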
In Egypt, Abu Kamil extended algebra to the set of irrational numbers, accepting square roots and fourth roots as solutions and coefficients to quadratic equations. He also developed techniques used to solve three non-linear simultaneous equations with three unknown variables. One unique feature of his works was trying to find all the possible solutions to some of his problems, including one where he found 2676 solutions.[161] His works formed an important foundation for the development of algebra and influenced later mathematicians, such as al-Karaji and Fibonacci.
Further developments in algebra were made by Al-Karaji in his treatise al-Fakhri, where he extends the methodology to incorporate integer powers and integer roots of unknown quantities. Something close to a proof by mathematical induction appears in a book written by Al-Karaji around 1000 AD, who used it to prove the binomial theorem, Pascal's triangle, and the sum of integral cubes.[162] The historian of mathematics F. Woepcke[163] praised Al-Karaji for being "the first who introduced the theory of algebraic calculus." Also in the 10th century, Abul Wafa translated the works of Diophantus into Arabic. Ibn al-Haytham was the first mathematician to derive the formula for the sum of the fourth powers, using a method that is readily generalizable for determining the general formula for the sum of any integral powers. He performed an integration in order to find the volume of a paraboloid, and was able to generalize his result for the integrals of polynomials up to the fourth degree. He thus came close to finding a general formula for the integrals of polynomials, but he was not concerned with any polynomials higher than the fourth degree.[164]
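In modern notation, the sum of fourth powers is Σi⁴ = n(n+1)(2n+1)(3n²+3n−1)/30; Ibn al-Haytham reached an equivalent result geometrically. A quick sketch that checks the closed form against direct summation:

```python
def sum_fourth_powers(n):
    # Closed form for 1^4 + 2^4 + ... + n^4
    return n * (n + 1) * (2 * n + 1) * (3 * n * n + 3 * n - 1) // 30

assert all(sum_fourth_powers(n) == sum(i ** 4 for i in range(1, n + 1))
           for n in range(1, 50))
print(sum_fourth_powers(10))  # 25333
```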
In the late 11th century, Omar Khayyam wrote Discussions of the Difficulties in Euclid, a book about what he perceived as flaws in Euclid's Elements, especially the parallel postulate. He was also the first to find the general geometric solution to cubic equations. He was also very influential in calendar reform.[165]
In the 13th century, Nasir al-Din Tusi (Nasireddin) made advances in spherical trigonometry. He also wrote influential work on Euclid's parallel postulate. In the 15th century, Ghiyath al-Kashi computed the value of π to the 16th decimal place. Kashi also had an algorithm for calculating nth roots, which was a special case of the methods given many centuries later by Ruffini and Horner.
Other achievements of Muslim mathematicians during this period include the addition of the decimal point notation to the Arabic numerals, the discovery of all the modern trigonometric functions besides the sine, al-Kindi's introduction of cryptanalysis and frequency analysis, the development of analytic geometry by Ibn al-Haytham, the beginning of algebraic geometry by Omar Khayyam and the development of an algebraic notation by al-Qalasādī.[166]
During the time of the Ottoman Empire and Safavid Empire from the 15th century, the development of Islamic mathematics became stagnant.
In the Pre-Columbian Americas, the Maya civilization that flourished in Mexico and Central America during the 1st millennium AD developed a unique tradition of mathematics that, due to its geographic isolation, was entirely independent of existing European, Egyptian, and Asian mathematics.[167] Maya numerals used a base of twenty, the vigesimal system, instead of the base of ten that forms the basis of the decimal system used by most modern cultures.[167] The Maya used mathematics to create the Maya calendar as well as to predict astronomical phenomena in their native Maya astronomy.[167] While the concept of zero had to be inferred in the mathematics of many contemporary cultures, the Maya developed a standard symbol for it.[167]
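A pure base-20 positional decomposition illustrates the vigesimal idea; as a caveat, the Maya calendrical count modified the third place to 18 × 20 to fit the calendar, which this sketch does not model:

```python
def to_base20(n):
    # Digits of n in pure base 20, most significant first
    if n == 0:
        return [0]
    digits = []
    while n:
        digits.append(n % 20)
        n //= 20
    return digits[::-1]

print(to_base20(33))    # [1, 13]  ->  1*20 + 13
print(to_base20(8000))  # [1, 0, 0, 0]  ->  20^3
```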
Medieval European interest in mathematics was driven by concerns quite different from those of modern mathematicians. One driving element was the belief that mathematics provided the key to understanding the created order of nature, frequently justified by Plato's Timaeus and the biblical passage (in the Book of Wisdom) that God had ordered all things in measure, and number, and weight.[168]
Boethius provided a place for mathematics in the curriculum in the 6th century when he coined the term quadrivium to describe the study of arithmetic, geometry, astronomy, and music. He wrote De institutione arithmetica, a free translation from the Greek of Nicomachus's Introduction to Arithmetic; De institutione musica, also derived from Greek sources; and a series of excerpts from Euclid's Elements. His works were theoretical, rather than practical, and were the basis of mathematical study until the recovery of Greek and Arabic mathematical works.[169][170]
In the 12th century, European scholars traveled to Spain and Sicily seeking scientific Arabic texts, including al-Khwārizmī's The Compendious Book on Calculation by Completion and Balancing, translated into Latin by Robert of Chester, and the complete text of Euclid's Elements, translated in various versions by Adelard of Bath, Herman of Carinthia, and Gerard of Cremona.[171][172] These and other new sources sparked a renewal of mathematics.
Leonardo of Pisa, now known as Fibonacci, serendipitously learned about the Hindu–Arabic numerals on a trip to what is now Béjaïa, Algeria with his merchant father. (Europe was still using Roman numerals.) There, he observed a system of arithmetic (specifically algorism) which, owing to the positional notation of Hindu–Arabic numerals, was much more efficient and greatly facilitated commerce. Leonardo wrote Liber Abaci in 1202 (updated in 1254), introducing the technique to Europe and beginning a long period of popularizing it. The book also brought to Europe what is now known as the Fibonacci sequence (known to Indian mathematicians for hundreds of years before that),[173] which Fibonacci used as an unremarkable example.
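The sequence itself is generated by the simple recurrence F(n) = F(n−1) + F(n−2). A sketch of the first terms, starting 1, 1 as in the rabbit problem of Liber Abaci:

```python
def fibonacci(n):
    # First n Fibonacci numbers
    seq, a, b = [], 1, 1
    for _ in range(n):
        seq.append(a)
        a, b = b, a + b
    return seq

print(fibonacci(10))  # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```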
The 14th century saw the development of new mathematical concepts to investigate a wide range of problems.[174] One important contribution was the development of the mathematics of local motion.
Thomas Bradwardine proposed that speed (V) increases in arithmetic proportion as the ratio of force (F) to resistance (R) increases in geometric proportion. Bradwardine expressed this by a series of specific examples, but although the logarithm had not yet been conceived, we can express his conclusion anachronistically by writing V = log(F/R).[175] Bradwardine's analysis is an example of transferring a mathematical technique used by al-Kindi and Arnald of Villanova to quantify the nature of compound medicines to a different physical problem.[176]
One of the 14th-century Oxford Calculators, William Heytesbury, lacking differential calculus and the concept of limits, proposed to measure instantaneous speed "by the path that would be described by [a body] if ... it were moved uniformly at the same degree of speed with which it is moved in that given instant".[179]
Heytesbury and others mathematically determined the distance covered by a body undergoing uniformly accelerated motion (today solved by integration), stating that "a moving body uniformly acquiring or losing that increment [of speed] will traverse in some given time a [distance] completely equal to that which it would traverse if it were moving continuously through the same time with the mean degree [of speed]".[180]
Nicole Oresme at the University of Paris and the Italian Giovanni di Casali independently provided graphical demonstrations of this relationship, asserting that the area under the line depicting the constant acceleration represented the total distance traveled.[181] In a later mathematical commentary on Euclid's Elements, Oresme made a more detailed general analysis in which he demonstrated that a body will acquire in each successive increment of time an increment of any quality that increases as the odd numbers. Since Euclid had demonstrated that the sums of the odd numbers are the square numbers, the total quality acquired by the body increases as the square of the time.[182]
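The Merton rule that Oresme illustrated graphically is easy to check numerically: under uniform acceleration, the distance covered equals the mean of the initial and final speeds times the elapsed time. A sketch with arbitrary example values:

```python
def distance_by_integration(v0, a, t, steps=100_000):
    # Midpoint-rule integration of v(t) = v0 + a*t over [0, t]
    dt = t / steps
    return sum((v0 + a * (i + 0.5) * dt) * dt for i in range(steps))

v0, a, t = 2.0, 3.0, 4.0                  # arbitrary example values
mean_speed = (v0 + (v0 + a * t)) / 2      # Merton mean speed rule
print(mean_speed * t)                     # 32.0
print(distance_by_integration(v0, a, t))  # 32.0 (agrees)
```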
During the Renaissance, the development of mathematics and of accounting were intertwined.[183] While there is no direct relationship between algebra and accounting, the subjects were taught together, and the books published were often intended for the children of merchants who were sent to reckoning schools (in Flanders and Germany) or abacus schools (known as abbaco in Italy), where they learned the skills useful for trade and commerce. There is probably no need for algebra in performing bookkeeping operations, but for complex bartering operations or the calculation of compound interest, a basic knowledge of arithmetic was mandatory and knowledge of algebra was very useful.
Piero della Francesca (c. 1415–1492) wrote books on solid geometry and linear perspective, including De Prospectiva Pingendi (On Perspective for Painting), Trattato d’Abaco (Abacus Treatise), and De quinque corporibus regularibus (On the Five Regular Solids).[184][185][186]
Luca Pacioli's Summa de Arithmetica, Geometria, Proportioni et Proportionalità (Italian: "Review of Arithmetic, Geometry, Ratio and Proportion") was first printed and published in Venice in 1494. It included a 27-page treatise on bookkeeping, "Particularis de Computis et Scripturis" (Italian: "Details of Calculation and Recording"). It was written primarily for, and sold mainly to, merchants who used the book as a reference text, as a source of pleasure from the mathematical puzzles it contained, and to aid the education of their sons.[187] In Summa Arithmetica, Pacioli introduced symbols for plus and minus for the first time in a printed book, symbols that became standard notation in Italian Renaissance mathematics. Summa Arithmetica was also the first known book printed in Italy to contain algebra. Pacioli obtained many of his ideas from Piero della Francesca, whom he plagiarized.
In Italy, during the first half of the 16th century, Scipione del Ferro and Niccolò Fontana Tartaglia discovered solutions for cubic equations. Gerolamo Cardano published them in his 1545 book Ars Magna, together with a solution for the quartic equations, discovered by his student Lodovico Ferrari. In 1572 Rafael Bombelli published his L'Algebra, in which he showed how to deal with the imaginary quantities that could appear in Cardano's formula for solving cubic equations.
Simon Stevin's De Thiende ('the art of tenths'), first published in Dutch in 1585, contained the first systematic treatment of decimal notation in Europe, which influenced all later work on the real number system.[188][189]
Driven by the demands of navigation and the growing need for accurate maps of large areas, trigonometry grew to be a major branch of mathematics. Bartholomaeus Pitiscus was the first to use the word, publishing his Trigonometria in 1595. Regiomontanus's table of sines and cosines had been published in 1533.[190]
During the Renaissance the desire of artists to represent the natural world realistically, together with the rediscovered philosophy of the Greeks, led artists to study mathematics. They were also the engineers and architects of that time, and so had need of mathematics in any case. The art of painting in perspective, and the developments in geometry that were involved, were studied intensely.[191]
The 17th century saw an unprecedented increase of mathematical and scientific ideas across Europe. Tycho Brahe had gathered a large quantity of mathematical data describing the positions of the planets in the sky. By his position as Brahe's assistant, Johannes Kepler was first exposed to and seriously interacted with the topic of planetary motion. Kepler's calculations were made simpler by the contemporaneous invention of logarithms by John Napier and Jost Bürgi. Kepler succeeded in formulating mathematical laws of planetary motion.[192] The analytic geometry developed by René Descartes (1596–1650) allowed those orbits to be plotted on a graph, in Cartesian coordinates.
Building on earlier work by many predecessors, Isaac Newton discovered the laws of physics that explain Kepler's Laws, and brought together the concepts now known as calculus. Independently, Gottfried Wilhelm Leibniz developed calculus and much of the calculus notation still in use today. He also refined the binary number system, which is the foundation of nearly all digital (electronic, solid-state, discrete logic) computers.[193]
Science and mathematics had become an international endeavor, which would soon spread over the entire world.[194]
In addition to the application of mathematics to the studies of the heavens, applied mathematics began to expand into new areas, with the correspondence of Pierre de Fermat and Blaise Pascal. Pascal and Fermat set the groundwork for the investigations of probability theory and the corresponding rules of combinatorics in their discussions over a game of gambling. Pascal, with his wager, attempted to use the newly developing probability theory to argue for a life devoted to religion, on the grounds that even if the probability of success was small, the rewards were infinite. In some sense, this foreshadowed the development of utility theory in the 18th and 19th centuries.
The most influential mathematician of the 18th century was arguably Leonhard Euler (1707–1783). His contributions range from founding the study of graph theory with the Seven Bridges of Königsberg problem to standardizing many modern mathematical terms and notations. For example, he named the square root of minus 1 with the symbol i, and he popularized the use of the Greek letter π to stand for the ratio of a circle's circumference to its diameter. He made numerous contributions to the study of topology, graph theory, calculus, combinatorics, and complex analysis, as evidenced by the multitude of theorems and notations named for him.
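Euler's resolution of the Königsberg problem reduces to a parity count: a walk crossing every bridge exactly once exists in a connected multigraph only if at most two land masses touch an odd number of bridges. A sketch of that check:

```python
from collections import Counter

def euler_walk_possible(edges):
    # An Euler walk exists in a connected multigraph iff
    # the number of odd-degree vertices is 0 or 2
    degree = Counter()
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    return sum(d % 2 for d in degree.values()) in (0, 2)

# The seven bridges of Konigsberg joining land masses A, B, C, D
bridges = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
           ("A", "D"), ("B", "D"), ("C", "D")]
print(euler_walk_possible(bridges))  # False: all four areas have odd degree
```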
Other important European mathematicians of the 18th century included Joseph Louis Lagrange, who did pioneering work in number theory, algebra, differential calculus, and the calculus of variations, and Pierre-Simon Laplace, who, in the age of Napoleon, did important work on the foundations of celestial mechanics and on statistics.
Throughout the 19th century mathematics became increasingly abstract.[195] Carl Friedrich Gauss (1777–1855) epitomizes this trend.[citation needed] He did revolutionary work on functions of complex variables, in geometry, and on the convergence of series, leaving aside his many contributions to science. He also gave the first satisfactory proofs of the fundamental theorem of algebra and of the quadratic reciprocity law.[citation needed]
This century saw the development of the two forms of non-Euclidean geometry, where the parallel postulate of Euclidean geometry no longer holds.
The Russian mathematician Nikolai Ivanovich Lobachevsky and his rival, the Hungarian mathematician János Bolyai, independently defined and studied hyperbolic geometry, where uniqueness of parallels no longer holds. In this geometry the angles of a triangle add up to less than 180°. Elliptic geometry was developed later in the 19th century by the German mathematician Bernhard Riemann; here no parallel can be found and the angles of a triangle add up to more than 180°. Riemann also developed Riemannian geometry, which unifies and vastly generalizes the three types of geometry, and he defined the concept of a manifold, which generalizes the ideas of curves and surfaces, and set the mathematical foundations for the theory of general relativity.[196]
The 19th century saw the beginning of a great deal of abstract algebra. Hermann Grassmann in Germany gave a first version of vector spaces, and William Rowan Hamilton in Ireland developed noncommutative algebra.[citation needed] The British mathematician George Boole devised an algebra that soon evolved into what is now called Boolean algebra, in which the only numbers were 0 and 1. Boolean algebra is the starting point of mathematical logic and has important applications in electrical engineering and computer science.[citation needed][197] Augustin-Louis Cauchy, Bernhard Riemann, and Karl Weierstrass reformulated the calculus in a more rigorous fashion.[citation needed]
Also, for the first time, the limits of mathematics were explored. Niels Henrik Abel, a Norwegian, and Évariste Galois, a Frenchman, proved that there is no general algebraic method for solving polynomial equations of degree greater than four (Abel–Ruffini theorem).[198] Other 19th-century mathematicians used this in their proofs that straightedge and compass alone are not sufficient to trisect an arbitrary angle, to construct the side of a cube twice the volume of a given cube, or to construct a square equal in area to a given circle.[citation needed] Mathematicians had vainly attempted to solve all of these problems since the time of the ancient Greeks.[citation needed] On the other hand, the limitation of three dimensions in geometry was surpassed in the 19th century through considerations of parameter space and hypercomplex numbers.[citation needed]
Abel and Galois's investigations into the solutions of various polynomial equations laid the groundwork for further developments of group theory and the associated fields of abstract algebra. In the 20th century physicists and other scientists came to see group theory as the ideal way to study symmetry.[citation needed]
In the later 19th century, Georg Cantor established the first foundations of set theory, which enabled the rigorous treatment of the notion of infinity and has become the common language of nearly all mathematics. Cantor's set theory, and the rise of mathematical logic in the hands of Peano, L. E. J. Brouwer, David Hilbert, Bertrand Russell, and A. N. Whitehead, initiated a long-running debate on the foundations of mathematics.[citation needed]
The 19th century saw the founding of a number of national mathematical societies: the London Mathematical Society in 1865,[199] the Société Mathématique de France in 1872,[200] the Circolo Matematico di Palermo in 1884,[201][202] the Edinburgh Mathematical Society in 1883,[203] and the American Mathematical Society in 1888.[204] The first international, special-interest society, the Quaternion Society, was formed in 1899, in the context of a vector controversy.[205]
In 1897, Kurt Hensel introduced p-adic numbers.[206]
The 20th century saw mathematics become a major profession. By the end of the century, thousands of new Ph.D.s in mathematics were being awarded every year, and jobs were available in both teaching and industry.[207] An effort to catalogue the areas and applications of mathematics was undertaken in Klein's encyclopedia.[208]
In a 1900 speech to the International Congress of Mathematicians, David Hilbert set out a list of 23 unsolved problems in mathematics.[209] These problems, spanning many areas of mathematics, formed a central focus for much of 20th-century mathematics. Today, 10 have been solved, 7 are partially solved, and 2 are still open. The remaining 4 are too loosely formulated to be stated as solved or not.[210]
Notable historical conjectures were finally proven. In 1976, Wolfgang Haken and Kenneth Appel proved the four color theorem, controversial at the time for its use of a computer.[211] Andrew Wiles, building on the work of others, proved Fermat's Last Theorem in 1995.[212] Paul Cohen and Kurt Gödel proved that the continuum hypothesis is independent of (could neither be proved nor disproved from) the standard axioms of set theory.[213] In 1998, Thomas Callister Hales proved the Kepler conjecture, also using a computer.[214]
Mathematical collaborations of unprecedented size and scope took place. An example is the classification of finite simple groups (also called the "enormous theorem"), whose proof between 1955 and 2004 required some 500 journal articles by about 100 authors and fills tens of thousands of pages.[215] A group of French mathematicians, including Jean Dieudonné and André Weil, publishing under the pseudonym "Nicolas Bourbaki", attempted to exposit all of known mathematics as a coherent rigorous whole. The resulting several dozen volumes have had a controversial influence on mathematical education.[216]
Differential geometry came into its own when Albert Einstein used it in general relativity.[citation needed] Entirely new areas of mathematics such as mathematical logic, topology, and John von Neumann's game theory changed the kinds of questions that could be answered by mathematical methods.[citation needed] All kinds of structures were abstracted using axioms and given names like metric spaces, topological spaces, etc.[citation needed] As mathematicians do, the concept of an abstract structure was itself abstracted and led to category theory.[citation needed] Grothendieck and Serre recast algebraic geometry using sheaf theory.[citation needed] Large advances were made in the qualitative study of dynamical systems that Poincaré had begun in the 1890s.[citation needed] Measure theory was developed in the late 19th and early 20th centuries. Applications of measures include the Lebesgue integral, Kolmogorov's axiomatisation of probability theory, and ergodic theory.[citation needed] Knot theory greatly expanded.[citation needed] Quantum mechanics led to the development of functional analysis,[citation needed] a branch of mathematics that was greatly developed by Stefan Banach and his collaborators, who formed the Lwów School of Mathematics.[217] Other new areas include Laurent Schwartz's distribution theory, fixed point theory, singularity theory and René Thom's catastrophe theory, model theory, and Mandelbrot's fractals.[citation needed] Lie theory, with its Lie groups and Lie algebras, became one of the major areas of study.[218]
Non-standard analysis, introduced by Abraham Robinson, rehabilitated the infinitesimal approach to calculus, which had fallen into disrepute in favour of the theory of limits, by extending the field of real numbers to the hyperreal numbers, which include infinitesimal and infinite quantities.[citation needed] An even larger number system, the surreal numbers, were discovered by John Horton Conway in connection with combinatorial games.[citation needed]
The development and continual improvement of computers, at first mechanical analog machines and then digital electronic machines, allowed industry to deal with larger and larger amounts of data to facilitate mass production and distribution and communication, and new areas of mathematics were developed to deal with this: Alan Turing's computability theory; complexity theory; Derrick Henry Lehmer's use of ENIAC to further number theory and the Lucas–Lehmer primality test; Rózsa Péter's recursive function theory; Claude Shannon's information theory; signal processing; data analysis; optimization and other areas of operations research.[citation needed] In the preceding centuries much mathematical focus was on calculus and continuous functions, but the rise of computing and communication networks led to an increasing importance of discrete concepts and the expansion of combinatorics, including graph theory. The speed and data processing abilities of computers also enabled the handling of mathematical problems that were too time-consuming to deal with by pencil and paper calculations, leading to areas such as numerical analysis and symbolic computation.[citation needed] Some of the most important methods and algorithms of the 20th century are: the simplex algorithm, the fast Fourier transform, error-correcting codes, the Kalman filter from control theory and the RSA algorithm of public-key cryptography.[citation needed]
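Of the methods named above, the Lucas–Lehmer primality test is compact enough to sketch: for an odd prime p, the Mersenne number 2^p − 1 is prime exactly when the recurrence s → s² − 2 (mod 2^p − 1), started at 4, reaches 0 after p − 2 steps.

```python
def lucas_lehmer(p):
    # True iff the Mersenne number 2^p - 1 is prime (p an odd prime)
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

print([p for p in (3, 5, 7, 11, 13, 17, 19) if lucas_lehmer(p)])
# [3, 5, 7, 13, 17, 19] -- 2^11 - 1 = 2047 = 23 * 89 is composite
```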
At the same time, deep insights were made about the limitations of mathematics. In 1929 and 1930, it was proved[by whom?] that the truth or falsity of all statements formulated about the natural numbers plus either addition or multiplication (but not both) was decidable, i.e. could be determined by some algorithm.[citation needed] In 1931, Kurt Gödel found that this was not the case for the natural numbers plus both addition and multiplication; this system, known as Peano arithmetic, was in fact incomplete. (Peano arithmetic is adequate for a good deal of number theory, including the notion of prime number.) A consequence of Gödel's two incompleteness theorems is that in any mathematical system that includes Peano arithmetic (including all of analysis and geometry), truth necessarily outruns proof, i.e. there are true statements that cannot be proved within the system. Hence mathematics cannot be reduced to mathematical logic, and David Hilbert's dream of making all of mathematics complete and consistent needed to be reformulated.[citation needed]
One of the more colorful figures in 20th-century mathematics was Srinivasa Aiyangar Ramanujan (1887–1920), an Indian autodidact[219] who conjectured or proved over 3000 theorems,[citation needed] including properties of highly composite numbers,[220] the partition function[219] and its asymptotics,[221] and mock theta functions.[219] He also made major investigations in the areas of gamma functions,[222][223] modular forms,[219] divergent series,[219] hypergeometric series[219] and prime number theory.[219]
Paul Erdős published more papers than any other mathematician in history,[224] working with hundreds of collaborators. Mathematicians have a game equivalent to the Kevin Bacon Game, which leads to the Erdős number of a mathematician. This describes the "collaborative distance" between a person and Erdős, as measured by joint authorship of mathematical papers.[225][226]
Emmy Noether has been described by many as the most important woman in the history of mathematics.[227] She studied the theories of rings, fields, and algebras.[228]
As in most areas of study, the explosion of knowledge in the scientific age has led to specialization: by the end of the century, there were hundreds of specialized areas in mathematics, and the Mathematics Subject Classification was dozens of pages long.[229] More and more mathematical journals were published and, by the end of the century, the development of the World Wide Web led to online publishing.[citation needed]
In 2000, the Clay Mathematics Institute announced the seven Millennium Prize Problems.[230] In 2003 the Poincaré conjecture was solved by Grigori Perelman (who declined to accept an award, as he was critical of the mathematics establishment).[231]
Most mathematical journals now have online versions as well as print versions, and many online-only journals have been launched.[232][233] There is an increasing drive toward open access publishing, first made popular by arXiv.[citation needed]
There are many observable trends in mathematics, the most notable being that the subject is growing ever larger as computers become ever more important and powerful; the volume of data being produced by science and industry, facilitated by computers, continues expanding exponentially. As a result, there is a corresponding growth in the demand for mathematics to help process and understand this big data.[234] Math science careers are also expected to continue to grow, with the US Bureau of Labor Statistics estimating (in 2018) that "employment of mathematical science occupations is projected to grow 27.9 percent from 2016 to 2026."[235] | https://en.wikipedia.org/wiki/History_of_mathematics |
Jetons or jettons are tokens or coin-like medals produced across Europe from the 13th through the 18th centuries. They were produced as counters for use in calculation on a counting board, a lined board similar to an abacus. Jetons for calculation were commonly used in Europe from about 1200 to 1700,[1] and remained in occasional use into the early nineteenth century. They also found use as a money substitute in games, similar to modern casino chips or poker chips.
Thousands of different jetons exist, mostly of religious and educational designs, as well as portraits, the last of which most resemble coinage, somewhat similar to modern, non-circulation commemorative coins. The spelling "jeton" is from the French; it is sometimes spelled "jetton" in English.
The Romans similarly used pebbles (in Latin: calculi, "little stones", whence English calculate).[2] Addition is straightforward, and relatively efficient algorithms for multiplication and division were known.
The custom of stamping counters like coins began in France, with the oldest known coming from the fiscal offices of the royal government of France and dating from around the middle of the 13th century.[3] From the late 13th century to the end of the 14th century, jetons were produced in England, similar in design to contemporary Edwardian pennies. Although they were made of brass, they were often pierced or indented at the centre to avoid them being plated with silver and passed off as real silver coins. By the middle of the 14th century, English jetons were being produced in a larger size, similar to the groat.
Throughout the 15th century, competition from France and the Low Countries ended jeton manufacture in England, but not for long. Nuremberg jeton masters initially started by copying the counters of their European neighbours, but by the mid 16th century they gained a monopoly by mass-producing cheaper jetons for commercial use. Later – "counter casting" being obsolete – production shifted to jetons for use in games and toys, sometimes copying more or less famous jetons with a political background.
Mints in the Low Countries in the late Middle Ages generally produced the counters for official bookkeeping. Most of them show the effigy of the ruler within a flattering text and, on the reverse, the ruler's escutcheon and the name or city of the accounting office.
During the Dutch Revolt (1568–1609) this pattern changed, and both parties, with the North in front, minted about 2,000 different, mostly political, jetons (Dutch: Rekenpenning) depicting their victories, ideals and aims. Specifically in the last quarter of the 16th century, when geuzen or "beggars" made important military contributions to the Dutch side and bookkeeping was already done without counters, the production in the North was purely for propaganda.
The mints and treasuries of the big estates in Central Europe used their own jetons and then had a number of them struck in gold and silver as New Year gifts for their employees, who in turn commissioned jetons with their own mottoes and coats-of-arms. In the sixteenth century the Czech Royal Treasury bought between two and three thousand pieces at the beginning of each year.
As Arabic numerals and the zero came into use, "pen reckoning" gradually displaced "counter casting" as the common accounting method.
In the 21st century, jetons continue to be used in some countries as telephone tokens or gettone in coin-operated public telephones or in vending machines. They are usually made of metal or hard plastic. In German the word Jeton refers specifically to casino tokens. In Polish the word żeton, pronounced similarly to French jeton, refers both to tokens used in vending machines, phones, etc. and to those used in casinos. The word жетон has the same use in Russian, as does the word jeton in Romanian and žetoon in Estonian. However, in Hungarian the word zseton is (somewhat dated) slang for money, particularly coins. Plastic jetons used to be used for paying the fare for the Star Ferry in Hong Kong.[citation needed]
Apart from their monetary use in casinos, jetons are used in card games, particularly in France but also in Denmark. They are traditionally made of wood of different shapes and sizes to represent different values such as 1, 5, 10, 50 or 100 points. For example, in traditional French games, jetons are round and usually worth 1 unit; fiches are long and rectangular in shape and may be worth 10 to 20 jetons; contrats are the short rectangular counters and may be worth, say, 100 units.
The jetons are also stained or coloured so that each player can have his or her own colour. This facilitates scoring because players do not need to start with exactly the same number of counters. Nowadays plastic jetons are a cheap alternative. Games that typically use jetons include Nain Jaune, Belote, Piquet, Ombre, Mistigri, Danish Tarok and Vira. A dedicated box called a virapulla is used to contain Vira jetons.[citation needed]
In France and other countries a jeton is also a token amount of money paid to members of a society or a legislative chamber each time they are present at a meeting.[citation needed] | https://en.wikipedia.org/wiki/Jeton |
Level of measurement or scale of measure is a classification that describes the nature of information within the values assigned to variables.[1] Psychologist Stanley Smith Stevens developed the best-known classification with four levels, or scales, of measurement: nominal, ordinal, interval, and ratio.[1][2] This framework of distinguishing levels of measurement originated in psychology and has since had a complex history, being adopted and extended in some disciplines and by some scholars, and criticized or rejected by others.[3] Other classifications include those by Mosteller and Tukey,[4] and by Chrisman.[5]
Stevens proposed his typology in a 1946 Science article titled "On the theory of scales of measurement".[2] In that article, Stevens claimed that all measurement in science was conducted using four different types of scales that he called "nominal", "ordinal", "interval", and "ratio", unifying both "qualitative" (which are described by his "nominal" type) and "quantitative" (to a different degree, all the rest of his scales). The concept of scale types later received the mathematical rigour that it lacked at its inception with the work of mathematical psychologists Theodore Alper (1985, 1987), Louis Narens (1981a, b), and R. Duncan Luce (1986, 1987, 2001). As Luce (1997, p. 395) wrote:
S. S. Stevens (1946, 1951, 1975) claimed that what counted was having an interval or ratio scale. Subsequent research has given meaning to this assertion, but given his attempts to invoke scale type ideas it is doubtful if he understood it himself ... no measurement theorist I know accepts Stevens's broad definition of measurement ... in our view, the only sensible meaning for 'rule' is empirically testable laws about the attribute.
A nominal scale consists only of a number of distinct classes or categories, for example: [Cat, Dog, Rabbit]. Unlike the other scales, no kind of relationship between the classes can be relied upon. Thus measuring with the nominal scale is equivalent to classifying.
Nominal measurement may differentiate between items or subjects based only on their names or (meta-)categories and other qualitative classifications they belong to. Thus it has been argued that even dichotomous data relies on a constructivist epistemology. In this case, discovery of an exception to a classification can be viewed as progress.
Numbers may be used to represent the variables, but the numbers do not have numerical value or relationship: for example, a globally unique identifier.
Examples of these classifications include gender, nationality, ethnicity, language, genre, style, biological species, and form.[6][7] In a university one could also use residence hall or department affiliation as examples. Other concrete examples are
Nominal scales were often called qualitative scales, and measurements made on qualitative scales were called qualitative data. However, the rise of qualitative research has made this usage confusing. If numbers are assigned as labels in nominal measurement, they have no specific numerical value or meaning. No form of arithmetic computation (+, −, ×, etc.) may be performed on nominal measures. The nominal level is the lowest measurement level used from a statistical point of view.
Equality and other operations that can be defined in terms of equality, such as inequality and set membership, are the only non-trivial operations that generically apply to objects of the nominal type.
The mode, i.e. the most common item, is allowed as the measure of central tendency for the nominal type. On the other hand, the median, i.e. the middle-ranked item, makes no sense for the nominal type of data, since ranking is meaningless for the nominal type.[8]
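Since counting category frequencies is the only admissible summary, the mode is straightforward to compute. A sketch with made-up nominal data:

```python
from collections import Counter

species = ["Cat", "Dog", "Dog", "Rabbit", "Dog", "Cat"]  # made-up nominal data

# Only equality comparisons are meaningful, so the mode (most frequent
# category) is the only valid measure of central tendency here
mode, count = Counter(species).most_common(1)[0]
print(mode, count)  # Dog 3
```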
The ordinal type allows for rank order (1st, 2nd, 3rd, etc.) by which data can be sorted, but still does not allow for a relative degree of difference between them. Examples include, on one hand, dichotomous data with dichotomous (or dichotomized) values such as "sick" vs. "healthy" when measuring health, "guilty" vs. "not-guilty" when making judgments in courts, "wrong/false" vs. "right/true" when measuring truth value, and, on the other hand, non-dichotomous data consisting of a spectrum of values, such as "completely agree", "mostly agree", "mostly disagree", "completely disagree" when measuring opinion.
The ordinal scale places events in order, but there is no attempt to make the intervals of the scale equal in terms of some rule. Rank orders represent ordinal scales and are frequently used in research relating to qualitative phenomena. A student's rank in his graduation class involves the use of an ordinal scale. One has to be very careful in making a statement about scores based on ordinal scales. For instance, if Devi's position in his class is 10th and Ganga's position is 40th, it cannot be said that Devi's position is four times as good as that of Ganga.
Ordinal scales only permit the ranking of items from highest to lowest. Ordinal measures have no absolute values, and the real differences between adjacent ranks may not be equal. All that can be said is that one person is higher or lower on the scale than another, but more precise comparisons cannot be made. Thus, the use of an ordinal scale implies a statement of "greater than" or "less than" (an equality statement is also acceptable) without our being able to state how much greater or less. The real difference between ranks 1 and 2, for instance, may be more or less than the difference between ranks 5 and 6. Since the numbers of this scale have only a rank meaning, the appropriate measure of central tendency is the median. A percentile or quartile measure is used for measuring dispersion. Correlations are restricted to various rank order methods. Measures of statistical significance are restricted to the non-parametric methods (R. M. Kothari, 2004).
The median, i.e. the middle-ranked item, is allowed as the measure of central tendency; however, the mean (or average) as the measure of central tendency is not allowed. The mode is allowed.
In 1946, Stevens observed that psychological measurement, such as measurement of opinions, usually operates on ordinal scales; thus means and standard deviations have no validity, but they can be used to get ideas for how to improve operationalization of variables used in questionnaires. Most psychological data collected by psychometric instruments and tests, measuring cognitive and other abilities, are ordinal, although some theoreticians have argued they can be treated as interval or ratio scales. However, there is little prima facie evidence to suggest that such attributes are anything more than ordinal (Cliff, 1996; Cliff & Keats, 2003; Michell, 2008).[9] In particular,[10] IQ scores reflect an ordinal scale, in which all scores are meaningful for comparison only.[11][12][13] There is no absolute zero, and a 10-point difference may carry different meanings at different points of the scale.[14][15]
The interval type allows for defining the degree of difference between measurements, but not the ratio between measurements. Examples include temperature scales with the Celsius scale, which has two defined points (the freezing and boiling points of water at specific conditions) and is then separated into 100 intervals, date when measured from an arbitrary epoch (such as AD), location in Cartesian coordinates, and direction measured in degrees from true or magnetic north. Ratios are not meaningful, since 20 °C cannot be said to be "twice as hot" as 10 °C (unlike temperature in kelvins), nor can multiplication/division be carried out between any two dates directly. However, ratios of differences can be expressed; for example, one difference can be twice another: the ten-degree difference between 15 °C and 25 °C is twice the five-degree difference between 17 °C and 22 °C. Interval type variables are sometimes also called "scaled variables", but the formal mathematical term is an affine space (in this case an affine line).
The mode, median, and arithmetic mean are allowed to measure central tendency of interval variables, while measures of statistical dispersion include range and standard deviation. Since one can only divide by differences, one cannot define measures that require some ratios, such as the coefficient of variation. More subtly, while one can define moments about the origin, only central moments are meaningful, since the choice of origin is arbitrary. One can define standardized moments, since ratios of differences are meaningful, but one cannot define the coefficient of variation, since the mean is a moment about the origin, unlike the standard deviation, which is (the square root of) a central moment.
The ratio type takes its name from the fact that measurement is the estimation of the ratio between a magnitude of a continuous quantity and a unit of measurement of the same kind (Michell, 1997, 1999). Most measurement in the physical sciences and engineering is done on ratio scales. Examples include mass, length, duration, plane angle, energy and electric charge. In contrast to interval scales, ratios can be compared using division. Very informally, many ratio scales can be described as specifying "how much" of something (i.e. an amount or magnitude). Ratio scales are often used to express an order of magnitude, such as for temperature in Orders of magnitude (temperature).
The geometric mean and the harmonic mean are allowed to measure the central tendency, in addition to the mode, median, and arithmetic mean. The studentized range and the coefficient of variation are allowed to measure statistical dispersion. All statistical measures are allowed because all necessary mathematical operations are defined for the ratio scale.
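Python's statistics module (3.8+) covers all of these central-tendency measures. A sketch on made-up ratio-scale data, where every one of them is admissible:

```python
from statistics import mean, median, geometric_mean, harmonic_mean

speeds_kmh = [40.0, 50.0, 60.0]  # made-up ratio-scale data (a true zero exists)

print(mean(speeds_kmh))            # 50.0
print(median(speeds_kmh))          # 50.0
print(geometric_mean(speeds_kmh))  # ~49.32
print(harmonic_mean(speeds_kmh))   # ~48.65 (average speed over equal distances)
```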
While Stevens's typology is widely adopted, it is still being challenged by other theoreticians, particularly in the cases of the nominal and ordinal types (Michell, 1986).[16] Duncan (1986), for example, objected to the use of the word measurement in relation to the nominal type, and Luce (1997) disagreed with Stevens's definition of measurement.
On the other hand, Stevens (1975) said of his own definition of measurement that "the assignment can be any consistent rule. The only rule not allowed would be random assignment, for randomness amounts in effect to a nonrule". Hand says, "Basic psychology texts often begin with Stevens's framework and the ideas are ubiquitous. Indeed, the essential soundness of his hierarchy has been established for representational measurement by mathematicians, determining the invariance properties of mappings from empirical systems to real number continua. Certainly the ideas have been revised, extended, and elaborated, but the remarkable thing is his insight given the relatively limited formal apparatus available to him and how many decades have passed since he coined them."[17]
The use of the mean as a measure of the central tendency for the ordinal type is still debatable among those who accept Stevens's typology. Many behavioural scientists use the mean for ordinal data anyway. This is often justified on the basis that the ordinal type in behavioural science is in fact somewhere between the true ordinal and interval types; although the interval difference between two ordinal ranks is not constant, it is often of the same order of magnitude.
For example, applications of measurement models in educational contexts often indicate that total scores have a fairly linear relationship with measurements across the range of an assessment. Thus, some argue that so long as the unknown interval difference between ordinal scale ranks is not too variable, interval scale statistics such as means can meaningfully be used on ordinal scale variables. Statistical analysis software such as SPSS requires the user to select the appropriate measurement class for each variable. This ensures that subsequent user errors cannot inadvertently perform meaningless analyses (for example, correlation analysis with a variable on a nominal level).
L. L. Thurstone made progress toward developing a justification for obtaining the interval type, based on the law of comparative judgment. A common application of the law is the analytic hierarchy process. Further progress was made by Georg Rasch (1960), who developed the probabilistic Rasch model, which provides a theoretical basis and justification for obtaining interval-level measurements from counts of observations such as total scores on assessments.
Typologies aside from Stevens's typology have been proposed. For instance, Mosteller and Tukey (1977) and Nelder (1990)[18] described continuous counts, continuous ratios, count ratios, and categorical modes of data. See also Chrisman (1998), van den Berg (1991).[19]
Mosteller and Tukey[4] noted that the four levels are not exhaustive and proposed seven instead:
For example, percentages (a variation on fractions in the Mosteller–Tukey framework) do not fit well into Stevens's framework: No transformation is fully admissible.[16]
Nicholas R. Chrisman[5] introduced an expanded list of levels of measurement to account for various measurements that do not necessarily fit with the traditional notions of levels of measurement. Measurements bound to a range and repeating (like degrees in a circle, clock time, etc.), graded membership categories, and other types of measurement do not fit into Stevens's original work, leading to the introduction of six new levels of measurement, for a total of ten:
While some claim that the extended levels of measurement are rarely used outside of academic geography,[20] graded membership is central to fuzzy set theory, while absolute measurements include probabilities and the plausibility and ignorance in Dempster–Shafer theory. Cyclical ratio measurements include angles and times. Counts appear to be ratio measurements, but the scale is not arbitrary and fractional counts are commonly meaningless. Log-interval measurements are commonly displayed in stock market graphics. All these types of measurements are commonly used outside academic geography, and do not fit well into Stevens's original work.
The theory of scale types is the intellectual handmaiden to Stevens's "operational theory of measurement", which was to become definitive within psychology and the behavioral sciences,[citation needed] despite Michell's characterization of it as being quite at odds with measurement in the natural sciences (Michell, 1999). Essentially, the operational theory of measurement was a reaction to the conclusions of a committee established in 1932 by the British Association for the Advancement of Science to investigate the possibility of genuine scientific measurement in the psychological and behavioral sciences. This committee, which became known as the Ferguson committee, published a Final Report (Ferguson, et al., 1940, p. 245) in which Stevens's sone scale (Stevens & Davis, 1938) was an object of criticism:
…any law purporting to express a quantitative relation between sensation intensity and stimulus intensity is not merely false but is in fact meaningless unless and until a meaning can be given to the concept of addition as applied to sensation.
That is, if Stevens's sone scale genuinely measured the intensity of auditory sensations, then evidence for such sensations as being quantitative attributes needed to be produced. The evidence needed was the presence of additive structure—a concept comprehensively treated by the German mathematician Otto Hölder (Hölder, 1901). Given that the physicist and measurement theorist Norman Robert Campbell dominated the Ferguson committee's deliberations, the committee concluded that measurement in the social sciences was impossible due to the lack of concatenation operations. This conclusion was later rendered false by the discovery of the theory of conjoint measurement by Debreu (1960) and independently by Luce & Tukey (1964). However, Stevens's reaction was not to conduct experiments to test for the presence of additive structure in sensations, but instead to render the conclusions of the Ferguson committee null and void by proposing a new theory of measurement:
Paraphrasing N. R. Campbell (Final Report, p. 340), we may say that measurement, in the broadest sense, is defined as the assignment of numerals to objects and events according to rules (Stevens, 1946, p. 677).
Stevens was greatly influenced by the ideas of another Harvard academic,[21] the Nobel laureate physicist Percy Bridgman (1927), whose doctrine of operationalism Stevens used to define measurement. In Stevens's definition, for example, it is the use of a tape measure that defines length (the object of measurement) as being measurable (and so by implication quantitative). Critics of operationalism object that it confuses the relations between two objects or events for properties of one of those objects or events (Moyer, 1981a, b; Rogers, 1989).[22][23]
The Canadian measurement theorist William Rozeboom was an early and trenchant critic of Stevens's theory of scale types.[24]
Another issue is that the same variable may be a different scale type depending on how it is measured and on the goals of the analysis. For example, hair color is usually thought of as a nominal variable, since it has no apparent ordering.[25] However, it is possible to order colors (including hair colors) in various ways, including by hue; this is known as colorimetry. Hue is an interval-level variable. | https://en.wikipedia.org/wiki/Level_of_measurement |
This is a list of notable numbers and articles about notable numbers. The list does not contain all numbers in existence, as most of the number sets are infinite. Numbers may be included in the list based on their mathematical, historical or cultural notability, but all numbers have qualities that could arguably make them notable. Even the smallest "uninteresting" number is paradoxically interesting for that very property. This is known as the interesting number paradox.
The definition of what is classed as a number is rather diffuse and based on historical distinctions. For example, the pair of numbers (3, 4) is commonly regarded as a number when it is in the form of a complex number (3+4i), but not when it is in the form of a vector (3, 4). This list will also be categorized with the standard convention of types of numbers.
This list focuses on numbers as mathematical objects and is not a list of numerals, which are linguistic devices: nouns, adjectives, or adverbs that designate numbers. The distinction is drawn between the number five (an abstract object equal to 2+3), and the numeral five (the noun referring to the number).
Natural numbers are a subset of the integers and are of historical and pedagogical value, as they can be used for counting and often have ethno-cultural significance (see below). Beyond this, natural numbers are widely used as a building block for other number systems, including the integers, rational numbers and real numbers. Natural numbers are those used for counting (as in "there are six (6) coins on the table") and ordering (as in "this is the third (3rd) largest city in the country"). In common language, words used for counting are "cardinal numbers" and words used for ordering are "ordinal numbers". Defined by the Peano axioms, the natural numbers form an infinitely large set. Often referred to as "the naturals", the natural numbers are usually symbolised by a boldface N (or blackboard bold ℕ, Unicode U+2115 DOUBLE-STRUCK CAPITAL N).
The inclusion of 0 in the set of natural numbers is ambiguous and subject to individual definitions. In set theory and computer science, 0 is typically considered a natural number. In number theory, it usually is not. The ambiguity can be resolved with the terms "non-negative integers", which includes 0, and "positive integers", which does not.
Natural numbers may be used as cardinal numbers, which may go by various names. Natural numbers may also be used as ordinal numbers.
Natural numbers may have properties specific to the individual number or may belong to a set of numbers (such as the prime numbers) sharing a particular property.
Along with their mathematical properties, many integers have cultural significance[2] or are also notable for their use in computing and measurement. As mathematical properties (such as divisibility) can confer practical utility, there may be interplay and connections between the cultural or practical significance of an integer and its mathematical properties.
Subsets of the natural numbers, such as the prime numbers, may be grouped into sets, for instance based on the divisibility of their members. Infinitely many such sets are possible. A list of notable classes of natural numbers may be found at classes of natural numbers.
A prime number is a positive integer which has exactly two divisors: 1 and itself.
The first 100 prime numbers are:
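A short Python sketch that generates this list by trial division (nothing beyond the standard library is assumed):

```python
def first_primes(count):
    """Return the first `count` prime numbers by trial division."""
    primes = []
    candidate = 2
    while len(primes) < count:
        # A candidate is prime if no smaller prime up to its square root divides it.
        if all(candidate % p != 0 for p in primes if p * p <= candidate):
            primes.append(candidate)
        candidate += 1
    return primes

print(first_primes(100))   # [2, 3, 5, 7, 11, ..., 541]
```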
A highly composite number (HCN) is a positive integer with more divisors than any smaller positive integer. They are often used in geometry, grouping and time measurement.
The first 20 highly composite numbers are:
1, 2, 4, 6, 12, 24, 36, 48, 60, 120, 180, 240, 360, 720, 840, 1260, 1680, 2520, 5040, 7560
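Since the defining property is a running maximum of the divisor count, the list above can be reproduced directly. A minimal Python sketch:

```python
def divisor_count(n):
    """Number of positive divisors of n, found in pairs up to sqrt(n)."""
    count, i = 0, 1
    while i * i <= n:
        if n % i == 0:
            count += 1 if i == n // i else 2
        i += 1
    return count

def highly_composite(limit):
    """Yield every n <= limit with more divisors than any smaller positive integer."""
    record = 0
    for n in range(1, limit + 1):
        d = divisor_count(n)
        if d > record:
            record = d
            yield n

print(list(highly_composite(8000)))   # 1, 2, 4, 6, 12, 24, 36, 48, 60, 120, ...
```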
A perfect number is an integer that is the sum of its positive proper divisors (all divisors except itself).
The first 10 perfect numbers:
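The first few can be found by brute force from the definition; a minimal Python sketch (the search bound of 10,000 is arbitrary, since perfect numbers grow very quickly):

```python
def is_perfect(n):
    """True if n equals the sum of its proper divisors (all divisors except n)."""
    if n < 2:
        return False
    total = 1                      # 1 is a proper divisor of every n > 1
    i = 2
    while i * i <= n:
        if n % i == 0:
            total += i
            if i != n // i:        # add the paired divisor once
                total += n // i
        i += 1
    return total == n

print([n for n in range(2, 10_000) if is_perfect(n)])   # [6, 28, 496, 8128]
```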
The integers are a set of numbers commonly encountered in arithmetic and number theory. There are many subsets of the integers, including the natural numbers, prime numbers, perfect numbers, etc. Many integers are notable for their mathematical properties. Integers are usually symbolised by a boldface Z (or blackboard bold ℤ, Unicode U+2124 DOUBLE-STRUCK CAPITAL Z); this became the symbol for the integers based on the German word for "numbers" (Zahlen).
Notable integers include −1, the additive inverse of unity, and 0, the additive identity.
As with the natural numbers, the integers may also have cultural or practical significance. For instance, −40 is the point at which the Fahrenheit and Celsius scales coincide.
One important use of integers is in orders of magnitude. A power of 10 is a number 10^k, where k is an integer. For instance, with k = 0, 1, 2, 3, ..., the appropriate powers of ten are 1, 10, 100, 1000, ... Powers of ten can also be fractional: for instance, k = −3 gives 1/1000, or 0.001. This is used in scientific notation, in which real numbers are written in the form m × 10^n. The number 394,000 is written in this form as 3.94 × 10^5.
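The decomposition into mantissa and exponent is mechanical; a minimal Python sketch:

```python
import math

def scientific(x):
    """Split a positive number x into (m, n) with x = m * 10**n and 1 <= m < 10."""
    n = math.floor(math.log10(x))
    return x / 10**n, n

m, n = scientific(394_000)
print(f"{m} x 10^{n}")     # 3.94 x 10^5
print(f"{394_000:.2e}")    # built-in formatting does the same: 3.94e+05
```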
Integers are used as prefixes in the SI system. A metric prefix is a unit prefix that precedes a basic unit of measure to indicate a multiple or fraction of the unit. Each prefix has a unique symbol that is prepended to the unit symbol. The prefix kilo-, for example, may be added to gram to indicate multiplication by one thousand: one kilogram is equal to one thousand grams. The prefix milli-, likewise, may be added to metre to indicate division by one thousand; one millimetre is equal to one thousandth of a metre.
A rational number is any number that can be expressed as the quotient or fraction p/q of two integers, a numerator p and a non-zero denominator q.[5] Since q may be equal to 1, every integer is trivially a rational number. The set of all rational numbers, often referred to as "the rationals", the field of rationals or the field of rational numbers, is usually denoted by a boldface Q (or blackboard bold ℚ, Unicode U+211A DOUBLE-STRUCK CAPITAL Q);[6] it was thus denoted in 1895 by Giuseppe Peano after quoziente, Italian for "quotient".
Rational numbers such as 0.12 can be represented in infinitely many ways, e.g. zero-point-one-two (0.12), three twenty-fifths (3/25), nine seventy-fifths (9/75), etc. This can be mitigated by representing rational numbers in a canonical form as an irreducible fraction.
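Python's standard library exposes exactly this canonical form: Fraction reduces every representation to lowest terms by dividing out the greatest common divisor.

```python
from fractions import Fraction

# Three representations of the same rational number collapse to one canonical form.
print(Fraction("0.12"))    # 3/25
print(Fraction(3, 25))     # 3/25
print(Fraction(9, 75))     # 3/25 (gcd(9, 75) = 3 is divided out)
```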
A list of rational numbers is shown below. The names of fractions can be found at numeral (linguistics).
Real numbers are least upper bounds of sets of rational numbers that are bounded above, or greatest lower bounds of sets of rational numbers that are bounded below, or limits of convergent sequences of rational numbers. Real numbers that are not rational numbers are called irrational numbers. The real numbers are categorised as algebraic numbers (which are roots of a polynomial with rational coefficients) or transcendental numbers, which are not; all rational numbers are algebraic.
Some numbers are known to be irrational numbers, but have not been proven to be transcendental. This differs from the algebraic numbers, which are known not to be transcendental.
For some numbers, it is not known whether they are algebraic or transcendental. The following list includes real numbers that have been proved neither irrational nor transcendental.
Some real numbers, including transcendental numbers, are not known with high precision.
Hypercomplex number is a term for an element of a unital algebra over the field of real numbers. The complex numbers are often symbolised by a boldface C (or blackboard bold ℂ, Unicode U+2102 DOUBLE-STRUCK CAPITAL C), while the set of quaternions is denoted by a boldface H (or blackboard bold ℍ, Unicode U+210D DOUBLE-STRUCK CAPITAL H).
Transfinite numbers are numbers that are "infinite" in the sense that they are larger than all finite numbers, yet not necessarily absolutely infinite.
Physical quantities that appear in the universe are often described using physical constants.
Many languages have words expressing indefinite and fictitious numbers: inexact terms of indefinite size, used for comic effect, for exaggeration, as placeholder names, or when precision is unnecessary or undesirable. One technical term for such words is "non-numerical vague quantifier".[45] Such words designed to indicate large quantities can be called "indefinite hyperbolic numerals".[46]
Quantity or amount is a property that can exist as a multitude or magnitude, which illustrate discontinuity and continuity. Quantities can be compared in terms of "more", "less", or "equal", or by assigning a numerical value as a multiple of a unit of measurement. Mass, time, distance, heat, and angle are among the familiar examples of quantitative properties.
Quantity is among the basic classes of things along with quality, substance, change, and relation. Some quantities are such by their inner nature (as number), while others function as states (properties, dimensions, attributes) of things such as heavy and light, long and short, broad and narrow, small and great, or much and little.
Under the name of multitude comes what is discontinuous and discrete and divisible ultimately into indivisibles, such as: army, fleet, flock, government, company, party, people, mess (military), chorus, crowd, and number; all of which are cases of collective nouns. Under the name of magnitude comes what is continuous and unified and divisible only into smaller divisibles, such as: matter, mass, energy, liquid, material; all cases of non-collective nouns.
Along with analyzing its nature and classification, the issues of quantity involve such closely related topics as dimensionality, equality, proportion, the measurements of quantities, the units of measurement, number and numbering systems, the types of numbers and their relations to each other as numerical ratios.
In mathematics, the concept of quantity is an ancient one extending back to the time of Aristotle and earlier. Aristotle regarded quantity as a fundamental ontological and scientific category. In Aristotle's ontology, quantity or quantum was classified into two different types, which he characterized as follows:
Quantum means that which is divisible into two or more constituent parts, of which each is by nature a one and a this. A quantum is a plurality if it is numerable, a magnitude if it is measurable. Plurality means that which is divisible potentially into non-continuous parts, magnitude that which is divisible into continuous parts; of magnitude, that which is continuous in one dimension is length; in two breadth, in three depth. Of these, limited plurality is number, limited length is a line, breadth a surface, depth a solid.
In his Elements, Euclid developed the theory of ratios of magnitudes without studying the nature of magnitudes, as Archimedes did, but giving the following significant definitions:
A magnitude is a part of a magnitude, the less of the greater, when it measures the greater; a ratio is a sort of relation in respect of size between two magnitudes of the same kind.
For Aristotle and Euclid, relations were conceived as whole numbers (Michell, 1993). John Wallis later conceived of ratios of magnitudes as real numbers:
When a comparison in terms of ratio is made, the resultant ratio often [namely with the exception of the 'numerical genus' itself] leaves the genus of quantities compared, and passes into the numerical genus, whatever the genus of quantities compared may have been.
That is, the ratio of magnitudes of any quantity, whether volume, mass, heat and so on, is a number. Following this, Newton then defined number, and the relationship between quantity and number, in the following terms:
By number we understand not so much a multitude of unities, as the abstracted ratio of any quantity to another quantity of the same kind, which we take for unity.
Continuous quantities possess a particular structure that was first explicitly characterized by Hölder (1901) as a set of axioms that define such features as identities and relations between magnitudes. In science, quantitative structure is the subject of empirical investigation and cannot be assumed to exist a priori for any given property. The linear continuum represents the prototype of continuous quantitative structure as characterized by Hölder (1901) (translated in Michell & Ernst, 1996). A fundamental feature of any type of quantity is that the relationships of equality or inequality can in principle be stated in comparisons between particular magnitudes, unlike quality, which is marked by likeness, similarity and difference, diversity. Another fundamental feature is additivity. Additivity may involve concatenation, such as adding two lengths A and B to obtain a third A + B. Additivity is not, however, restricted to extensive quantities, but may also entail relations between magnitudes that can be established through experiments that permit tests of hypothesized observable manifestations of the additive relations of magnitudes. Another feature is continuity, on which Michell (1999, p. 51) says of length, as a type of quantitative attribute, "what continuity means is that if any arbitrary length, a, is selected as a unit, then for every positive real number, r, there is a length b such that b = ra". A further generalization is given by the theory of conjoint measurement, independently developed by the French economist Gérard Debreu (1960) and by the American mathematical psychologist R. Duncan Luce and statistician John Tukey (1964).
Magnitude (how much) and multitude (how many), the two principal types of quantities, are further divided as mathematical and physical. In formal terms, quantities (their ratios, proportions, order and formal relationships of equality and inequality) are studied by mathematics. The essential part of mathematical quantities consists of having a collection of variables, each assuming a set of values. These can be a set of a single quantity, referred to as a scalar when represented by real numbers, or have multiple quantities as do vectors and tensors, two kinds of geometric objects.
The mathematical usage of a quantity can then be varied and so is situationally dependent. Quantities can be used as being infinitesimal, arguments of a function, variables in an expression (independent or dependent), or probabilistic as in random and stochastic quantities. In mathematics, magnitudes and multitudes are not only two distinct kinds of quantity but are furthermore relatable to each other.
Number theory covers the topics of discrete quantities as numbers: number systems with their kinds and relations. Geometry studies the issues of spatial magnitudes: straight lines, curved lines, surfaces and solids, all with their respective measurements and relationships.
A traditional Aristotelian realist philosophy of mathematics, stemming from Aristotle and remaining popular until the eighteenth century, held that mathematics is the "science of quantity". Quantity was considered to be divided into the discrete (studied by arithmetic) and the continuous (studied by geometry and later calculus). The theory fits elementary or school mathematics reasonably well, but fits the abstract topological and algebraic structures of modern mathematics less well.[1]
Establishing quantitative structure and relationships between different quantities is the cornerstone of modern science, especially but not restricted to the physical sciences. Physics is fundamentally a quantitative science; chemistry, biology and others are increasingly so. Their progress is chiefly achieved by rendering the abstract qualities of material entities into physical quantities, by postulating that all material bodies marked by quantitative properties or physical dimensions are subject to some measurements and observations. Setting the units of measurement, physics covers such fundamental quantities as space (length, breadth, and depth) and time, mass and force, temperature, energy, and quanta.
A distinction has also been made between intensive quantity and extensive quantity as two types of quantitative property, state or relation. The magnitude of an intensive quantity does not depend on the size, or extent, of the object or system of which the quantity is a property, whereas magnitudes of an extensive quantity are additive for parts of an entity or subsystems. Thus, magnitude does depend on the extent of the entity or system in the case of extensive quantity. Examples of intensive quantities are density and pressure, while examples of extensive quantities are energy, volume, and mass.
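The distinction can be stated operationally: when two subsystems are combined, extensive quantities add, while intensive quantities do not. A minimal sketch with illustrative values:

```python
# Two subsystems of the same homogeneous material (illustrative values).
m1, v1 = 2.0, 1.0    # mass in kg, volume in litres
m2, v2 = 6.0, 3.0

mass = m1 + m2              # extensive: 8.0 kg, the sum of the parts
volume = v1 + v2            # extensive: 4.0 L
density = mass / volume     # intensive: 2.0 kg/L, the same as each part alone
print(mass, volume, density, m1 / v1, m2 / v2)
```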
In human languages, including English, number is a syntactic category, along with person and gender. The quantity is expressed by identifiers, definite and indefinite, and quantifiers, definite and indefinite, as well as by three types of nouns: 1. count unit nouns or countables; 2. mass nouns, uncountables, referring to indefinite, unidentified amounts; 3. nouns of multitude (collective nouns). The word 'number' belongs to the nouns of multitude, standing either for a single entity or for the individuals making up the whole. The amount may be expressed by: singular form and plural form, ordinal numbers before a count noun singular (first, second, third...), the demonstratives, definite and indefinite numbers and measurements (hundred/hundreds, million/millions), or cardinal numbers before count nouns. The set of language quantifiers covers "a few, a great number, many, several (for count names); a bit of, a little, less, a great deal (amount) of, much (for mass names); all, plenty of, a lot of, enough, more, most, some, any, both, each, either, neither, every, no". For the complex case of unidentified amounts, the parts and examples of a mass are indicated with respect to the following: a measure of a mass (two kilos of rice and twenty bottles of milk or ten pieces of paper); a piece or part of a mass (part, element, atom, item, article, drop); or a shape of a container (a basket, box, case, cup, bottle, vessel, jar).
Some further examples of quantities are:
Dimensionless quantities, or quantities of dimension one,[2] are quantities implicitly defined in a manner that prevents their aggregation into units of measurement.[3][4] Typically expressed as ratios, these quantities do not require explicitly defined units. For instance, alcohol by volume (ABV) represents a volumetric ratio; its value remains independent of the specific units of volume used, such as millilitres per millilitre (mL/mL).
The number one is recognized as a dimensionless base quantity.[5] Radians serve as dimensionless units for angular measurements, derived from the universal ratio of 2π times the radius of a circle being equal to its circumference.[6]
In thermodynamics, the particle number (symbol N) of a thermodynamic system is the number of constituent particles in that system.[1] The particle number is a fundamental thermodynamic property which is conjugate to the chemical potential. Unlike most physical quantities, the particle number is a dimensionless quantity, specifically a countable quantity. It is an extensive property, as it is directly proportional to the size of the system under consideration and thus meaningful only for closed systems.
A constituent particle is one that cannot be broken into smaller pieces at the scale of energy k·T involved in the process (where k is the Boltzmann constant and T is the temperature). For example, in a thermodynamic system consisting of a piston containing water vapour, the particle number is the number of water molecules in the system. The meaning of constituent particles, and thereby of particle numbers, is thus temperature-dependent.
The concept of particle number plays a major role in theoretical considerations. In situations where the actual particle number of a given thermodynamic system needs to be determined, mainly in chemistry, it is not practically possible to measure it directly by counting the particles. If the material is homogeneous and has a known amount of substance n expressed in moles, the particle number N can be found by the relation N = nN_A,
where N_A is the Avogadro constant.[1]
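A one-line computation; the amount of substance below (0.5 mol) is an illustrative choice:

```python
N_A = 6.02214076e23        # Avogadro constant in mol^-1 (exact by SI definition)
n = 0.5                    # amount of substance in moles (illustrative)
N = n * N_A                # particle number: a dimensionless count
print(f"N = {N:.4e} particles")   # N = 3.0111e+23 particles
```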
A related intensive system parameter is the particle number density (or particle number concentration, PNC), a quantity of kind volumetric number density obtained by dividing the particle number of a system by its volume. This parameter is often denoted by the lower-case letter n.
In quantum mechanical processes, the total number of particles may not be preserved. The concept is therefore generalized to the particle number operator, that is, the observable that counts the number of constituent particles.[2] In quantum field theory, the particle number operator (see Fock state) is conjugate to the phase of the classical wave (see coherent state).
One measure of air pollution used in air quality standards is the atmospheric concentration of particulate matter. This measure is usually expressed in μg/m³ (micrograms per cubic metre). In the current EU emission norms for cars, vans, and trucks and in the upcoming EU emission norm for non-road mobile machinery, particle number measurements and limits are defined, commonly referred to as PN, with units [#/km] or [#/kWh]. In this case, PN expresses a quantity of particles per unit distance (or work).
Subitizing is the rapid, accurate, and effortless ability to perceive small quantities of items in a set, typically when there are four or fewer items, without relying on linguistic or arithmetic processes. The term refers to the sensation of instantly knowing how many objects are in the visual scene when their number falls within the subitizing range.[1]
Sets larger than about four to five items cannot be subitized unless the items appear in a pattern with which the person is familiar (such as the six dots on one face of a die). Large, familiar sets might be counted one-by-one (or the person might calculate the number through a rapid calculation if they can mentally group the elements into a few small sets). A person could also estimate the number of a large set, a skill similar to, but different from, subitizing. The term subitizing was coined in 1949 by E. L. Kaufman et al.,[1] and is derived from the Latin adjective subitus (meaning "sudden").
The accuracy, speed, and confidence with which observers make judgments of the number of items are critically dependent on the number of elements to be enumerated. Judgments made for displays composed of around one to four items are rapid,[2] accurate,[3] and confident.[4] However, once there are more than four items to count, judgments are made with decreasing accuracy and confidence.[1] In addition, response times rise dramatically, with an extra 250–350 ms added for each additional item within the display beyond about four.[5]
While the increase in response time for each additional element within a display is 250–350 ms per item outside the subitizing range, there is still a significant, albeit smaller, increase of 40–100 ms per item within the subitizing range.[2] A similar pattern of reaction times is found in young children, although with steeper slopes for both the subitizing range and the enumeration range.[6] This suggests there is no span of apprehension as such, if this is defined as the number of items which can be immediately apprehended by cognitive processes, since there is an extra cost associated with each additional item enumerated. However, the relative differences in costs associated with enumerating items within the subitizing range are small, whether measured in terms of accuracy, confidence, or speed of response. Furthermore, the values of all measures appear to differ markedly inside and outside the subitizing range.[1] So, while there may be no span of apprehension, there appear to be real differences in the ways in which a small number of elements (i.e. approximately four or fewer items) is processed by the visual system, compared with larger numbers of elements (i.e. approximately more than four items).
A 2006 study demonstrated that subitizing and counting are not restricted to visual perception, but also extend to tactile perception, when observers had to name the number of stimulated fingertips.[7] A 2008 study also demonstrated subitizing and counting in auditory perception.[8] Even though the existence of subitizing in tactile perception has been questioned,[9] this effect has been replicated many times and can therefore be considered robust.[10][11][12] The subitizing effect has also been obtained in tactile perception with congenitally blind adults.[13] Together, these findings support the idea that subitizing is a general perceptual mechanism extending to auditory and tactile processing.
As the derivation of the term "subitizing" suggests, the feeling associated with making a number judgment within the subitizing range is one of immediately being aware of the displayed elements.[3] When the number of objects presented exceeds the subitizing range, this feeling is lost, and observers commonly report an impression of shifting their viewpoint around the display, until all the elements presented have been counted.[1] The ability of observers to count the number of items within a display can be limited, either by the rapid presentation and subsequent masking of items,[14] or by requiring observers to respond quickly.[1] Both procedures have little, if any, effect on enumeration within the subitizing range. These techniques may restrict the ability of observers to count items by limiting the degree to which observers can shift their "zone of attention"[15] successively to different elements within the display.
Atkinson, Campbell, and Francis[16] demonstrated that visual afterimages could be employed in order to achieve similar results. Using a flashgun to illuminate a line of white disks, they were able to generate intense afterimages in dark-adapted observers. Observers were required to verbally report how many disks had been presented, both at 10 s and at 60 s after the flashgun exposure. Observers reported being able to see all the disks presented for at least 10 s, and being able to perceive at least some of the disks after 60 s. Unlike simply displaying the images for 10- and 60-second intervals, when presented in the form of afterimages, eye movement cannot be employed for the purpose of counting: when the subjects move their eyes, the images also move. Despite a long period of time to enumerate the number of disks presented, when the number of disks fell outside the subitizing range (i.e., 5–12 disks), observers made consistent enumeration errors in both the 10 s and 60 s conditions. In contrast, no errors occurred within the subitizing range (i.e., 1–4 disks), in either the 10 s or 60 s conditions.[17]
The work on the enumeration of afterimages[16][17] supports the view that different cognitive processes operate for the enumeration of elements inside and outside the subitizing range, and as such raises the possibility that subitizing and counting involve different brain circuits. However, functional imaging research has been interpreted both to support different[18] and shared processes.[19]
Support for the view that subitizing and counting may involve functionally and anatomically distinct brain areas comes from patients with simultanagnosia, one of the key components of Bálint's syndrome.[20] Patients with this disorder suffer from an inability to perceive visual scenes properly, being unable to localize objects in space, either by looking at the objects, pointing to them, or by verbally reporting their position.[20] Despite these dramatic symptoms, such patients are able to correctly recognize individual objects.[21] Crucially, people with simultanagnosia are unable to enumerate objects outside the subitizing range, either failing to count certain objects, or alternatively counting the same object several times.[22]
However, people with simultanagnosia have no difficulty enumerating objects within the subitizing range.[23] The disorder is associated with bilateral damage to the parietal lobe, an area of the brain linked with spatial shifts of attention.[18] These neuropsychological results are consistent with the view that the process of counting, but not that of subitizing, requires active shifts of attention. However, recent research has questioned this conclusion by finding that attention also affects subitizing.[24]
A further source of research on the neural processes of subitizing compared to counting comes from positron emission tomography (PET) research on normal observers. Such research compares the brain activity associated with enumeration processes inside (i.e., 1–4 items) for subitizing, and outside (i.e., 5–8 items) for counting.[18][19]
Such research finds that within both the subitizing and counting ranges, activation occurs bilaterally in the occipital extrastriate cortex and superior parietal lobe/intraparietal sulcus. This has been interpreted as evidence that shared processes are involved.[19] However, the existence of further activations during counting in the right inferior frontal regions and the anterior cingulate has been interpreted as suggesting the existence of distinct processes during counting, related to the activation of regions involved in the shifting of attention.[18]
Historically, many systems have attempted to use subitizing to identify full or partial quantities. In the twentieth century, mathematics educators started to adopt some of these systems, as reviewed in the examples below, but often switched to more abstract color-coding to represent quantities up to ten.
In the 1990s, babies three weeks old were shown to differentiate between 1–3 objects, that is, to subitize.[22] A more recent meta-study summarizing five different studies concluded that infants are born with an innate ability to differentiate quantities within a small range, which increases over time.[25] By the age of seven that ability increases to 4–7 objects. Some practitioners claim that with training, children are capable of subitizing 15+ objects correctly.
The hypothesized use of yupana, an Inca counting system, placed up to five counters in connected trays for calculations.
In each place value, the Chinese abacus uses four or five beads to represent units, which are subitized, and one or two separate beads, which symbolize fives. This allows multi-digit operations such as carrying and borrowing to occur without subitizing beyond five.
European abacuses use ten beads in each register, but usually separate them into fives by color.
The idea of instant recognition of quantities has been adopted by several pedagogical systems, such as Montessori, Cuisenaire and Dienes. However, these systems only partially use subitizing, attempting to make all quantities from 1 to 10 instantly recognizable. To achieve this, they code quantities by the color and length of the rods or bead strings that represent them. Recognizing such visual or tactile representations and associating quantities with them involves different mental operations from subitizing.
One of the most basic applications is in digit grouping in large numbers, which allows one to tell the size at a glance, rather than having to count. For example, writing one million (1000000) as 1,000,000 (or 1.000.000 or 1 000 000) or one (short) billion (1000000000) as 1,000,000,000 (or other forms, such as 1,00,00,00,000 in the Indian numbering system) makes it much easier to read. This is particularly important in accounting and finance, as an error of a single decimal digit changes the amount by a factor of ten. This is also found in computer programming languages for literal values, some of which use digit separators.
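Python, for instance, supports both halves of this idea: underscores as digit separators in numeric literals, and a comma grouping flag in output formatting.

```python
n = 1_000_000_000          # one (short) billion, written with digit separators
print(f"{n:,}")            # 1,000,000,000
print(f"{394000:,}")       # 394,000
```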
Dice, playing cards and other gaming devices traditionally split quantities into subitizable groups with recognizable patterns. The behavioural advantage of this grouping method has been scientifically investigated by Ciccione and Dehaene,[26] who showed that counting performance improves if the groups share the same number of items and the same repeated pattern.
A comparable application is to split up binary and hexadecimal number representations, telephone numbers, bank account numbers (e.g., IBAN, social security numbers, number plates, etc.) into groups ranging from 2 to 5 digits separated by spaces, dots, dashes, or other separators. This makes it easier to check a number for completeness when comparing or retyping it. This practice of grouping characters also supports easier memorization of large numbers and character structures.
There is at least one game that can be played online to self-assess one's ability to subitize.[27]
Tally marks, also called hash marks, are a form of numeral used for counting. They can be thought of as a unary numeral system.
They are most useful in counting or tallying ongoing results, such as the score in a game or sport, as no intermediate results need to be erased or discarded. However, because of the length of large numbers, tallies are not commonly used for static text. Notched sticks, known as tally sticks, were also historically used for this purpose.
Counting aids other than body parts appear in the Upper Paleolithic. The oldest tally sticks date to between 35,000 and 25,000 years ago, in the form of notched bones found in the context of the European Aurignacian to Gravettian and in Africa's Late Stone Age.
The so-called Wolf bone is a prehistoric artifact discovered in 1937 in Czechoslovakia during excavations at Dolní Věstonice, Moravia, led by Karl Absolon. Dated to the Aurignacian, approximately 30,000 years ago, the bone is marked with 55 marks which may be tally marks. The head of an ivory Venus figurine was excavated close to the bone.[1]
The Ishango bone, found in the Ishango region of the present-day Democratic Republic of the Congo, is dated to over 20,000 years old. Upon discovery, it was thought to portray a series of prime numbers. In the book How Mathematics Happened: The First 50,000 Years, Peter Rudman argues that the development of the concept of prime numbers could only have come about after the concept of division, which he dates to after 10,000 BC, with prime numbers probably not being understood until about 500 BC. He also writes that "no attempt has been made to explain why a tally of something should exhibit multiples of two, prime numbers between 10 and 20, and some numbers that are almost multiples of 10."[2] Alexander Marshack examined the Ishango bone microscopically, and concluded that it may represent a six-month lunar calendar.[3]
Tally marks are typically clustered in groups of five for legibility. The cluster size 5 has the advantages of (a) easy conversion into decimal for higher arithmetic operations and (b) avoiding error, as humans can identify a cluster of 5 far more reliably than a cluster of 10.
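A minimal Python sketch of this grouping; the fifth mark of a cluster is rendered here as a slash, approximating the diagonal strike-through used on paper:

```python
def tally(n):
    """Render n as tally marks clustered in groups of five."""
    fives, rest = divmod(n, 5)
    groups = ["||||/"] * fives          # "/" stands in for the diagonal strike
    if rest:
        groups.append("|" * rest)
    return " ".join(groups)

print(tally(12))   # ||||/ ||||/ ||
```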
Roman numerals, the Brahmi and Chinese numerals for one through three (一 二 三), and rod numerals were derived from tally marks, as possibly was the ogham script.[7]
The base 1 arithmetic notation system is a unary positional system similar to tally marks. It is rarely used as a practical base for counting due to its difficult readability.
The numbers 1, 2, 3, 4, 5, 6 ... would be represented in this system as[8]
Base 1 notation is widely used in type numbers of flour; the higher number represents a higher grind.
In 2015, Ken Lunde and Daisuke Miura submitted a proposal to encode various systems of tally marks in the Unicode Standard.[9] However, the box tally and dot-and-dash tally characters were not accepted for encoding, and only the five ideographic tally marks (正 scheme) and two Western tally digits were added to the Unicode Standard in the Counting Rod Numerals block in Unicode version 11.0 (June 2018). Only the tally marks for the numbers 1 and 5 are encoded, and tally marks for the numbers 2, 3 and 4 are intended to be composed from sequences of tally mark 1 at the font level.
Yan Tan Tethera or yan-tan-tethera is a sheep-counting system traditionally used by shepherds in Yorkshire, Northern England and some other parts of Britain.[1] The words may be derived from numbers in Brythonic Celtic languages such as Cumbric, which had died out in most of Northern England by the sixth century, but they were commonly used for sheep counting and counting stitches in knitting until the Industrial Revolution, especially in the fells of the Lake District. Though most of these number systems fell out of use by the turn of the 20th century, some are still in use.
Sheep-counting systems ultimately derive from Brythonic Celtic languages, such as Cumbric; Tim Gay writes: "[Sheep-counting systems from all over the British Isles] all compared very closely to 18th-century Cornish and modern Welsh".[2] It is impossible, given the corrupted form in which they have survived, to be sure of their exact origin. The counting systems have changed considerably over time. A particularly common tendency is for certain pairs of adjacent numbers to come to resemble each other by rhyme (notably the words for 1 and 2, 3 and 4, 6 and 7, or 8 and 9). Still, multiples of five tend to be fairly conservative; compare bumfit with Welsh pymtheg, in contrast with standard English fifteen.
Like most Celtic numbering systems, they tend to be vigesimal (based on the number twenty), but they usually lack words to describe quantities larger than twenty; this is not a limitation of either modernised decimal Celtic counting systems or the older ones. To count a large number of sheep, a shepherd would repeatedly count to twenty, placing a mark on the ground, moving a hand to another mark on a shepherd's crook, or dropping a pebble into a pocket to represent each score (e.g. 5 score sheep = 100 sheep).
Their use is also attested in a "knitting song" known to be sung around the middle of the nineteenth century in Wensleydale, Yorkshire, beginning "yahn, tayhn, tether, mether, mimph".[3]
The counting system has been used for products sold within Northern England and Yorkshire, such as prints,[4] beers,[5] alcoholic sparkling water (hard seltzer in the U.S.),[6] and yarns,[7] as well as in artistic works referencing the region, such as Harrison Birtwistle's 1986 opera Yan Tan Tethera.
Jake Thackray's song "Old Molly Metcalfe"[8] from his 1972 album Bantam Cock uses the Swaledale "Yan Tan Tether Mether Pip" as a repeating lyrical theme.
Garth Nix used the counting system to name the seven Grotesques in his novel Grim Tuesday.[9]
The word yan or yen for 'one' in Cumbrian, Northumbrian, and some Yorkshire dialects generally represents a regular development in Northern English in which the Old English long vowel /ɑː/ <ā> was broken into /ie/, /ia/ and so on. This explains the shift to yan and ane from the Old English ān, which is itself derived from the Proto-Germanic *ainaz.[10][11] Another example of this development is the Northern English word for 'home', hame, which has forms such as hyem, yem and yam, all deriving from the Old English hām.[12]
Note: Scots here means "Scots", not "Gaelic".
The Banach–Tarski paradox is a theorem in set-theoretic geometry, which states the following: Given a solid ball in three-dimensional space, there exists a decomposition of the ball into a finite number of disjoint subsets, which can then be put back together in a different way to yield two identical copies of the original ball. Indeed, the reassembly process involves only moving the pieces around and rotating them, without changing their original shape. However, the pieces themselves are not "solids" in the traditional sense, but infinite scatterings of points. The reconstruction can work with as few as five pieces.[1]
An alternative form of the theorem states that given any two "reasonable" solid objects (such as a small ball and a huge ball), the cut pieces of either one can be reassembled into the other. This is often stated informally as "a pea can be chopped up and reassembled into the Sun" and called the "pea and the Sun paradox".
The theorem is a veridical paradox: it contradicts basic geometric intuition, but is not false or self-contradictory. "Doubling the ball" by dividing it into parts and moving them around by rotations and translations, without any stretching, bending, or adding new points, seems to be impossible, since all these operations ought, intuitively speaking, to preserve the volume. The intuition that such operations preserve volumes is not mathematically absurd, and it is even included in the formal definition of volumes. However, this is not applicable here, because in this case it is impossible to define the volumes of the considered subsets. Reassembling them reproduces a set that has a volume, which happens to be different from the volume at the start.
Unlike most theorems in geometry, the mathematical proof of this result depends on the choice of axioms for set theory in a critical way. It can be proven using the axiom of choice, which allows for the construction of non-measurable sets, i.e., collections of points that do not have a volume in the ordinary sense, and whose construction requires an uncountable number of choices.[2]
It was shown in 2005 that the pieces in the decomposition can be chosen in such a way that they can be moved continuously into place without running into one another.[3]
As proved independently by Leroy[4] and Simpson,[5] the Banach–Tarski paradox does not violate volumes if one works with locales rather than topological spaces. In this abstract setting, it is possible to have subspaces that have no points but are still nonempty. The parts of the paradoxical decomposition intersect heavily in the sense of locales, so much so that some of these intersections should be given a positive mass. Once this hidden mass is taken into account, the theory of locales permits all subsets (and even all sublocales) of the Euclidean space to be satisfactorily measured.
In a paper published in 1924,[6] Stefan Banach and Alfred Tarski gave a construction of such a paradoxical decomposition, based on earlier work by Giuseppe Vitali concerning the unit interval and on the paradoxical decompositions of the sphere by Felix Hausdorff, and discussed a number of related questions concerning decompositions of subsets of Euclidean spaces in various dimensions. They proved the following more general statement, the strong form of the Banach–Tarski paradox:
Now let A be the original ball and B be the union of two translated copies of the original ball. Then the proposition means that the original ball A can be divided into a certain number of pieces and then be rotated and translated in such a way that the result is the whole set B, which contains two copies of A.
The strong form of the Banach–Tarski paradox is false in dimensions one and two, but Banach and Tarski showed that an analogous statement remains true if countably many subsets are allowed. The difference between dimensions 1 and 2 on the one hand, and 3 and higher on the other hand, is due to the richer structure of the group E(n) of Euclidean motions in 3 dimensions. For n = 1, 2 the group is solvable, but for n ≥ 3 it contains a free group with two generators. John von Neumann studied the properties of the group of equivalences that make a paradoxical decomposition possible, and introduced the notion of amenable groups. He also found a form of the paradox in the plane which uses area-preserving affine transformations in place of the usual congruences.
Tarski proved that amenable groups are precisely those for which no paradoxical decompositions exist. Since only free subgroups are needed in the Banach–Tarski paradox, this led to the long-standing von Neumann conjecture, which was disproved in 1980.
The Banach–Tarski paradox states that a ball in the ordinary Euclidean space can be doubled using only the operations of partitioning into subsets, replacing a set with a congruent set, and reassembling. Its mathematical structure is greatly elucidated by emphasizing the role played by the group of Euclidean motions and introducing the notions of equidecomposable sets and a paradoxical set. Suppose that G is a group acting on a set X. In the most important special case, X is an n-dimensional Euclidean space (for integral n), and G consists of all isometries of X, i.e. the transformations of X into itself that preserve the distances, usually denoted E(n). Two geometric figures that can be transformed into each other are called congruent, and this terminology will be extended to the general G-action. Two subsets A and B of X are called G-equidecomposable, or equidecomposable with respect to G, if A and B can be partitioned into the same finite number of respectively G-congruent pieces. This defines an equivalence relation among all subsets of X. Formally, if there exist non-empty sets A1, ..., Ak and B1, ..., Bk such that
A = A1 ∪ ⋯ ∪ Ak and B = B1 ∪ ⋯ ∪ Bk, with the Ai pairwise disjoint and the Bi pairwise disjoint,
and there exist elements g1, ..., gk ∈ G such that
gi(Ai) = Bi for each i,
then it can be said that A and B are G-equidecomposable using k pieces. If a set E has two disjoint subsets A and B such that A and E, as well as B and E, are G-equidecomposable, then E is called paradoxical.
Using this terminology, the Banach–Tarski paradox can be reformulated as follows:
In fact, there is a sharp result in this case, due to Raphael M. Robinson:[7] doubling the ball can be accomplished with five pieces, and fewer than five pieces will not suffice.
The strong version of the paradox claims:
While apparently more general, this statement is derived in a simple way from the doubling of a ball by using a generalization of the Bernstein–Schroeder theorem, due to Banach, which implies that if A is equidecomposable with a subset of B and B is equidecomposable with a subset of A, then A and B are equidecomposable.
The Banach–Tarski paradox can be put in context by pointing out that for two sets in the strong form of the paradox, there is always a bijective function that can map the points in one shape into the other in a one-to-one fashion. In the language of Georg Cantor's set theory, these two sets have equal cardinality. Thus, if one enlarges the group to allow arbitrary bijections of X, then all sets with non-empty interior become congruent. Likewise, one ball can be made into a larger or smaller ball by stretching, or in other words, by applying similarity transformations. Hence, if the group G is large enough, G-equidecomposable sets may be found whose "sizes" vary. Moreover, since a countable set can be made into two copies of itself, one might expect that using countably many pieces could somehow do the trick.
On the other hand, in the Banach–Tarski paradox, the number of pieces is finite and the allowed equivalences are Euclidean congruences, which preserve the volumes. Yet, somehow, they end up doubling the volume of the ball. While this is certainly surprising, some of the pieces used in the paradoxical decomposition are non-measurable sets, so the notion of volume (more precisely, Lebesgue measure) is not defined for them, and the partitioning cannot be accomplished in a practical way. In fact, the Banach–Tarski paradox demonstrates that it is impossible to find a finitely-additive measure (or a Banach measure) defined on all subsets of a Euclidean space of three (and greater) dimensions that is invariant with respect to Euclidean motions and takes the value one on a unit cube. In his later work, Tarski showed that, conversely, non-existence of paradoxical decompositions of this type implies the existence of a finitely-additive invariant measure.
The heart of the proof of the "doubling the ball" form of the paradox presented below is the remarkable fact that by a Euclidean isometry (and renaming of elements), one can divide a certain set (essentially, the surface of a unit sphere) into four parts, then rotate one of them to become itself plus two of the other parts. This follows rather easily from an F2-paradoxical decomposition of F2, the free group with two generators. Banach and Tarski's proof relied on an analogous fact discovered by Hausdorff some years earlier: the surface of a unit sphere in space is a disjoint union of three sets B, C, D and a countable set E such that, on the one hand, B, C, D are pairwise congruent, and on the other hand, B is congruent with the union of C and D. This is often called the Hausdorff paradox.
Banach and Tarski explicitly acknowledge Giuseppe Vitali's 1905 construction of the set bearing his name, Hausdorff's paradox (1914), and an earlier (1923) paper of Banach as the precursors to their work. Vitali's and Hausdorff's constructions depend on Zermelo's axiom of choice ("AC"), which is also crucial to the Banach–Tarski paper, both for proving their paradox and for the proof of another result:
They remark:
They point out that while the second result fully agrees with geometric intuition, its proof uses AC in an even more substantial way than the proof of the paradox. Thus Banach and Tarski imply that AC should not be rejected solely because it produces a paradoxical decomposition, for such an argument also undermines proofs of geometrically intuitive statements.
However, in 1949, A. P. Morse showed that the statement about Euclidean polygons can be proved in ZF set theory and thus does not require the axiom of choice.[8] In 1964, Paul Cohen proved that the axiom of choice is independent from ZF, that is, choice cannot be proved from ZF.[9] A weaker version of the axiom of choice is the axiom of dependent choice, DC, and it has been shown that DC is not sufficient for proving the Banach–Tarski paradox, that is,
Large amounts of mathematics use AC. As Stan Wagon points out at the end of his monograph, the Banach–Tarski paradox has been more significant for its role in pure mathematics than for foundational questions: it motivated a fruitful new direction for research, the amenability of groups, which has nothing to do with the foundational questions.
In 1991, using then-recent results by Matthew Foreman and Friedrich Wehrung,[11] Janusz Pawlikowski proved that the Banach–Tarski paradox follows from ZF plus the Hahn–Banach theorem.[12] The Hahn–Banach theorem does not rely on the full axiom of choice but can be proved using a weaker version of AC called the ultrafilter lemma.
Here a proof is sketched which is similar but not identical to that given by Banach and Tarski. Essentially, the paradoxical decomposition of the ball is achieved in four steps:
These steps are discussed in more detail below.
The free group with two generators a and b consists of all finite strings that can be formed from the four symbols a, a−1, b and b−1 such that no a appears directly next to an a−1 and no b appears directly next to a b−1. Two such strings can be concatenated and converted into a string of this type by repeatedly replacing the "forbidden" substrings with the empty string. For instance: abab−1a−1 concatenated with abab−1a yields abab−1a−1abab−1a, which contains the substring a−1a, and so gets reduced to abab−1bab−1a, which contains the substring b−1b, which gets reduced to abaab−1a. One can check that the set of those strings with this operation forms a group with identity element the empty string e. This group may be called F2.
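This reduction is easy to mechanize: scanning left to right with a stack cancels adjacent inverse pairs, and since cancellation in a free group is confluent, the result is the unique reduced form. A minimal Python sketch, encoding a−1 as "A" and b−1 as "B":

```python
INVERSE = {"a": "A", "A": "a", "b": "B", "B": "b"}

def reduce_word(word):
    """Return the reduced form of a word in F2, given as a string over a, A, b, B."""
    stack = []
    for symbol in word:
        if stack and stack[-1] == INVERSE[symbol]:
            stack.pop()          # cancel an adjacent x, x^-1 pair
        else:
            stack.append(symbol)
    return "".join(stack)

# abab^-1a^-1 concatenated with abab^-1a reduces to abaab^-1a:
print(reduce_word("abaBA" + "abaBa"))   # abaaBa
```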
The group F2 can be "paradoxically decomposed" as follows: Let S(a) be the subset of F2 consisting of all reduced strings that start with a, and define S(a−1), S(b) and S(b−1) similarly. Clearly,
F2 = {e} ∪ S(a) ∪ S(a−1) ∪ S(b) ∪ S(b−1),
but also
F2 = aS(a−1) ∪ S(a),
and
F2 = bS(b−1) ∪ S(b),
where the notation aS(a−1) means take all the strings in S(a−1) and concatenate them on the left with a.
This is at the core of the proof. For example, the set aS(a−1) contains the product a·a−1b which, because of the rule that a must not appear next to a−1, reduces to the string b. Similarly, aS(a−1) contains all the strings that start with a−1 (for example, the product a·a−1a−1, which reduces to a−1). In this way, aS(a−1) contains all the strings that start with b, b−1 and a−1, as well as the empty string e.
The group F2 has been cut into four pieces (plus the singleton {e}), then two of them "shifted" by multiplying with a or b, then "reassembled" as two pieces to make one copy of F2 and the other two to make another copy of F2. That is exactly what is intended to be done to the ball.
In order to find a free group of rotations of 3D space, i.e. one that behaves just like (or "is isomorphic to") the free group F2, two orthogonal axes are taken (e.g. the x and z axes). Then, A is taken to be a rotation of θ = arccos(1/3) about the x axis, and B to be a rotation of θ about the z axis (there are many other suitable pairs of irrational multiples of π that could be used here as well).[13]
The group of rotations generated by A and B will be called H.
Let ω be an element of H that starts with a positive rotation about the z axis, that is, an element of the form ω = ... b^k3 a^k2 b^k1 with k1 > 0, k2, k3, ..., kn ≠ 0, n ≥ 1. It can be shown by induction that ω maps the point (1, 0, 0) to (k/3^N, l√2/3^N, m/3^N), for some integers k, l, m and some natural number N. Analyzing k, l and m modulo 3, one can show that l ≠ 0. The same argument repeated (by symmetry of the problem) is valid when ω starts with a negative rotation about the z axis, or a rotation about the x axis. This shows that if ω is given by a non-trivial word in A and B, then ω ≠ e. Therefore, the group H is a free group, isomorphic to F2.
The two rotations behave just like the elements a and b in the group F2: there is now a paradoxical decomposition of H.
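The induction above can be checked numerically for any particular word. A sketch using NumPy (the word bab is an arbitrary example):

```python
import numpy as np

theta = np.arccos(1.0 / 3.0)
c, s = np.cos(theta), np.sin(theta)      # c = 1/3, s = 2*sqrt(2)/3

A = np.array([[1, 0, 0],                 # rotation by theta about the x axis
              [0, c, -s],
              [0, s, c]])
B = np.array([[c, -s, 0],                # rotation by theta about the z axis
              [s, c, 0],
              [0, 0, 1]])

# The word bab sends (1, 0, 0) to (k/3^N, l*sqrt(2)/3^N, m/3^N) with N = 3.
p = B @ A @ B @ np.array([1.0, 0.0, 0.0])
print(p)                                       # [-0.1851...  0.4190...  0.8888...]
print(p * 27 / np.array([1, np.sqrt(2), 1]))   # k = -5, l = 8, m = 24; l != 0
```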
This step cannot be performed in two dimensions, since it involves rotations in three dimensions. If two nontrivial rotations are taken about the same axis, the resulting group is either Z (if the ratio between the two angles is rational) or the free abelian group over two elements; either way, it does not have the property required in step 1.
An alternative arithmetic proof of the existence of free groups in some special orthogonal groups, using integral quaternions, leads to paradoxical decompositions of the rotation group.[14]
The unit sphere S2 is partitioned into orbits by the action of our group H: two points belong to the same orbit if and only if there is a rotation in H which moves the first point into the second. (Note that the orbit of a point is a dense set in S2.) The axiom of choice can be used to pick exactly one point from every orbit; collect these points into a set M. The action of H on a given orbit is free and transitive, and so
each orbit can be identified with H. In other words, every point in S2 can be reached in exactly one way by applying the proper rotation from H to the proper element from M. Because of this, the paradoxical decomposition of H yields a paradoxical decomposition of S2 into four pieces A1, A2, A3, A4 as follows:
where we define
and likewise for the other sets, and where we define
(The five "paradoxical" parts ofF2were not used directly, as they would leaveMas an extra piece after doubling, owing to the presence of the singleton {e}.)
The (majority of the) sphere has now been divided into four sets (each one dense on the sphere), and when two of these are rotated, the result is double of what was had before:
Finally, connect every point onS2with a half-open segment to the origin; the paradoxical decomposition ofS2then yields a paradoxical decomposition of the solid unit ball minus the point at the ball's center. (This center point needs a bit more care; see below.)
N.B. This sketch glosses over some details. One has to be careful about the set of points on the sphere which happen to lie on the axis of some rotation in H. However, there are only countably many such points, and like the case of the point at the center of the ball, it is possible to patch the proof to account for them all. (See below.)
In Step 3, the sphere was partitioned into orbits of our group H. To streamline the proof, the discussion of points that are fixed by some rotation was omitted; since the paradoxical decomposition of F2 relies on shifting certain subsets, the fact that some points are fixed might cause some trouble. Since any rotation of S2 (other than the null rotation) has exactly two fixed points, and since H, which is isomorphic to F2, is countable, there are countably many points of S2 that are fixed by some rotation in H. Denote this set of fixed points as D. Step 3 proves that S2 − D admits a paradoxical decomposition.
What remains to be shown is the Claim: S2 − D is equidecomposable with S2.
Proof. Let λ be some line through the origin that does not intersect any point in D. This is possible since D is countable. Let J be the set of angles α such that for some natural number n, and some P in D, r(nα)P is also in D, where r(nα) is a rotation about λ of nα. Then J is countable. So there exists an angle θ not in J. Let ρ be the rotation about λ by θ. Then ρ acts on S2 with no fixed points in D, i.e., ρ^n(D) is disjoint from D, and for natural m < n, ρ^n(D) is disjoint from ρ^m(D). Let E be the disjoint union of ρ^n(D) over n = 0, 1, 2, .... Then S2 = E ∪ (S2 − E) ~ ρ(E) ∪ (S2 − E) = (E − D) ∪ (S2 − E) = S2 − D, where ~ denotes "is equidecomposable to".
For step 4, it has already been shown that the ball minus a point admits a paradoxical decomposition; it remains to be shown that the ball minus a point is equidecomposable with the ball. Consider a circle within the ball, containing the point at the center of the ball. Using an argument like that used to prove the Claim, one can see that the full circle is equidecomposable with the circle minus the point at the ball's center. (Basically, a countable set of points on the circle can be rotated to give itself plus one more point.) Note that this involves the rotation about a point other than the origin, so the Banach–Tarski paradox involves isometries of Euclidean 3-space rather than justSO(3).
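The point-absorption trick used twice above is just Hilbert's hotel along a single orbit. In the toy model below (Python; the integer encoding is an illustrative assumption: exponent n stands for the point ρ^n(p) on an orbit that never repeats, which is what an irrational rotation angle guarantees), applying ρ once maps the full orbit exactly onto the orbit with p removed:

```python
N = 1000                                  # finite window onto the infinite orbit
orbit = set(range(N))                     # n encodes the point ρ^n(p)
punctured = orbit - {0}                   # the circle minus the point p

# Apply ρ to the full orbit: ρ^n(p) ↦ ρ^(n+1)(p), i.e. n ↦ n + 1.
image = {n + 1 for n in orbit if n + 1 < N}   # clip at the window edge
assert image == {n for n in punctured if n < N}
print("ρ(E) = E − {p}: one rotation absorbs the missing point")
```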
Use is made of the fact that if A ~ B and B ~ C, then A ~ C. The decomposition of A into C can be done using a number of pieces equal to the product of the numbers needed for taking A into B and for taking B into C.

The proof sketched above requires 2 × 4 × 2 + 8 = 24 pieces: a factor of 2 to remove fixed points, a factor of 4 from step 1, a factor of 2 to recreate fixed points, and 8 for the center point of the second ball. But in step 1, when moving {e} and all strings of the form a^n into S(a⁻¹), do this to all orbits except one. Move {e} of this last orbit to the center point of the second ball. This brings the total down to 16 + 1 pieces. With more algebra, one can also decompose fixed orbits into 4 sets as in step 1. This gives 5 pieces and is the best possible.
Using the Banach–Tarski paradox, it is possible to obtain k copies of a ball in Euclidean n-space from one, for any integers n ≥ 3 and k ≥ 1, i.e. a ball can be cut into k pieces so that each of them is equidecomposable to a ball of the same size as the original. Using the fact that the free group F2 of rank 2 admits a free subgroup of countably infinite rank, a similar proof yields that the unit sphere S^(n−1) can be partitioned into countably infinitely many pieces, each of which is equidecomposable (with two pieces) to the S^(n−1) using rotations. By using analytic properties of the rotation group SO(n), which is a connected analytic Lie group, one can further prove that the sphere S^(n−1) can be partitioned into as many pieces as there are real numbers (that is, 2^ℵ0 pieces), so that each piece is equidecomposable with two pieces to S^(n−1) using rotations. These results then extend to the unit ball deprived of the origin. A 2010 article by Valeriy Churkin gives a new proof of the continuous version of the Banach–Tarski paradox.[15]

In the Euclidean plane, two figures that are equidecomposable with respect to the group of Euclidean motions are necessarily of the same area, and therefore, a paradoxical decomposition of a square or disk of Banach–Tarski type that uses only Euclidean congruences is impossible. A conceptual explanation of the distinction between the planar and higher-dimensional cases was given by John von Neumann: unlike the group SO(3) of rotations in three dimensions, the group E(2) of Euclidean motions of the plane is solvable, which implies the existence of a finitely-additive measure on E(2) and R2 which is invariant under translations and rotations, and rules out paradoxical decompositions of non-negligible sets. Von Neumann then posed the following question: can such a paradoxical decomposition be constructed if one allows a larger group of equivalences?

It is clear that if one permits similarities, any two squares in the plane become equivalent even without further subdivision. This motivates restricting one's attention to the group SA2 of area-preserving affine transformations. Since the area is preserved, any paradoxical decomposition of a square with respect to this group would be counterintuitive for the same reasons as the Banach–Tarski decomposition of a ball. In fact, the group SA2 contains as a subgroup the special linear group SL(2, R), which in its turn contains the free group F2 with two generators as a subgroup. This makes it plausible that the proof of the Banach–Tarski paradox can be imitated in the plane. The main difficulty here lies in the fact that the unit square is not invariant under the action of the linear group SL(2, R), hence one cannot simply transfer a paradoxical decomposition from the group to the square, as in the third step of the above proof of the Banach–Tarski paradox. Moreover, the fixed points of the group present difficulties (for example, the origin is fixed under all linear transformations). This is why von Neumann used the larger group SA2 including the translations, and he constructed a paradoxical decomposition of the unit square with respect to the enlarged group (in 1929). Applying the Banach–Tarski method, the paradox for the square can be strengthened as follows: any two bounded subsets of the Euclidean plane with non-empty interiors are equidecomposable with respect to the area-preserving affine maps.
As von Neumann notes:[16]
To explain further, the question of whether a finitely additive measure (that is preserved under certain transformations) exists or not depends on what transformations are allowed. The Banach measure of sets in the plane, which is preserved by translations and rotations, is not preserved by non-isometric transformations even when they do preserve the area of polygons. The points of the plane (other than the origin) can be divided into two dense sets which may be called A and B. If the A points of a given polygon are transformed by a certain area-preserving transformation and the B points by another, both sets can become subsets of the A points in two new polygons. The new polygons have the same area as the old polygon, but the two transformed sets cannot have the same measure as before (since they contain only part of the A points), and therefore there is no measure that "works".

The class of groups isolated by von Neumann in the course of study of the Banach–Tarski phenomenon turned out to be very important for many areas of mathematics: these are amenable groups, or groups with an invariant mean, and include all finite and all solvable groups. Generally speaking, paradoxical decompositions arise when the group used for equivalences in the definition of equidecomposability is not amenable. | https://en.wikipedia.org/wiki/Banach%E2%80%93Tarski_paradox
Galileo's paradox is a demonstration of one of the surprising properties of infinite sets. In his final scientific work, Two New Sciences, Galileo Galilei made apparently contradictory statements about the positive integers. First, a square is an integer which is the square of an integer. Some numbers are squares, while others are not; therefore, all the numbers, including both squares and non-squares, must be more numerous than just the squares. And yet, for every number there is exactly one square; hence, there cannot be more of one than of the other. This is an early use, though not the first, of the idea of one-to-one correspondence in the context of infinite sets.

Galileo concluded that the ideas of less, equal, and greater apply to finite quantities but not to infinite quantities. During the nineteenth century, Cantor found a framework in which this restriction is not necessary; it is possible to define comparisons amongst infinite sets in a meaningful way (by which definition the two sets, integers and squares, have "the same size"), and that by this definition some infinite sets are strictly larger than others.
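Cantor's resolution can be made concrete: the map n ↦ n² is a bijection from the positive integers onto the squares, and pairing the two sets off leaves nothing over on either side. A minimal Python illustration on a finite prefix:

```python
# Pair each positive integer with its square: no integer is skipped and
# no square is used twice, which is exactly a one-to-one correspondence.
pairs = [(n, n * n) for n in range(1, 11)]
assert len({s for _, s in pairs}) == len(pairs)   # injective on this prefix
print(pairs)   # [(1, 1), (2, 4), (3, 9), ..., (10, 100)]
```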
The ideas were not new with Galileo, but his name has come to be associated with them. In particular, Duns Scotus, about 1302, compared even numbers to the whole of numbers.[1]

The relevant section of Two New Sciences is excerpted below:[2] | https://en.wikipedia.org/wiki/Galileo%27s_paradox
This article contains a discussion of paradoxes of set theory. As with most mathematical paradoxes, they generally reveal surprising and counter-intuitive mathematical results, rather than actual logical contradictions within modern axiomatic set theory.

Set theory as conceived by Georg Cantor assumes the existence of infinite sets. As this assumption cannot be proved from first principles it has been introduced into axiomatic set theory by the axiom of infinity, which asserts the existence of the set N of natural numbers. Every infinite set which can be enumerated by natural numbers is the same size (cardinality) as N, and is said to be countable. Examples of countably infinite sets are the natural numbers, the even numbers, the prime numbers, and also all the rational numbers, i.e., the fractions. These sets have in common the cardinal number |N| = ℵ0 (aleph-nought), a number greater than every natural number.

Cardinal numbers can be defined as follows. Define two sets to have the same size by: there exists a bijection between the two sets (a one-to-one correspondence between the elements). Then a cardinal number is, by definition, a class consisting of all sets of the same size. To have the same size is an equivalence relation, and the cardinal numbers are the equivalence classes.

Besides the cardinality, which describes the size of a set, ordered sets also form a subject of set theory. The axiom of choice guarantees that every set can be well-ordered, which means that a total order can be imposed on its elements such that every nonempty subset has a first element with respect to that order. The order of a well-ordered set is described by an ordinal number. For instance, 3 is the ordinal number of the set {0, 1, 2} with the usual order 0 < 1 < 2; and ω is the ordinal number of the set of all natural numbers ordered the usual way. Neglecting the order, we are left with the cardinal number |N| = |ω| = ℵ0.

Ordinal numbers can be defined with the same method used for cardinal numbers. Define two well-ordered sets to have the same order type by: there exists a bijection between the two sets respecting the order: smaller elements are mapped to smaller elements. Then an ordinal number is, by definition, a class consisting of all well-ordered sets of the same order type. To have the same order type is an equivalence relation on the class of well-ordered sets, and the ordinal numbers are the equivalence classes.
Two sets of the same order type have the same cardinality. The converse is not true in general for infinite sets: it is possible to impose different well-orderings on the set of natural numbers that give rise to different ordinal numbers.
There is a natural ordering on the ordinals, which is itself a well-ordering. Given any ordinal α, one can consider the set of all ordinals less than α. This set turns out to have ordinal number α. This observation is used for a different way of introducing the ordinals, in which an ordinal is equated with the set of all smaller ordinals. This form of ordinal number is thus a canonical representative of the earlier form of equivalence class.
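For finite ordinals this identification is easy to exhibit directly. A small Python sketch of the von Neumann construction, where each number is literally the set of the smaller ones:

```python
def von_neumann(n: int) -> frozenset:
    """The finite ordinal n as the set {0, 1, ..., n-1} of all smaller ordinals."""
    if n == 0:
        return frozenset()            # 0 is the empty set
    prev = von_neumann(n - 1)
    return prev | {prev}              # n = (n-1) ∪ {n-1}

three = von_neumann(3)
# 3 = {0, 1, 2}; the order relation "α < β" becomes membership "α ∈ β".
assert von_neumann(2) in three and von_neumann(0) in three
assert len(three) == 3
```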
All subsets of a set S (all possible choices of its elements) form the power set P(S). Georg Cantor proved that the power set is always larger than the set, i.e., |P(S)| > |S|. A special case of Cantor's theorem is that the set of all real numbers R cannot be enumerated by natural numbers, that is, R is uncountable: |R| > |N|.
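Cantor's diagonal argument behind this theorem is finitary enough to replay exhaustively on a small set: for every function f from S to P(S), the set D = {x ∈ S : x ∉ f(x)} is missed by f. A Python sketch (the brute-force enumeration is for illustration only; the theorem itself needs no search):

```python
from itertools import chain, combinations, product

def powerset(s):
    s = list(s)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

S = [0, 1, 2]
subsets = powerset(S)
for values in product(subsets, repeat=len(S)):    # every function f: S -> P(S)
    f = dict(zip(S, values))
    diagonal = frozenset(x for x in S if x not in f[x])
    assert diagonal not in f.values()             # D is never in the image
print("no map from S onto P(S): |P(S)| > |S|")
```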
Instead of relying on ambiguous descriptions such as "that which cannot be enlarged" or "increasing without bound", set theory provides definitions for the term infinite set to give an unambiguous meaning to phrases such as "the set of all natural numbers is infinite". Just as for finite sets, the theory makes further definitions which allow us to consistently compare two infinite sets as regards whether one set is "larger than", "smaller than", or "the same size as" the other. But not every intuition regarding the size of finite sets applies to the size of infinite sets, leading to various apparently paradoxical results regarding enumeration, size, measure and order.

Before set theory was introduced, the notion of the size of a set had been problematic. It had been discussed by Galileo Galilei and Bernard Bolzano, among others. Are there as many natural numbers as squares of natural numbers when measured by the method of enumeration?

The issue can be settled by defining the size of a set in terms of its cardinality. Since a bijection exists between the two sets, they have the same cardinality by definition.

Hilbert's paradox of the Grand Hotel illustrates more paradoxes of enumeration.

"I see it but I don't believe," Cantor wrote to Richard Dedekind after proving that the set of points of a square has the same cardinality as that of the points on just an edge of the square: the cardinality of the continuum.

This demonstrates that the "size" of sets as defined by cardinality alone is not the only useful way of comparing sets. Measure theory provides a more nuanced theory of size that conforms to our intuition that length and area are incompatible measures of size.
The evidence strongly suggests that Cantor was quite confident in the result itself and that his comment to Dedekind refers instead to his then-still-lingering concerns about the validity of his proof of it.[1] Nevertheless, Cantor's remark would also serve nicely to express the surprise that so many mathematicians after him have experienced on first encountering a result that is so counter-intuitive.

In 1904 Ernst Zermelo proved by means of the axiom of choice (which was introduced for this reason) that every set can be well-ordered. In 1963 Paul J. Cohen showed that in Zermelo–Fraenkel set theory without the axiom of choice it is not possible to prove the existence of a well-ordering of the real numbers.

However, the ability to well order any set allows certain constructions to be performed that have been called paradoxical. One example is the Banach–Tarski paradox, a theorem widely considered to be nonintuitive. It states that it is possible to decompose a ball of a fixed radius into a finite number of pieces and then move and reassemble those pieces by ordinary translations and rotations (with no scaling) to obtain two copies from the one original copy. The construction of these pieces requires the axiom of choice; the pieces are not simple regions of the ball, but complicated subsets.

In set theory, an infinite set is not considered to be created by some mathematical process such as "adding one element" that is then carried out "an infinite number of times". Instead, a particular infinite set (such as the set of all natural numbers) is said to already exist, "by fiat", as an assumption or an axiom. Given this infinite set, other infinite sets are then proven to exist as well, as a logical consequence. But it is still a natural philosophical question to contemplate some physical action that actually completes after an infinite number of discrete steps; and the interpretation of this question using set theory gives rise to the paradoxes of the supertask.

Tristram Shandy, the hero of a novel by Laurence Sterne, writes his autobiography so conscientiously that it takes him one year to lay down the events of one day. If he is mortal he can never terminate; but if he lived forever then no part of his diary would remain unwritten, for to each day of his life a year devoted to that day's description would correspond.
An intensified version of this type of paradox shifts the infinitely remote finish to a finite time. Fill a huge reservoir with balls enumerated by numbers 1 to 10 and take off ball number 1. Then add the balls enumerated by numbers 11 to 20 and take off number 2. Continue to add balls enumerated by numbers 10n − 9 to 10n and to remove ball number n for all natural numbers n = 3, 4, 5, .... Let the first transaction last half an hour, let the second transaction last a quarter of an hour, and so on, so that all transactions are finished after one hour. Obviously the set of balls in the reservoir increases without bound. Nevertheless, after one hour the reservoir is empty because for every ball the time of removal is known.

The paradox is further increased by the significance of the removal sequence. If the balls are not removed in the sequence 1, 2, 3, ... but in the sequence 1, 11, 21, ..., then after one hour infinitely many balls populate the reservoir, although the same amount of material as before has been moved.
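The dependence on the removal schedule is easy to see in a step-by-step simulation. The sketch below (Python; helper names are hypothetical) runs the first N transactions under both schedules: the counts agree at every finite stage, yet under the first schedule every fixed ball is eventually removed, while under the second, ball 2 stays forever:

```python
def run(N, removed_at_step):
    """Step n adds balls 10n-9 .. 10n, then removes ball removed_at_step(n)."""
    reservoir = set()
    for n in range(1, N + 1):
        reservoir |= set(range(10 * n - 9, 10 * n + 1))
        reservoir.discard(removed_at_step(n))
    return reservoir

N = 1000
a = run(N, lambda n: n)             # remove 1, 2, 3, ...
b = run(N, lambda n: 10 * n - 9)    # remove 1, 11, 21, ...

assert len(a) == len(b) == 9 * N    # identical counts at every finite stage
assert min(a) == N + 1              # schedule (a): ball k is gone by step k
assert 2 in b                       # schedule (b): ball 2 is never removed
```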
For all its usefulness in resolving questions regarding infinite sets, naive set theory has some fatal flaws. In particular, it is prey to logical paradoxes such as those exposed by Russell's paradox. The discovery of these paradoxes revealed that not all sets which can be described in the language of naive set theory can actually be said to exist without creating a contradiction. The 20th century saw a resolution to these paradoxes in the development of the various axiomatizations of set theories such as ZFC and NBG in common use today. However, the gap between the very formalized and symbolic language of these theories and our typical informal use of mathematical language results in various paradoxical situations, as well as the philosophical question of exactly what it is that such formal systems actually propose to be talking about.

In 1897 the Italian mathematician Cesare Burali-Forti discovered that there is no set containing all ordinal numbers. As every ordinal number is defined by a set of smaller ordinal numbers, the well-ordered set Ω of all ordinal numbers (if it exists) fits the definition and is itself an ordinal. On the other hand, no ordinal number can contain itself, so Ω cannot be an ordinal. Therefore, the set of all ordinal numbers cannot exist.

By the end of the 19th century Cantor was aware of the non-existence of the set of all cardinal numbers and the set of all ordinal numbers. In letters to David Hilbert and Richard Dedekind he wrote about inconsistent sets, the elements of which cannot be thought of as being all together, and he used this result to prove that every consistent set has a cardinal number.

After all this, the version of the "set of all sets" paradox conceived by Bertrand Russell in 1903 led to a serious crisis in set theory. Russell recognized that the statement x = x is true for every set, and thus the set of all sets is defined by {x | x = x}. In 1906 he constructed several paradox sets, the most famous of which is the set of all sets which do not contain themselves. Russell himself explained this abstract idea by means of some very concrete pictures. One example, known as the Barber paradox, states: The male barber who shaves all and only men who do not shave themselves has to shave himself only if he does not shave himself.

There are close similarities between Russell's paradox in set theory and the Grelling–Nelson paradox, which demonstrates a paradox in natural language.

In 1905, the Hungarian mathematician Julius König published a paradox based on the fact that there are only countably many finite definitions. If we imagine the real numbers as a well-ordered set, those real numbers which can be finitely defined form a subset. Hence in this well-order there should be a first real number that is not finitely definable. This is paradoxical, because this real number has just been finitely defined by the last sentence. This leads to a contradiction in naive set theory.
This paradox is avoided in axiomatic set theory. Although it is possible to represent a proposition about a set as a set, by a system of codes known as Gödel numbers, there is no formula φ(a, x) in the language of set theory which holds exactly when a is a code for a finite proposition about a set, x is a set, and the proposition coded by a holds for x. This result is known as Tarski's indefinability theorem; it applies to a wide class of formal systems including all commonly studied axiomatizations of set theory.

In the same year the French mathematician Jules Richard used a variant of Cantor's diagonal method to obtain another contradiction in naive set theory. Consider the set A of all finite agglomerations of words. The set E of all finite definitions of real numbers is a subset of A. As A is countable, so is E. Let p be the nth decimal of the nth real number defined by the set E; we form a number N having zero for the integral part and p + 1 for the nth decimal if p is not equal either to 8 or 9, and unity if p is equal to 8 or 9. This number N is not defined by the set E because it differs from any finitely defined real number, namely from the nth number by the nth digit. But N has been defined by a finite number of words in this paragraph. It should therefore be in the set E. That is a contradiction.
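The diagonal step in Richard's construction is mechanical and can be sketched for a finite list of expansions (Python; the sample digit strings are arbitrary placeholders):

```python
def diagonalize(expansions):
    """Richard's rule: the nth decimal of N is p + 1 if the nth digit p of
    the nth listed number is neither 8 nor 9, and 1 otherwise."""
    digits = []
    for n, e in enumerate(expansions):
        p = int(e[n])
        digits.append(str(p + 1) if p < 8 else "1")
    return "0." + "".join(digits)

listed = ["1415926535", "7182818284", "4142135623"]   # arbitrary digit strings
print(diagonalize(listed))   # 0.225, which differs from entry n at digit n
```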
As with König's paradox, this paradox cannot be formalized in axiomatic set theory because it requires the ability to tell whether a description applies to a particular set (or, equivalently, to tell whether a formula is actually the definition of a single set).
Based upon work of the German mathematician Leopold Löwenheim (1915) the Norwegian logician Thoralf Skolem showed in 1922 that every consistent theory of first-order predicate calculus, such as set theory, has an at most countable model. However, Cantor's theorem proves that there are uncountable sets. The root of this seeming paradox is that the countability or noncountability of a set is not always absolute, but can depend on the model in which the cardinality is measured. It is possible for a set to be uncountable in one model of set theory but countable in a larger model (because the bijections that establish countability are in the larger model but not the smaller one). | https://en.wikipedia.org/wiki/Paradoxes_of_set_theory
In mathematics, the epsilon numbers are a collection of transfinite numbers whose defining property is that they are fixed points of an exponential map. Consequently, they are not reachable from 0 via a finite series of applications of the chosen exponential map and of "weaker" operations like addition and multiplication. The original epsilon numbers were introduced by Georg Cantor in the context of ordinal arithmetic; they are the ordinal numbers ε that satisfy the equation

ε = ω^ε,

in which ω is the smallest infinite ordinal.

The least such ordinal is ε0 (pronounced epsilon nought (chiefly British), epsilon naught (chiefly American), or epsilon zero), which can be viewed as the "limit" obtained by transfinite recursion from a sequence of smaller limit ordinals:

ε0 = sup{ω, ω^ω, ω^(ω^ω), ω^(ω^(ω^ω)), …},

where sup is the supremum, which is equivalent to set union in the case of the von Neumann representation of ordinals.

Larger ordinal fixed points of the exponential map are indexed by ordinal subscripts, resulting in ε1, ε2, …, ε_ω, ε_(ω+1), …, ε_(ε0), …, ε_(ε1), …, ε_(ε_(ε_…)), …, ζ0 = φ2(0).[1] The ordinal ε0 is still countable, as is any epsilon number whose index is countable. Uncountable ordinals also exist, along with uncountable epsilon numbers whose index is an uncountable ordinal.
The smallest epsilon number ε0 appears in many induction proofs, because for many purposes transfinite induction is only required up to ε0 (as in Gentzen's consistency proof and the proof of Goodstein's theorem). Its use by Gentzen to prove the consistency of Peano arithmetic, along with Gödel's second incompleteness theorem, shows that Peano arithmetic cannot prove the well-foundedness of this ordering (it is in fact the least ordinal with this property, and as such, in proof-theoretic ordinal analysis, is used as a measure of the strength of the theory of Peano arithmetic).
Many larger epsilon numbers can be defined using the Veblen function.

A more general class of epsilon numbers has been identified by John Horton Conway and Donald Knuth in the surreal number system, consisting of all surreals that are fixed points of the base ω exponential map x ↦ ω^x.

Hessenberg (1906) defined gamma numbers (see additively indecomposable ordinal) to be numbers γ > 0 such that α + γ = γ whenever α < γ, and delta numbers (see multiplicatively indecomposable ordinal) to be numbers δ > 1 such that αδ = δ whenever 0 < α < δ, and epsilon numbers to be numbers ε > 2 such that α^ε = ε whenever 1 < α < ε. His gamma numbers are those of the form ω^β, and his delta numbers are those of the form ω^(ω^β).
The standard definition of ordinal exponentiation with base α is:

α^0 = 1,
α^(β+1) = α^β · α,
α^β = sup{α^δ : 0 < δ < β} when β is a limit ordinal.

From this definition, it follows that for any fixed ordinal α > 1, the mapping β ↦ α^β is a normal function, so it has arbitrarily large fixed points by the fixed-point lemma for normal functions. When α = ω, these fixed points are precisely the ordinal epsilon numbers.
The next epsilon number after ε0 is

ε1 = sup{ε0 + 1, ω^(ε0+1), ω^(ω^(ε0+1)), …}.

Because ω^(ε0+1) = ω^(ε0) · ω = ε0 · ω, a different sequence with the same supremum, ε1, is obtained by starting from 0 and exponentiating with base ε0 instead:

ε1 = sup{0, 1, ε0, ε0^(ε0), ε0^(ε0^(ε0)), …}.

Generally, the epsilon number ε_β indexed by any ordinal that has an immediate predecessor β − 1 can be constructed similarly.
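Concretely, and consistent with the recursion just used for ε1, the successor-index case can be written as (a standard rendering, not a quotation from the original text):

```latex
\varepsilon_{\beta+1}
  = \sup\{\varepsilon_\beta + 1,\ \omega^{\varepsilon_\beta + 1},\
          \omega^{\omega^{\varepsilon_\beta + 1}},\ \dots\}
  = \sup\{\varepsilon_\beta,\ \varepsilon_\beta^{\varepsilon_\beta},\
          \varepsilon_\beta^{\varepsilon_\beta^{\varepsilon_\beta}},\ \dots\}
```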
In particular, whether or not the index β is a limit ordinal, ε_β is a fixed point not only of base ω exponentiation but also of base δ exponentiation for all ordinals 1 < δ < ε_β.

Since the epsilon numbers are an unbounded subclass of the ordinal numbers, they are enumerated using the ordinal numbers themselves. For any ordinal number β, ε_β is the least epsilon number (fixed point of the exponential map) not already in the set {ε_δ : δ < β}. It might appear that this is the non-constructive equivalent of the constructive definition using iterated exponentiation; but the two definitions are equally non-constructive at steps indexed by limit ordinals, which represent transfinite recursion of a higher order than taking the supremum of an exponential series.
The following facts about epsilon numbers are straightforward to prove:
Any epsilon number ε has Cantor normal form ε = ω^ε, which means that the Cantor normal form is not very useful for epsilon numbers. The ordinals less than ε0, however, can be usefully described by their Cantor normal forms, which leads to a representation of ε0 as the ordered set of all finite rooted trees, as follows. Any ordinal α < ε0 has Cantor normal form α = ω^(β1) + ω^(β2) + ⋯ + ω^(βk) where k is a natural number and β1, …, βk are ordinals with α > β1 ≥ ⋯ ≥ βk, uniquely determined by α. Each of the ordinals β1, …, βk in turn has a similar Cantor normal form. We obtain the finite rooted tree representing α by joining the roots of the trees representing β1, …, βk to a new root. (This has the consequence that the number 0 is represented by a single root while the number 1 = ω^0 is represented by a tree containing a root and a single leaf.) An order on the set of finite rooted trees is defined recursively: we first order the subtrees joined to the root in decreasing order, and then use lexicographic order on these ordered sequences of subtrees. In this way the set of all finite rooted trees becomes a well-ordered set which is order isomorphic to ε0.
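This tree order is short to implement. In the sketch below (Python; the encoding is an illustrative assumption: an ordinal below ε0 is the tuple of the trees of its Cantor-normal-form exponents, kept in decreasing order), the comparison is exactly the lexicographic rule just described:

```python
def cmp_tree(x, y):
    """Compare ordinals < ε0 encoded as trees of CNF exponents.
    Subtrees are assumed sorted in decreasing order; comparison is
    lexicographic, with a proper prefix counting as smaller."""
    for a, b in zip(x, y):
        c = cmp_tree(a, b)
        if c != 0:
            return c
    return (len(x) > len(y)) - (len(x) < len(y))

ZERO = ()                  # 0: a bare root
ONE = (ZERO,)              # 1 = ω^0: root with a single leaf
TWO = (ZERO, ZERO)         # 2 = ω^0 + ω^0
OMEGA = (ONE,)             # ω = ω^1
assert cmp_tree(TWO, OMEGA) < 0           # 2 < ω
assert cmp_tree((ONE, ZERO), OMEGA) > 0   # ω + 1 > ω
```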
This representation is related to the proof of the hydra theorem, which represents decreasing sequences of ordinals as a graph-theoretic game.

The fixed points of the "epsilon mapping" x ↦ ε_x form a normal function, whose fixed points form a normal function; this is known as the Veblen hierarchy (the Veblen functions with base φ0(α) = ω^α). In the notation of the Veblen hierarchy, the epsilon mapping is φ1, and its fixed points are enumerated by φ2 (see ordinal collapsing function).

Continuing in this vein, one can define maps φ_α for progressively larger ordinals α (including, by this rarefied form of transfinite recursion, limit ordinals), with progressively larger least fixed points φ_(α+1)(0). The least ordinal not reachable from 0 by this procedure, i.e., the least ordinal α for which φ_α(0) = α, or equivalently the first fixed point of the map α ↦ φ_α(0), is the Feferman–Schütte ordinal Γ0. In a set theory where such an ordinal can be proved to exist, one has a map Γ that enumerates the fixed points Γ0, Γ1, Γ2, ... of α ↦ φ_α(0); these are all still epsilon numbers, as they lie in the image of φ_β for every β ≤ Γ0, including of the map φ1 that enumerates epsilon numbers.

In On Numbers and Games, the classic exposition on surreal numbers, John Horton Conway provided a number of examples of concepts that had natural extensions from the ordinals to the surreals. One such function is the ω-map n ↦ ω^n; this mapping generalises naturally to include all surreal numbers in its domain, which in turn provides a natural generalisation of the Cantor normal form for surreal numbers.
It is natural to consider any fixed point of this expanded map to be an epsilon number, whether or not it happens to be strictly an ordinal number. Some examples of non-ordinal epsilon numbers are the values of this map at non-ordinal surreal indices, such as ε_(−1) and ε_(1/2).
There is a natural way to define ε_n for every surreal number n, and the map remains order-preserving. Conway goes on to define a broader class of "irreducible" surreal numbers that includes the epsilon numbers as a particularly interesting subclass. | https://en.wikipedia.org/wiki/Epsilon_numbers_(mathematics)
In the mathematical discipline of set theory, there are many ways of describing specific countable ordinals. The smallest ones can be usefully and non-circularly expressed in terms of their Cantor normal forms. Beyond that, many ordinals of relevance to proof theory still have computable ordinal notations (see ordinal analysis). However, it is not possible to decide effectively whether a given putative ordinal notation is a notation or not (for reasons somewhat analogous to the unsolvability of the halting problem); various more-concrete ways of defining ordinals that definitely have notations are available.

Since there are only countably many notations, all ordinals with notations are exhausted well below the first uncountable ordinal ω1; their supremum is called Church–Kleene ω1 or ω1^CK (not to be confused with the first uncountable ordinal, ω1), described below. Ordinal numbers below ω1^CK are the recursive ordinals (see below). Countable ordinals larger than this may still be defined, but do not have notations.

Due to the focus on countable ordinals, ordinal arithmetic is used throughout, except where otherwise noted. The ordinals described here are not as large as the ones described in large cardinals, but they are large among those that have constructive notations (descriptions). Larger and larger ordinals can be defined, but they become more and more difficult to describe.

Computable ordinals (or recursive ordinals) are certain countable ordinals: loosely speaking those represented by a computable function. There are several equivalent definitions of this: the simplest is to say that a computable ordinal is the order-type of some recursive (i.e., computable) well-ordering of the natural numbers; so, essentially, an ordinal is recursive when we can present the set of smaller ordinals in such a way that a computer (Turing machine, say) can manipulate them (and, essentially, compare them).

A different definition uses Kleene's system of ordinal notations. Briefly, an ordinal notation is either the name zero (describing the ordinal 0), or the successor of an ordinal notation (describing the successor of the ordinal described by that notation), or a Turing machine (computable function) that produces an increasing sequence of ordinal notations (that describe the ordinal that is the limit of the sequence), and ordinal notations are (partially) ordered so as to make the successor of o greater than o and to make the limit greater than any term of the sequence (this order is computable; however, the set O of ordinal notations itself is highly non-recursive, owing to the impossibility of deciding whether a given Turing machine does indeed produce a sequence of notations); a recursive ordinal is then an ordinal described by some ordinal notation.
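The shape of such a notation system is easy to write down as a datatype; what is hard (indeed non-recursive) is deciding which values of the limit constructor genuinely present increasing sequences of notations. A Python sketch (constructor names are hypothetical):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Zero:
    """Notation for the ordinal 0."""

@dataclass(frozen=True)
class Succ:
    pred: object                       # a notation; denotes its successor

@dataclass(frozen=True)
class Lim:
    seq: Callable[[int], object]       # n ↦ a notation; denotes the limit

def finite(n):
    """A notation for the natural number n."""
    o = Zero()
    for _ in range(n):
        o = Succ(o)
    return o

omega = Lim(finite)                    # the limit of 0, 1, 2, ...
# Whether an arbitrary Lim value really yields an increasing sequence of
# notations is undecidable in general; that is why the full set O of
# notations is highly non-recursive.
```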
Any ordinal smaller than a recursive ordinal is itself recursive, so the set of all recursive ordinals forms a certain (countable) ordinal, the Church–Kleene ordinal (see below).
It is tempting to forget about ordinal notations and only speak of the recursive ordinals themselves; indeed, some statements are made about recursive ordinals which, in fact, concern the notations for these ordinals. This leads to difficulties, however, as even the smallest infinite ordinal, ω, has many notations, some of which cannot be proved to be equivalent to the obvious notation (the simplest program that enumerates all natural numbers).
There is a relation between computable ordinals and certain formal systems (containing arithmetic, that is, at least a reasonable fragment of Peano arithmetic).

Certain computable ordinals are so large that while they can be given by a certain ordinal notation o, a given formal system might not be sufficiently powerful to show that o is, indeed, an ordinal notation: the system does not show transfinite induction for such large ordinals.

For example, the usual first-order Peano axioms do not prove transfinite induction for (or beyond) ε0: while the ordinal ε0 can easily be arithmetically described (it is countable), the Peano axioms are not strong enough to show that it is indeed an ordinal; in fact, transfinite induction on ε0 proves the consistency of Peano's axioms (a theorem by Gentzen), so by Gödel's second incompleteness theorem, Peano's axioms cannot formalize that reasoning. (This is at the basis of the Kirby–Paris theorem on Goodstein sequences.) Since Peano arithmetic can prove that any ordinal less than ε0 is well ordered, we say that ε0 measures the proof-theoretic strength of Peano's axioms.

But we can do this for systems far beyond Peano's axioms. For example, the proof-theoretic strength of Kripke–Platek set theory is the Bachmann–Howard ordinal, and, in fact, merely adding to Peano's axioms the axioms that state the well-ordering of all ordinals below the Bachmann–Howard ordinal is sufficient to obtain all arithmetical consequences of Kripke–Platek set theory.
We have already mentioned (see Cantor normal form) the ordinal ε0, which is the smallest satisfying the equation ω^α = α, so it is the limit of the sequence 0, 1, ω, ω^ω, ω^(ω^ω), ... The next ordinal satisfying this equation is called ε1: it is the limit of the sequence

ε0 + 1, ω^(ε0+1), ω^(ω^(ε0+1)), ω^(ω^(ω^(ε0+1))), …

More generally, the ι-th ordinal such that ω^α = α is called ε_ι. We could define ζ0 as the smallest ordinal such that ε_α = α, but since the Greek alphabet does not have transfinitely many letters it is better to use a more robust notation: define ordinals φ_γ(β) by transfinite induction as follows: let φ_0(β) = ω^β and let φ_(γ+1)(β) be the β-th fixed point of φ_γ (i.e., the β-th ordinal such that φ_γ(α) = α; so for example, φ_1(β) = ε_β), and when δ is a limit ordinal, define φ_δ(α) as the α-th common fixed point of the φ_γ for all γ < δ. This family of functions is known as the Veblen hierarchy (there are inessential variations in the definition, such as letting, for δ a limit ordinal, φ_δ(α) be the limit of the φ_γ(α) for γ < δ: this essentially just shifts the indices by 1, which is harmless). φ_γ is called the γ-th Veblen function (to the base ω).

Ordering: φ_α(β) < φ_γ(δ) if and only if either (α = γ and β < δ) or (α < γ and β < φ_γ(δ)) or (α > γ and φ_α(β) < δ).
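This ordering clause translates directly into a recursive three-way comparison on formal terms. In the sketch below (Python; the encoding is an illustrative assumption: 0 is the ordinal zero and ('phi', a, b) stands for φ_a(b), with a and b themselves terms), the equality branches of the same case analysis also absorb non-normal terms such as φ_0(φ_1(0)) = φ_1(0):

```python
def cmp_veblen(x, y):
    """Three-way comparison of terms built from 0 and ('phi', a, b) = φ_a(b)."""
    if x == 0 and y == 0:
        return 0
    if x == 0:
        return -1
    if y == 0:
        return 1
    _, a, b = x
    _, c, d = y
    ac = cmp_veblen(a, c)
    if ac == 0:
        return cmp_veblen(b, d)    # same index: compare the arguments
    if ac < 0:
        return cmp_veblen(b, y)    # a < c: y is a fixed point of φ_a
    return cmp_veblen(x, d)        # a > c: x is a fixed point of φ_c

ONE = ('phi', 0, 0)                # φ_0(0) = ω^0 = 1
EPS0 = ('phi', ONE, 0)             # φ_1(0) = ε0
assert cmp_veblen(ONE, EPS0) < 0
assert cmp_veblen(('phi', 0, EPS0), EPS0) == 0   # ω^(ε0) = ε0
```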
The smallest ordinal such that φ_α(0) = α is known as the Feferman–Schütte ordinal and generally written Γ0. It can be described as the set of all ordinals that can be written as finite expressions, starting from zero, using only the Veblen hierarchy and addition. The Feferman–Schütte ordinal is important because, in a sense that is complicated to make precise, it is the smallest (infinite) ordinal that cannot be ("predicatively") described using smaller ordinals. It measures the strength of such systems as "arithmetical transfinite recursion".

More generally, Γ_α enumerates the ordinals that cannot be obtained from smaller ordinals using addition and the Veblen functions.

It is, of course, possible to describe ordinals beyond the Feferman–Schütte ordinal. One could continue to seek fixed points in a more and more complicated manner: enumerate the fixed points of α ↦ Γ_α, then enumerate the fixed points of that, and so on, and then look for the first ordinal α such that α is obtained in α steps of this process, and continue diagonalizing in this ad hoc manner. This leads to the definition of the "small" and "large" Veblen ordinals.
To go far beyond the Feferman–Schütte ordinal, one needs to introduce new methods. Unfortunately there is not yet any standard way to do this: every author in the subject seems to have invented their own system of notation, and it is quite hard to translate between the different systems. The first such system was introduced by Bachmann in 1950 (in an ad hoc manner), and different extensions and variations of it were described by Buchholz, Takeuti (ordinal diagrams), Feferman (θ systems), Aczel, Bridge, Schütte, and Pohlers. However most systems use the same basic idea, of constructing new countable ordinals by using the existence of certain uncountable ordinals. Here is an example of such a definition, described in much greater detail in the article on ordinal collapsing function: ψ(α) is the smallest ordinal that cannot be expressed from 0, 1, ω and Ω using addition, multiplication, exponentiation, and the function ψ itself, the latter applied only to arguments smaller than α.

Here Ω = ω1 is the first uncountable ordinal. It is put in because otherwise the function ψ gets "stuck" at the smallest ordinal σ such that ε_σ = σ: in particular ψ(α) = σ for any ordinal α satisfying σ ≤ α ≤ Ω. However the fact that we included Ω allows us to get past this point: ψ(Ω + 1) is greater than σ. The key property of Ω that we used is that it is greater than any ordinal produced by ψ.
To construct still larger ordinals, we can extend the definition of ψ by throwing in more ways of constructing uncountable ordinals. There are several ways to do this, described to some extent in the article on ordinal collapsing function.

The Bachmann–Howard ordinal (sometimes just called the Howard ordinal, ψ0(ε_(Ω+1)) with the notation above) is an important one, because it describes the proof-theoretic strength of Kripke–Platek set theory. Indeed, the main importance of these large ordinals, and the reason to describe them, is their relation to certain formal systems as explained above. However, such powerful formal systems as full second-order arithmetic, let alone Zermelo–Fraenkel set theory, seem beyond reach for the moment.

Beyond this, there are multiple recursive ordinals which aren't as well known as the previous ones. The first of these is Buchholz's ordinal, defined as ψ0(Ω_ω), abbreviated as just ψ(Ω_ω), using the previous notation. It is the proof-theoretic ordinal of Π¹₁-CA₀,[1] a first-order theory of arithmetic allowing quantification over the natural numbers as well as sets of natural numbers, and of ID_<ω, the "formal theory of finitely iterated inductive definitions".[2]

Since the hydras from Buchholz's hydra game are isomorphic to Buchholz's ordinal notation, the ordinals up to this point can be expressed using hydras from the game.[3]p.136 For example +(0(ω)) corresponds to ψ(Ω_ω).

Next is the Takeuti–Feferman–Buchholz ordinal, the proof-theoretic ordinal of Π¹₁-CA + BI;[4] of another subsystem of second-order arithmetic, Π¹₁-comprehension + transfinite induction; and of ID_ω, the "formal theory of ω-times iterated inductive definitions".[5] In this notation, it is defined as ψ0(ε_(Ω_ω+1)). It is the supremum of the range of Buchholz's psi functions.[6] It was first named by David Madore.[citation needed]

The next ordinal is mentioned in a piece of code describing large countable ordinals and numbers in Agda, and defined by "Andras Kovacs" as ψ0(Ω_(ω+1) · ε0).

The next ordinal is mentioned in the same piece of code as earlier, and defined as ψ0(Ω_(ω^ω)). It is the proof-theoretic ordinal of ID_<ω^ω.

This next ordinal is, once again, mentioned in this same piece of code, defined as ψ0(Ω_(ε0)), and is the proof-theoretic ordinal of ID_<ε0. In general, the proof-theoretic ordinal of ID_<ν is equal to ψ0(Ω_ν); note that in this particular instance, Ω0 represents 1, the first nonzero ordinal.
Next is an unnamed ordinal, referred to by David Madore as the "countable" collapse of ε_(I+1),[5] where I is the first inaccessible (= Π¹₀-indescribable) cardinal. This is the proof-theoretic ordinal of Kripke–Platek set theory augmented by the recursive inaccessibility of the class of ordinals (KPi), or, on the arithmetical side, of Δ¹₂-comprehension + transfinite induction. Its value is equal to ψ(ε_(I+1)) using an unknown function.

Next is another unnamed ordinal, referred to by David Madore as the "countable" collapse of ε_(M+1),[5] where M is the first Mahlo cardinal. This is the proof-theoretic ordinal of KPM, an extension of Kripke–Platek set theory based on a Mahlo cardinal.[7] Its value is equal to ψ(ε_(M+1)) using one of Buchholz's various psi functions.[8]

Next is another unnamed ordinal, referred to by David Madore as the "countable" collapse of ε_(K+1),[5] where K is the first weakly compact (= Π¹₁-indescribable) cardinal. This is the proof-theoretic ordinal of Kripke–Platek set theory + Π3-Ref. Its value is equal to Ψ(ε_(K+1)) using Rathjen's Psi function.[9]

Next is another unnamed ordinal, referred to by David Madore as the "countable" collapse of ε_(Ξ+1),[5] where Ξ is the first Π²₀-indescribable cardinal. This is the proof-theoretic ordinal of Kripke–Platek set theory + Πω-Ref. Its value is equal to Ψ_X^(ε_(Ξ+1)) using Stegert's Psi function, where X = (ω⁺; P0; ε, ε, 0).[10]

Next is the last unnamed ordinal, referred to by David Madore as the proof-theoretic ordinal of Stability.[5] This is the proof-theoretic ordinal of Stability, an extension of Kripke–Platek set theory. Its value is equal to Ψ_X^(ε_(Υ+1)) using Stegert's Psi function, where X = (ω⁺; P0; ε, ε, 0).[10]

Next is a group of ordinals about which not much is known, but which are still fairly significant (in ascending order):
By dropping the requirement of having a concrete description, even larger recursive countable ordinals can be obtained as the ordinals measuring the strengths of various strong theories; roughly speaking, these ordinals are the smallest order types of "natural" ordinal notations that the theories cannot prove are well ordered. By taking stronger and stronger theories such as second-order arithmetic, Zermelo set theory, Zermelo–Fraenkel set theory, or Zermelo–Fraenkel set theory with various large cardinal axioms, one gets some extremely large recursive ordinals. (Strictly speaking it is not known that all of these really are ordinals: by construction, the ordinal strength of a theory can only be proved to be an ordinal from an even stronger theory. So for the large cardinal axioms this becomes quite unclear.)

The supremum of the set of recursive ordinals is the smallest ordinal that cannot be described in a recursive way. (It is not the order type of any recursive well-ordering of the integers.) That ordinal is a countable ordinal called the Church–Kleene ordinal, ω1^CK. Thus, ω1^CK is the smallest non-recursive ordinal, and there is no hope of precisely "describing" any ordinals from this point on; we can only define them. But it is still far less than the first uncountable ordinal, ω1. However, as its symbol suggests, it behaves in many ways rather like ω1. For instance, one can define ordinal collapsing functions using ω1^CK instead of ω1.

The Church–Kleene ordinal is again related to Kripke–Platek set theory, but now in a different way: whereas the Bachmann–Howard ordinal (described above) was the smallest ordinal for which KP does not prove transfinite induction, the Church–Kleene ordinal is the smallest α such that the construction of the Gödel universe, L, up to stage α, yields a model L_α of KP. Such ordinals are called admissible, thus ω1^CK is the smallest admissible ordinal (beyond ω in case the axiom of infinity is not included in KP).

By a theorem of Friedman, Jensen, and Sacks, the countable admissible ordinals are exactly those constructed in a manner similar to the Church–Kleene ordinal but for Turing machines with oracles.[11][12] One sometimes writes ω_α^CK for the α-th ordinal that is either admissible or a limit of smaller admissibles.[citation needed]

ω_ω^CK is the smallest limit of admissible ordinals (mentioned later), yet the ordinal itself is not admissible. It is also the smallest α such that L_α ∩ P(ω) is a model of Π¹₁-comprehension.[5][13]

An ordinal that is both admissible and a limit of admissibles, or equivalently such that α is the α-th admissible ordinal, is called recursively inaccessible, and the least recursively inaccessible may be denoted ω1^(E1).[14] An ordinal that is both recursively inaccessible and a limit of recursively inaccessibles is called recursively hyperinaccessible.[5] There exists a theory of large ordinals in this manner that is highly parallel to that of (small) large cardinals. For example, we can define recursively Mahlo ordinals: these are the α such that every α-recursive closed unbounded subset of α contains an admissible ordinal (a recursive analog of the definition of a Mahlo cardinal). The 1-section of Harrington's functional ²S# is equal to L_ρ ∩ P(ω), where ρ is the least recursively Mahlo ordinal.[15]p.171

But note that we are still talking about possibly countable ordinals here. (While the existence of inaccessible or Mahlo cardinals cannot be proved in Zermelo–Fraenkel set theory, that of recursively inaccessible or recursively Mahlo ordinals is a theorem of ZFC: in fact, any regular cardinal is recursively Mahlo and more, but even if we limit ourselves to countable ordinals,[clarification needed] ZFC proves the existence of recursively Mahlo ordinals. They are, however, beyond the reach of Kripke–Platek set theory.)
For a set of formulae Γ, a limit ordinal α is called Γ-reflecting if the rank L_α satisfies a certain reflection property for each Γ-formula φ.[16] These ordinals appear in the ordinal analysis of theories such as KP + Π3-ref, a theory augmenting Kripke–Platek set theory by a Π3-reflection schema. They can also be considered "recursive analogues" of some uncountable cardinals such as weakly compact cardinals and indescribable cardinals.[17] For example, an ordinal which is Π3-reflecting is called recursively weakly compact.[18] For finite n, the least Πn-reflecting ordinal is also the supremum of the closure ordinals of monotonic inductive definitions whose graphs are Π⁰_(m+1).[18]

In particular, Π3-reflecting ordinals also have a characterization using higher-type functionals on ordinal functions, lending them the name 2-admissible ordinals.[18] An unpublished paper by Solomon Feferman supplies, for each finite n, a similar property corresponding to Πn-reflection.[19]

An admissible ordinal α is called nonprojectible if there is no total α-recursive injective function mapping α into a smaller ordinal. (This is trivially true for regular cardinals; however, we are mainly interested in countable ordinals.) Being nonprojectible is a much stronger condition than being admissible, recursively inaccessible, or even recursively Mahlo.[13] By Jensen's method of projecta,[20] this statement is equivalent to the statement that the Gödel universe, L, up to stage α, yields a model L_α of KP + Σ1-separation. However, Σ1-separation on its own (not in the presence of V = L) is not a strong enough axiom schema to imply nonprojectibility; in fact there are transitive models of KP + Σ1-separation of any countable admissible height > ω.[21]

Nonprojectible ordinals are tied to Jensen's work on projecta.[5][22] The least ordinals that are nonprojectible relative to a given set are tied to Harrington's construction of the smallest reflecting Spector 2-class.[15]p.174
We can imagine even larger ordinals that are still countable. For example, if ZFC has a transitive model (a hypothesis stronger than the mere hypothesis of consistency, and implied by the existence of an inaccessible cardinal), then there exists a countable α such that L_α is a model of ZFC. Such ordinals are beyond the strength of ZFC in the sense that it cannot (by construction) prove their existence.

If T is a recursively enumerable set theory consistent with V = L, then the least α such that (L_α, ∈) ⊨ T is less than the least stable ordinal, which is described next.[23]
Even larger countable ordinals, called the stable ordinals, can be defined by indescribability conditions or as those α such that L_α is a Σ1-elementary submodel of L; the existence of these ordinals can be proved in ZFC,[24] and they are closely related to the nonprojectible ordinals from a model-theoretic perspective.[5]: 6 For countable α, stability of α is equivalent to L_α ≺_Σ1 L_ω1.[5]

The least stable level of L has some definability-related properties. Letting σ be least such that L_σ ≺_1 L:

These are weakened variants of stable ordinals. There are ordinals with these properties smaller than the aforementioned least nonprojectible ordinal,[5] for example an ordinal is (+1)-stable iff it is Π⁰_n-reflecting for all natural n.[18]

Stronger weakenings of stability have appeared in proof-theoretic publications, including analysis of subsystems of second-order arithmetic.[26]

Within the scheme of notations of Kleene, some represent ordinals and some do not. One can define a recursive total ordering that is a subset of the Kleene notations and has an initial segment which is well-ordered with order-type ω1^CK. Every recursively enumerable (or even hyperarithmetic) nonempty subset of this total ordering has a least element. So it resembles a well-ordering in some respects. For example, one can define the arithmetic operations on it. Yet it is not possible to effectively determine exactly where the initial well-ordered part ends and the part lacking a least element begins.

For an example of a recursive pseudo-well-ordering, let S be ATR0 or another recursively axiomatizable theory that has an ω-model but no hyperarithmetical ω-models, and (if needed) conservatively extend S with Skolem functions. Let T be the tree of (essentially) finite partial ω-models of S: a sequence of natural numbers x1, x2, ..., xn is in T iff S plus ∃m φ(m) ⇒ φ(x_⌈φ⌉) (for the first n formulas φ with one numeric free variable; ⌈φ⌉ is the Gödel number) has no inconsistency proof shorter than n. Then the Kleene–Brouwer order of T is a recursive pseudowellordering.
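The Kleene–Brouwer order on the finite sequences of such a tree is itself straightforward to implement: a sequence precedes its proper extensions, and otherwise sequences compare lexicographically at the first disagreement. A Python sketch:

```python
def kb_less(s, t):
    """Kleene–Brouwer order on finite sequences of naturals: s < t iff
    s properly extends t, or s is smaller at the first disagreement."""
    for a, b in zip(s, t):
        if a != b:
            return a < b
    return len(s) > len(t)     # a proper extension comes before its prefix

assert kb_less((1, 5), (1,))       # going deeper in the tree moves down
assert kb_less((0, 9, 9), (1,))    # lexicographic at the first difference
assert not kb_less((1,), (1, 5))
```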
Any such construction must have order type ω1^CK × (1 + η) + ρ, where η is the order type of (Q, <), and ρ is a recursive ordinal.[27]
Most books describing large countable ordinals are on proof theory, and unfortunately tend to be out of print. | https://en.wikipedia.org/wiki/Large_countable_ordinal |
In the mathematical field of set theory, ordinal arithmetic describes the three usual operations on ordinal numbers: addition, multiplication, and exponentiation. Each can be defined in essentially two different ways: either by constructing an explicit well-ordered set that represents the result of the operation or by using transfinite recursion. Cantor normal form provides a standardized way of writing ordinals. In addition to these usual ordinal operations, there are also the "natural" arithmetic of ordinals and the nimber operations.

The sum of two well-ordered sets S and T is the ordinal representing the variant of lexicographical order with least significant position first, on the union of the Cartesian products S × {0} and T × {1}. This way, every element of S is smaller than every element of T, comparisons within S keep the order they already have, and likewise for comparisons within T.
The definition of addition α + β can also be given by transfinite recursion on β. When the right addend β = 0, ordinary addition gives α + 0 = α for any α. For β > 0, the value of α + β is the smallest ordinal strictly greater than the sum of α and δ for all δ < β. Writing the successor and limit ordinals cases separately:

α + (β + 1) = (α + β) + 1,
α + β = sup{α + δ : δ < β} when β is a limit ordinal.

Ordinal addition on the natural numbers is the same as standard addition. The first transfinite ordinal is ω, the set of all natural numbers, followed by ω + 1, ω + 2, etc. The ordinal ω + ω is obtained by two copies of the natural numbers ordered in the usual fashion and the second copy completely to the right of the first. Writing 0' < 1' < 2' < ... for the second copy, ω + ω looks like

0 < 1 < 2 < 3 < ... < 0' < 1' < 2' < ...

This is different from ω because in ω only 0 does not have a direct predecessor while in ω + ω the two elements 0 and 0' do not have direct predecessors.

Ordinal addition is, in general, not commutative. For example, 3 + ω = ω since the order relation for 3 + ω is 0 < 1 < 2 < 0' < 1' < 2' < ..., which can be relabeled to ω. In contrast ω + 3 is not equal to ω since the order relation 0 < 1 < 2 < ... < 0' < 1' < 2' has a largest element (namely, 2') and ω does not (ω and ω + 3 are equipotent, but not order isomorphic).

Ordinal addition is still associative; one can see for example that (ω + 4) + ω = ω + (4 + ω) = ω + ω.
Addition isstrictly increasingandcontinuousin the right argument: ifα<βthenγ+α<γ+β,
but the analogous relation does not hold for the left argument; instead we only have: ifα<βthenα+γ≤β+γ.
Ordinal addition isleft-cancellative: ifα+β=α+γ, thenβ=γ. Furthermore, one can defineleft subtractionfor ordinalsβ≤α: there is a uniqueγsuch thatα=β+γ. On the other hand, right cancellation does not work: 3 +ω= 0 +ω=ω, yet 3 ≠ 0.
Nor does right subtraction, even whenβ≤α: for example, there does not exist anyγsuch thatγ+ 42 =ω.
If the ordinals less thanαare closed under addition and contain 0, thenαis occasionally called aγ-number (seeadditively indecomposable ordinal). These are exactly the ordinals of the formωβ.
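The absorption behavior behind these examples is mechanical enough to compute. The following is a minimal sketch, not from the article, for ordinals below ω^ω: each ordinal is represented as a list of (exponent, coefficient) pairs with strictly decreasing exponents, a representation made precise by the Cantor normal form discussed later; the helper name `ordinal_add` is hypothetical.

```python
# Sketch: ordinal addition below omega^omega. An ordinal is a list of
# (exponent, coefficient) pairs with strictly decreasing exponents,
# e.g. omega*2 + 3 is [(1, 2), (0, 3)].

def ordinal_add(a, b):
    """Add two ordinals in this representation."""
    if not b:
        return list(a)
    lead = b[0][0]                 # leading exponent of the right addend
    # Terms of `a` with exponent below `lead` sit to the left of a copy
    # of omega^lead and are absorbed by it.
    kept = [t for t in a if t[0] > lead]
    same = [t for t in a if t[0] == lead]
    if same:                       # merge coefficients on equal exponents
        return kept + [(lead, same[0][1] + b[0][1])] + b[1:]
    return kept + list(b)

OMEGA, THREE, FOUR = [(1, 1)], [(0, 3)], [(0, 4)]
print(ordinal_add(THREE, OMEGA))   # [(1, 1)]          3 + omega = omega
print(ordinal_add(OMEGA, THREE))   # [(1, 1), (0, 3)]  omega + 3 > omega
print(ordinal_add(ordinal_add(OMEGA, FOUR), OMEGA))   # [(1, 2)], i.e. omega*2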
TheCartesian product,S×T, of two well-ordered setsSandTcan be well-ordered by a variant oflexicographical orderthat puts the least significant position first. Effectively, each element ofTis replaced by a disjoint copy ofS. The order-type of the Cartesian product is the ordinal that results from multiplying the order-types ofSandT.
The definition of multiplication can also be given by transfinite recursion onβ. When the right factorβ= 0, ordinary multiplication givesα· 0 = 0for anyα. Forβ> 0, the value ofα·βis the smallest ordinal greater than or equal to(α·δ) +αfor allδ<β. Writing the successor and limit ordinal cases separately: α · (β + 1) = (α · β) + α for a successor, and α · β = sup{α · δ : δ < β} for a limit ordinalβ.
As an example, here is the order relation forω· 2: 0 < 1 < 2 < ... < 0' < 1' < 2' < ...,
which has the same order type asω+ω. In contrast,2 ·ωlooks like this: 0 < 1 < 0' < 1' < 0'' < 1'' < ...
and after relabeling, this looks just likeω.
Thus,ω· 2 =ω+ω≠ω= 2 ·ω, showing that multiplication of ordinals is not in general commutative.
As is the case with addition, ordinal multiplication on the natural numbers is the same as standard multiplication.
α· 0 = 0 ·α= 0, and thezero-product propertyholds:α·β= 0 →α= 0orβ= 0. The ordinal 1 is a multiplicative identity,α· 1 = 1 ·α=α. Multiplication is associative,(α·β) ·γ=α· (β·γ). Multiplication is strictly increasing and continuous in the right argument: (α<βandγ> 0) →γ·α<γ·β. Multiplication isnotstrictly increasing in the left argument, for example, 1 < 2 but1 ·ω= 2 ·ω=ω. However, it is (non-strictly) increasing, i.e.α≤β→α·γ≤β·γ.
Multiplication of ordinals is not in general commutative. Specifically, a natural number greater than 1 never commutes with any infinite ordinal, and two infinite ordinalsαandβcommute if and only ifα^m=β^nfor some nonzero natural numbersmandn. The relation "αcommutes withβ" is an equivalence relation on the ordinals greater than 1, and all equivalence classes are countably infinite.
Distributivityholds, on the left:α(β+γ) =αβ+αγ. However, the distributive law on the right(β+γ)α=βα+γαisnotgenerally true:(1 + 1) ·ω= 2 ·ω=ωwhile1 ·ω+ 1 ·ω=ω+ω, which is different. There is aleft cancellationlaw: Ifα> 0andα·β=α·γ, thenβ=γ. Right cancellation does not work, e.g.1 ·ω= 2 ·ω=ω, but 1 and 2 are different. Aleft divisionwithremainderproperty holds: for allαandβ, ifβ> 0, then there are uniqueγandδsuch thatα=β·γ+δandδ<β. Right division does not work: there is noαsuch thatα·ω≤ω^ω≤ (α+ 1) ·ω.
The ordinal numbers form a leftnear-semiring, but donotform aring. Hence the ordinals are not aEuclidean domain, since they are not even a ring; furthermore the Euclidean "norm" would be ordinal-valued using the left division here.
Aδ-number (seeMultiplicatively indecomposable ordinal) is an ordinalβgreater than 1 such thatαβ=βwhenever0 <α<β. These consist of the ordinal 2 and the ordinals of the formβ=ω^(ω^γ).
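Left distributivity and the absorption rule make multiplication below ω^ω just as mechanical as addition. A sketch under the same assumed (exponent, coefficient) representation as the addition example above; `ordinal_mul` is a hypothetical name, not a standard API.

```python
# Sketch: ordinal multiplication below omega^omega, using the same
# (exponent, coefficient) representation as the addition sketch.

def ordinal_mul(a, b):
    """Multiply two ordinals in this representation."""
    if not a or not b:
        return []                        # alpha*0 = 0*alpha = 0
    a1, c1 = a[0]                        # leading term of the left factor
    result = []
    for e, c in b:                       # expand alpha*(t1 + t2 + ...) on the left
        if e > 0:
            # alpha * omega^e * c: the lower terms of alpha are absorbed.
            result.append((a1 + e, c))
        else:
            # alpha * n multiplies only the leading coefficient of alpha.
            result.append((a1, c1 * c))
            result.extend(a[1:])
    return result                        # exponents still strictly decrease

OMEGA, TWO = [(1, 1)], [(0, 2)]
print(ordinal_mul(OMEGA, TWO))   # [(1, 2)]  omega*2 = omega + omega
print(ordinal_mul(TWO, OMEGA))   # [(1, 1)]  2*omega = omega
```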
The definition ofexponentiationvia order types is most easily explained usingVon Neumann's definition of an ordinal as the set of all smaller ordinals. Then, to construct a set of order typeαβconsider the set of all functionsf:β→αsuch thatf(x) = 0for all but finitely many elementsx∈β(essentially, we consider the functions with finitesupport). This set isordered lexicographicallywith the least significant position first: we writef<gif and only if there existsx∈βwithf(x) <g(x)andf(y) =g(y)for ally∈βwithx<y. This is a well-ordering and hence gives an ordinal number.
The definition of exponentiation can also be given by transfinite recursion on the exponentβ. When the exponentβ= 0, ordinary exponentiation givesα^0= 1for anyα. Forβ> 0, the value ofα^βis the smallest ordinal greater than or equal toα^δ·αfor allδ<β. Writing the successor and limit ordinal cases separately: α^(β + 1) = α^β · α for a successor, and α^β = sup{α^δ : δ < β} for a limit ordinalβ(whenα> 0).
Both definitions simplify considerably if the exponentβis a finite number:α^βis then just the product ofβcopies ofα; e.g.ω^3=ω·ω·ω, and the elements ofω^3can be viewed as triples of natural numbers, ordered lexicographically with least significant position first. This agrees with the ordinary exponentiation of natural numbers.
But for infinite exponents, the definition may not be obvious. For example,α^ωcan be identified with a set of finite sequences of elements ofα, properly ordered. The equation2^ω=ωexpresses the fact that finite sequences of zeros and ones can be identified with natural numbers, using thebinary numbersystem. The ordinalω^ωcan be viewed as the order type of finite sequences of natural numbers; every element ofω^ω(i.e. every ordinal smaller thanω^ω) can be uniquely written in the formωn1c1+ωn2c2+⋯+ωnkck{\displaystyle \omega ^{n_{1}}c_{1}+\omega ^{n_{2}}c_{2}+\cdots +\omega ^{n_{k}}c_{k}}wherek,n1, ...,nkare natural numbers,c1, ...,ckare nonzero natural numbers, andn1> ... >nk.
The same is true in general: every element ofα^β(i.e. every ordinal smaller thanα^β) can be uniquely written in the formαb1a1+αb2a2+⋯+αbkak{\displaystyle \alpha ^{b_{1}}a_{1}+\alpha ^{b_{2}}a_{2}+\cdots +\alpha ^{b_{k}}a_{k}}wherekis a natural number,b1, ...,bkare ordinals smaller thanβwithb1> ... >bk, anda1, ...,akare nonzero ordinals smaller thanα. This expression corresponds to the functionf:β→αwhich sendsbitoaifori= 1, ...,kand sends all other elements ofβto 0.
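The identification 2^ω = ω mentioned above can be made concrete: a function ω → 2 with finite support is just a finite list of bits, and reading it with the least significant position first is exactly binary notation. A small sketch with a hypothetical helper name:

```python
def bits_to_natural(bits):
    """Interpret a finite 0/1 sequence, least significant bit first."""
    return sum(bit << i for i, bit in enumerate(bits))

print(bits_to_natural([]))         # 0
print(bits_to_natural([1, 0, 1]))  # 5 = 1*1 + 0*2 + 1*4
```

Comparing two such sequences at the most significant position where they differ agrees with the usual order on the natural numbers, which is the content of 2^ω = ω.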
While the same exponent-notation is used for ordinal exponentiation andcardinal exponentiation, the two operations are quite different and should not be confused. The cardinal exponentiationA^Bis defined to be the cardinal number of the set ofallfunctionsB→A, while the ordinal exponentiationα^βonly contains the functionsβ→αwith finite support, typically a set of much smaller cardinality. To avoid confusing ordinal exponentiation with cardinal exponentiation, one can use symbols for ordinals (e.g.ω) in the former and symbols for cardinals (e.g.ℵ0{\displaystyle \aleph _{0}}) in the latter.
Jacobsthalshowed that the only solutions ofα^β=β^αwithα≤βare given byα=β, orα= 2andβ= 4, orαis any limit ordinal andβ=ε·αwhereεis anε-numberlarger thanα.[1]
There are ordinal operations that continue the sequence begun by addition, multiplication, and exponentiation, including ordinal versions oftetration,pentation, andhexation. See alsoVeblen function.
Every ordinal numberαcan be uniquely written asωβ1c1+ωβ2c2+⋯+ωβkck{\displaystyle \omega ^{\beta _{1}}c_{1}+\omega ^{\beta _{2}}c_{2}+\cdots +\omega ^{\beta _{k}}c_{k}}, wherekis a natural number,c1,c2,…,ck{\displaystyle c_{1},c_{2},\ldots ,c_{k}}are nonzero natural numbers, andβ1>β2>…>βk≥0{\displaystyle \beta _{1}>\beta _{2}>\ldots >\beta _{k}\geq 0}are ordinal numbers. The degenerate caseα= 0occurs whenk= 0and there are noβs norcs. This decomposition ofαis called theCantor normal formofα, and can be considered the base-ωpositional numeral system. The highest exponentβ1{\displaystyle \beta _{1}}is called the degree ofα{\displaystyle \alpha }, and satisfiesβ1≤α{\displaystyle \beta _{1}\leq \alpha }. The equalityβ1=α{\displaystyle \beta _{1}=\alpha }applies if and only ifα=ωα{\displaystyle \alpha =\omega ^{\alpha }}. In that case Cantor normal form does not express the ordinal in terms of smaller ones; this can happen as explained below.
A minor variation of Cantor normal form, which is usually slightly easier to work with, is to set all the numbersciequal to 1 and allow the exponents to be equal. In other words, every ordinal number α can be uniquely written asωβ1+ωβ2+⋯+ωβk{\displaystyle \omega ^{\beta _{1}}+\omega ^{\beta _{2}}+\cdots +\omega ^{\beta _{k}}}, wherekis a natural number, andβ1≥β2≥…≥βk≥0{\displaystyle \beta _{1}\geq \beta _{2}\geq \ldots \geq \beta _{k}\geq 0}are ordinal numbers.
Another variation of the Cantor normal form is the "baseδexpansion", whereωis replaced by any ordinalδ> 1, and the numbersciare nonzero ordinals less thanδ.
The Cantor normal form allows us to uniquely express—and order—the ordinalsαthat are built from the natural numbers by a finite number of arithmetical operations of addition, multiplication and exponentiation base-ω{\displaystyle \omega }: in other words, assumingβ1<α{\displaystyle \beta _{1}<\alpha }in the Cantor normal form, we can also express the exponentsβi{\displaystyle \beta _{i}}in Cantor normal form, and making the same assumption for theβi{\displaystyle \beta _{i}}as forαand so on recursively, we get a system of notation for these ordinals (for example, ω^(ω^ω·2 +ω+ 3)·4 +ω^2 + 5 denotes an ordinal).
The ordinal ε0(epsilon nought) is the set of ordinal valuesαof the finite-length arithmetical expressions of Cantor normal form that are hereditarily non-trivial, where non-trivial meansβ1<αwhen 0<α. It is the smallest ordinal that does not have a finite arithmetical expression in terms ofω, and the smallest ordinal such thatε0=ωε0{\displaystyle \varepsilon _{0}=\omega ^{\varepsilon _{0}}}, i.e. in Cantor normal form the exponent is not smaller than the ordinal itself. It is the limit of the sequence ω, ω^ω, ω^(ω^ω), ...
The ordinal ε0is important for various reasons in arithmetic (essentially because it measures theproof-theoretic strengthof thefirst-orderPeano arithmetic: that is, Peano's axioms can show transfinite induction up to any ordinal less than ε0but not up to ε0itself).
The Cantor normal form also allows us to compute sums and products of ordinals: to compute the sum, for example, one need merely know (see the properties listed in§ Additionand§ Multiplication) thatω^β·c+ω^β′·c′=ω^β′·c′
ifβ′>β{\displaystyle \beta '>\beta }(ifβ′=β{\displaystyle \beta '=\beta }one can apply the distributive law on the left and rewrite this asωβ(c+c′){\displaystyle \omega ^{\beta }(c+c')}, and ifβ′<β{\displaystyle \beta '<\beta }the expression is already in Cantor normal form); and to compute products, the essential facts are that when0<α=ωβ1c1+⋯+ωβkck{\displaystyle 0<\alpha =\omega ^{\beta _{1}}c_{1}+\cdots +\omega ^{\beta _{k}}c_{k}}is in Cantor normal form and0<β′{\displaystyle 0<\beta '}, thenα·ω^β′=ω^(β1+β′)
and
α·n=ω^β1·(c1·n) +ω^β2·c2+ ⋯ +ω^βk·ck ifnis a non-zero natural number.
To compare two ordinals written in Cantor normal form, first compareβ1{\displaystyle \beta _{1}}, thenc1{\displaystyle c_{1}}, thenβ2{\displaystyle \beta _{2}}, thenc2{\displaystyle c_{2}}, and so on. At the first occurrence of inequality, the ordinal that has the larger component is the larger ordinal. If they are the same until one terminates before the other, then the one that terminates first is smaller.
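For ordinals below ω^ω, whose Cantor normal form exponents are natural numbers, this comparison procedure is a short function. A sketch with hypothetical names, reusing the (exponent, coefficient) list representation from the earlier sketches; for general ordinals below ε₀ the exponent comparison would recurse:

```python
def ordinal_cmp(a, b):
    """Compare CNF lists of (exponent, coefficient) pairs: -1, 0, or 1."""
    for (ea, ca), (eb, cb) in zip(a, b):
        if ea != eb:                     # compare beta_1, then beta_2, ...
            return -1 if ea < eb else 1
        if ca != cb:                     # then c_1, then c_2, ...
            return -1 if ca < cb else 1
    # Equal so far: whichever terminates first is the smaller ordinal.
    return (len(a) > len(b)) - (len(a) < len(b))

print(ordinal_cmp([(2, 1), (0, 1)], [(2, 1), (1, 1)]))  # -1: w^2+1 < w^2+w
print(ordinal_cmp([(1, 1)], [(1, 1), (0, 3)]))          # -1: w < w+3
```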
Ernst Jacobsthalshowed that the ordinals satisfy a form of the unique factorization theorem: every nonzero ordinal can be written as a product of a finite number of prime ordinals. This factorization into prime ordinals is in general not unique, but there is a "minimal" factorization into primes that is unique up to changing the order of finite prime factors (Sierpiński 1958).
A prime ordinal is an ordinal greater than 1 that cannot be written as a product of two smaller ordinals. Some of the first primes are 2, 3, 5, ... ,ω,ω+ 1,ω^2+ 1,ω^3+ 1, ...,ω^ω,ω^ω+ 1,ω^(ω+1)+ 1, ... There are three sorts of prime ordinals: the finite primes 2, 3, 5, ...; the ordinals of the formω^(ω^γ)for any ordinalγ(the limit primes); and the ordinals of the formω^γ+ 1for any ordinalγ> 0 (the infinite successor primes).
Factorization into primes is not unique: for example,2×3 = 3×2,2×ω=ω,(ω+1)×ω=ω×ωandω×ωω=ωω. However, there is a unique factorization into primes satisfying the following additional conditions:
This prime factorization can easily be read off using the Cantor normal form as follows:
So the factorization of the Cantor normal form ordinalα=ω^{α₁}n₁+ ⋯ +ω^{α_k}n_k(withα₁> ... >α_k)
into a minimal product of infinite primes and natural numbers is (ω^(ω^{β₁}) ⋯ω^(ω^{β_m}))·n_k·(ω^{α_{k−1}−α_k}+ 1)·n_{k−1}⋯ (ω^{α₁−α₂}+ 1)·n₁,
where eachnishould be replaced by its factorization into a non-increasing sequence of finite primes, andω^{α_k}=ω^(ω^{β₁}) ⋯ω^(ω^{β_m})withβ₁≥ ... ≥β_m.
As discussed above, the Cantor normal form of ordinals below ε0can be expressed in an alphabet containing only the function symbols for addition, multiplication and exponentiation, as well as constant symbols for each natural number and forω. We can do away with the infinitely many numerals by using just the constant symbol 0 and the operation of successor, S (for example, the natural number 4 may be expressed as S(S(S(S(0))))). This describes anordinal notation: a system for naming ordinals over a finite alphabet. This particular system of ordinal notation is called the collection ofarithmeticalordinal expressions, and can express all ordinals below ε0, but cannot express ε0. There are other ordinal notations capable of capturing ordinals well past ε0, but because there are only countably many finite-length strings over any finite alphabet, for any given ordinal notation there will be ordinals belowω1(thefirst uncountable ordinal) that are not expressible. Such ordinals are known aslarge countable ordinals.
The operations of addition, multiplication and exponentiation are all examples ofprimitive recursive ordinal functions, and more general primitive recursive ordinal functions can be used to describe larger ordinals.
Thenatural sumandnatural productoperations on ordinals were defined in 1906 byGerhard Hessenberg, and are sometimes called theHessenberg sum(or product) (Sierpiński 1958). The natural sum ofαandβis often denoted byα⊕βorα#β, and the natural product byα⊗βorα⨳β.
The natural sum and product are defined as follows. Letα=ωα1+⋯+ωαk{\displaystyle \alpha =\omega ^{\alpha _{1}}+\cdots +\omega ^{\alpha _{k}}}andβ=ωβ1+⋯+ωβℓ{\displaystyle \beta =\omega ^{\beta _{1}}+\cdots +\omega ^{\beta _{\ell }}}be in Cantor normal form (i.e.α1≥⋯≥αk{\displaystyle \alpha _{1}\geq \cdots \geq \alpha _{k}}andβ1≥⋯≥βℓ{\displaystyle \beta _{1}\geq \cdots \geq \beta _{\ell }}). Letγ1,…,γk+ℓ{\displaystyle \gamma _{1},\ldots ,\gamma _{k+\ell }}be the exponentsα1,…,αk,β1,…,βℓ{\displaystyle \alpha _{1},\ldots ,\alpha _{k},\beta _{1},\ldots ,\beta _{\ell }}sorted in nonincreasing order. Thenα⊕β{\displaystyle \alpha \oplus \beta }is defined asα⊕β=ωγ1+⋯+ωγk+ℓ.{\displaystyle \alpha \oplus \beta =\omega ^{\gamma _{1}}+\cdots +\omega ^{\gamma _{k+\ell }}.}The natural product ofα{\displaystyle \alpha }andβ{\displaystyle \beta }is defined asα⊗β=⨁1≤i≤k1≤j≤ℓωαi⊕βj.{\displaystyle \alpha \otimes \beta =\bigoplus _{\begin{aligned}&1\leq i\leq k\\&1\leq j\leq \ell \end{aligned}}\omega ^{\alpha _{i}\oplus \beta _{j}}.}For example, supposeα=ωωω+ω{\displaystyle \alpha =\omega ^{\omega ^{\omega }}+\omega }andβ=ωω+ω5{\displaystyle \beta =\omega ^{\omega }+\omega ^{5}}. Thenα⊕β=ωωω+ωω+ω5+ω{\displaystyle \alpha \oplus \beta =\omega ^{\omega ^{\omega }}+\omega ^{\omega }+\omega ^{5}+\omega }, whereasα+β=ωωω+ωω+ω5{\displaystyle \alpha +\beta =\omega ^{\omega ^{\omega }}+\omega ^{\omega }+\omega ^{5}}. Andα⊗β=ωωω+ω+ωωω+5+ωω+1+ω6{\displaystyle \alpha \otimes \beta =\omega ^{\omega ^{\omega }+\omega }+\omega ^{\omega ^{\omega }+5}+\omega ^{\omega +1}+\omega ^{6}}, whereasαβ=ωωω+ω+ωωω+5{\displaystyle \alpha \beta =\omega ^{\omega ^{\omega }+\omega }+\omega ^{\omega ^{\omega }+5}}.
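When the exponents involved are themselves natural numbers (ordinals below ω^ω), the two definitions reduce to a merge and a pairwise sum. A sketch with hypothetical names, writing an ordinal additively as the multiset of its exponents, so ω^2 + ω^2 + 1 is [2, 2, 0]:

```python
def natural_sum(a, b):
    """Merge the two exponent lists and re-sort in nonincreasing order."""
    return sorted(a + b, reverse=True)

def natural_prod(a, b):
    """Natural-sum the exponents pairwise, then natural-sum all terms."""
    # The exponents here are natural numbers, so their natural sum is +.
    return sorted((x + y for x in a for y in b), reverse=True)

alpha = [2, 0]   # omega^2 + 1
beta = [1]       # omega
print(natural_sum(alpha, beta))    # [2, 1, 0]: omega^2 + omega + 1
print(natural_prod(alpha, beta))   # [3, 1]:    omega^3 + omega
```

Compare the ordinary operations on the same pair: α + β = ω^2 + ω and α·β = ω^3, both of which absorb the trailing 1.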
The natural sum and product are commutative and associative, and natural product distributes over natural sum. The operations are also monotonic, in the sense that ifα<β{\displaystyle \alpha <\beta }thenα⊕γ<β⊕γ{\displaystyle \alpha \oplus \gamma <\beta \oplus \gamma }; ifα≤β{\displaystyle \alpha \leq \beta }thenα⊗γ≤β⊗γ{\displaystyle \alpha \otimes \gamma \leq \beta \otimes \gamma }; and ifα<β{\displaystyle \alpha <\beta }andγ>0{\displaystyle \gamma >0}thenα⊗γ<β⊗γ{\displaystyle \alpha \otimes \gamma <\beta \otimes \gamma }.
We haveα⊕⋯⊕α⏟n=α⊗n{\displaystyle \underbrace {\alpha \oplus \cdots \oplus \alpha } _{n}=\alpha \otimes n}.
We always haveα+β≤α⊕β{\displaystyle \alpha +\beta \leq \alpha \oplus \beta }andαβ≤α⊗β{\displaystyle \alpha \beta \leq \alpha \otimes \beta }. If bothα<ωγ{\displaystyle \alpha <\omega ^{\gamma }}andβ<ωγ{\displaystyle \beta <\omega ^{\gamma }}thenα⊕β<ωγ{\displaystyle \alpha \oplus \beta <\omega ^{\gamma }}. If bothα<ωωγ{\displaystyle \alpha <\omega ^{\omega ^{\gamma }}}andβ<ωωγ{\displaystyle \beta <\omega ^{\omega ^{\gamma }}}thenα⊗β<ωωγ{\displaystyle \alpha \otimes \beta <\omega ^{\omega ^{\gamma }}}.
Natural sum and product are not continuous in the right argument, since, for examplelimn<ωα⊕n=α+ω{\displaystyle \lim _{n<\omega }\alpha \oplus n=\alpha +\omega }, and notα⊕ω{\displaystyle \alpha \oplus \omega }; andlimn<ωα⊗n=αω{\displaystyle \lim _{n<\omega }\alpha \otimes n=\alpha \omega }, and notα⊗ω{\displaystyle \alpha \otimes \omega }.
The natural sum and product are the same as the addition and multiplication (restricted to ordinals) ofJohn Conway'sfieldofsurreal numbers.
The natural operations come up in the theory ofwell partial orders; given two well partial ordersS{\displaystyle S}andT{\displaystyle T}, oftypes(maximumlinearizations)o(S){\displaystyle o(S)}ando(T){\displaystyle o(T)}, the type of the disjoint union iso(S)⊕o(T){\displaystyle o(S)\oplus o(T)}, while the type of the direct product iso(S)⊗o(T){\displaystyle o(S)\otimes o(T)}.[2]One may take this relation as a definition of the natural operations by choosingSandTto be ordinalsαandβ; soα⊕βis the maximum order type of a total order extending the disjoint union (as a partial order) ofαandβ; whileα⊗βis the maximum order type of a total order extending the direct product (as a partial order) ofαandβ.[3]A useful application of this is whenαandβare both subsets of some larger total order; then their union has order type at mostα⊕β. If they are both subsets of someordered abelian group, then their sum has order type at mostα⊗β.
We can also define the natural sumα⊕βby simultaneous transfinite recursion onαandβ, as the smallest ordinal strictly greater than the natural sum ofαandγfor allγ<βand ofγandβfor allγ<α.[4]Similarly, we can define the natural productα⊗βby simultaneous transfinite recursion onαandβ, as the smallest ordinalγsuch that(α⊗δ) ⊕ (ε⊗β) <γ⊕ (ε⊗δ)for allε<αandδ<β.[4]Also, see the article onsurreal numbersfor the definition of natural multiplication in that context; however, it uses surreal subtraction, which is not defined on ordinals.
The natural sum is associative and commutative. It is always greater than or equal to the usual sum, but it may be strictly greater. For example, the natural sum ofωand 1 isω+ 1(the usual sum), but this is also the natural sum of 1 andω. The natural product is associative and commutative and distributes over the natural sum. The natural product is always greater than or equal to the usual product, but it may be strictly greater. For example, the natural product ofωand 2 isω· 2(the usual product), but this is also the natural product of 2 andω.
Under natural addition, the ordinals can be identified with the elements of thefree commutative monoidgenerated by the gamma numbersωα. Under natural addition and multiplication, the ordinals can be identified with the elements of thefree commutative semiringgenerated by the delta numbersωωα.
The ordinals do not have unique factorization into primes under the natural product. While the full polynomial ring does have unique factorization, the subset of polynomials with non-negative coefficients does not: for example, ifxis any delta number, then x^5 + x^4 + x^3 + x^2 + x + 1 = (x + 1)(x^4 + x^2 + 1) = (x^2 + x + 1)(x^3 + 1)
has two incompatible expressions as a natural product of polynomials with non-negative coefficients that cannot be decomposed further.
There are arithmetic operations on ordinals by virtue of the one-to-one correspondence between ordinals andnimbers. Three common operations on nimbers are nimber addition, nimber multiplication, andminimum excludance (mex). Nimber addition is a generalization of thebitwise exclusive oroperation on natural numbers. Themexof a set of ordinals is the smallest ordinalnotpresent in the set. | https://en.wikipedia.org/wiki/Ordinal_arithmetic |
In mathematicalset theory,Cantor's theoremis a fundamental result which states that, for anysetA{\displaystyle A}, the set of allsubsetsofA,{\displaystyle A,}known as thepower setofA,{\displaystyle A,}has a strictly greatercardinalitythanA{\displaystyle A}itself.
Forfinite sets, Cantor's theorem can be seen to be true by simpleenumerationof the number of subsets. Counting theempty setas a subset, a set withn{\displaystyle n}elements has a total of2n{\displaystyle 2^{n}}subsets, and the theorem holds because2n>n{\displaystyle 2^{n}>n}for allnon-negative integers.
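For a finite set the count can be checked directly by enumeration. A small sketch:

```python
from itertools import combinations

def power_set(s):
    """All subsets of s, including the empty set and s itself."""
    items = list(s)
    return [set(c) for r in range(len(items) + 1)
            for c in combinations(items, r)]

A = {1, 2, 3}
subsets = power_set(A)
print(len(subsets))              # 8 == 2**3
print(len(subsets) > len(A))     # True: 2**n > n
```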
Much more significant is Cantor's discovery of an argument that is applicable to any set, and shows that the theorem holds forinfinitesets also. As a consequence, the cardinality of thereal numbers, which is the same as that of the power set of theintegers, is strictly larger than the cardinality of the integers; seeCardinality of the continuumfor details.
The theorem is named forGeorg Cantor, who first stated and proved it at the end of the 19th century. Cantor's theorem had immediate and important consequences for thephilosophy of mathematics. For instance, by iteratively taking the power set of an infinite set and applying Cantor's theorem, we obtain an endless hierarchy of infinite cardinals, each strictly larger than the one before it. Consequently, the theorem implies that there is no largestcardinal number(colloquially, "there's no largest infinity").
Cantor's argument is elegant and remarkably simple. The complete proof is presented below, with detailed explanations to follow.
Theorem (Cantor)—Letf{\displaystyle f}be a map from setA{\displaystyle A}to its power setP(A){\displaystyle {\mathcal {P}}(A)}. Thenf:A→P(A){\displaystyle f:A\to {\mathcal {P}}(A)}is notsurjective. As a consequence,card(A)<card(P(A)){\displaystyle \operatorname {card} (A)<\operatorname {card} ({\mathcal {P}}(A))}holds for any setA{\displaystyle A}.
B={x∈A∣x∉f(x)}{\displaystyle B=\{x\in A\mid x\notin f(x)\}}exists via theaxiom schema of specification, andB∈P(A){\displaystyle B\in {\mathcal {P}}(A)}becauseB⊆A{\displaystyle B\subseteq A}.
Assumef{\displaystyle f}is surjective. Then there exists aξ∈A{\displaystyle \xi \in A}such thatf(ξ)=B{\displaystyle f(\xi )=B}.
From "for allx{\displaystyle x}inA{\displaystyle A},x∈B⟺x∉f(x){\displaystyle x\in B\iff x\notin f(x)}", we deduceξ∈B⟺ξ∉f(ξ){\displaystyle \xi \in B\iff \xi \notin f(\xi )}viauniversal instantiation.
The previous deduction yields a contradiction of the formφ⇔¬φ{\displaystyle \varphi \Leftrightarrow \lnot \varphi }, sincef(ξ)=B{\displaystyle f(\xi )=B}.
Therefore,f{\displaystyle f}is not surjective, viareductio ad absurdum.
We knowinjective mapsfromA{\displaystyle A}toP(A){\displaystyle {\mathcal {P}}(A)}exist. For example, the functiong:A→P(A){\displaystyle g:A\to {\mathcal {P}}(A)}such thatg(x)={x}{\displaystyle g(x)=\{x\}}.
Consequently,card(A)<card(P(A)){\displaystyle \operatorname {card} (A)<\operatorname {card} ({\mathcal {P}}(A))}. ∎
By definition of cardinality, we havecard(X)<card(Y){\displaystyle \operatorname {card} (X)<\operatorname {card} (Y)}for any two setsX{\displaystyle X}andY{\displaystyle Y}if and only if there is aninjective functionbut nobijective functionfromX{\displaystyle X}toY{\displaystyle Y}.It suffices to show that there is no surjection fromX{\displaystyle X}toY{\displaystyle Y}. This is the heart of Cantor's theorem: there is no surjective function from any setA{\displaystyle A}to its power set. To establish this, it is enough to show that no functionf{\displaystyle f}(that maps elements inA{\displaystyle A}to subsets ofA{\displaystyle A}) can reach every possible subset, i.e., we just need to demonstrate the existence of a subset ofA{\displaystyle A}that is not equal tof(x){\displaystyle f(x)}for anyx∈A{\displaystyle x\in A}. Recalling that eachf(x){\displaystyle f(x)}is a subset ofA{\displaystyle A}, such a subset is given by the following construction, sometimes called theCantor diagonal setoff{\displaystyle f}:[1][2]
This means, by definition, that for allx∈A{\displaystyle x\in A},x∈B{\displaystyle x\in B}if and only ifx∉f(x){\displaystyle x\notin f(x)}. For allx{\displaystyle x}the setsB{\displaystyle B}andf(x){\displaystyle f(x)}cannot be equal becauseB{\displaystyle B}was constructed from elements ofA{\displaystyle A}whoseimagesunderf{\displaystyle f}did not include themselves. For allx∈A{\displaystyle x\in A}eitherx∈f(x){\displaystyle x\in f(x)}orx∉f(x){\displaystyle x\notin f(x)}. Ifx∈f(x){\displaystyle x\in f(x)}thenf(x){\displaystyle f(x)}cannot equalB{\displaystyle B}becausex∈f(x){\displaystyle x\in f(x)}by assumption andx∉B{\displaystyle x\notin B}by definition. Ifx∉f(x){\displaystyle x\notin f(x)}thenf(x){\displaystyle f(x)}cannot equalB{\displaystyle B}becausex∉f(x){\displaystyle x\notin f(x)}by assumption andx∈B{\displaystyle x\in B}by the definition ofB{\displaystyle B}.
Equivalently, and slightly more formally, we have just proved that the existence ofξ∈A{\displaystyle \xi \in A}such thatf(ξ)=B{\displaystyle f(\xi )=B}implies the followingcontradiction:ξ∈B⟺ξ∉f(ξ) =B, that is,ξ∈B⟺ξ∉B.
Therefore, byreductio ad absurdum, the assumption must be false.[3]Thus there is noξ∈A{\displaystyle \xi \in A}such thatf(ξ)=B{\displaystyle f(\xi )=B}; in other words,B{\displaystyle B}is not in the image off{\displaystyle f}andf{\displaystyle f}does not map onto every element of the power set ofA{\displaystyle A}, i.e.,f{\displaystyle f}is not surjective.
Finally, to complete the proof, we need to exhibit an injective function fromA{\displaystyle A}to its power set. Finding such a function is trivial: just mapx{\displaystyle x}to the singleton set{x}{\displaystyle \{x\}}. The argument is now complete, and we have established the strict inequality for any setA{\displaystyle A}thatcard(A)<card(P(A)){\displaystyle \operatorname {card} (A)<\operatorname {card} ({\mathcal {P}}(A))}.
Another way to think of the proof is thatB{\displaystyle B}, empty or non-empty, is always in the power set ofA{\displaystyle A}. Forf{\displaystyle f}to beonto, some element ofA{\displaystyle A}must map toB{\displaystyle B}. But that leads to a contradiction: no element ofB{\displaystyle B}can map toB{\displaystyle B}because that would contradict the criterion of membership inB{\displaystyle B}, thus the element mapping toB{\displaystyle B}must not be an element ofB{\displaystyle B}meaning that it satisfies the criterion for membership inB{\displaystyle B}, another contradiction. So the assumption that an element ofA{\displaystyle A}maps toB{\displaystyle B}must be false; andf{\displaystyle f}cannot be onto.
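For a small finite set this can also be verified exhaustively: every one of the 8^3 = 512 functions from a three-element set to its power set misses its own diagonal set. A brute-force sketch, offered as an illustration rather than as part of the proof:

```python
from itertools import combinations, product

A = [0, 1, 2]
power_set = [frozenset(c) for r in range(len(A) + 1)
             for c in combinations(A, r)]

# Enumerate every function f: A -> P(A) as a tuple of values.
for values in product(power_set, repeat=len(A)):
    f = dict(zip(A, values))
    B = frozenset(x for x in A if x not in f[x])   # Cantor diagonal set of f
    assert all(f[x] != B for x in A)               # B is never in the image

print("checked all 8**3 functions: none is surjective")
```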
Because of the double occurrence ofx{\displaystyle x}in the expression "x∈f(x){\displaystyle x\in f(x)}", this is adiagonal argument. For a countable (or finite) set, the argument of the proof given above can be illustrated by constructing a table in which each row is labelled by an elementxofA, each column by an imagef(x), and each cell records whether the row's element belongs to the column's set (T) or not (F).
Given the order chosen for the row and column labels, the main diagonalD{\displaystyle D}of this table thus records whetherx∈f(x){\displaystyle x\in f(x)}for eachx∈A{\displaystyle x\in A}. One such table will be the following:f(x1)f(x2)f(x3)f(x4)⋯x1TTFT⋯x2TFFF⋯x3FFTT⋯x4FTTT⋯⋮⋮⋮⋮⋮⋱{\displaystyle {\begin{array}{cccccc}&f(x_{1})&f(x_{2})&f(x_{3})&f(x_{4})&\cdots \\\hline x_{1}&{\color {red}T}&T&F&T&\cdots \\x_{2}&T&{\color {red}F}&F&F&\cdots \\x_{3}&F&F&{\color {red}T}&T&\cdots \\x_{4}&F&T&T&{\color {red}T}&\cdots \\\vdots &\vdots &\vdots &\vdots &\vdots &\ddots \end{array}}}The setB{\displaystyle B}constructed in the previous paragraphs coincides with the row labels for the subset of entries on this main diagonalD{\displaystyle D}(which in above example, coloured red) where the table records thatx∈f(x){\displaystyle x\in f(x)}is false.[3]Each row records the values of theindicator functionof the set corresponding to the column. The indicator function ofB{\displaystyle B}coincides with thelogically negated(swap "true" and "false") entries of the main diagonal. Thus the indicator function ofB{\displaystyle B}does not agree with any column in at least one entry. Consequently, no column representsB{\displaystyle B}.
Despite the simplicity of the above proof, it is rather difficult for anautomated theorem proverto produce it. The main difficulty lies in an automated discovery of the Cantor diagonal set.Lawrence Paulsonnoted in 1992 thatOttercould not do it, whereasIsabellecould, albeit with a certain amount of direction in terms of tactics that might perhaps be considered cheating.[2]
Let us examine the proof for the specific case whenA{\displaystyle A}iscountably infinite.Without loss of generality, we may takeA=N={1,2,3,…}{\displaystyle A=\mathbb {N} =\{1,2,3,\ldots \}}, the set ofnatural numbers.
Suppose thatN{\displaystyle \mathbb {N} }isequinumerouswith itspower setP(N){\displaystyle {\mathcal {P}}(\mathbb {N} )}. Let us see a sample of whatP(N){\displaystyle {\mathcal {P}}(\mathbb {N} )}looks like: it contains elements such as∅, {1, 2, 3}, {4, 5}, and the set of all even numbers.
Indeed,P(N){\displaystyle {\mathcal {P}}(\mathbb {N} )}contains infinite subsets ofN{\displaystyle \mathbb {N} }, e.g. the set of all positive even numbers{2,4,6,…}={2k:k∈N}{\displaystyle \{2,4,6,\ldots \}=\{2k:k\in \mathbb {N} \}}, along with theempty set∅{\displaystyle \varnothing }.
Now that we have an idea of what the elements ofP(N){\displaystyle {\mathcal {P}}(\mathbb {N} )}are, let us attempt to pair off eachelementofN{\displaystyle \mathbb {N} }with each element ofP(N){\displaystyle {\mathcal {P}}(\mathbb {N} )}to show that these infinite sets are equinumerous. In other words, we will attempt to pair off each element ofN{\displaystyle \mathbb {N} }with an element from the infinite setP(N){\displaystyle {\mathcal {P}}(\mathbb {N} )}, so that no element from either infinite set remains unpaired. Such an attempt to pair elements would look like this: 1 ↔ {4, 5}, 2 ↔ {1, 2, 3}, 3 ↔ {4, 5, 6}, 4 ↔ {1, 3, 5}, and so on.
Given such a pairing, some natural numbers are paired withsubsetsthat contain the very same number. For instance, in our example the number 2 is paired with the subset {1, 2, 3}, which contains 2 as a member. Let us call such numbersselfish. Other natural numbers are paired withsubsetsthat do not contain them. For instance, in our example the number 1 is paired with the subset {4, 5}, which does not contain the number 1. Call these numbersnon-selfish. Likewise, 3 and 4 are non-selfish.
Using this idea, let us build a special set of natural numbers. This set will provide thecontradictionwe seek. LetB{\displaystyle B}be the set ofallnon-selfish natural numbers. By definition, thepower setP(N){\displaystyle {\mathcal {P}}(\mathbb {N} )}contains all sets of natural numbers, and so it contains this setB{\displaystyle B}as an element. If the mapping is bijective,B{\displaystyle B}must be paired off with some natural number, sayb{\displaystyle b}. However, this causes a problem. Ifb{\displaystyle b}is inB{\displaystyle B}, thenb{\displaystyle b}is selfish because it is in the corresponding set, which contradicts the definition ofB{\displaystyle B}. Ifb{\displaystyle b}is not inB{\displaystyle B}, then it is non-selfish and it should instead be a member ofB{\displaystyle B}. Therefore, no such elementb{\displaystyle b}which maps toB{\displaystyle B}can exist.
Since there is no natural number which can be paired withB{\displaystyle B}, we have contradicted our original supposition, that there is abijectionbetweenN{\displaystyle \mathbb {N} }andP(N){\displaystyle {\mathcal {P}}(\mathbb {N} )}.
Note that the setB{\displaystyle B}may be empty. This would mean that every natural numberx{\displaystyle x}maps to a subset of natural numbers that containsx{\displaystyle x}. Then, every number maps to a nonempty set and no number maps to the empty set. But the empty set is a member ofP(N){\displaystyle {\mathcal {P}}(\mathbb {N} )}, so the mapping still does not coverP(N){\displaystyle {\mathcal {P}}(\mathbb {N} )}.
Through thisproof by contradictionwe have proven that thecardinalityofN{\displaystyle \mathbb {N} }andP(N){\displaystyle {\mathcal {P}}(\mathbb {N} )}cannot be equal. We also know that thecardinalityofP(N){\displaystyle {\mathcal {P}}(\mathbb {N} )}cannot be less than thecardinalityofN{\displaystyle \mathbb {N} }becauseP(N){\displaystyle {\mathcal {P}}(\mathbb {N} )}contains allsingletons, by definition, and these singletons form a "copy" ofN{\displaystyle \mathbb {N} }inside ofP(N){\displaystyle {\mathcal {P}}(\mathbb {N} )}. Therefore, only one possibility remains, and that is that thecardinalityofP(N){\displaystyle {\mathcal {P}}(\mathbb {N} )}is strictly greater than thecardinalityofN{\displaystyle \mathbb {N} }, proving Cantor's theorem.
Cantor's theorem and its proof are closely related to twoparadoxes of set theory.
Cantor's paradoxis the name given to a contradiction following from Cantor's theorem together with the assumption that there is a set containing all sets, theuniversal setV{\displaystyle V}. In order to distinguish this paradox from the next one discussed below, it is important to note what this contradiction is. By Cantor's theorem|P(X)|>|X|{\displaystyle |{\mathcal {P}}(X)|>|X|}for any setX{\displaystyle X}. On the other hand, all elements ofP(V){\displaystyle {\mathcal {P}}(V)}are sets, and thus contained inV{\displaystyle V}, therefore|P(V)|≤|V|{\displaystyle |{\mathcal {P}}(V)|\leq |V|}.[1]
Another paradox can be derived from the proof of Cantor's theorem by instantiating the functionfwith theidentity function; this turns Cantor's diagonal set into what is sometimes called theRussell setof a given setA:[1]RA= {x∈A:x∉x}.
The proof of Cantor's theorem is straightforwardly adapted to show that assuming a set of all setsUexists, then considering its Russell setRUleads to the contradiction:RU∈RU⟺RU∉RU.
This argument is known asRussell's paradox.[1]As a point of subtlety, the version of Russell's paradox we have presented here is actually a theorem ofZermelo;[4]we can conclude from the contradiction obtained that we must reject the hypothesis thatRU∈U, thus disproving the existence of a set containing all sets. This was possible because we have usedrestricted comprehension(as featured inZFC) in the definition ofRAabove, which in turn entailed thatRU∈RU⟺ (RU∈UandRU∉RU).
Had we usedunrestricted comprehension(as inFrege's system for instance) by defining the Russell set simply asR={x:x∉x}{\displaystyle R=\left\{\,x:x\not \in x\,\right\}}, then the axiom system itself would have entailed the contradiction, with no further hypotheses needed.[4]
Despite the syntactical similarities between the Russell set (in either variant) and the Cantor diagonal set,Alonzo Churchemphasized that Russell's paradox is independent of considerations of cardinality and its underlying notions like one-to-one correspondence.[5]
Cantor gave essentially this proof in a paper published in 1891 "Über eine elementare Frage der Mannigfaltigkeitslehre",[6]where thediagonal argumentfor the uncountability of therealsalso first appears (he hadearlier proved the uncountability of the reals by other methods). The version of this argument he gave in that paper was phrased in terms of indicator functions on a set rather than subsets of a set.[7]He showed that iffis a function defined onXwhose values are 2-valued functions onX, then the 2-valued functionG(x) = 1 −f(x)(x) is not in the range off.
Bertrand Russellhas a very similar proof inPrinciples of Mathematics(1903, section 348), where he shows that there are morepropositional functionsthan objects. "For suppose a correlation of all objects and some propositional functions to have been affected, and let phi-xbe the correlate ofx. Then "not-phi-x(x)," i.e. "phi-xdoes not hold ofx" is a propositional function not contained in this correlation; for it is true or false ofxaccording as phi-xis false or true ofx, and therefore it differs from phi-xfor every value ofx." He attributes the idea behind the proof to Cantor.
Ernst Zermelohas a theorem (which he calls "Cantor's Theorem") that is identical to the form above in the paper that became the foundation of modern set theory ("Untersuchungen über die Grundlagen der Mengenlehre I"), published in 1908. SeeZermelo set theory.
Lawvere's fixed-point theoremprovides for a broad generalization of Cantor's theorem to anycategorywithfinite productsin the following way:[8]letC{\displaystyle {\mathcal {C}}}be such a category, and let1{\displaystyle 1}be a terminal object inC{\displaystyle {\mathcal {C}}}. Suppose thatY{\displaystyle Y}is an object inC{\displaystyle {\mathcal {C}}}and that there exists an endomorphismα:Y→Y{\displaystyle \alpha :Y\to Y}that does not have any fixed points; that is, there is no morphismy:1→Y{\displaystyle y:1\to Y}that satisfiesα∘y=y{\displaystyle \alpha \circ y=y}. Then there is no objectT{\displaystyle T}ofC{\displaystyle {\mathcal {C}}}such that a morphismf:T×T→Y{\displaystyle f:T\times T\to Y}can parameterize all morphismsT→Y{\displaystyle T\to Y}. In other words, for every objectT{\displaystyle T}and every morphismf:T×T→Y{\displaystyle f:T\times T\to Y}, an attempt to write mapsT→Y{\displaystyle T\to Y}as maps of the formf(−,x):T→Y{\displaystyle f(-,x):T\to Y}must leave out at least one mapT→Y{\displaystyle T\to Y}. | https://en.wikipedia.org/wiki/Cantor%27s_theorem |
Theabsolute infinite(symbol:Ω), in context often called "absolute", is an extension of the idea ofinfinityproposed bymathematicianGeorg Cantor. Cantor linked the absolute infinite withGod,[1][2]: 175[3]: 556and believed that it had variousmathematicalproperties, including thereflection principle: every property of the absolute infinite is also held by some smaller object.[4][clarification needed]
Cantor said:
The actual infinite was distinguished by three relations: first, as it is realized in the supremeperfection, in the completely independent, extra worldly existence, in Deo, where I call it absolute infinite or simply absolute; second to the extent that it is represented in the dependent, creatural world; third as it can be conceived in abstracto in thought as a mathematical magnitude, number or order type. In the latter two relations, where it obviously reveals itself as limited and capable for further proliferation and hence familiar to the finite, I call itTransfinitumand strongly contrast it with the absolute.[5]
While using theLatinexpressionin Deo(in God), Cantor identifiesabsoluteinfinity withGod(GA 175–176, 376, 378, 386, 399). According to Cantor, Absolute Infinity is beyondmathematical comprehensionand shall be interpreted in terms ofnegative theology.[6]
Cantor also mentioned the idea in his letters toRichard Dedekind(text in square brackets not present in original):[8]
A multiplicity [he appears to mean what we now call aset] is calledwell-orderedif it fulfills the condition that every sub-multiplicity has a firstelement; such a multiplicity I call for short a "sequence"....Now I envisage the system of all [ordinal] numbers and denote itΩ....The systemΩin its natural ordering according to magnitude is a "sequence".Now let us adjoin 0 as an additional element to this sequence, and place it, obviously, in the first position; then we obtain a sequenceΩ′:0, 1, 2, 3, ... ω0, ω0+1, ..., γ, ...of which one can readily convince oneself that every number γ occurring in it is the type [i.e., order-type] of the sequence of all its preceding elements (including 0). (The sequenceΩhas this property first for ω0+1. [ω0+1 should be ω0.])NowΩ′(and therefore alsoΩ) cannot be a consistent multiplicity. For ifΩ′were consistent, then as a well-ordered set, a numberδwould correspond to it which would be greater than all numbers of the systemΩ; the numberδ, however, also belongs to the systemΩ, because it comprises all numbers. Thusδwould be greater thanδ, which is a contradiction. Therefore:
The system Ω of all [ordinal] numbers is an inconsistent, absolutely infinite multiplicity.
The idea that the collection of all ordinal numbers cannot logically exist seemsparadoxicalto many. This is related to theBurali-Forti paradox, which implies that there can be no greatestordinal number. All of these problems can be traced back to the idea that, for every property that can be logically defined, there exists a set of all objects that have that property. However, as in Cantor's argument (above), this idea leads to difficulties.
More generally, as noted byA. W. Moore, there can be no end to the process ofsetformation, and thus no such thing as thetotality of all sets, or theset hierarchy. Any such totality would itself have to be a set, thus lying somewhere within thehierarchyand thus failing to contain every set.
A standard solution to this problem is found inZermelo set theory, which does not allow the unrestricted formation of sets from arbitrary properties. Rather, we may form the set of all objects that have a given propertyand lie in some given set(Zermelo'sAxiom of Separation). This allows for the formation of sets based on properties, in a limited sense, while (hopefully) preserving the consistency of the theory.
While this solves the logical problem, one could argue that the philosophical problem remains. It seems natural that a set of individuals ought to exist, so long as the individuals exist. Indeed,naive set theorymight be said to be based on this notion. Although Zermelo's fix allows aclassto describe arbitrary (possibly "large") entities, these predicates of themetalanguagemay have no formal existence (i.e., as a set) within the theory. For example, the class of all sets would be aproper class. This is philosophically unsatisfying to some and has motivated additional work inset theoryand other methods of formalizing the foundations of mathematics such asNew FoundationsbyWillard Van Orman Quine.
| https://en.wikipedia.org/wiki/Absolute_infinite
Inmathematics, thecardinalityof asetis the number of its elements. The cardinality of a set may also be called itssize, when no confusion with other notions of size is possible.[a]Beginning in the late 19th century, this concept of size was generalized toinfinite sets, allowing one to distinguish between different types of infinity and to performarithmeticon them. Nowadays, infinite sets are encountered in almost all parts of mathematics, even those that may seem to be unrelated. Familiar examples are provided by mostnumber systemsandalgebraic structures(natural numbers,rational numbers,real numbers,vector spaces, etc.), as well as in geometry, bylines,line segmentsandcurves, which are considered as the sets of their points.
There are two approaches to describing cardinality: one which usescardinal numbersand another which compares sets directly using functions between them, eitherbijectionsorinjections.
The former states the size as a number; the latter compares their relative size and led to the discovery of different sizes of infinity.[1]For example, the setsA={1,2,3}{\displaystyle A=\{1,2,3\}}andB={2,4,6}{\displaystyle B=\{2,4,6\}}are the same size as they each contain 3elements(the first approach) and there is a bijection between them (the second approach).
The cardinality, orcardinal number, of a setA{\displaystyle A}is generally denoted by|A|,{\displaystyle |A|,}with avertical baron each side.[2](This is the same notation as forabsolute value; the meaning depends on context.) The notation|A|=|B|{\displaystyle |A|=|B|}means that the two setsA{\displaystyle A}andB{\displaystyle B}have the same cardinality. The cardinal number of a setA{\displaystyle A}may also be denoted byn(A),{\displaystyle n(A),}A{\displaystyle A},card(A),{\displaystyle \operatorname {card} (A),}#A,{\displaystyle \#A,}etc.
It is conventional to recognize three kinds of cardinality: finite sets, whose cardinality is a natural number; countably infinite sets, which have the same cardinality as the natural numbers; and uncountable sets, whose cardinality is strictly greater than that of the natural numbers.
In English, the termcardinalityoriginates from thepost-classical Latincardinalis, meaning "principal" or "chief", which derives fromcardo, a noun meaning "hinge". In Latin,cardoreferred to something central or pivotal, both literally and metaphorically. This concept of centrality passed intomedieval Latinand then into English, wherecardinalcame to describe things considered to be, in some sense, fundamental, such ascardinal virtues,cardinal sins,cardinal directions, and (in the grammatical sense)cardinal numbers.[4][5]The last of which referred to numbers used for counting (e.g., one, two, three),[6]as opposed toordinal numbers, which express order (e.g., first, second, third),[7]andnominal numbersused for labeling without meaning (e.g.,jersey numbersandserial numbers).[8]
In mathematics, the notion of cardinality was first introduced byGeorg Cantorin the late 19th century, wherein he used the termMächtigkeit, which may be translated as "magnitude" or "power", though Cantor credited the term to a work byJakob Steineronprojective geometry.[9][10][11]The termscardinalityandcardinal numberwere eventually adopted from the grammatical sense, and later translations would use these terms.[12][13]Similarly, the terms forcountableanduncountable setscome fromcountableanduncountable nouns.[citation needed]
A crude sense of cardinality, an awareness that groups of things or events compare with other groups by containing more, fewer, or the same number of instances, is observed in a variety of present-day animal species, suggesting an origin millions of years ago.[14]Human expression of cardinality is seen as early as 40,000 years ago, in the equating of the size of a group with a group of recorded notches, or a representative collection of other things, such as sticks and shells.[15]The abstraction of cardinality as a number is evident by 3000 BCE, in Sumerianmathematicsand the manipulation of numbers without reference to a specific group of things or events.[16]
From the 6th century BCE, the writings of Greek philosophers show hints of infinite cardinality. While they generally considered infinity as an endless series of actions, such as adding 1 to a number repeatedly, they rarely considered infinite sets (actual infinity), and, if they did, they treated infinity as a single, unique cardinality.[17]The ancient Greek notion of infinity also considered the division of things into parts repeated without limit.
One of the earliest explicit uses of a one-to-one correspondence is recorded inAristotle'sMechanics(c.350 BC), known asAristotle's wheel paradox. The paradox can be briefly described as follows: A wheel is depicted as twoconcentric circles. The larger, outer circle is tangent to a horizontal line (e.g. a road that it rolls on), while the smaller, inner circle is rigidly affixed to the larger. Assuming the larger circle rolls along the line without slipping (or skidding) for one full revolution, the distances moved by both circles are the same: thecircumferenceof the larger circle. Further, the lines traced by the bottom-most point of each are the same length.[18]Since the smaller wheel does not skip any points, and no point on the smaller wheel is used more than once, there is a one-to-one correspondence between the two circles.
Galileo Galileipresented what was later coinedGalileo's paradoxin his bookTwo New Sciences(1638), where he attempts to show that infinite quantities cannot be called greater or less than one another. He presents the paradox roughly as follows: asquare numberis one which is the product of another number with itself, such as 4 and 9, which are the squares of 2 and 3 respectively. Then thesquare rootof a square number is that multiplicand. He then notes that there are as many square numbers as there are square roots, since every square has its own root and every root its own square, while no square has more than one root and no root more than one square. But there are as many square roots as there are numbers, since every number is the square root of some square. He, however, concluded that this meant we could not compare the sizes of infinite sets, missing the opportunity to discover cardinality.[19]
Bernard Bolzano'sParadoxes of the Infinite(Paradoxien des Unendlichen, 1851) is often considered the first systematic attempt to introduce the concept of sets intomathematical analysis. In this work, Bolzano defended the notion ofactual infinity, examined various properties of infinite collections, including an early formulation of what would later be recognized as one-to-one correspondence between infinite sets, and proposed to base mathematics on a notion similar to sets. He discussed examples such as the pairing between theintervals[0,5]{\displaystyle [0,5]}and[0,12]{\displaystyle [0,12]}by the relation5y=12x.{\displaystyle 5y=12x.}Bolzano also revisited and extended Galileo's paradox. However, he too resisted saying that these sets were, in that sense, the same size. Thus, whileParadoxes of the Infiniteanticipated several ideas central to later set theory, the work had little influence on contemporary mathematics, in part due to itsposthumous publicationand limited circulation.[20][21][22]
Other, more minor contributions includeDavid HumeinA Treatise of Human Nature(1739), who said"When two numbers are so combined, as that the one has always a unit answering to every unit of the other, we pronounce them equal",[23]now calledHume's principle, which was used extensively byGottlob Fregelater during the rise of set theory;[24]Jakob Steiner, to whomGeorg Cantorcredited the original term for cardinality,Mächtigkeit(1867);[9][10][11]andPeter Gustav Lejeune Dirichlet, commonly credited as the first to explicitly formulate thepigeonhole principlein 1834,[25]though it was used at least two centuries earlier byJean Leurechonin 1624.[26]
To better understand infinite sets, a notion of cardinality was formulatedc.1880byGeorg Cantor, the originator ofset theory. He examined the process of equating two sets with abijection, a one-to-one correspondence between the elements of two sets. In 1891, with the publication ofhis diagonal argument, he demonstrated that there are sets of numbers that cannot be placed in one-to-one correspondence with the set of natural numbers, i.e., there are "uncountable sets" that contain more elements than there are in the infinite set of natural numbers.[27]
While the cardinality of a finite set is simply its number of elements, extending that notion to infinite sets usually starts with defining comparison of sizes of arbitrary sets (some of which are possibly infinite).
Two sets have the same cardinality if there exists a one-to-one correspondence between the elements ofA{\displaystyle A}and those ofB{\displaystyle B}(that is, abijectionfromA{\displaystyle A}toB{\displaystyle B}).[3]Such sets are said to beequipotent,equipollent, orequinumerous. For example, the setE={0,2,4,6,...}{\displaystyle E=\{0,2,4,6,{\text{...}}\}}of non-negativeeven numbershas the same cardinality as the setN={0,1,2,3,...}{\displaystyle \mathbb {N} =\{0,1,2,3,{\text{...}}\}}ofnatural numbers, since the functionf(n)=2n{\displaystyle f(n)=2n}is a bijection fromN{\displaystyle \mathbb {N} }toE{\displaystyle E}.
For finite setsA{\displaystyle A}andB{\displaystyle B}, ifsomebijection exists fromA{\displaystyle A}toB{\displaystyle B}, theneachinjective or surjective function fromA{\displaystyle A}toB{\displaystyle B}is a bijection. This is no longer true for infiniteA{\displaystyle A}andB{\displaystyle B}. For example, the functiong{\displaystyle g}fromN{\displaystyle \mathbb {N} }toE{\displaystyle E}, defined byg(n)=4n{\displaystyle g(n)=4n}is injective, but not surjective since 2, for instance, is not mapped to, andh{\displaystyle h}fromN{\displaystyle \mathbb {N} }toE{\displaystyle E}, defined byh(n)=2floor(n/2){\displaystyle h(n)=2\operatorname {floor} (n/2)}(see:floor function) is surjective, but not injective, since 0 and 1 for instance both map to 0. Neitherg{\displaystyle g}norh{\displaystyle h}can challenge|E|=|N|,{\displaystyle |E|=|\mathbb {N} |,}which was established by the existence off{\displaystyle f}.
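These three functions are easy to probe on a finite prefix of N; of course, a finite check only illustrates the claims, it does not prove them. A sketch:

```python
N = range(100)
f = lambda n: 2 * n           # the bijection n -> 2n
g = lambda n: 4 * n           # injective into E, but misses 2
h = lambda n: 2 * (n // 2)    # onto the even numbers, but h(0) == h(1)

print(len({f(n) for n in N}) == len(N))   # True: no two inputs collide
print(2 in {g(n) for n in N})             # False: 2 is not a value of g
print(h(0) == h(1) == 0)                  # True: h is not injective
```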
A fundamental result often used for cardinality is that of anequivalence relation. A binaryrelationis an equivalence relation if it satisfies the three basic properties of equality:reflexivity,symmetry, andtransitivity. A relationR{\displaystyle R}is reflexive if, for anya,{\displaystyle a,}aRa{\displaystyle aRa}(read:a{\displaystyle a}isR{\displaystyle R}-related toa{\displaystyle a}); symmetric if, for anya{\displaystyle a}andb,{\displaystyle b,}ifaRb,{\displaystyle aRb,}thenbRa{\displaystyle bRa}(read: ifa{\displaystyle a}is related tob,{\displaystyle b,}thenb{\displaystyle b}is related toa{\displaystyle a}); and transitive if, for anya,{\displaystyle a,}b,{\displaystyle b,}andc,{\displaystyle c,}ifaRb{\displaystyle aRb}andbRc,{\displaystyle bRc,}thenaRc.{\displaystyle aRc.}
Given any setA,{\displaystyle A,}there is a bijection fromA{\displaystyle A}to itself by theidentity function, therefore cardinality is reflexive. Given any setsA{\displaystyle A}andB,{\displaystyle B,}such that there is a bijectionf{\displaystyle f}fromA{\displaystyle A}toB,{\displaystyle B,}then there is aninverse functionf−1{\displaystyle f^{-1}}fromB{\displaystyle B}toA,{\displaystyle A,}which is also bijective, therefore cardinality is symmetric. Finally, given any setsA,{\displaystyle A,}B,{\displaystyle B,}andC{\displaystyle C}such that there is a bijectionf{\displaystyle f}fromA{\displaystyle A}toB,{\displaystyle B,}andg{\displaystyle g}fromB{\displaystyle B}toC,{\displaystyle C,}then theircompositiong∘f{\displaystyle g\circ f}(read:g{\displaystyle g}afterf{\displaystyle f}) is a bijection fromA{\displaystyle A}toC,{\displaystyle C,}and so cardinality is transitive. Thus, cardinality forms an equivalence relation. This means that cardinalitypartitions setsintoequivalence classes, and one may assign a representative to denote this class. This motivates the notion of acardinal number.
Somewhat more formally, a relation must be a certain set ofordered pairs. Since there is noset of all setsin standard set theory (see:§ Cantor's paradox), cardinality is not a relation in the usual sense, but apredicateor a relation overclasses.
A setA{\displaystyle A}is not larger than a setB{\displaystyle B}if it can be mapped intoB{\displaystyle B}without overlap. That is, the cardinality ofA{\displaystyle A}is less than or equal to the cardinality ofB{\displaystyle B}if there is aninjective functionfromA{\displaystyle A}toB{\displaystyle B}. This is writtenA⪯B,{\displaystyle A\preceq B,}or|A|≤|B|.{\displaystyle |A|\leq |B|.}IfA⪯B,{\displaystyle A\preceq B,}but there is no injection fromB{\displaystyle B}toA,{\displaystyle A,}thenA{\displaystyle A}is said to bestrictlysmaller thanB,{\displaystyle B,}written without the underline asA≺B{\displaystyle A\prec B}or|A|<|B|.{\displaystyle |A|<|B|.}For example, ifA{\displaystyle A}has four elements andB{\displaystyle B}has five, then the following are trueA⪯A,{\displaystyle A\preceq A,}A⪯B,{\displaystyle A\preceq B,}andA≺B.{\displaystyle A\prec B.}
For example, the setN{\displaystyle \mathbb {N} }of allnatural numbershas cardinality strictly less than itspower setP(N){\displaystyle {\mathcal {P}}(\mathbb {N} )}, becauseg(n)={n}{\displaystyle g(n)=\{n\}}is an injective function fromN{\displaystyle \mathbb {N} }toP(N){\displaystyle {\mathcal {P}}(\mathbb {N} )}, and it can be shown that no function fromN{\displaystyle \mathbb {N} }toP(N){\displaystyle {\mathcal {P}}(\mathbb {N} )}can be bijective. By a similar argument,N{\displaystyle \mathbb {N} }has cardinality strictly less than the cardinality of the setR{\displaystyle \mathbb {R} }of allreal numbers. For proofs, seeCantor's diagonal argumentorCantor's first uncountability proof.
If|A|≤|B|{\displaystyle |A|\leq |B|}and|B|≤|A|,{\displaystyle |B|\leq |A|,}then|A|=|B|{\displaystyle |A|=|B|}(a fact known as theSchröder–Bernstein theorem). Theaxiom of choiceis equivalent to the statement that|A|≤|B|{\displaystyle |A|\leq |B|}or|B|≤|A|{\displaystyle |B|\leq |A|}for everyA{\displaystyle A}andB{\displaystyle B}.[28][29]
A set is calledcountableif it isfiniteor has a bijection with the set ofnatural numbers(N),{\displaystyle (\mathbb {N} ),}in which case it is calledcountably infinite. The termdenumerableis also sometimes used for countably infinite sets. For example, the set of all even natural numbers is countable, and therefore has the same cardinality as the whole set of natural numbers, even though it is aproper subset. Similarly, the set ofsquare numbersis countable, which was considered paradoxical for hundreds of years before modern set theory (see:§ Pre-Cantorian Set theory). However, several other examples have historically been considered surprising or initially unintuitive since the rise of set theory.
The rational numbers $\mathbb{Q}$ are those which can be expressed as the quotient or fraction $\tfrac{p}{q}$ of two integers. The rational numbers can be shown to be countable by considering the set of fractions as the set of all ordered pairs of integers, denoted $\mathbb{Z} \times \mathbb{Z}$, which can be visualized as the set of all integer points on a grid. Then, an intuitive function can be described by drawing a line in a repeating pattern, or spiral, which eventually goes through each point in the grid: for example, going through each diagonal of the grid for positive fractions, or through a lattice spiral for all integer pairs. These functions technically overcount the rationals, since, for example, the rational number $\frac{1}{2}$ gets mapped to by all the fractions $\frac{2}{4}, \frac{3}{6}, \frac{4}{8}, \dots$, as the grid method treats these all as distinct ordered pairs. So this function shows $|\mathbb{Q}| \leq |\mathbb{N}|$, not $|\mathbb{Q}| = |\mathbb{N}|$. This can be corrected by "skipping over" these pairs in the grid, or by designing a function which does this naturally, but such methods are usually more complicated.
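As a concrete sketch (an illustration only; the function and its name are ours, not part of the standard presentation), the diagonal walk with the "skipping" correction can be written as a generator that yields each positive rational exactly once:

```python
from math import gcd

def positive_rationals():
    """Enumerate the positive rationals by walking the diagonals of the
    (numerator, denominator) grid; skipping non-reduced fractions ensures
    each rational is produced exactly once."""
    s = 2  # each diagonal consists of pairs with constant sum s = p + q
    while True:
        for p in range(1, s):
            q = s - p
            if gcd(p, q) == 1:  # skip duplicates such as 2/4, 3/6, ...
                yield (p, q)
        s += 1

gen = positive_rationals()
print([next(gen) for _ in range(8)])
# [(1, 1), (1, 2), (2, 1), (1, 3), (3, 1), (1, 4), (2, 3), (3, 2)]
```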
A number is called algebraic if it is a solution of some polynomial equation with integer coefficients. For example, the square root of two, $\sqrt{2}$, is a solution to $x^2 - 2 = 0$, and the rational number $p/q$ is the solution to $qx - p = 0$. Conversely, a number which cannot be the root of any polynomial is called transcendental. Two examples include Euler's number ($e$) and pi ($\pi$). In general, proving that a number is transcendental is considered to be very difficult, and only a few classes of transcendental numbers are known. However, it can be shown that the set of algebraic numbers is countable (for example, see Cantor's first set theory article § The proofs). Since the set of algebraic numbers is countable while the real numbers are uncountable (shown in the following section), the transcendental numbers must form the vast majority of real numbers, even though they are individually much harder to identify. That is to say, almost all real numbers are transcendental.
A set is called uncountable if it is not countable. That is, it is infinite and strictly larger than the set of natural numbers. The usual first example of this is the set of real numbers $\mathbb{R}$, which can be understood as the set of all numbers on the number line. One method of proving that the reals are uncountable is called Cantor's diagonal argument, credited to Cantor for his 1891 proof,[30] though his differs from the more common presentation.
It begins by assuming, for contradiction, that there is some one-to-one mapping between the natural numbers and the set of real numbers between 0 and 1 (the interval $[0,1]$). Then, take the decimal expansions of each real number, which look like $0.d_1 d_2 d_3 \ldots$ Considering these real numbers in a column, create a new number such that its first digit differs from that of the first number in the column, its second digit differs from that of the second number in the column, and so on. We also need to make sure that the number we create has a unique decimal representation; that is, it cannot end in repeated nines. For example, if a digit is not a 7, make the corresponding digit of the new number a 7, and if it is a 7, make it a 3.[31] Then, this new number will differ from each of the numbers in the list by at least one digit, and therefore must not be in the list. This shows that the real numbers cannot be put into a one-to-one correspondence with the naturals, and thus must be strictly larger.[32]
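A finite sketch of the diagonal construction (the listing below is a hypothetical stand-in for an infinite enumeration, with expansions truncated to a few digits):

```python
def diagonal_number(expansions):
    """Given decimal expansions (digit strings after the decimal point),
    build a number differing from the n-th expansion in its n-th digit,
    using the 7-or-3 rule described above."""
    digits = ['7' if row[n] != '7' else '3'
              for n, row in enumerate(expansions)]
    return '0.' + ''.join(digits)

listing = ['1415926', '2718281', '4142135', '5772156']
print(diagonal_number(listing))  # 0.7377: differs from each row at its own index
```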
Another classical example of an uncountable set, established using related reasoning, is the power set of the natural numbers, denoted $\mathcal{P}(\mathbb{N})$. This is the set of all subsets of $\mathbb{N}$, including the empty set and $\mathbb{N}$ itself. This method is much closer to Cantor's original diagonal argument. Consider any function $f : \mathbb{N} \to \mathcal{P}(\mathbb{N})$. One may define a subset $T \subseteq \mathbb{N}$ which cannot be in the image of $f$ as follows: if $1 \in f(1)$, then $1 \notin T$; if $2 \notin f(2)$, then $2 \in T$; and in general, for each natural number $n$, $n \in T$ if and only if $n \notin f(n)$. Then if $T = f(t)$ were in the image of $f$, we would have $t \in f(t) \iff t \notin f(t)$, a contradiction. So $f$ cannot be surjective, and therefore no bijection can exist between $\mathbb{N}$ and $\mathcal{P}(\mathbb{N})$. Thus $\mathcal{P}(\mathbb{N})$ must not be countable. The two sets $\mathbb{R}$ and $\mathcal{P}(\mathbb{N})$ can be shown to have the same cardinality (by, for example, assigning each subset to a decimal expansion). Whether there exists a set $A$ with cardinality between these two sets, $|\mathbb{N}| < |A| < |\mathbb{R}|$, is known as the continuum hypothesis.
Cantor's theorem generalizes the argument above, showing that every set is strictly smaller than its power set. The proof roughly goes as follows: given a set $A$, if $f$ is a function from $A$ to $\mathcal{P}(A)$, let the subset $T \subseteq A$ be given by $T = \{a \in A : a \notin f(a)\}$. If $T = f(t)$, then $t \in f(t) \iff t \notin f(t)$, a contradiction. So $f$ cannot be surjective and thus cannot be a bijection. Hence $|A| < |\mathcal{P}(A)|$. (Notice that a trivial injection exists: map $a$ to $\{a\}$.) Further, since $\mathcal{P}(A)$ is itself a set, the argument can be repeated to show $|A| < |\mathcal{P}(A)| < |\mathcal{P}(\mathcal{P}(A))|$. Taking $A = \mathbb{N}$, this shows that $\mathcal{P}(\mathcal{P}(\mathbb{N}))$ is even larger than $\mathcal{P}(\mathbb{N})$, which was already shown to be uncountable. Repeating this argument shows that there are infinitely many "sizes" of infinity.
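For finite sets the argument can even be checked by brute force; the following sketch (with a hypothetical three-element set) enumerates every function $f : A \to \mathcal{P}(A)$ and confirms that the witness set $T$ is never in the image:

```python
from itertools import product

def powerset(xs):
    """All subsets of xs, as frozensets."""
    subsets = [frozenset()]
    for x in xs:
        subsets += [s | {x} for s in subsets]
    return subsets

A = [0, 1, 2]
P = powerset(A)  # 8 subsets, so there are 8**3 = 512 functions A -> P(A)

for values in product(P, repeat=len(A)):
    f = dict(zip(A, values))
    T = frozenset(a for a in A if a not in f[a])  # Cantor's witness set
    assert T not in f.values()  # T is missing from the image of every f
print("no function A -> P(A) is surjective")
```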
In the above section, "cardinality" of a set was defined relationally. In other words, it was not defined as a specific object itself. However, such an object can be defined as follows.
Given a basic sense of natural numbers, a set is said to have cardinality $n$ if it can be put in one-to-one correspondence with the set $\{1, 2, \dots, n\}$. For example, the set $S = \{A, B, C, D\}$ has a natural correspondence with the set $\{1, 2, 3, 4\}$, and therefore is said to have cardinality 4. Other phrasings include "its cardinality is 4" or "its cardinal number is 4". While this definition uses a basic sense of natural numbers, it may be that cardinality is used to define the natural numbers, in which case a simple construction of objects satisfying the Peano axioms, most commonly the von Neumann ordinals, can be used as a substitute.
Showing that such a correspondence exists is not always trivial, which is the subject matter of combinatorics.
An intuitive property of finite sets is that, for example, if a set has cardinality 4, then it does not also have cardinality 5; that is, a set cannot have both exactly 4 elements and exactly 5 elements. However, this is not so obvious to prove. The following proof is adapted from Analysis I by Terence Tao.[33]
Lemma: If a set $X$ has cardinality $n \geq 1$ and $x_0 \in X$, then the set $X - \{x_0\}$ (i.e. $X$ with the element $x_0$ removed) has cardinality $n - 1$.
Proof: Given $X$ as above, since $X$ has cardinality $n$, there is a bijection $f$ from $X$ to $\{1, 2, \dots, n\}$. Then, since $x_0 \in X$, there must be some number $f(x_0)$ in $\{1, 2, \dots, n\}$. We need to find a bijection from $X - \{x_0\}$ to $\{1, \dots, n-1\}$ (which may be empty). Define a function $g$ such that $g(x) = f(x)$ if $f(x) < f(x_0)$, and $g(x) = f(x) - 1$ if $f(x) > f(x_0)$. Since $f$ is a bijection, every $x \neq x_0$ falls into exactly one of these two cases, and $g$ is a bijection from $X - \{x_0\}$ to $\{1, \dots, n-1\}$.
Theorem: If a set $X$ has cardinality $n$, then it cannot have any other cardinality. That is, $X$ cannot also have cardinality $m \neq n$.
Proof: If $X$ is empty (has cardinality 0), then there cannot exist a bijection from $X$ to any nonempty set $Y$, since nothing would map to $y_0 \in Y$. Assume, by induction, that the result has been proven up to some cardinality $n$. If $X$ has cardinality $n + 1$, assume it also has cardinality $m$. We want to show that $m = n + 1$. By the lemma above, $X - \{x_0\}$ must have cardinality $n$ and $m - 1$. Since, by induction, cardinality is unique for sets with cardinality $n$, it must be that $m - 1 = n$, and thus $m = n + 1$.
The aleph numbers are a sequence of cardinal numbers that denote the size of infinite sets, denoted with an aleph $\aleph$, the first letter of the Hebrew alphabet. The first aleph number is $\aleph_0$, called "aleph-nought", "aleph-zero", or "aleph-null", which represents the cardinality of the set of all natural numbers: $\aleph_0 = |\mathbb{N}| = |\{0, 1, 2, 3, \cdots\}|$. Then $\aleph_1$ represents the next largest cardinality. The most common way this is formalized in set theory is through von Neumann ordinals, known as the von Neumann cardinal assignment.
Ordinal numbers generalize the notion of order to infinite sets. For example, 2 comes after 1, denoted $1 < 2$, and 3 comes after both, denoted $1 < 2 < 3$. Then one defines a new number, $\omega$, which comes after every natural number, denoted $1 < 2 < 3 < \cdots < \omega$. Further, $\omega < \omega + 1$, and so on. More formally, these ordinal numbers can be defined as follows:
$0 := \{\}$, the empty set; $1 := \{0\}$; $2 := \{0, 1\}$; $3 := \{0, 1, 2\}$; and so on. Then one can define $m < n$ if $m \in n$; for example, $2 \in \{0, 1, 2\} = 3$, therefore $2 < 3$. Further, defining $\omega := \{0, 1, 2, 3, \cdots\}$ (a limit ordinal) gives $\omega$ the desired property of being the smallest ordinal greater than all finite ordinal numbers.
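A minimal sketch of the finite von Neumann ordinals as nested frozensets (an illustration only; $\omega$ itself obviously cannot be constructed this way):

```python
def von_neumann(n):
    """The n-th von Neumann ordinal: 0 = {} and n + 1 = n ∪ {n}."""
    ordinal = frozenset()
    for _ in range(n):
        ordinal = ordinal | {ordinal}
    return ordinal

two, three = von_neumann(2), von_neumann(3)
print(two in three)  # True: 2 ∈ 3, i.e. 2 < 3
print(len(three))    # 3: the ordinal n has exactly n elements
```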
Since $\omega \sim \mathbb{N}$ by the natural correspondence, one may define $\aleph_0$ as the set of all finite ordinals, that is, $\aleph_0 := \omega$. Then $\aleph_1$ is the set of all countable ordinals (all ordinals $\kappa$ with cardinality $|\kappa| \leq \aleph_0$), which is the first uncountable ordinal. Since a set cannot contain itself, $\aleph_1$ must have a strictly larger cardinality: $\aleph_0 < \aleph_1$. Furthermore, $\aleph_2$ is the set of all ordinals with cardinality at most $\aleph_1$, and so on. By the well-ordering theorem, there cannot exist any set with cardinality between $\aleph_0$ and $\aleph_1$, and every infinite set has cardinality equal to some aleph $\aleph_\alpha$, for some ordinal $\alpha$.
The cardinality of the real numbers is denoted by "$\mathfrak{c}$" (a lowercase fraktur script "c"), and is also referred to as the cardinality of the continuum. Cantor showed, using the diagonal argument, that $\mathfrak{c} > \aleph_0$. It can be shown that $\mathfrak{c} = 2^{\aleph_0}$, this also being the cardinality of the set of all subsets of the natural numbers.
The continuum hypothesis says that $\aleph_1 = 2^{\aleph_0}$, i.e. $2^{\aleph_0}$ is the smallest cardinal number bigger than $\aleph_0$; that is, there is no set whose cardinality is strictly between that of the integers and that of the real numbers. The continuum hypothesis is independent of ZFC, a standard axiomatization of set theory; that is, it is impossible to prove the continuum hypothesis or its negation from ZFC, provided that ZFC is consistent.[34][35][36]
One of Cantor's most important results was that the cardinality of the continuum ($\mathfrak{c}$) is greater than that of the natural numbers ($\aleph_0$); that is, there are more real numbers $\mathbb{R}$ than natural numbers $\mathbb{N}$. Namely, Cantor showed that $\mathfrak{c} = 2^{\aleph_0} = \beth_1$ (see Beth one) satisfies $2^{\aleph_0} > \aleph_0$, and moreover that any interval of real numbers has the same cardinality $\mathfrak{c}$ as $\mathbb{R}$ itself, as does $\mathbb{R}^n$ for every finite $n$.
The continuum hypothesis states that there is no cardinal number between the cardinality of the reals and the cardinality of the natural numbers; that is, there is no set $A$ with $\aleph_0 < |A| < 2^{\aleph_0}$.
However, this hypothesis can neither be proved nor disproved within the widely accepted ZFC axiomatic set theory, if ZFC is consistent.
The first of these results is apparent by considering, for instance, the tangent function, which provides a one-to-one correspondence between the interval (−π/2, π/2) and $\mathbb{R}$.
The second result was first demonstrated by Cantor in 1878, but it became more apparent in 1890, when Giuseppe Peano introduced the space-filling curves, curved lines that twist and turn enough to fill the whole of any square, cube, or hypercube, or any finite-dimensional space. These curves are not a direct proof that a line has the same number of points as a finite-dimensional space, but they can be used to obtain such a proof.
Cantor also showed that sets with cardinality strictly greater than $\mathfrak{c}$ exist (see his generalized diagonal argument and theorem). They include, for instance, the set of all subsets of $\mathbb{R}$ (i.e., the power set $\mathcal{P}(\mathbb{R})$) and the set of all functions from $\mathbb{R}$ to $\mathbb{R}$. Both have cardinality $2^{\mathfrak{c}} = \beth_2$ (see Beth two).
The cardinal equalities $\mathfrak{c}^2 = \mathfrak{c}$, $\mathfrak{c}^{\aleph_0} = \mathfrak{c}$, and $\mathfrak{c}^{\mathfrak{c}} = 2^{\mathfrak{c}}$ can be demonstrated using cardinal arithmetic:
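A sketch of the usual computations, using the exponent law $(\kappa^\lambda)^\mu = \kappa^{\lambda\mu}$ together with the absorption facts $2 \cdot \aleph_0 = \aleph_0 \cdot \aleph_0 = \aleph_0$ and $\aleph_0 \cdot \mathfrak{c} = \mathfrak{c} \cdot \mathfrak{c} = \mathfrak{c}$:

$$\mathfrak{c}^2 = (2^{\aleph_0})^2 = 2^{2 \cdot \aleph_0} = 2^{\aleph_0} = \mathfrak{c},$$
$$\mathfrak{c}^{\aleph_0} = (2^{\aleph_0})^{\aleph_0} = 2^{\aleph_0 \cdot \aleph_0} = 2^{\aleph_0} = \mathfrak{c},$$
$$\mathfrak{c}^{\mathfrak{c}} = (2^{\aleph_0})^{\mathfrak{c}} = 2^{\aleph_0 \cdot \mathfrak{c}} = 2^{\mathfrak{c}}.$$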
During the rise of set theory came several paradoxes (see: Paradoxes of set theory). These can be divided into two kinds: real paradoxes and apparent paradoxes. Apparent paradoxes are those which follow a series of reasonable steps and arrive at a conclusion which seems impossible or incorrect according to one's intuition, but are not necessarily logically impossible. Two historical examples, Galileo's paradox and Aristotle's wheel, were given in § History. Real paradoxes are those which, through reasonable steps, prove a logical contradiction. The real paradoxes here apply to naive set theory or otherwise informal statements, and have been resolved by restating the problem in terms of a formalized set theory, such as Zermelo–Fraenkel set theory.
Hilbert's Hotel is a thought experiment devised by the German mathematician David Hilbert to illustrate a counterintuitive property of infinite sets (assuming the axiom of choice): they can have the same cardinality as a proper subset of themselves. The scenario begins by imagining a hotel with an infinite number of rooms, all of which are occupied. Then a new guest walks in asking for a room. The hotel accommodates by moving the occupant of room 1 to room 2, the occupant of room 2 to room 3, the occupant of room 3 to room 4, and in general the occupant of room n to room n + 1. Then every guest still has a room, but room 1 opens up for the new guest.[37]
Then the scenario continues by imagining an infinite bus of new guests seeking rooms. The hotel accommodates by moving the occupant of room 1 to room 2, room 2 to room 4, and in general room n to room 2n. Thus all the even-numbered rooms are occupied and all the odd-numbered rooms are vacant, leaving room for the infinite bus of new guests. The scenario continues by assuming an infinite number of these infinite buses arrives at the hotel, and showing that the hotel is still able to accommodate them. Finally, an infinite bus with a seat for every real number arrives, and the hotel is no longer able to accommodate.[37]
In model theory, a model corresponds to a specific interpretation of a formal language or theory. It consists of a domain (a set of objects) and an interpretation of the symbols and formulas in the language, such that the axioms of the theory are satisfied within this structure. The Löwenheim–Skolem theorem shows that any model of set theory in first-order logic, if it is consistent, has an equivalent model which is countable. This appears contradictory, because Georg Cantor proved that there exist sets which are not countable. Thus the seeming contradiction is that a model that is itself countable, and which therefore contains only countable sets, satisfies the first-order sentence that intuitively states "there are uncountable sets".[38]
A mathematical explanation of the paradox, showing that it is not a true contradiction in mathematics, was first given in 1922 by Thoralf Skolem. He explained that the countability of a set is not absolute, but relative to the model in which the cardinality is measured. Skolem's work was harshly received by Ernst Zermelo, who argued against the limitations of first-order logic and Skolem's notion of "relativity", but the result quickly came to be accepted by the mathematical community.[39][38]
Cantor's theorem states that, for any set $A$, possibly infinite, its power set $\mathcal{P}(A)$ has a strictly greater cardinality. For example, this means there is no bijection from $\mathbb{N}$ to $\mathcal{P}(\mathbb{N}) \sim \mathbb{R}$. Cantor's paradox is a paradox in naive set theory, which proves there is no "set of all sets" or "universe set". It starts by assuming there is some set of all sets, $U := \{x \mid x \text{ is a set}\}$. Then by Cantor's theorem $U$ must be strictly smaller than $\mathcal{P}(U)$, that is, $|U| < |\mathcal{P}(U)|$. But since $U$ contains all sets, we must have $\mathcal{P}(U) \subseteq U$, and thus $|\mathcal{P}(U)| \leq |U|$, a contradiction. This was one of the original paradoxes that added to the need for a formalized set theory. The paradox is usually resolved in formal set theories by disallowing unrestricted comprehension and the existence of a universe set.
Similar to Cantor's paradox, the paradox of the set of all cardinal numbers is a result of unrestricted comprehension. It often uses the definition of cardinal numbers as ordinal representatives, and is related to the Burali-Forti paradox. It begins by assuming there is some set $S := \{X \mid X \text{ is a cardinal number}\}$. If there is some largest element $\aleph \in S$, then the power set $\mathcal{P}(\aleph)$ is strictly greater, and thus not in $S$. Conversely, if there is no largest element, then the union $\bigcup S$ contains the elements of all elements of $S$, and is therefore greater than or equal to each element. Since there is no largest element in $S$, for any element $x \in S$ there is another element $y \in S$ such that $|x| < |y|$ and $|y| \leq \left|\bigcup S\right|$. Thus, for any $x \in S$, $|x| < \left|\bigcup S\right|$, and so $\left|\bigcup S\right| \notin S$, contradicting the assumption that $S$ contains every cardinal number.
If $A$ and $B$ are disjoint sets, then $|A \cup B| = |A| + |B|$.
From this, one can show that in general, the cardinalities of unions and intersections are related by the following equation:[40] $|C \cup D| + |C \cap D| = |C| + |D|$. | https://en.wikipedia.org/wiki/Cardinality
In set theory, Ω-logic is an infinitary logic and deductive system proposed by W. Hugh Woodin (1999) as part of an attempt to generalize the theory of determinacy of pointclasses to cover the structure $H_{\aleph_2}$. Just as the axiom of projective determinacy yields a canonical theory of $H_{\aleph_1}$, he sought to find axioms that would give a canonical theory for the larger structure. The theory he developed involves a controversial argument that the continuum hypothesis is false.
Woodin's Ω-conjecture asserts that if there is a proper class of Woodin cardinals (for technical reasons, most results in the theory are most easily stated under this assumption), then Ω-logic satisfies an analogue of the completeness theorem. From this conjecture, it can be shown that, if there is any single axiom which is comprehensive over $H_{\aleph_2}$ (in Ω-logic), it must imply that the continuum is not $\aleph_1$. Woodin also isolated a specific axiom, a variation of Martin's maximum, which states that any Ω-consistent $\Pi_2$ (over $H_{\aleph_2}$) sentence is true; this axiom implies that the continuum is $\aleph_2$.
Woodin also related his Ω-conjecture to a proposed abstract definition of large cardinals: he took a "large cardinal property" to be a $\Sigma_2$ property $P(\alpha)$ of ordinals which implies that α is a strong inaccessible, and which is invariant under forcing by sets of cardinality less than α. Then the Ω-conjecture implies that if there are arbitrarily large models containing a large cardinal, this fact will be provable in Ω-logic.
The theory involves a definition of Ω-validity: a statement is an Ω-valid consequence of a set theory T if it holds in every model of T having the form $V_\alpha^{\mathbb{B}}$ for some ordinal $\alpha$ and some forcing notion $\mathbb{B}$. This notion is clearly preserved under forcing, and in the presence of a proper class of Woodin cardinals it will also be invariant under forcing (in other words, Ω-satisfiability is preserved under forcing as well). There is also a notion of Ω-provability;[1] here the "proofs" consist of universally Baire sets and are checked by verifying that for every countable transitive model of the theory, and every forcing notion in the model, the generic extension of the model (as calculated in V) contains the "proof", restricted to its own reals. For a proof-set A the condition to be checked here is called "A-closed". A complexity measure can be given on the proofs by their ranks in the Wadge hierarchy. Woodin showed that this notion of "provability" implies Ω-validity for sentences which are $\Pi_2$ over V. The Ω-conjecture states that the converse of this result also holds. In all currently known core models, it is known to be true; moreover, the consistency strength of the large cardinals corresponds to the least proof-rank required to "prove" the existence of the cardinals. | https://en.wikipedia.org/wiki/%CE%A9-logic
The second continuum hypothesis, also called Luzin's hypothesis or Luzin's second continuum hypothesis, is the hypothesis that $2^{\aleph_0} = 2^{\aleph_1}$. It is the negation of a weakened form, $2^{\aleph_0} < 2^{\aleph_1}$, of the continuum hypothesis (CH). It was discussed by Nikolai Luzin in 1935, although he did not claim to be the first to postulate it.[note 1][2][3]: 157, 171 [4]: §3 [1]: 130–131 The statement $2^{\aleph_0} < 2^{\aleph_1}$ may also be called Luzin's hypothesis.[2]
The second continuum hypothesis is independent of Zermelo–Fraenkel set theory with the axiom of choice (ZFC): its truth is consistent with ZFC since it is true in Cohen's model of ZFC with the negation of the continuum hypothesis;[5][6]: 109–110 its falsity is also consistent since it is contradicted by the continuum hypothesis, which follows from V = L. It is implied by Martin's axiom together with the negation of CH.[2] | https://en.wikipedia.org/wiki/Second_continuum_hypothesis
In mathematics, Wetzel's problem concerns bounds on the cardinality of a set of analytic functions that, for each of their arguments, take on few distinct values. It is named after John Wetzel, a mathematician at the University of Illinois at Urbana–Champaign.[1][2]
Let F be a family of distinct analytic functions on a given domain with the property that, for each x in the domain, the functions in F map x to a countable set of values. In his doctoral dissertation, Wetzel asked whether this assumption implies that F is necessarily itself countable.[3] Paul Erdős in turn learned about the problem at the University of Michigan, likely via Lee Albert Rubel.[1] In his paper on the problem, Erdős credited an anonymous mathematician with the observation that, when each x is mapped to a finite set of values, F is necessarily finite.[4]
However, as Erdős showed, the situation for countable sets is more complicated: the answer to Wetzel's question is yes if and only if the continuum hypothesis is false.[4] That is, the existence of an uncountable set of functions that maps each argument x to a countable set of values is equivalent to the nonexistence of an uncountable set of real numbers whose cardinality is less than the cardinality of the set of all real numbers. One direction of this equivalence was also proven independently, but not published, by another UIUC mathematician, Robert Dan Dixon.[1] It follows from the independence of the continuum hypothesis, proved in 1963 by Paul Cohen,[5] that the answer to Wetzel's problem is independent of ZFC set theory.[1] Erdős' proof is so short and elegant that it is considered to be one of the Proofs from THE BOOK.[2]
In the case that the continuum hypothesis is false, Erdős asked whether there is a family of analytic functions, with the cardinality of the continuum, such that each complex number has a smaller-than-continuum set of images. As Ashutosh Kumar and Saharon Shelah later proved, both positive and negative answers to this question are consistent.[6] | https://en.wikipedia.org/wiki/Wetzel%27s_problem
In the philosophy of mathematics, the pre-intuitionists is the name given by L. E. J. Brouwer to several influential mathematicians who shared similar opinions on the nature of mathematics. The term was introduced by Brouwer in his 1951 lectures at Cambridge, where he described the differences between his philosophy of intuitionism and its predecessors:[1]
Of a totally different orientation [from the "Old Formalist School" of Dedekind, Cantor, Peano, Zermelo, and Couturat, etc.] was the Pre-Intuitionist School, mainly led by Poincaré, Borel and Lebesgue. These thinkers seem to have maintained a modified observational standpoint for the introduction of natural numbers, for the principle of complete induction [...] For these, even for such theorems as were deduced by means of classical logic, they postulated an existence and exactness independent of language and logic and regarded its non-contradictority as certain, even without logical proof. For the continuum, however, they seem not to have sought an origin strictly extraneous to language and logic.
The pre-intuitionists, as defined by L. E. J. Brouwer, differed from the formalist standpoint in several ways,[1] particularly in regard to the introduction of natural numbers, or how the natural numbers are defined/denoted. For Poincaré, the definition of a mathematical entity is the construction of the entity itself and not an expression of an underlying essence or existence.
This is to say that no mathematical object exists without human construction of it, both in mind and language.
This sense of definition allowed Poincaré to argue with Bertrand Russell over Giuseppe Peano's axiomatic theory of natural numbers.
Peano's fifth axiom states that if a property P holds of zero, and whenever P holds of a natural number x it also holds of its successor x + 1, then P holds of all natural numbers.
This is the principle of complete induction, which establishes the property of induction as necessary to the system. Since Peano's axiom ranges over the infinity of the natural numbers, it is difficult to prove that the property P does belong to any x and also x + 1. What one can do is say that, if after some number n of trials the property P has been conserved from x to x + 1, then we may infer that it will still hold after n + 1 trials. But this is itself induction, and hence the argument begs the question.
From this, Poincaré argues that if we fail to establish the consistency of Peano's axioms for natural numbers without falling into circularity, then the principle of complete induction is not provable by general logic.
Thus arithmetic and mathematics in general is not analytic but synthetic. Logicism is thus rebuked and intuition is held up. What Poincaré and the pre-intuitionists shared was the perception of a difference between logic and mathematics that is not a matter of language alone, but of knowledge itself.
It was for this assertion, among others, that Poincaré was considered to be similar to the intuitionists. For Brouwer, though, the pre-intuitionists failed to go as far as necessary in divesting mathematics of metaphysics, for they still used the principium tertii exclusi (the "law of excluded middle").
The principle of the excluded middle does lead to some strange situations. For instance, statements about the future such as "there will be a naval battle tomorrow" do not seem to be either true or false, yet. So there is some question whether statements must be either true or false in some situations. To an intuitionist, this seems to rank the law of excluded middle as just as unrigorous as Peano's vicious circle.
Yet to the pre-intuitionists this is mixing apples and oranges. For them, mathematics was one thing (a muddled invention of the human mind, i.e., synthetic), and logic was another (analytic).
The above examples only include the works of Poincaré, and yet Brouwer named other mathematicians as pre-intuitionists too: Borel and Lebesgue. Other mathematicians such as Hermann Weyl (who eventually became disenchanted with intuitionism, feeling that it places excessive strictures on mathematical progress) and Leopold Kronecker also played a role, though they are not cited by Brouwer in his definitive speech.
In fact, Kronecker might be the most famous of the pre-intuitionists for his singular and oft-quoted phrase, "God made the natural numbers; all else is the work of man."
Kronecker goes in almost the opposite direction from Poincaré, believing in the natural numbers but not the law of the excluded middle. He was the first mathematician to express doubt on non-constructive existence proofs, which state that something must exist because it can be shown that it is "impossible" for it not to. | https://en.wikipedia.org/wiki/Preintuitionism
Indirect self-reference describes an object referring to itself indirectly. For example, the sentence "this sentence is false" contains a direct self-reference, in which the phrase "this sentence" refers directly to the sentence as a whole. An indirectly self-referential sentence would replace the phrase "this sentence" with an expression that effectively still refers to the sentence, but does not use the pronoun "this".
If the quine of a phrase is defined to be the quotation of the phrase followed by the phrase itself, then the quine of:
"is a sentence fragment"
would be:
"is a sentence fragment" is a sentence fragment
which, incidentally, is a true statement.
Now consider the sentence:
"when quined, makes quite a statement" when quined, makes quite a statement
The quotation here, plus the phrase "when quined", indirectly refers to the entire sentence. The importance of this fact is that the remainder of the sentence, the phrase "makes quite a statement", can now make a statement about the sentence as a whole. If a pronoun were used for this, the sentence would be the directly self-referencing "this sentence makes quite a statement". In natural language, pronouns are straightforwardly used and indirect self-references are uncommon, but in systems of mathematical logic there is generally no analog of the pronoun.
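The same quotation trick drives programmatic quines, programs that print their own source even though programming languages have no "this program" pronoun. A minimal Python sketch:

```python
# A classic quine: s holds a template of the program, and printing
# s % s substitutes the quoted form of s into itself.
s = 's = %r\nprint(s %% s)'
print(s % s)
```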
Indirect self-reference was studied in great depth by W. V. Quine (after whom the operation above is named), and occupies a central place in the proof of Gödel's incompleteness theorem. Among the paradoxical statements developed by Quine is the following: "yields falsehood when preceded by its quotation" yields falsehood when preceded by its quotation. | https://en.wikipedia.org/wiki/Indirect_self-reference
In mathematics, a fixed-point theorem is a result saying that a function F will have at least one fixed point (a point x for which F(x) = x), under some conditions on F that can be stated in general terms.[1]
The Banach fixed-point theorem (1922) gives a general criterion guaranteeing that, if it is satisfied, the procedure of iterating a function yields a fixed point.[2]
By contrast, the Brouwer fixed-point theorem (1911) is a non-constructive result: it says that any continuous function from the closed unit ball in n-dimensional Euclidean space to itself must have a fixed point,[3] but it doesn't describe how to find the fixed point (see also Sperner's lemma).
For example, the cosine function is continuous in [−1, 1] and maps it into [−1, 1], and thus must have a fixed point. This is clear when examining a sketched graph of the cosine function; the fixed point occurs where the cosine curve y = cos(x) intersects the line y = x. Numerically, the fixed point (known as the Dottie number) is approximately x = 0.73908513321516 (thus x = cos(x) for this value of x).
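A quick numerical sketch of this fixed point, found Banach-style by simply iterating cos from a starting point in the interval:

```python
from math import cos

x = 1.0
for _ in range(100):  # each step shrinks the error by roughly |sin(x*)| ≈ 0.67
    x = cos(x)
print(x)  # ~0.7390851332151607, the Dottie number
```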
The Lefschetz fixed-point theorem[4] (and the Nielsen fixed-point theorem)[5] from algebraic topology is notable because it gives, in some sense, a way to count fixed points.
There are a number of generalisations of the Banach fixed-point theorem and further; these are applied in PDE theory. See fixed-point theorems in infinite-dimensional spaces.
The collage theorem in fractal compression proves that, for many images, there exists a relatively small description of a function that, when iteratively applied to any starting image, rapidly converges on the desired image.[6]
The Knaster–Tarski theorem states that any order-preserving function on a complete lattice has a fixed point, and indeed a smallest fixed point.[7] See also the Bourbaki–Witt theorem.
The theorem has applications in abstract interpretation, a form of static program analysis.
A common theme in lambda calculus is to find fixed points of given lambda expressions. Every lambda expression has a fixed point, and a fixed-point combinator is a "function" which takes as input a lambda expression and produces as output a fixed point of that expression.[8] An important fixed-point combinator is the Y combinator used to give recursive definitions.
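An illustrative sketch (the names Z and step are ours; Python's strict evaluation requires the eta-expanded Z variant of the Y combinator):

```python
# Z combinator: a fixed-point combinator usable under strict evaluation.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Factorial as the fixed point of a non-recursive "step" functional.
step = lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1)
factorial = Z(step)
print(factorial(5))  # 120
```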
In denotational semantics of programming languages, a special case of the Knaster–Tarski theorem is used to establish the semantics of recursive definitions. While the fixed-point theorem is applied to the "same" function (from a logical point of view), the development of the theory is quite different.
The same definition of recursive function can be given, in computability theory, by applying Kleene's recursion theorem.[9] These results are not equivalent theorems; the Knaster–Tarski theorem is a much stronger result than what is used in denotational semantics.[10] However, in light of the Church–Turing thesis, their intuitive meaning is the same: a recursive function can be described as the least fixed point of a certain functional, mapping functions to functions.
The above technique of iterating a function to find a fixed point can also be used in set theory; the fixed-point lemma for normal functions states that any continuous strictly increasing function from ordinals to ordinals has one (and indeed many) fixed points.
Every closure operator on a poset has many fixed points; these are the "closed elements" with respect to the closure operator, and they are the main reason the closure operator was defined in the first place.
Every involution on a finite set with an odd number of elements has a fixed point; more generally, for every involution on a finite set of elements, the number of elements and the number of fixed points have the same parity. Don Zagier used these observations to give a one-sentence proof of Fermat's theorem on sums of two squares, by describing two involutions on the same set of triples of integers, one of which can easily be shown to have only one fixed point and the other of which has a fixed point for each representation of a given prime (congruent to 1 mod 4) as a sum of two squares. Since the first involution has an odd number of fixed points, so does the second, and therefore there always exists a representation of the desired form.[11] | https://en.wikipedia.org/wiki/List_of_fixed_point_theorems
This list includes well-known paradoxes, grouped thematically. The grouping is approximate, as paradoxes may fit into more than one category. This list collects only scenarios that have been called a paradox by at least one source and have their own article in this encyclopedia. These paradoxes may be due to fallacious reasoning (falsidical), or an unintuitive solution (veridical). The term paradox is often used to describe a counter-intuitive result.
However, some of these paradoxes qualify to fit into the mainstream viewpoint of a paradox, which is a self-contradictory result gained even while properly applying accepted ways of reasoning. These paradoxes, often called antinomies, point out genuine problems in our understanding of the ideas of truth and description.
These paradoxes, insolubilia (insolubles), have in common a contradiction arising from either self-reference or circular reference, in which several statements refer to each other in a way that following some of the references leads back to the starting point.
One class of paradoxes in economics are the paradoxes of competition, in which behavior that benefits a lone actor would leave everyone worse off if everyone did the same. These paradoxes are classified into circuit, classical and Marx paradoxes. | https://en.wikipedia.org/wiki/Self-referential_paradoxes
In calculus, Newton's method (also called Newton–Raphson) is an iterative method for finding the roots of a differentiable function $f$, which are solutions to the equation $f(x) = 0$. However, to optimize a twice-differentiable $f$, our goal is to find the roots of $f'$. We can therefore use Newton's method on its derivative $f'$ to find solutions to $f'(x) = 0$, also known as the critical points of $f$. These solutions may be minima, maxima, or saddle points; see the section "Several variables" in Critical point (mathematics) and also the section "Geometric interpretation" in this article. This is relevant in optimization, which aims to find (global) minima of the function $f$.
The central problem of optimization is minimization of functions. Let us first consider the case of univariate functions, i.e., functions of a single real variable. We will later consider the more general and more practically useful multivariate case.
Given a twice differentiable function $f : \mathbb{R} \to \mathbb{R}$, we seek to solve the optimization problem
$$\min_{x \in \mathbb{R}} f(x).$$
Newton's method attempts to solve this problem by constructing a sequence $\{x_k\}$ from an initial guess (starting point) $x_0 \in \mathbb{R}$ that converges towards a minimizer $x_*$ of $f$ by using a sequence of second-order Taylor approximations of $f$ around the iterates. The second-order Taylor expansion of $f$ around $x_k$ is
$$f(x_k + t) \approx f(x_k) + f'(x_k)\,t + \tfrac{1}{2} f''(x_k)\,t^2.$$
The next iterate $x_{k+1}$ is defined so as to minimize this quadratic approximation in $t$, setting $x_{k+1} = x_k + t$. If the second derivative is positive, the quadratic approximation is a convex function of $t$, and its minimum can be found by setting the derivative to zero. Since
$$\frac{d}{dt}\left(f(x_k) + f'(x_k)\,t + \tfrac{1}{2} f''(x_k)\,t^2\right) = f'(x_k) + f''(x_k)\,t,$$
the minimum is achieved for
$$t = -\frac{f'(x_k)}{f''(x_k)}.$$
Putting everything together, Newton's method performs the iteration
$$x_{k+1} = x_k - \frac{f'(x_k)}{f''(x_k)}.$$
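A minimal sketch of the univariate iteration (the objective below and its derivatives are a hypothetical example, not from the source):

```python
def newton_minimize_1d(fprime, fsecond, x0, steps=20):
    """Univariate Newton's method for optimization: root-finding on f',
    iterating x <- x - f'(x) / f''(x)."""
    x = x0
    for _ in range(steps):
        x -= fprime(x) / fsecond(x)
    return x

# Example: f(x) = x**4 - 3*x**2 has a local minimum at x = sqrt(3/2).
x_star = newton_minimize_1d(lambda x: 4*x**3 - 6*x,   # f'
                            lambda x: 12*x**2 - 6,    # f''
                            x0=2.0)
print(x_star)  # ~1.224744871, i.e. sqrt(1.5)
```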
The geometric interpretation of Newton's method is that at each iteration, it amounts to fitting a parabola to the graph of $f(x)$ at the trial value $x_k$, having the same slope and curvature as the graph at that point, and then proceeding to the maximum or minimum of that parabola (in higher dimensions, this may also be a saddle point; see below). Note that if $f$ happens to be a quadratic function, then the exact extremum is found in one step.
The above iterative scheme can be generalized to $d > 1$ dimensions by replacing the derivative with the gradient (different authors use different notation for the gradient, including $f'(x) = \nabla f(x) = g_f(x) \in \mathbb{R}^d$), and the reciprocal of the second derivative with the inverse of the Hessian matrix (different authors use different notation for the Hessian, including $f''(x) = \nabla^2 f(x) = H_f(x) \in \mathbb{R}^{d \times d}$). One thus obtains the iterative scheme
$$x_{k+1} = x_k - [f''(x_k)]^{-1} f'(x_k), \qquad k \geq 0.$$
Often Newton's method is modified to include a small step size $0 < \gamma \leq 1$ instead of $\gamma = 1$:
$$x_{k+1} = x_k - \gamma\,[f''(x_k)]^{-1} f'(x_k).$$
This is often done to ensure that the Wolfe conditions, or the much simpler and more efficient Armijo condition, are satisfied at each step of the method. For step sizes other than 1, the method is often referred to as the relaxed or damped Newton's method.
If $f$ is a strongly convex function with Lipschitz continuous Hessian, then provided that $x_0$ is close enough to $x_* = \arg\min f(x)$, the sequence $x_0, x_1, x_2, \dots$ generated by Newton's method will converge to the (necessarily unique) minimizer $x_*$ of $f$ quadratically fast.[1] That is, there is a constant $C > 0$ such that
$$\|x_{k+1} - x_*\| \leq C\,\|x_k - x_*\|^2.$$
Finding the inverse of the Hessian in high dimensions to compute the Newton direction $h = -(f''(x_k))^{-1} f'(x_k)$ can be an expensive operation. In such cases, instead of directly inverting the Hessian, it is better to calculate the vector $h$ as the solution to the system of linear equations
$$f''(x_k)\, h = -f'(x_k),$$
which may be solved by various factorizations or approximately (but to great accuracy) using iterative methods. Many of these methods are only applicable to certain types of equations; for example, the Cholesky factorization and conjugate gradient will only work if $f''(x_k)$ is a positive definite matrix. While this may seem like a limitation, it is often a useful indicator of something having gone wrong; for example, if a minimization problem is being approached and $f''(x_k)$ is not positive definite, then the iterates are converging to a saddle point and not a minimum.
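A sketch of the multivariate scheme in NumPy, solving the linear system for the Newton direction rather than inverting the Hessian (the objective below is a hypothetical smooth convex example):

```python
import numpy as np

def damped_newton(grad, hess, x0, gamma=1.0, steps=50):
    """Multivariate Newton's method: solve f''(x) h = -f'(x) each step."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        h = np.linalg.solve(hess(x), -grad(x))  # Newton direction
        x = x + gamma * h                       # (damped) update
    return x

# Example objective: f(x, y) = x**4 + x**2 + (y - 3)**2, minimized at (0, 3).
grad = lambda v: np.array([4*v[0]**3 + 2*v[0], 2*(v[1] - 3)])
hess = lambda v: np.array([[12*v[0]**2 + 2, 0.0], [0.0, 2.0]])
print(damped_newton(grad, hess, x0=[2.0, 0.0]))  # ~[0. 3.]
```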
On the other hand, if a constrained optimization is done (for example, with Lagrange multipliers), the problem may become one of saddle point finding, in which case the Hessian will be symmetric indefinite, and the solve for $x_{k+1}$ will need to be done with a method that works for such systems, such as the $LDL^\top$ variant of Cholesky factorization or the conjugate residual method.
There also exist various quasi-Newton methods, where an approximation for the Hessian (or its inverse directly) is built up from changes in the gradient.
If the Hessian is close to a non-invertible matrix, the inverted Hessian can be numerically unstable and the solution may diverge. In this case, certain workarounds have been tried in the past, which have varied success with certain problems. One can, for example, modify the Hessian by adding a correction matrix $B_k$ so as to make $f''(x_k) + B_k$ positive definite. One approach is to diagonalize the Hessian and choose $B_k$ so that $f''(x_k) + B_k$ has the same eigenvectors as the Hessian, but with each negative eigenvalue replaced by $\epsilon > 0$.
An approach exploited in the Levenberg–Marquardt algorithm (which uses an approximate Hessian) is to add a scaled identity matrix to the Hessian, $\mu I$, with the scale adjusted at every iteration as needed. For large $\mu$ and small Hessian, the iterations will behave like gradient descent with step size $1/\mu$. This results in slower but more reliable convergence where the Hessian doesn't provide useful information.
Newton's method, in its original version, has several caveats: it requires the Hessian to be invertible at each iterate; it may fail to converge at all, for example by entering a cycle; and, since it merely seeks a zero of the gradient, it may converge to a saddle point or a maximum rather than a minimum.
The popular modifications of Newton's method, such as the quasi-Newton methods or the Levenberg–Marquardt algorithm mentioned above, also have caveats:
For example, it is usually required that the cost function is (strongly) convex and that the Hessian is globally bounded or Lipschitz continuous; this is mentioned, for instance, in the section "Convergence" in this article. If one looks at the papers by Levenberg and Marquardt referenced for the Levenberg–Marquardt algorithm, which are the original sources for that method, one sees that there is essentially no theoretical analysis in the paper by Levenberg, while the paper by Marquardt only analyses a local situation and does not prove a global convergence result. One can compare with the backtracking line search method for gradient descent, which has good theoretical guarantees under more general assumptions and can be implemented and works well in practical large-scale problems such as deep neural networks. | https://en.wikipedia.org/wiki/Newton%27s_method_in_optimization
The golden-section search is a technique for finding an extremum (minimum or maximum) of a function inside a specified interval. For a strictly unimodal function with an extremum inside the interval, it will find that extremum, while for an interval containing multiple extrema (possibly including the interval boundaries), it will converge to one of them. If the only extremum on the interval is on a boundary of the interval, it will converge to that boundary point. The method operates by successively narrowing the range of values on the specified interval, which makes it relatively slow, but very robust. The technique derives its name from the fact that the algorithm maintains the function values for four points whose three interval widths are in the ratio φ:1:φ, where φ is the golden ratio. These ratios are maintained for each iteration and are maximally efficient. Excepting boundary points, when searching for a minimum, the central point is always less than or equal to the outer points, assuring that a minimum is contained between the outer points. The converse is true when searching for a maximum. The algorithm is the limit of Fibonacci search (also described below) for many function evaluations. Fibonacci search and golden-section search were discovered by Kiefer (1953) (see also Avriel and Wilde (1966)).
The discussion here is posed in terms of searching for a minimum (searching for a maximum is similar) of a unimodal function. Unlike finding a zero, where two function evaluations with opposite sign are sufficient to bracket a root, when searching for a minimum three values are necessary. The golden-section search is an efficient way to progressively reduce the interval locating the minimum. The key is to observe that regardless of how many points have been evaluated, the minimum lies within the interval defined by the two points adjacent to the point with the least value so far evaluated.
The diagram above illustrates a single step in the technique for finding a minimum. The functional values of $f(x)$ are on the vertical axis, and the horizontal axis is the $x$ parameter. The value of $f(x)$ has already been evaluated at the three points $x_1$, $x_2$, and $x_3$. Since $f_2$ is smaller than either $f_1$ or $f_3$, it is clear that a minimum lies inside the interval from $x_1$ to $x_3$.
The next step in the minimization process is to "probe" the function by evaluating it at a new value of $x$, namely $x_4$. It is most efficient to choose $x_4$ somewhere inside the largest interval, i.e. between $x_2$ and $x_3$. From the diagram, it is clear that if the function yields $f_{4a} > f(x_2)$, then a minimum lies between $x_1$ and $x_4$, and the new triplet of points will be $x_1$, $x_2$, and $x_4$. However, if the function yields the value $f_{4b} < f(x_2)$, then a minimum lies between $x_2$ and $x_3$, and the new triplet of points will be $x_2$, $x_4$, and $x_3$. Thus, in either case, we can construct a new narrower search interval that is guaranteed to contain the function's minimum.
From the diagram above, it is seen that the new search interval will be either between $x_1$ and $x_4$ with a length of $a + c$, or between $x_2$ and $x_3$ with a length of $b$. The golden-section search requires that these intervals be equal. If they are not, a run of "bad luck" could lead to the wider interval being used many times, thus slowing down the rate of convergence. To ensure that $b = a + c$, the algorithm should choose $x_4 = x_1 + (x_3 - x_2)$.
However, there still remains the question of where $x_2$ should be placed in relation to $x_1$ and $x_3$. The golden-section search chooses the spacing between these points in such a way that these points have the same proportion of spacing as the subsequent triple $x_1, x_2, x_4$ or $x_2, x_4, x_3$. By maintaining the same proportion of spacing throughout the algorithm, we avoid a situation in which $x_2$ is very close to $x_1$ or $x_3$ and guarantee that the interval width shrinks by the same constant proportion in each step.
Mathematically, to ensure that the spacing after evaluating $f(x_4)$ is proportional to the spacing prior to that evaluation, if $f(x_4)$ is $f_{4a}$ and our new triplet of points is $x_1$, $x_2$, and $x_4$, then we want
$$\frac{c}{a} = \frac{a}{b}.$$
However, if $f(x_4)$ is $f_{4b}$ and our new triplet of points is $x_2$, $x_4$, and $x_3$, then we want
$$\frac{c}{b - c} = \frac{a}{b}.$$
Eliminating $c$ from these two simultaneous equations yields
$$\left(\frac{b}{a}\right)^2 - \frac{b}{a} = 1,$$
or
$$\frac{b}{a} = \varphi,$$
where $\varphi$ is the golden ratio:
$$\varphi = \frac{1 + \sqrt{5}}{2} = 1.6180339\ldots$$
The appearance of the golden ratio in the proportional spacing of the evaluation points is how this search algorithm gets its name.
Any number of termination conditions may be applied, depending upon the application. The interval $\Delta X = X_4 - X_1$ is a measure of the absolute error in the estimation of the minimum $X$ and may be used to terminate the algorithm. The value of $\Delta X$ is reduced by a factor of $r = \varphi - 1$ for each iteration, so the number of iterations to reach an absolute error of $\Delta X$ is about $\ln(\Delta X / \Delta X_0) / \ln(r)$, where $\Delta X_0$ is the initial value of $\Delta X$.
Because smooth functions are flat (their first derivative is close to zero) near a minimum, attention must be paid not to expect too great an accuracy in locating the minimum. The termination condition provided in the book Numerical Recipes in C is based on testing the gaps among $x_1$, $x_2$, $x_3$ and $x_4$, terminating when within the relative accuracy bounds
$$|x_3 - x_1| < \tau\,(|x_2| + |x_4|),$$
where $\tau$ is a tolerance parameter of the algorithm, and $|x|$ is the absolute value of $x$. The check is based on the bracket size relative to its central value, because that relative error in $x$ is approximately proportional to the squared absolute error in $f(x)$ in typical cases. For that same reason, the Numerical Recipes text recommends $\tau = \sqrt{\varepsilon}$, where $\varepsilon$ is the required absolute precision of $f(x)$.
Note! The examples here describe an algorithm for finding the minimum of a function. For a maximum, the comparison operators need to be reversed.
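A sketch of the minimum-finding version in Python (an illustrative implementation using a simple absolute-width stopping rule; the names are ours):

```python
from math import sqrt

INVPHI = (sqrt(5) - 1) / 2  # 1/phi ≈ 0.618: the bracket shrink factor

def golden_section_min(f, a, b, tol=1e-8):
    """Golden-section search for a minimum of a unimodal f on [a, b]."""
    c = b - INVPHI * (b - a)  # interior probe points in golden-ratio spacing
    d = a + INVPHI * (b - a)
    fc, fd = f(c), f(d)
    while abs(b - a) > tol:
        if fc < fd:               # minimum lies in [a, d]
            b, d, fd = d, c, fc   # reuse the surviving interior point
            c = b - INVPHI * (b - a)
            fc = f(c)
        else:                     # minimum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + INVPHI * (b - a)
            fd = f(d)
    return (a + b) / 2

print(golden_section_min(lambda x: (x - 2)**2, 0.0, 5.0))  # ~2.0
```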
A very similar algorithm can also be used to find the extremum (minimum or maximum) of a sequence of values that has a single local minimum or local maximum. In order to approximate the probe positions of golden-section search while probing only integer sequence indices, the variant of the algorithm for this case typically maintains a bracketing of the solution in which the length of the bracketed interval is a Fibonacci number. For this reason, the sequence variant of golden-section search is often called Fibonacci search.
Fibonacci search was first devised by Kiefer (1953) as a minimax search for the maximum (minimum) of a unimodal function in an interval.
The bisection method is a similar algorithm for finding a zero of a function. Note that, for bracketing a zero, only two points are needed, rather than three. The interval ratio decreases by 2 in each step, rather than by the golden ratio. | https://en.wikipedia.org/wiki/Golden-section_search
In computer science, binary search, also known as half-interval search,[1] logarithmic search,[2] or binary chop,[3] is a search algorithm that finds the position of a target value within a sorted array.[4][5] Binary search compares the target value to the middle element of the array. If they are not equal, the half in which the target cannot lie is eliminated and the search continues on the remaining half, again taking the middle element to compare to the target value, and repeating this until the target value is found. If the search ends with the remaining half being empty, the target is not in the array.
Binary search runs in logarithmic time in the worst case, making $O(\log n)$ comparisons, where $n$ is the number of elements in the array.[a][6] Binary search is faster than linear search except for small arrays. However, the array must be sorted first to be able to apply binary search. There are specialized data structures designed for fast searching, such as hash tables, that can be searched more efficiently than binary search. However, binary search can be used to solve a wider range of problems, such as finding the next-smallest or next-largest element in the array relative to the target even if it is absent from the array.
There are numerous variations of binary search. In particular, fractional cascading speeds up binary searches for the same value in multiple arrays. Fractional cascading efficiently solves a number of search problems in computational geometry and in numerous other fields. Exponential search extends binary search to unbounded lists. The binary search tree and B-tree data structures are based on binary search.
Binary search works on sorted arrays. Binary search begins by comparing an element in the middle of the array with the target value. If the target value matches the element, its position in the array is returned. If the target value is less than the element, the search continues in the lower half of the array. If the target value is greater than the element, the search continues in the upper half of the array. By doing this, the algorithm eliminates the half in which the target value cannot lie in each iteration.[7]
Given an array $A$ of $n$ elements with values or records $A_0, A_1, A_2, \ldots, A_{n-1}$ sorted such that $A_0 \leq A_1 \leq A_2 \leq \cdots \leq A_{n-1}$, and target value $T$, the following subroutine uses binary search to find the index of $T$ in $A$.[7]
This iterative procedure keeps track of the search boundaries with the two variables $L$ and $R$. The procedure may be expressed in pseudocode as follows, where the variable names and types remain the same as above, floor is the floor function, and unsuccessful refers to a specific value that conveys the failure of the search.[7]
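The pseudocode itself did not survive extraction. The following C++ rendering is a minimal sketch of the procedure just described, with -1 standing in for the unsuccessful value (the function name is illustrative):

```cpp
#include <vector>

// Sketch of the iterative procedure: L and R track the search boundaries,
// m is floor((L + R) / 2), and -1 plays the role of "unsuccessful".
int binary_search(const std::vector<int>& A, int T) {
    int L = 0;
    int R = static_cast<int>(A.size()) - 1;
    while (L <= R) {
        int m = L + (R - L) / 2;   // floor((L + R) / 2), overflow-safe
        if (A[m] < T)
            L = m + 1;             // target can only lie in the upper half
        else if (A[m] > T)
            R = m - 1;             // target can only lie in the lower half
        else
            return m;              // target found at index m
    }
    return -1;                     // unsuccessful: L has exceeded R
}
```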
Alternatively, the algorithm may take the ceiling of $\frac{R-L}{2}$. This may change the result if the target value appears more than once in the array.
In the above procedure, the algorithm checks whether the middle element ($m$) is equal to the target ($T$) in every iteration. Some implementations leave out this check during each iteration. The algorithm would perform this check only when one element is left (when $L = R$). This results in a faster comparison loop, as one comparison is eliminated per iteration, while it requires only one more iteration on average.[8]
Hermann Bottenbruch published the first implementation to leave out this check in 1962.[8][9]
Where ceil is the ceiling function, the pseudocode for this version is:
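Again the pseudocode is missing here; a minimal C++ sketch of this variant, consistent with the description above (ceiling midpoint, single equality check at the end), might look as follows:

```cpp
#include <vector>

// Sketch of the deferred-equality variant: one comparison per iteration,
// midpoint taken as ceil((L + R) / 2), equality checked once at the end.
int binary_search_deferred(const std::vector<int>& A, int T) {
    int R = static_cast<int>(A.size()) - 1;
    if (R < 0) return -1;            // empty array
    int L = 0;
    while (L != R) {
        int m = L + (R - L + 1) / 2; // ceil((L + R) / 2), overflow-safe
        if (A[m] > T)
            R = m - 1;               // target, if present, lies below m
        else
            L = m;                   // A[m] <= T: target lies at m or above
    }
    return (A[L] == T) ? L : -1;     // single equality check
}
```

On the duplicate example discussed below ([1, 2, 3, 4, 4, 5, 6, 7] with target 4), this sketch returns index 4, the rightmost match.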
The procedure may return any index whose element is equal to the target value, even if there are duplicate elements in the array. For example, if the array to be searched was $[1, 2, 3, 4, 4, 5, 6, 7]$ and the target was 4, then it would be correct for the algorithm to return either the 4th (index 3) or 5th (index 4) element. The regular procedure would return the 4th element (index 3) in this case. It does not always return the first duplicate (consider $[1, 2, 4, 4, 4, 5, 6, 7]$, which still returns the 4th element). However, it is sometimes necessary to find the leftmost element or the rightmost element for a target value that is duplicated in the array. In the above example, the 4th element is the leftmost element of the value 4, while the 5th element is the rightmost element of the value 4. The alternative procedure above will always return the index of the rightmost element if such an element exists.[9]
To find the leftmost element, the following procedure can be used:[10]
If $L < n$ and $A_L = T$, then $A_L$ is the leftmost element that equals $T$. Even if $T$ is not in the array, $L$ is the rank of $T$ in the array, or the number of elements in the array that are less than $T$.
Where floor is the floor function, the pseudocode for this version is:
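A minimal C++ sketch of the leftmost-match procedure, following the description above (the returned $L$ is the rank of $T$; the function name is illustrative):

```cpp
#include <vector>

// Sketch: returns L, the rank of T (the number of elements less than T).
// If L < n and A[L] == T, A[L] is the leftmost element equal to T.
int search_leftmost(const std::vector<int>& A, int T) {
    int L = 0;
    int R = static_cast<int>(A.size());  // note: R starts at n, not n - 1
    while (L < R) {
        int m = L + (R - L) / 2;         // floor((L + R) / 2)
        if (A[m] < T)
            L = m + 1;
        else
            R = m;
    }
    return L;
}
```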
To find the rightmost element, the following procedure can be used:[10]
If $R > 0$ and $A_{R-1} = T$, then $A_{R-1}$ is the rightmost element that equals $T$. Even if $T$ is not in the array, $n - R$ is the number of elements in the array that are greater than $T$.
Where floor is the floor function, the pseudocode for this version is:
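And a matching C++ sketch of the rightmost-match procedure:

```cpp
#include <vector>

// Sketch: returns R. If R > 0 and A[R - 1] == T, A[R - 1] is the
// rightmost element equal to T; n - R elements are greater than T.
int search_rightmost(const std::vector<int>& A, int T) {
    int L = 0;
    int R = static_cast<int>(A.size());
    while (L < R) {
        int m = L + (R - L) / 2;   // floor((L + R) / 2)
        if (A[m] > T)
            R = m;
        else
            L = m + 1;
    }
    return R;
}
```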
The above procedure only performs exact matches, finding the position of a target value. However, it is trivial to extend binary search to perform approximate matches because binary search operates on sorted arrays. For example, binary search can be used to compute, for a given value, its rank (the number of smaller elements), predecessor (next-smallest element), successor (next-largest element), and nearest neighbor. Range queries seeking the number of elements between two values can be performed with two rank queries.[11]
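As an illustration, a range query can be built from two rank queries using the leftmost- and rightmost-match sketches above (the helper names come from those hypothetical sketches, not a standard API):

```cpp
// Sketch: the number of elements with lo <= value <= hi,
// computed as two rank queries on the sorted array A.
int range_count(const std::vector<int>& A, int lo, int hi) {
    return search_rightmost(A, hi) - search_leftmost(A, lo);
}
```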
In terms of the number of comparisons, the performance of binary search can be analyzed by viewing the run of the procedure on a binary tree. The root node of the tree is the middle element of the array. The middle element of the lower half is the left child node of the root, and the middle element of the upper half is the right child node of the root. The rest of the tree is built in a similar fashion. Starting from the root node, the left or right subtrees are traversed depending on whether the target value is less or more than the node under consideration.[6][14]
In the worst case, binary search makes $\lfloor \log_2(n) + 1 \rfloor$ iterations of the comparison loop, where the $\lfloor \cdot \rfloor$ notation denotes the floor function that yields the greatest integer less than or equal to the argument, and $\log_2$ is the binary logarithm. This is because the worst case is reached when the search reaches the deepest level of the tree, and there are always $\lfloor \log_2(n) + 1 \rfloor$ levels in the tree for any binary search.
The worst case may also be reached when the target element is not in the array. If $n$ is one less than a power of two, then this is always the case. Otherwise, the search may perform $\lfloor \log_2(n) + 1 \rfloor$ iterations if the search reaches the deepest level of the tree. However, it may make $\lfloor \log_2(n) \rfloor$ iterations, which is one less than the worst case, if the search ends at the second-deepest level of the tree.[15]
On average, assuming that each element is equally likely to be searched, binary search makes $\lfloor \log_2(n) \rfloor + 1 - (2^{\lfloor \log_2(n) \rfloor + 1} - \lfloor \log_2(n) \rfloor - 2)/n$ iterations when the target element is in the array. This is approximately equal to $\log_2(n) - 1$ iterations. When the target element is not in the array, binary search makes $\lfloor \log_2(n) \rfloor + 2 - 2^{\lfloor \log_2(n) \rfloor + 1}/(n+1)$ iterations on average, assuming that the range between and outside elements is equally likely to be searched.[14]
In the best case, where the target value is the middle element of the array, its position is returned after one iteration.[16]
In terms of iterations, no search algorithm that works only by comparing elements can exhibit better average and worst-case performance than binary search. The comparison tree representing binary search has the fewest levels possible, as every level above the lowest level of the tree is filled completely.[b] Otherwise, the search algorithm can eliminate few elements in an iteration, increasing the number of iterations required in the average and worst case. This is the case for other search algorithms based on comparisons: while they may work faster on some target values, their average performance over all elements is worse than binary search. By dividing the array in half, binary search ensures that the sizes of both subarrays are as similar as possible.[14]
Binary search requires three pointers to elements, which may be array indices or pointers to memory locations, regardless of the size of the array. Therefore, the space complexity of binary search is $O(1)$ in the word RAM model of computation.
The average number of iterations performed by binary search depends on the probability of each element being searched. The average case is different for successful searches and unsuccessful searches. It will be assumed that each element is equally likely to be searched for successful searches. For unsuccessful searches, it will be assumed that the intervals between and outside elements are equally likely to be searched. The average case for successful searches is the number of iterations required to search every element exactly once, divided by $n$, the number of elements. The average case for unsuccessful searches is the number of iterations required to search an element within every interval exactly once, divided by the $n + 1$ intervals.[14]
In the binary tree representation, a successful search can be represented by a path from the root to the target node, called an internal path. The length of a path is the number of edges (connections between nodes) that the path passes through. The number of iterations performed by a search, given that the corresponding path has length $l$, is $l + 1$, counting the initial iteration. The internal path length is the sum of the lengths of all unique internal paths. Since there is only one path from the root to any single node, each internal path represents a search for a specific element. If there are $n$ elements, which is a positive integer, and the internal path length is $I(n)$, then the average number of iterations for a successful search is $T(n) = 1 + \frac{I(n)}{n}$, with the one iteration added to count the initial iteration.[14]
Since binary search is the optimal algorithm for searching with comparisons, this problem is reduced to calculating the minimum internal path length of all binary trees with $n$ nodes, which is equal to:[17]
$$I(n) = \sum_{k=1}^{n} \left\lfloor \log_2(k) \right\rfloor$$
For example, in a 7-element array, the root requires one iteration, the two elements below the root require two iterations, and the four elements below require three iterations. In this case, the internal path length is:[17]
$$\sum_{k=1}^{7} \left\lfloor \log_2(k) \right\rfloor = 0 + 2(1) + 4(2) = 2 + 8 = 10$$
The average number of iterations would be $1 + \frac{10}{7} = 2\frac{3}{7}$, based on the equation for the average case. The sum for $I(n)$ can be simplified to:[14]
$$I(n) = \sum_{k=1}^{n} \left\lfloor \log_2(k) \right\rfloor = (n+1)\left\lfloor \log_2(n+1) \right\rfloor - 2^{\left\lfloor \log_2(n+1) \right\rfloor + 1} + 2$$
Substituting the equation for $I(n)$ into the equation for $T(n)$:[14]
$$T(n) = 1 + \frac{(n+1)\left\lfloor \log_2(n+1) \right\rfloor - 2^{\left\lfloor \log_2(n+1) \right\rfloor + 1} + 2}{n} = \lfloor \log_2(n) \rfloor + 1 - (2^{\lfloor \log_2(n) \rfloor + 1} - \lfloor \log_2(n) \rfloor - 2)/n$$
For integer $n$, this is equivalent to the equation for the average case on a successful search specified above.
Unsuccessful searches can be represented by augmenting the tree with external nodes, which forms an extended binary tree. If an internal node, or a node present in the tree, has fewer than two child nodes, then additional child nodes, called external nodes, are added so that each internal node has two children. By doing so, an unsuccessful search can be represented as a path to an external node, whose parent is the single element that remains during the last iteration. An external path is a path from the root to an external node. The external path length is the sum of the lengths of all unique external paths. If there are $n$ elements, which is a positive integer, and the external path length is $E(n)$, then the average number of iterations for an unsuccessful search is $T'(n) = \frac{E(n)}{n+1}$, with the one iteration added to count the initial iteration. The external path length is divided by $n + 1$ instead of $n$ because there are $n + 1$ external paths, representing the intervals between and outside the elements of the array.[14]
This problem can similarly be reduced to determining the minimum external path length of all binary trees with $n$ nodes. For all binary trees, the external path length is equal to the internal path length plus $2n$.[17] Substituting the equation for $I(n)$:[14]
$$E(n) = I(n) + 2n = \left[(n+1)\left\lfloor \log_2(n+1) \right\rfloor - 2^{\left\lfloor \log_2(n+1) \right\rfloor + 1} + 2\right] + 2n = (n+1)(\lfloor \log_2(n) \rfloor + 2) - 2^{\lfloor \log_2(n) \rfloor + 1}$$
Substituting the equation for $E(n)$ into the equation for $T'(n)$, the average case for unsuccessful searches can be determined:[14]
$$T'(n) = \frac{(n+1)(\lfloor \log_2(n) \rfloor + 2) - 2^{\lfloor \log_2(n) \rfloor + 1}}{n+1} = \lfloor \log_2(n) \rfloor + 2 - 2^{\lfloor \log_2(n) \rfloor + 1}/(n+1)$$
Each iteration of the binary search procedure defined above makes one or two comparisons, checking whether the middle element is equal to the target in each iteration. Assuming that each element is equally likely to be searched, each iteration makes 1.5 comparisons on average. A variation of the algorithm checks whether the middle element is equal to the target at the end of the search. On average, this eliminates half a comparison from each iteration. This slightly cuts the time taken per iteration on most computers. However, it guarantees that the search takes the maximum number of iterations, on average adding one iteration to the search. Because the comparison loop is performed only $\lfloor \log_2(n) + 1 \rfloor$ times in the worst case, the slight increase in efficiency per iteration does not compensate for the extra iteration for all but very large $n$.[c][18][19]
In analyzing the performance of binary search, another consideration is the time required to compare two elements. For integers and strings, the time required increases linearly as the encoding length (usually the number of bits) of the elements increases. For example, comparing a pair of 64-bit unsigned integers would require comparing up to double the bits of comparing a pair of 32-bit unsigned integers. The worst case is achieved when the integers are equal. This can be significant when the encoding lengths of the elements are large, such as with large integer types or long strings, which makes comparing elements expensive. Furthermore, comparing floating-point values (the most common digital representation of real numbers) is often more expensive than comparing integers or short strings.
On most computer architectures, the processor has a hardware cache separate from RAM. Since they are located within the processor itself, caches are much faster to access but usually store much less data than RAM. Therefore, most processors store memory locations that have been accessed recently, along with memory locations close to them. For example, when an array element is accessed, the element itself may be stored along with the elements that are stored close to it in RAM, making it faster to sequentially access array elements that are close in index to each other (locality of reference). On a sorted array, binary search can jump to distant memory locations if the array is large, unlike algorithms (such as linear search and linear probing in hash tables) which access elements in sequence. This adds slightly to the running time of binary search for large arrays on most systems.[20]
Sorted arrays with binary search are a very inefficient solution when insertion and deletion operations are interleaved with retrieval, taking $O(n)$ time for each such operation. In addition, sorted arrays can complicate memory use, especially when elements are often inserted into the array.[21] There are other data structures that support much more efficient insertion and deletion. Binary search can be used to perform exact matching and set membership (determining whether a target value is in a collection of values). There are data structures that support faster exact matching and set membership. However, unlike many other searching schemes, binary search can be used for efficient approximate matching, usually performing such matches in $O(\log n)$ time regardless of the type or structure of the values themselves.[22] In addition, there are some operations, like finding the smallest and largest element, that can be performed efficiently on a sorted array.[11]
Linear search is a simple search algorithm that checks every record until it finds the target value. Linear search can be done on a linked list, which allows for faster insertion and deletion than an array. Binary search is faster than linear search for sorted arrays except if the array is short, although the array needs to be sorted beforehand.[d][24] All sorting algorithms based on comparing elements, such as quicksort and merge sort, require at least $O(n \log n)$ comparisons in the worst case.[25] Unlike linear search, binary search can be used for efficient approximate matching. There are operations such as finding the smallest and largest element that can be done efficiently on a sorted array but not on an unsorted array.[26]
A binary search tree is a binary tree data structure that works based on the principle of binary search. The records of the tree are arranged in sorted order, and each record in the tree can be searched using an algorithm similar to binary search, taking on average logarithmic time. Insertion and deletion also require on average logarithmic time in binary search trees. This can be faster than the linear-time insertion and deletion of sorted arrays, and binary trees retain the ability to perform all the operations possible on a sorted array, including range and approximate queries.[22][27]
However, binary search is usually more efficient for searching, as binary search trees will most likely be imperfectly balanced, resulting in slightly worse performance than binary search. This even applies to balanced binary search trees, binary search trees that balance their own nodes, because they rarely produce the tree with the fewest possible levels. Except for balanced binary search trees, the tree may be severely imbalanced, with few internal nodes with two children, resulting in the average and worst-case search time approaching $n$ comparisons.[e] Binary search trees take more space than sorted arrays.[29]
Binary search trees lend themselves to fast searching in external memory stored in hard disks, as binary search trees can be efficiently structured in filesystems. The B-tree generalizes this method of tree organization. B-trees are frequently used to organize long-term storage such as databases and filesystems.[30][31]
For implementing associative arrays, hash tables, a data structure that maps keys to records using a hash function, are generally faster than binary search on a sorted array of records.[32] Most hash table implementations require only amortized constant time on average.[f][34] However, hashing is not useful for approximate matches, such as computing the next-smallest, next-largest, and nearest key, as the only information given on a failed search is that the target is not present in any record.[35] Binary search is ideal for such matches, performing them in logarithmic time. Some operations, like finding the smallest and largest element, can be done efficiently on sorted arrays but not on hash tables.[22]
A related problem to search is set membership. Any algorithm that does lookup, like binary search, can also be used for set membership. There are other algorithms that are more specifically suited for set membership. A bit array is the simplest, useful when the range of keys is limited. It compactly stores a collection of bits, with each bit representing a single key within the range of keys. Bit arrays are very fast, requiring only $O(1)$ time.[36] The Judy1 type of Judy array handles 64-bit keys efficiently.[37]
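A minimal sketch of bit-array set membership, assuming keys drawn from a bounded range [0, range); the struct and its member names are illustrative:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Sketch: one bit per possible key; insert and lookup are O(1).
struct BitSet {
    std::vector<std::uint64_t> words;
    explicit BitSet(std::size_t range) : words((range + 63) / 64, 0) {}
    void insert(std::size_t key) {
        words[key / 64] |= std::uint64_t{1} << (key % 64);
    }
    bool contains(std::size_t key) const {
        return (words[key / 64] >> (key % 64)) & 1u;
    }
};
```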
For approximate results, Bloom filters, another probabilistic data structure based on hashing, store a set of keys by encoding the keys using a bit array and multiple hash functions. Bloom filters are much more space-efficient than bit arrays in most cases and not much slower: with $k$ hash functions, membership queries require only $O(k)$ time. However, Bloom filters suffer from false positives.[g][h][39]
There exist data structures that may improve on binary search in some cases for both searching and other operations available for sorted arrays. For example, searches, approximate matches, and the operations available to sorted arrays can be performed more efficiently than binary search on specialized data structures such as van Emde Boas trees, fusion trees, tries, and bit arrays. These specialized data structures are usually only faster because they take advantage of the properties of keys with a certain attribute (usually keys that are small integers), and thus will be time- or space-consuming for keys that lack that attribute.[22] As long as the keys can be ordered, these operations can always be done at least efficiently on a sorted array regardless of the keys. Some structures, such as Judy arrays, use a combination of approaches to mitigate this while retaining efficiency and the ability to perform approximate matching.[37]
Uniform binary search stores, instead of the lower and upper bounds, the difference in the index of the middle element from the current iteration to the next iteration. A lookup table containing the differences is computed beforehand. For example, if the array to be searched is [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], the middle element ($m$) would be 6. In this case, the middle element of the left subarray ([1, 2, 3, 4, 5]) is 3 and the middle element of the right subarray ([7, 8, 9, 10, 11]) is 9. Uniform binary search would store the value of 3, as both indices differ from 6 by this same amount.[40] To reduce the search space, the algorithm either adds or subtracts this change from the index of the middle element. Uniform binary search may be faster on systems where it is inefficient to calculate the midpoint, such as on decimal computers.[41]
Exponential search extends binary search to unbounded lists. It starts by finding the first element with an index that is both a power of two and greater than the target value. Afterwards, it sets that index as the upper bound, and switches to binary search. A search takes $\lfloor \log_2 x + 1 \rfloor$ iterations before binary search is started and at most $\lfloor \log_2 x \rfloor$ iterations of the binary search, where $x$ is the position of the target value. Exponential search works on bounded lists, but becomes an improvement over binary search only if the target value lies near the beginning of the array.[42]
Instead of calculating the midpoint, interpolation search estimates the position of the target value, taking into account the lowest and highest elements in the array as well as the length of the array. It works on the basis that the midpoint is not the best guess in many cases. For example, if the target value is close to the highest element in the array, it is likely to be located near the end of the array.[43]
A common interpolation function is linear interpolation. If $A$ is the array, $L, R$ are the lower and upper bounds respectively, and $T$ is the target, then the target is estimated to be about $(T - A_L)/(A_R - A_L)$ of the way between $L$ and $R$. When linear interpolation is used, and the distribution of the array elements is uniform or near uniform, interpolation search makes $O(\log \log n)$ comparisons.[43][44][45]
In practice, interpolation search is slower than binary search for small arrays, as interpolation search requires extra computation. Its time complexity grows more slowly than binary search, but this only compensates for the extra computation for large arrays.[43]
Fractional cascading is a technique that speeds up binary searches for the same element in multiple sorted arrays. Searching each array separately requires $O(k \log n)$ time, where $k$ is the number of arrays. Fractional cascading reduces this to $O(k + \log n)$ by storing specific information in each array about each element and its position in the other arrays.[46][47]
Fractional cascading was originally developed to efficiently solve various computational geometry problems. Fractional cascading has been applied elsewhere, such as in data mining and Internet Protocol routing.[46]
Binary search has been generalized to work on certain types of graphs, where the target value is stored in a vertex instead of an array element. Binary search trees are one such generalization: when a vertex (node) in the tree is queried, the algorithm either learns that the vertex is the target, or otherwise which subtree the target would be located in. However, this can be further generalized as follows: given an undirected, positively weighted graph and a target vertex, the algorithm learns upon querying a vertex that it is equal to the target, or it is given an incident edge that is on the shortest path from the queried vertex to the target. The standard binary search algorithm is simply the case where the graph is a path. Similarly, binary search trees are the case where the edges to the left or right subtrees are given when the queried vertex is unequal to the target. For all undirected, positively weighted graphs, there is an algorithm that finds the target vertex in $O(\log n)$ queries in the worst case.[48]
Noisy binary search algorithms solve the case where the algorithm cannot reliably compare elements of the array. For each pair of elements, there is a certain probability that the algorithm makes the wrong comparison. Noisy binary search can find the correct position of the target with a given probability that controls the reliability of the yielded position. Every noisy binary search procedure must make at least $(1 - \tau)\frac{\log_2(n)}{H(p)} - \frac{10}{H(p)}$ comparisons on average, where $H(p) = -p \log_2(p) - (1 - p)\log_2(1 - p)$ is the binary entropy function and $\tau$ is the probability that the procedure yields the wrong position.[49][50][51] The noisy binary search problem can be considered as a case of the Rényi-Ulam game,[52] a variant of Twenty Questions where the answers may be wrong.[53]
Classical computers are bounded to the worst case of exactly $\lfloor \log_2 n + 1 \rfloor$ iterations when performing binary search. Quantum algorithms for binary search are still bounded to a proportion of $\log_2 n$ queries (representing iterations of the classical procedure), but the constant factor is less than one, providing for a lower time complexity on quantum computers. Any exact quantum binary search procedure, that is, a procedure that always yields the correct result, requires at least $\frac{1}{\pi}(\ln n - 1) \approx 0.22 \log_2 n$ queries in the worst case, where $\ln$ is the natural logarithm.[54] There is an exact quantum binary search procedure that runs in $4 \log_{605} n \approx 0.433 \log_2 n$ queries in the worst case.[55] In comparison, Grover's algorithm is the optimal quantum algorithm for searching an unordered list of elements, and it requires $O(\sqrt{n})$ queries.[56]
The idea of sorting a list of items to allow for faster searching dates back to antiquity. The earliest known example was the Inakibit-Anu tablet from Babylon dating back to c. 200 BCE. The tablet contained about 500 sexagesimal numbers and their reciprocals sorted in lexicographical order, which made searching for a specific entry easier. In addition, several lists of names that were sorted by their first letter were discovered on the Aegean Islands. Catholicon, a Latin dictionary finished in 1286 CE, was the first work to describe rules for sorting words into alphabetical order, as opposed to just the first few letters.[9]
In 1946, John Mauchly made the first mention of binary search as part of the Moore School Lectures, a seminal and foundational college course in computing.[9] In 1957, William Wesley Peterson published the first method for interpolation search.[9][57] Every published binary search algorithm worked only for arrays whose length is one less than a power of two[i] until 1960, when Derrick Henry Lehmer published a binary search algorithm that worked on all arrays.[59] In 1962, Hermann Bottenbruch presented an ALGOL 60 implementation of binary search that placed the comparison for equality at the end, increasing the average number of iterations by one, but reducing to one the number of comparisons per iteration.[8] The uniform binary search was developed by A. K. Chandra of Stanford University in 1971.[9] In 1986, Bernard Chazelle and Leonidas J. Guibas introduced fractional cascading as a method to solve numerous search problems in computational geometry.[46][60][61]
Although the basic idea of binary search is comparatively straightforward, the details can be surprisingly tricky.
When Jon Bentley assigned binary search as a problem in a course for professional programmers, he found that ninety percent failed to provide a correct solution after several hours of working on it, mainly because the incorrect implementations failed to run or returned a wrong answer in rare edge cases.[62] A study published in 1988 showed that accurate code for it was found in only five out of twenty textbooks.[63] Furthermore, Bentley's own implementation of binary search, published in his 1986 book Programming Pearls, contained an overflow error that remained undetected for over twenty years. The Java programming language library implementation of binary search had the same overflow bug for more than nine years.[64]
In a practical implementation, the variables used to represent the indices will often be of fixed size (integers), and this can result in an arithmetic overflow for very large arrays. If the midpoint of the span is calculated as $\frac{L+R}{2}$, then the value of $L + R$ may exceed the range of integers of the data type used to store the midpoint, even if $L$ and $R$ are within the range. If $L$ and $R$ are nonnegative, this can be avoided by calculating the midpoint as $L + \frac{R-L}{2}$.[65]
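A two-line C++ illustration of the difference, assuming nonnegative indices with L <= R:

```cpp
// L + R may overflow a fixed-size integer even when both are in range.
int mid_unsafe(int L, int R) { return (L + R) / 2; }
// Same result for nonnegative L <= R, but L + (R - L) cannot overflow.
int mid_safe(int L, int R)   { return L + (R - L) / 2; }
```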
An infinite loop may occur if the exit conditions for the loop are not defined correctly. Once $L$ exceeds $R$, the search has failed and must convey the failure of the search. In addition, the loop must be exited when the target element is found, or, in the case of an implementation where this check is moved to the end, checks for whether the search was successful or failed must be in place. Bentley found that most of the programmers who incorrectly implemented binary search made an error in defining the exit conditions.[8][66]
Many languages' standard libraries include binary search routines; familiar examples include C's bsearch, C++'s std::binary_search and std::lower_bound, Java's Arrays.binarySearch, and Python's bisect module.
This article was submitted to WikiJournal of Science for external academic peer review in 2018 (reviewer reports). The updated content was reintegrated into the Wikipedia page under a CC-BY-SA-3.0 license (2019). The version of record as reviewed is: Anthony Lin; et al. (2 July 2019). "Binary search algorithm" (PDF). WikiJournal of Science. 2(1): 5. doi:10.15347/WJS/2019.005. ISSN 2470-6345. Wikidata Q81434400. | https://en.wikipedia.org/wiki/Binary_search_algorithm
Interpolation search is an algorithm for searching for a key in an array that has been ordered by numerical values assigned to the keys (key values). It was first described by W. W. Peterson in 1957.[1] Interpolation search resembles the method by which people search a telephone directory for a name (the key value by which the book's entries are ordered): in each step the algorithm calculates where in the remaining search space the sought item might be, based on the key values at the bounds of the search space and the value of the sought key, usually via a linear interpolation. The key value actually found at this estimated position is then compared to the key value being sought. If it is not equal, then depending on the comparison, the remaining search space is reduced to the part before or after the estimated position. This method will only work if calculations on the size of differences between key values are sensible.
By comparison, binary search always chooses the middle of the remaining search space, discarding one half or the other, depending on the comparison between the key found at the estimated position and the key sought; it does not require numerical values for the keys, just a total order on them. The remaining search space is reduced to the part before or after the estimated position. Linear search uses equality only, as it compares elements one by one from the start, ignoring any sorting.
On average the interpolation search makes about $\log(\log(n))$ comparisons (if the elements are uniformly distributed), where $n$ is the number of elements to be searched. In the worst case (for instance where the numerical values of the keys increase exponentially) it can make up to $O(n)$ comparisons.
In interpolation-sequential search, interpolation is used to find an item near the one being searched for, then linear search is used to find the exact item.
Using big-O notation, the performance of the interpolation algorithm on a data set of size $n$ is $O(n)$; however, under the assumption of a uniform distribution of the data on the linear scale used for interpolation, the performance can be shown to be $O(\log \log n)$.[3][4][5]
Dynamic interpolation search extends the $o(\log \log n)$ bound to other distributions and also supports $O(\log n)$ insertion and deletion.[6][7]
Practical performance of interpolation search depends on whether the reduced number of probes is outweighed by the more complicated calculations needed for each probe. It can be useful for locating a record in a large sorted file on disk, where each probe involves a disk seek and is much slower than the interpolation arithmetic.
Index structures like B-trees also reduce the number of disk accesses, and are more often used to index on-disk data in part because they can index many types of data and can be updated online. Still, interpolation search may be useful when one is forced to search certain sorted but unindexed on-disk datasets.
When sort keys for a dataset are uniformly distributed numbers, linear interpolation is straightforward to implement and will find an index very near the sought value.
On the other hand, for a phone book sorted by name, the straightforward approach to interpolation search does not apply. The same high-level principles can still apply, though: one can estimate a name's position in the phone book using the relative frequencies of letters in names and use that as a probe location.
Some interpolation search implementations may not work as expected when a run of equal key values exists. The simplest implementation of interpolation search won't necessarily select the first (or last) element of such a run.
The conversion of names in a telephone book to some sort of number clearly will not provide numbers having a uniform distribution (except via immense effort such as sorting the names and calling them name #1, name #2, etc.), and further, it is well known that some names are much more common than others (Smith, Jones, etc.). Similarly with dictionaries, where there are many more words starting with some letters than others. Some publishers go to the effort of preparing marginal annotations or even cutting into the side of the pages to show markers for each letter so that at a glance a segmented interpolation can be performed.
The following C++ code example is a simple implementation. At each stage it computes a probe position, then, as with binary search, moves either the upper or lower bound in to define a smaller interval containing the sought value. Unlike binary search, which guarantees a halving of the interval's size with each stage, a misled interpolation may reduce the interval by very little, giving a worst-case efficiency of O(n).
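The code listing itself did not survive extraction; the following sketch reconstructs an implementation consistent with the description here and with the remark below about moving a bound to an index adjacent to the probe:

```cpp
#include <cstddef>
#include <vector>

// Sketch: probe a position estimated by linear interpolation, then move
// low or high to an index adjacent to the probe, as described above.
int interpolation_search(const std::vector<int>& arr, int key) {
    if (arr.empty()) return -1;
    std::size_t low = 0;
    std::size_t high = arr.size() - 1;

    while (arr[low] != arr[high] && arr[low] <= key && key <= arr[high]) {
        // low + (key's fraction of the value range) * (index range);
        // the multiplication may overflow for very large key values.
        std::size_t mid = low
            + static_cast<std::size_t>(key - arr[low]) * (high - low)
              / static_cast<std::size_t>(arr[high] - arr[low]);
        if (arr[mid] < key)
            low = mid + 1;     // continue in the part after the probe
        else if (key < arr[mid])
            high = mid - 1;    // continue in the part before the probe
        else
            return static_cast<int>(mid);
    }
    return (key == arr[low]) ? static_cast<int>(low) : -1;
}
```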
Notice that having probed the list at index mid, for reasons of loop control administration, this code sets either high or low to be not mid but an adjacent index, which location is then probed during the next iteration. Since an adjacent entry's value will not be much different, the interpolation calculation is not much improved by this one-step adjustment, at the cost of an additional reference to distant memory such as disk.
Each iteration of the above code requires between five and six comparisons (the extra is due to the repetitions needed to distinguish the three states of <, >, and = via binary comparisons in the absence of a three-way comparison) plus some messy arithmetic, while the binary search algorithm can be written with one comparison per iteration and uses only trivial integer arithmetic. It would thereby search an array of a million elements with no more than twenty comparisons (involving accesses to slow memory where the array elements are stored); to beat that, the interpolation search, as written above, would be allowed no more than three iterations. | https://en.wikipedia.org/wiki/Interpolation_search
In computer science, an exponential search (also called doubling search, galloping search, or Struzik search)[1] is an algorithm, created by Jon Bentley and Andrew Chi-Chih Yao in 1976, for searching sorted, unbounded/infinite lists.[2] There are numerous ways to implement this, with the most common being to determine a range that the search key resides in and performing a binary search within that range. This takes $O(\log i)$ time, where $i$ is the position of the search key in the list, if the search key is in the list, or the position where the search key should be, if the search key is not in the list.
Exponential search can also be used to search in bounded lists. Exponential search can even out-perform more traditional searches for bounded lists, such as binary search, when the element being searched for is near the beginning of the array. This is because exponential search will run in $O(\log i)$ time, where $i$ is the index of the element being searched for in the list, whereas binary search would run in $O(\log n)$ time, where $n$ is the number of elements in the list.
Exponential search allows for searching through a sorted, unbounded list for a specified input value (the search "key"). The algorithm consists of two stages. The first stage determines a range in which the search key would reside if it were in the list. In the second stage, a binary search is performed on this range. In the first stage, assuming that the list is sorted in ascending order, the algorithm looks for the first exponent, $j$, where the value $2^j$ is greater than the search key. This value, $2^j$, becomes the upper bound for the binary search, with the previous power of 2, $2^{j-1}$, being the lower bound.[3]
In each step, the algorithm compares the search key value with the key value at the current search index. If the element at the current index is smaller than the search key, the algorithm repeats, skipping to the next search index by doubling it, calculating the next power of 2.[3] If the element at the current index is larger than the search key, the algorithm now knows that the search key, if it is contained in the list at all, is located in the interval formed by the previous search index, $2^{j-1}$, and the current search index, $2^j$. The binary search is then performed with the result of either a failure, if the search key is not in the list, or the position of the search key in the list.
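A C++ sketch of the two-stage procedure on a bounded array (the function name is illustrative; on a truly unbounded list the doubling loop would have no n bound):

```cpp
#include <algorithm>
#include <vector>

// Sketch: stage one doubles the probe index until it passes the key;
// stage two binary-searches the bracketed range [bound/2, min(bound, n-1)].
int exponential_search(const std::vector<int>& A, int T) {
    int n = static_cast<int>(A.size());
    if (n == 0) return -1;
    int bound = 1;
    while (bound < n && A[bound] < T)
        bound *= 2;                    // advance to the next power of two
    int L = bound / 2;                 // the previous power of two
    int R = std::min(bound, n - 1);
    while (L <= R) {                   // ordinary binary search
        int m = L + (R - L) / 2;
        if (A[m] < T)      L = m + 1;
        else if (A[m] > T) R = m - 1;
        else               return m;
    }
    return -1;                         // not found
}
```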
The first stage of the algorithm takes $O(\log i)$ time, where $i$ is the index where the search key would be in the list. This is because, in determining the upper bound for the binary search, the while loop is executed exactly $\lceil \log(i) \rceil$ times. Since the list is sorted, after doubling the search index $\lceil \log(i) \rceil$ times, the algorithm will be at a search index that is greater than or equal to $i$, as $2^{\lceil \log(i) \rceil} \geq i$. As such, the first stage of the algorithm takes $O(\log i)$ time.
The second part of the algorithm also takes $O(\log i)$ time. As the second stage is simply a binary search, it takes $O(\log n)$, where $n$ is the size of the interval being searched. The size of this interval would be $2^j - 2^{j-1}$ where, as seen above, $j = \log i$. This means that the size of the interval being searched is $2^{\log i} - 2^{\log i - 1} = 2^{\log i - 1}$. This gives us a runtime of $\log(2^{\log i - 1}) = \log(i) - 1 = O(\log i)$.
This gives the algorithm a total runtime, calculated by summing the runtimes of the two stages, of $O(\log i) + O(\log i) = 2\,O(\log i) = O(\log i)$.
Bentley and Yao suggested several variations for exponential search.[2] These variations consist of performing a binary search, as opposed to a unary search, when determining the upper bound for the binary search in the second stage of the algorithm. This splits the first stage of the algorithm into two parts, making the algorithm a three-stage algorithm overall. The new first stage determines a value $j'$, much like before, such that $2^{j'}$ is larger than the search key and $2^{j'/2}$ is lower than the search key. Previously, $j'$ was determined in a unary fashion by calculating the next power of 2 (i.e., adding 1 to $j$). In the variation, it is proposed that $j'$ is doubled instead (e.g., jumping from $2^2$ to $2^4$ as opposed to $2^3$). The first $j'$ such that $2^{j'}$ is greater than the search key forms a much rougher upper bound than before. Once this $j'$ is found, the algorithm moves to its second stage and a binary search is performed on the interval formed by $j'/2$ and $j'$, giving the more accurate upper bound exponent $j$. From here, the third stage of the algorithm performs the binary search on the interval $2^{j-1}$ and $2^j$, as before. The performance of this variation is $\lfloor \log i \rfloor + 2\lfloor \log(\lfloor \log i \rfloor + 1) \rfloor + 1 = O(\log i)$.
Bentley and Yao generalize this variation into one where any number, $k$, of binary searches are performed during the first stage of the algorithm, giving the $k$-nested binary search variation. The asymptotic runtime does not change for the variations, running in $O(\log i)$ time, as with the original exponential search algorithm.
Also, a data structure with a tight version of the dynamic finger property can be given when the above result of the $k$-nested binary search is used on a sorted array.[4] Using this, the number of comparisons done during a search is $\log(d) + \log\log(d) + \cdots + O(\log^{*} d)$, where $d$ is the difference in rank between the last element that was accessed and the current element being accessed.
An algorithm based on exponentially increasing the search band solves global pairwise alignment in $O(ns)$, where $n$ is the length of the sequences and $s$ is the edit distance between them.[5][6] | https://en.wikipedia.org/wiki/Exponential_search
A search game is a two-person zero-sum game which takes place in a set called the search space. The searcher can choose any continuous trajectory subject to a maximal velocity constraint. It is always assumed that neither the searcher nor the hider has any knowledge about the movement of the other player until their distance apart is less than or equal to the discovery radius, and at this very moment capture occurs. The game is zero-sum with the payoff being the time spent in searching. As mathematical models, search games can be applied to areas such as hide-and-seek games that children play or representations of some tactical military situations. The area of search games was introduced in the last chapter of Rufus Isaacs' classic book "Differential Games"[1] and has been developed further by Shmuel Gal[2][3] and Steve Alpern.[3] The princess and monster game deals with a moving target.
A natural strategy to search for a stationary target in a graph (in which arcs have lengths) is to find a minimal closed curve L that covers all the arcs of the graph. (L is called a Chinese postman tour.) Then, traverse L with probability 1/2 for each direction. This strategy seems to work well if the graph is Eulerian. In general, this random Chinese postman tour is indeed an optimal search strategy if and only if the graph consists of a set of Eulerian graphs connected in a tree-like structure.[4] A misleadingly simple example of a graph not in this family consists of two nodes connected by three arcs. The random Chinese postman tour (equivalent to traversing the three arcs in a random order) is not optimal, and the optimal way to search these three arcs is complicated.[2]
In general, the reasonable framework for searching an unbounded domain, as in the case of an online algorithm, is to use a normalized cost function (called the competitive ratio in the computer science literature). The minimax trajectory for problems of these types is always a geometric sequence (or exponential function for continuous problems). This result yields an easy method to find the minimax trajectory by minimizing over a single parameter (the generator of this sequence) instead of searching over the whole trajectory space. This tool has been used for the linear search problem, i.e., finding a target on the infinite line, which has attracted much attention over several decades and has been analyzed as a search game.[5] It has also been used to find a minimax trajectory for searching a set of concurrent rays. Optimal searching in the plane is performed by using exponential spirals.[2][3][6] Searching a set of concurrent rays was later re-discovered in the computer science literature as the 'cow-path problem'.[7] | https://en.wikipedia.org/wiki/Search_games
Algorithmic art or algorithm art is art, mostly visual art, in which the design is generated by an algorithm. Algorithmic artists are sometimes called algorists. Algorithmic art is created in the form of digital paintings and sculptures, interactive installations and music compositions.[2]
Algorithmic art is not a new concept. Islamic art is a good example of the tradition of following a set of rules to create patterns. The even older practice of weaving includes elements of algorithmic art.[3]
As computers developed, so did the art created with them. Algorithmic art encourages experimentation, allowing artists to push their creativity in the digital age. Algorithmic art allows creators to devise intricate patterns and designs that would be nearly impossible to achieve by hand.[4] Creators decide what the input criteria are, but not the outcome.[5]
Algorithmic art, also known as computer-generated art, is a subset of generative art (generated by an autonomous system) and is related to systems art (influenced by systems theory). Fractal art is an example of algorithmic art.[6] Fractal art is both abstract and mesmerizing.[2]
For an image of reasonable size, even the simplest algorithms require too much calculation for manual execution to be practical, and they are thus executed on either a single computer or on a cluster of computers. The final output is typically displayed on a computer monitor, printed with a raster-type printer, or drawn using a plotter. Variability can be introduced by using pseudo-random numbers. There is no consensus as to whether the product of an algorithm that operates on an existing image (or on any input other than pseudo-random numbers) can still be considered computer-generated art, as opposed to computer-assisted art.[6]
Roman Verostko argues that Islamic geometric patterns are constructed using algorithms, as are Italian Renaissance paintings which make use of mathematical techniques, in particular linear perspective and proportion.[7]
Some of the earliest known examples of computer-generated algorithmic art were created by Georg Nees, Frieder Nake, A. Michael Noll, Manfred Mohr and Vera Molnár in the early 1960s. These artworks were executed by a plotter controlled by a computer, and were therefore computer-generated art but not digital art. The act of creation lay in writing the program, which specified the sequence of actions to be performed by the plotter. Sonia Landy Sheridan established Generative Systems as a program at the School of the Art Institute of Chicago in 1970 in response to social change brought about in part by the computer-robot communications revolution.[8] Her early work with copier and telematic art focused on the differences between the human hand and the algorithm.[9]
Aside from the ongoing work of Roman Verostko and his fellow algorists, the next known examples are fractal artworks created in the mid to late 1980s. These are important here because they use a different means of execution. Whereas the earliest algorithmic art was "drawn" by a plotter, fractal art simply creates an image in computer memory; it is therefore digital art. The native form of a fractal artwork is an image stored on a computer; this is also true of very nearly all equation art and of most recent algorithmic art in general. However, in a stricter sense "fractal art" is not considered algorithmic art, because the algorithm is not devised by the artist.[6]
In light of such ongoing developments, pioneer algorithmic artist Ernest Edmonds has documented the continuing prophetic role of art in human affairs by tracing the early 1960s association between art and the computer up to a present time in which the algorithm is now widely recognized as a key concept for society as a whole.[10]
While art has strong emotional and psychological ties, it also depends heavily on rational approaches. Artists have to learn how to use various tools, theories and techniques to be able to create impressive artwork. Thus, throughout history, many art techniques were introduced to create various visual effects. For example, Georges-Pierre Seurat invented pointillism, a painting technique that involves placing dots of complementary colors adjacent to each other.[11] Cubism and Color Theory also helped revolutionize visual arts. Cubism involved taking various reference points for the object and creating a 2-dimensional rendering. Color Theory, stating that all colors are a combination of the three primary colors (red, green and blue), also helped facilitate the use of colors in visual arts and in the creation of distinct colorful effects.[11] In other words, humans have always found algorithmic ways and discovered patterns to create art. Such tools allowed humans to create more visually appealing artworks efficiently. In such ways, art adapted to become more methodological.
Another important aspect that allowed art to evolve into its current form is perspective. Perspective allows the artist to create a 2-dimensional projection of a 3-dimensional object. Muslim artists during the Islamic Golden Age employed linear perspective in most of their designs. The notion of perspective was rediscovered by Italian artists during the Renaissance. The Golden Ratio, a famous mathematical ratio, was utilized by many Renaissance artists in their drawings.[11] Most famously, Leonardo da Vinci employed that technique in his Mona Lisa and many other paintings, such as Salvator Mundi.[12] This is a form of using algorithms in art. By examining the works of artists in the past, from the Renaissance and Islamic Golden Age, a pattern of mathematical patterns, geometric principles and natural numbers emerges.
From one point of view, for a work of art to be considered algorithmic art, its creation must include a process based on an algorithm devised by the artist. An artist may also select parameters and interact as the composition is generated. Here, an algorithm is simply a detailed recipe for the design and possibly execution of an artwork, which may include computer code, functions, expressions, or other input which ultimately determines the form the art will take.[7] This input may be mathematical, computational, or generative in nature. Inasmuch as algorithms tend to be deterministic, meaning that their repeated execution would always result in the production of identical artworks, some external factor is usually introduced. This can either be a random number generator of some sort, or an external body of data (which can range from recorded heartbeats to frames of a movie). Some artists also work with organically based gestural input which is then modified by an algorithm. By this definition, fractals made by a fractal program are not art, as humans are not involved. However, defined differently, algorithmic art can be seen to include fractal art, as well as other varieties such as those using genetic algorithms. The artist Kerry Mitchell stated in his 1999 Fractal Art Manifesto:[13][6][14]
Fractal Art is not... Computer(ized) Art, in the sense that the computer does all the work. The work is executed on a computer, but only at the direction of the artist. Turn a computer on and leave it alone for an hour. When you come back, no art will have been generated.[13]
"Algorist" is a term used fordigital artistswho create algorithmic art.[7]Pioneering algorists includeVera Molnár,Dóra MaurerandGizella Rákóczy.[15]
Algorists formally began correspondence and establishing their identity as artists following a panel titled "Art and Algorithms" at SIGGRAPH in 1995. The co-founders were Jean-Pierre Hébert and Roman Verostko. Hébert is credited with coining the term and its definition, which is in the form of his own algorithm:[7]
Artists can write code that creates complex and dynamic visual compositions.[2]
Cellular automata can be used to generate artistic patterns with an appearance of randomness, or to modify images such as photographs by applying a transformation such as the stepping-stone rule (to give an impressionist style) repeatedly until the desired artistic effect is achieved.[16] Their use has also been explored in music.[17]
Fractal art consists of varieties of computer-generated fractals with colouring chosen to give an attractive effect.[18] Especially in the Western world, it is not drawn or painted by hand. It is usually created indirectly with the assistance of fractal-generating software, iterating through three phases: setting parameters of appropriate fractal software; executing the possibly lengthy calculation; and evaluating the product. In some cases, other graphics programs are used to further modify the images produced. This is called post-processing. Non-fractal imagery may also be integrated into the artwork.[19]
Genetic or evolutionary art makes use of genetic algorithms to develop images iteratively, selecting at each "generation" according to a rule defined by the artist.[20][21]
Algorithmic art is not only produced by computers. Wendy Chun explains:[22]
Software is unique in its status as metaphor for metaphor itself. As a universal imitator/machine, it encapsulates a logic of general substitutability; a logic of ordering and creative, animating disordering. Joseph Weizenbaum has argued that computers have become metaphors for "effective procedures", that is, for anything that can be solved in a prescribed number of steps, such as gene expression and clerical work.[22]
The American artist Jack Ox has used algorithms to produce paintings that are visualizations of music without using a computer. Two examples are visual performances of extant scores, such as Anton Bruckner's Eighth Symphony[23][24] and Kurt Schwitters' Ursonate.[25][26] Later, she and her collaborator, Dave Britton, created the 21st Century Virtual Color Organ that does use computer coding and algorithms.[27]
Since 1996 there have been ambigram generators that auto-generate ambigrams.[28][29][30]
In modern times, humans have witnessed a drastic change in their lives. One such glaring difference is the need for a more comfortable and aesthetic environment. People have started to show particular interest in decorating their environment with paintings. While it is not uncommon to see renowned, famous oil paintings in certain environments, it is still unusual to find such paintings in an ordinary family house. Oil paintings can be costly, even if they are copies. Thus, many people prefer simulating such paintings.[31] With the emergence of artificial intelligence, such simulations have become possible. Artificial intelligence image processors utilize an algorithm and machine learning to produce the images for the user.[31]
Recent studies and experiments have shown that artificial intelligence, using algorithms and machine learning, is able to replicate oil paintings. The images look relatively accurate and identical to the originals.[31] Such improvements in algorithmic art and artificial intelligence can make it possible for many people to own renowned paintings at little to no cost. This could prove to be revolutionary for various environments, especially with the rapid rise in demand for improved aesthetics. Using the algorithm, the simulator can create images with an accuracy of 48.13% to 64.21%, which would be imperceptible to most humans. However, the simulations are not perfect and are bound to error. They can sometimes give inaccurate, extraneous images. Other times, they can completely malfunction and produce a physically impossible image. However, with the emergence of newer technologies and finer algorithms, researchers are confident that simulations could witness a massive improvement.[31] Other contemporary outlooks on art have focused heavily on making art more interactive. Based on the environment or audience feedback, the algorithm is fine-tuned to create a more appropriate and appealing output. However, such approaches have been criticized since the artist is not responsible for every detail in the painting. Merely, the artist facilitates the interaction between the algorithm and its environment and adjusts it based on the desired outcome.[32] | https://en.wikipedia.org/wiki/Algorithmic_art
The Hindu–Arabic numeral system (also known as the Indo-Arabic numeral system,[1] Hindu numeral system, and Arabic numeral system)[2][note 1] is a positional base-ten numeral system for representing integers; its extension to non-integers is the decimal numeral system, which is presently the most common numeral system.
The system was invented between the 1st and 4th centuries by Indian mathematicians. By the 9th century, the system was adopted by Arabic mathematicians who extended it to include fractions. It became more widely known through the writings in Arabic of the Persian mathematician Al-Khwārizmī[3] (On the Calculation with Hindu Numerals, c. 825) and the Arab mathematician Al-Kindi (On the Use of the Hindu Numerals, c. 830). The system had spread to medieval Europe by the High Middle Ages, notably following Fibonacci's 13th-century Liber Abaci; until the evolution of the printing press in the 15th century, use of the system in Europe was mainly confined to Northern Italy.[4]
It is based upon ten glyphs representing the numbers from zero to nine, and allows representing any natural number by a unique sequence of these glyphs. The symbols (glyphs) used to represent the system are in principle independent of the system itself. The glyphs in actual use are descended from Brahmi numerals and have split into various typographical variants since the Middle Ages.
These symbol sets can be divided into three main families: Western Arabic numerals used in the Greater Maghreb and in Europe; Eastern Arabic numerals used in the Middle East; and the Indian numerals in various scripts used in the Indian subcontinent.
Sometime around 600 CE, a change began in the writing of dates in the Brāhmī-derived scripts of India and Southeast Asia, transforming from an additive system with separate numerals for numbers of different magnitudes to a positional place-value system with a single set of glyphs for 1–9 and a dot for zero, gradually displacing additive expressions of numerals over the following several centuries.[5]
When this system was adopted and extended by medieval Arabs and Persians, they called it al-ḥisāb al-hindī ("Indian arithmetic"). These numerals were gradually adopted in Europe starting around the 10th century, probably transmitted by Arab merchants;[6] medieval and Renaissance European mathematicians generally recognized them as Indian in origin.[7] However, a few influential sources credited them to the Arabs, and they eventually came to be generally known as "Arabic numerals" in Europe.[8] According to some sources, this number system may have originated in Chinese Shang numerals (1200 BCE), which was also a decimal positional numeral system.[9]
The Hindu–Arabic system is designed for positional notation in a decimal system. In a more developed form, positional notation also uses a decimal marker (at first a mark over the ones digit but now more commonly a decimal point or a decimal comma which separates the ones place from the tenths place), and also a symbol for "these digits recur ad infinitum". In modern usage, this latter symbol is usually a vinculum (a horizontal line placed over the repeating digits). In this more developed form, the numeral system can symbolize any rational number using only 13 symbols (the ten digits, decimal marker, vinculum, and a prepended minus sign to indicate a negative number).
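As a sketch of why these 13 symbols suffice for any rational number, the following Python long division finds the block of digits that the vinculum would cover; the function name and the use of parentheses in place of an overline are our own conventions.

```python
# Long division with remainder tracking: once a remainder repeats, the
# digits produced since its first appearance form the repeating block.

def decimal_expansion(p, q):
    sign = "-" if p * q < 0 else ""
    p, q = abs(p), abs(q)
    integer, remainder = divmod(p, q)
    digits, seen = [], {}
    while remainder and remainder not in seen:
        seen[remainder] = len(digits)      # where this remainder first appeared
        remainder *= 10
        digits.append(str(remainder // q))
        remainder %= q
    if not remainder:                      # terminating expansion
        return sign + f"{integer}." + ("".join(digits) or "0")
    start = seen[remainder]                # repeating block begins here
    head, cycle = "".join(digits[:start]), "".join(digits[start:])
    return sign + f"{integer}.{head}({cycle})"  # (...) stands in for the vinculum

print(decimal_expansion(1, 3))    # 0.(3)
print(decimal_expansion(1, 7))    # 0.(142857)
print(decimal_expansion(-7, 8))   # -0.875
```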
Although generally found in text written with the Arabic abjad ("alphabet"), which is written right-to-left, numbers written with these numerals place the most-significant digit to the left, so they read from left to right (though digits are not always said in order from most to least significant[10]). The requisite changes in reading direction are found in text that mixes left-to-right writing systems with right-to-left systems.
Various symbol sets are used to represent numbers in the Hindu–Arabic numeral system, most of which developed from the Brahmi numerals.
The symbols used to represent the system have split into various typographical variants since the Middle Ages, arranged in the three main groups listed above: Western Arabic, Eastern Arabic, and Indian.
The Brahmi numerals at the basis of the system predate the Common Era. They replaced the earlier Kharosthi numerals used since the 4th century BCE. Brahmi and Kharosthi numerals were used alongside one another in the Maurya Empire period, both appearing on the 3rd century BCE edicts of Ashoka.[11]
Buddhist inscriptions from around 300 BCE use the symbols that became 1, 4, and 6. One century later, their use of the symbols that became 2, 4, 6, 7, and 9 was recorded. These Brahmi numerals are the ancestors of the Hindu–Arabic glyphs 1 to 9, but they were not used as a positional system with a zero; rather, there were separate numerals for each of the tens (10, 20, 30, etc.).
The actual numeral system, including positional notation and use of zero, is in principle independent of the glyphs used, and significantly younger than the Brahmi numerals.
The place-value system is used in the Bakhshali manuscript, the earliest leaves being radiocarbon dated to the period 224–383 CE.[12] The development of the positional decimal system has its origins in Indian mathematics during the Gupta period. Around 500, the astronomer Aryabhata uses the word kha ("emptiness") to mark "zero" in tabular arrangements of digits. The 7th-century Brahmasphuta Siddhanta contains a comparatively advanced understanding of the mathematical role of zero. The Sanskrit translation of the lost 5th-century Prakrit Jaina cosmological text Lokavibhaga may preserve an early instance of the positional use of zero.[13]
The first dated and undisputed inscription showing the use of a symbol for zero appears on a stone inscription found at the Chaturbhuja Temple at Gwalior in India, dated 876 CE.[14]
These Indian developments were taken up in Islamic mathematics in the 8th century, as recorded in al-Qifti's Chronology of the scholars (early 13th century).[15]
In 10th-century Islamic mathematics, the system was extended to include fractions, as recorded in a treatise by the Abbasid Caliphate mathematician Abu'l-Hasan al-Uqlidisi, who was the first to describe positional decimal fractions.[16] According to J. L. Berggren, the Muslims were the first to represent numbers as we do, since they were the ones who initially extended this system of numeration to represent parts of the unit by decimal fractions, something that the Hindus did not accomplish. Thus, the name "Hindu–Arabic" is quite appropriate.[17][18]
The numeral system came to be known to both the Persian mathematician Khwarizmi, who wrote a book, On the Calculation with Hindu Numerals, in about 825 CE, and the Arab mathematician Al-Kindi, who wrote a book, On the Use of the Hindu Numerals (كتاب في استعمال العداد الهندي [kitāb fī isti'māl al-'adād al-hindī]), around 830 CE. The Persian scientist Kushyar Gilani's Kitab fi usul hisab al-hind (Principles of Hindu Reckoning) is one of the oldest surviving manuscripts using the Hindu numerals.[19] These books are principally responsible for the diffusion of the Hindu system of numeration throughout the Islamic world and ultimately also to Europe.
In Christian Europe, the first mention and representation of Hindu–Arabic numerals (from one to nine, without zero) is in the Codex Vigilanus (aka Albeldensis), an illuminated compilation of various historical documents from the Visigothic period in Spain, written in the year 976 CE by three monks of the Riojan monastery of San Martín de Albelda. Between 967 and 969 CE, Gerbert of Aurillac discovered and studied Arab science in the Catalan abbeys. Later he obtained from these places the book De multiplicatione et divisione (On multiplication and division). After becoming Pope Sylvester II in the year 999 CE, he introduced a new model of abacus, the so-called Abacus of Gerbert, by adopting tokens representing Hindu–Arabic numerals, from one to nine.
Leonardo Fibonacci brought this system to Europe. His book Liber Abaci introduced Modus Indorum (the method of the Indians), today known as the Hindu–Arabic numeral system or base-10 positional notation, the use of zero, and the decimal place system to the Latin world. The numeral system came to be called "Arabic" by the Europeans. It was used in European mathematics from the 12th century, and entered common use from the 15th century to replace Roman numerals.[20][21]
The familiar shapes of the Western Arabic glyphs as now used with the Latin alphabet (0, 1, 2, 3, 4, 5, 6, 7, 8, 9) are the product of the late 15th to early 16th century, when they entered early typesetting. Muslim scientists used the Babylonian numeral system, and merchants used the Abjad numerals, a system similar to the Greek numeral system and the Hebrew numeral system. Similarly, Fibonacci's introduction of the system to Europe was restricted to learned circles. The credit for first establishing widespread understanding and usage of the decimal positional notation among the general population goes to Adam Ries, an author of the German Renaissance, whose 1522 Rechenung auff der linihen und federn (Calculating on the Lines and with a Quill) was targeted at the apprentices of businessmen and craftsmen.
The '〇' is used to write zero in Suzhou numerals, which is the only surviving variation of the rod numeral system. The Mathematical Treatise in Nine Sections, written by Qin Jiushao in 1247, is the oldest surviving Chinese mathematical text to use the character '〇' for zero.[22]
The origin of using the character '〇' to represent zero is unknown. Gautama Siddha introduced Hindu numerals with zero in 718 CE, but Chinese mathematicians did not find them useful, as they already had the decimal positional counting rods.[23][24] Some historians suggest that the use of '〇' for zero was influenced by Indian numerals imported by Gautama,[24] but Gautama's numeral system represented zero with a dot rather than a hollow circle, similar to the Bakhshali manuscript.[25]
An alternative hypothesis proposes that the use of '〇' to represent zero arose from a modification of the Chinese text space filler "□", making its resemblance to Indian numeral systems purely coincidental. Others think that the Indians acquired the symbol '〇' from China, because it resembles a Confucian philosophical symbol for "nothing".[23]
The Chinese and Japanese finally adopted the Hindu–Arabic numerals in the 19th century, abandoning counting rods.
The "Western Arabic" numerals as they were in common use in Europe since theBaroqueperiod have secondarily found worldwide use together with theLatin alphabet, and even significantly beyond the contemporaryspread of the Latin alphabet, intruding into the writing systems in regions where other variants of the Hindu–Arabic numerals had been in use, but also in conjunction withChineseandJapanesewriting (seeChinese numerals,Japanese numerals). | https://en.wikipedia.org/wiki/Hindu%E2%80%93Arabic_numeral_system |
The Hindu–Arabic numeral system is a decimal place-value numeral system that uses a zero glyph as in "205".[1]
Its glyphs are descended from the Indian Brahmi numerals. The full system emerged by the 8th to 9th centuries, and is first described outside India in Al-Khwarizmi's On the Calculation with Hindu Numerals (ca. 825), and then in Al-Kindi's four-volume work On the Use of the Indian Numerals (ca. 830).[2] Today the name Hindu–Arabic numerals is usually used.
Historians trace modern numerals in most languages to the Brahmi numerals, which were in use around the middle of the 3rd century BC.[3] The place value system, however, developed later. The Brahmi numerals have been found in inscriptions in caves and on coins in regions near Pune, Maharashtra[2] and Uttar Pradesh in India. These numerals (with slight variations) were in use up to the 4th century.[3]
During the Gupta period (early 4th century to the late 6th century), the Gupta numerals developed from the Brahmi numerals and were spread over large areas by the Gupta empire as it conquered territory.[3] Beginning around the 7th century, the Gupta numerals developed into the Nagari numerals.
During the Vedic period (1500–500 BCE), motivated by geometric construction of the fire altars and astronomy, the use of a numerical system and of basic mathematical operations developed in northern India.[4][5] Hindu cosmology required the mastery of very large numbers such as the kalpa (the lifetime of the universe) said to be 4,320,000,000 years and the "orbit of the heaven" said to be 18,712,069,200,000,000 yojanas.[6] Numbers were expressed using a "named place-value notation", using names for the powers of 10, like dasa, shatha, sahasra, ayuta, niyuta, prayuta, arbuda, nyarbuda, samudra, madhya, anta, parardha etc., the last of these being the name for a trillion (10¹²).[7] For example, the number 26,432 was expressed as "2 ayuta, 6 sahasra, 4 shatha, 3 dasa, 2."[8] In the Buddhist text Lalitavistara, the Buddha is said to have narrated a scheme of numbers up to 10⁵³.[9][10]
The form of numerals in Ashoka's inscriptions in the Brahmi script (middle of the third century BCE) involved separate signs for the numbers 1 to 9, 10 to 90, 100 and 1000. A multiple of 100 or 1000 was represented by a modification (or "enciphering"[11]) of the sign for the number using the sign for the multiplier number.[12] Such enciphered numerals directly represented the named place-value numerals used verbally. They continued to be used in inscriptions until the end of the 9th century.
In his seminal text of 499 CE, Aryabhata devised a novel positional number system, using Sanskrit consonants for small numbers and vowels for powers of 10. Using the system, numbers up to a billion could be expressed using short phrases, e.g., khyu-ghṛ representing the number 4,320,000. The system did not catch on because it produced quite unpronounceable phrases, but it might have driven home the principle of a positional number system (called dasa-gunottara, exponents of 10) to later mathematicians.[13] A more elegant katapayadi scheme was devised in later centuries representing a place-value system including zero.[14]
While the numerals in texts and inscriptions used a named place-value notation, a more efficient notation might have been employed in calculations, possibly from the 1st century CE. Computations were carried out on clay tablets covered with a thin layer of sand, giving rise to the term dhuli-karana ('sand-work') for higher computation. Karl Menninger believes that, in such computations, they must have dispensed with the enciphered numerals and written down just sequences of digits to represent the numbers. A zero would have been represented as a "missing place", such as a dot.[15] The single manuscript with worked examples available to us, the Bakhshali manuscript (of unclear date), uses a place value system with a dot to denote the zero. The dot was called the shunya-sthāna, 'empty-place'. The same symbol was also used in algebraic expressions for the unknown (as in the canonical x in modern algebra).[16]
Textual references to a place-value system are seen from the 5th century CE onward. A commentary on Patanjali's Yoga Sutras from the 5th century reads, "Just as a line in the hundreds place [means] a hundred, in the tens place ten, and one in the ones place, so one and the same woman is called mother, daughter and sister."[17]
A system called bhūta-sankhya ('object numbers' or 'concrete numbers') was employed for representing numerals in Sanskrit verses, by using a concept representing a digit to stand for the digit itself. The Jain text entitled the Lokavibhaga, dated 458 CE,[18] mentions the objectified numeral
"panchabhyah khalu shunyebhyah param dve sapta chambaram ekam trini cha rupam cha"
meaning 'five voids, then two and seven, the sky, one and three and the form', i.e., the number 13107200000.[19][20] Such objectified numbers were used extensively from the 6th century onward, especially after Varāhamihira (c. 5th century CE). Zero is explicitly represented in such numbers as "the void" (sunya) or the "heaven-space" (ambara akasha).[21] Correspondingly, the dot used in place of zero in written numerals was referred to as a sunya-bindu.[22]
In 628 CE, the astronomer-mathematician Brahmagupta wrote his text Brahma Sphuta Siddhanta, which contained the first mathematical treatment of zero. He defined zero as the result of subtracting a number from itself, postulated negative numbers and discussed their properties under arithmetical operations. His word for zero was shunya (void), the same term previously used for the empty spot in the 9-digit place-value system.[25] This provided a new perspective on the shunya-bindu as a numeral and paved the way for the eventual evolution of a zero digit. The dot continued to be used for at least 100 years afterwards, and was transmitted to Southeast Asia and Arabia. Kashmir's Sharada script has retained the dot for zero until this day.
By the end of the 7th century, decimal numbers begin to appear in inscriptions in Southeast Asia as well as in India.[22] Some scholars hold that they appeared even earlier. A 6th-century copper-plate grant at Mankani bearing the numeral 346 (corresponding to 594 CE) is often cited,[26] but its reliability is subject to dispute.[22][27] The first indisputable occurrence of 0 in an inscription occurs at Gwalior in 876 CE, containing a numeral "270" in a notation surprisingly similar to the modern numerals.[28] Throughout the 8th and 9th centuries, both the old Brahmi numerals and the new decimal numerals were used, sometimes appearing in the same inscriptions. In some documents, a transition is seen to occur around 866 CE.[22]
Before the rise of the Caliphate, the Hindu–Arabic numeral system was already moving West and was mentioned in Syria in 662 AD by the Syriac Nestorian scholar Severus Sebokht, who wrote the following:
According to Al-Qifti's History of Learned Men:[29]
The work was most likely to have been Brahmagupta's Brāhmasphuṭasiddhānta (The Opening of the Universe), which was written in 628.[29][30] Whether or not this identification is correct, since all Indian texts after Aryabhata's Aryabhatiya used the Indian number system, the Arabs certainly had, from this time on, a translation of a text written in the Indian number system.[29]
In his text The Arithmetic of Al-Uqlîdisî (Dordrecht: D. Reidel, 1978), A.S. Saidan's studies were unable to answer in full how the numerals reached the Arab world:
Al-Uqlidisi developed a notation to represent decimal fractions.[31][32] The numerals came to fame due to their use in the pivotal work of the Persian mathematician Al-Khwarizmi, whose book On the Calculation with Hindu Numerals was written about 825, and the Arab mathematician Al-Kindi, who wrote four volumes (see [2]) "On the Use of the Indian Numerals" (Ketab fi Isti'mal al-'Adad al-Hindi) about 830. They, amongst other works, contributed to the diffusion of the Indian system of numeration in the Middle East and the West.
The development of the numerals in early Europe is shown below:
In the last few centuries, the European variety of Arabic numerals spread around the world and gradually became the most commonly used numeral system.
Even in many countries whose languages have their own numeral systems, the European Arabic numerals are widely used in commerce and mathematics.
The significance of the development of the positional number system is described by the French mathematician Pierre-Simon Laplace (1749–1827), who wrote:
It is India that gave us the ingenious method of expressing all numbers by means of ten symbols, each symbol receiving a value of position, as well as an absolute value; a profound and important idea which appears so simple to us now that we ignore its true merit, but its very simplicity, the great ease which it has lent to all computations, puts our arithmetic in the first rank of useful inventions, and we shall appreciate the grandeur of this achievement when we remember that it escaped the genius of Archimedes and Apollonius, two of the greatest minds produced by antiquity.[34]
Johannes de Sacrobosco, also written Ioannes de Sacro Bosco, later called John of Holywood or John of Holybush (c. 1195 – c. 1256), was a scholar, Catholic monk, and astronomer who taught at the University of Paris.
He wrote a short introduction to the Hindu–Arabic numeral system. Judging from the number of manuscript copies that survive today, for the next 400 years it became the most widely read book on that subject.[1][2] He also wrote a short textbook which was widely read and influential in Europe during the later medieval centuries as an introduction to astronomy. In his longest book, on the computation of the date of Easter, Sacrobosco correctly described the defects of the then-used Julian calendar, and recommended a solution similar to the modern Gregorian calendar three centuries before its implementation.[1]
Very little is known about the education and biography of Sacrobosco. For one thing, his year of death has been guessed at 1236, 1244, and 1256, each of which is plausible and each lacking adequate evidence.[1]
The country in which he was born is uncertain. Robertus Anglicus wrote in 1271 that Sacrobosco was born in England.[3] That could be true, yet there is neither good supporting nor good contradicting evidence for it. Based on Anglicus writing so soon after Sacrobosco's death, a birthplace in England may deserve greater credence than later suggestions.
Among those other possibilities, several different tenuous efforts have been made to figure out his birthplace from his appellative name de Sacrobosco. Long after his death, Johannes de Sacrobosco was called, and sometimes is still called, by the name "John of Holywood" or "John of Holybush", a name which was constructed by post-hoc reverse translation of the Medieval Latin sacerboscus, "holy (sacred) wood". Sacer Boscus or Romance Sacro Bosco as such is an unknown town or region. One traditional report, that he was born in Halifax, West Yorkshire, is the speculation of a 16th-century antiquary, John Leland,[1]: 176–177 which was discredited by William Camden: Halifax[4] means "holy hair", not "holy wood".[1]: 177
Thomas Dempster identified Sacrobosco with an Augustinian canon from Holywood Abbey, Nithsdale,[a] which would be a reason for supposing him to have been born in Scotland.[1][5] The historian John Veitch claimed that he was born in Galloway and studied the classics among the monks of Whithorn and Dryburgh.[6]
Based on a suggestion by Stanihurst, Holywood, County Down also claims Sacrobosco. However, Pedersen attributes this assertion to Holywood being familiar to Stanihurst. A similar claim is made that he was born in Holywood, County Wicklow, but there is no known supporting historical document.
Pedersen mentioned that James Ware, writing in 1639, believed that the birthplace of Sacrobosco was near Dublin.[1]Stanihurst and even Pedersen were probably unaware that the seat of the Sacrobosco / Hollywood family in Ireland was in Artane, a suburb of Dublin.[7]Local historical records in Ireland seem to indicate that Johannes de Sacrobosco was a member of the Hollywood family, born in Artane Castle.[8][1]: 177–178
The story that he was educated at the University of Oxford is no better documented than the stories on his place of birth.[1]: 177
According to a seventeenth-century account, he arrived at the University of Paris on 5 June 1221, but whether as a student or as a graduate (licentiate – one already having a Master of Arts degree from another university, and thus qualified to teach) is unclear.[1]: 175–182 In due course, he began to teach the mathematical disciplines at the University of Paris.
The year of his death is uncertain, with evidence supporting the years 1234, 1236, 1244, and 1256.[1]: 186–189, 192The inscription marking his burial place in the monastery of Saint-Mathurin, Paris, described him as a "computist" – one who was an expert on calculating the date of Easter.[1]: 181
On 14 May 2021, the asteroid 14541 Sacrobosco, discovered by Czech astronomers Jana Tichá and Miloš Tichý in 1997, was named in his memory.[9]
About 1230, his best-known work, Tractatus de Sphaera / De Sphaera Mundi (Treatise on the Sphere / On the Sphere of the World), was published. In this book, Sacrobosco gives a readable account of the Ptolemaic universe. Ptolemy's (updated) Almagest had been translated into Latin in 1175 by Gerard of Cremona from the Arabic translation held in Toledo, and copies had quickly found their way to Paris. In addition, Sacrobosco was able to draw on translations of the Arabic astronomers Thabit ibn Qurra, al-Biruni, al-Urdi, and al-Fargani.[10]
The "sphere" Sacrobosco was referring to is thecelestial sphere– an imaginary backdrop of the stars in the sky – which was the meaning of the wordmundi("world") at that time,notthe planet Earth. Though principally about astronomy, in its first chapter the book also contains a clear description of theEarthas a sphere.De Sphaera Mundiwas required reading by students in all western European universities for the next four hundred years.
Sacrobosco's Algorismus, a.k.a. De Arte Numerandi, is thought to have been his first work, written c. 1225. The Hindu–Arabic methods of numerical calculation had arrived in Latin Europe during the previous fifty years but had not been disseminated on a wide scale. Sacrobosco's Algorismus was the first text to introduce Hindu–Arabic numerals and arithmetical procedures into the European university curriculum.[2][1]: 199–200
Sacrobosco may now be most famous for his criticism of the Julian calendar. In his c. 1235 book on computation of Easter's date, De Anni Ratione [On Reckoning Years], he maintained that the calendar had accumulated an error of 10 days and that some correction was needed.
The Julian calendar was instituted in the 1st century BCE. The Julian calendar year contained 365.25 days, with the 0.25 day provided for by a leap year once every fourth year. However, the more precise length of a solar year is about 365.2422 days. By the 13th century, the less accurate 365.25 days had resulted in an accumulated error of about 10 days in the date of the vernal equinox. Sacrobosco made no proposal on how to get rid of the accumulated error, but looking to the future, he proposed to leave one day out of the calendar every 288 years to prevent further error.[1]: 209–210[b] His criticism would foreshadow the introduction of the Gregorian calendar in 1582, which corrected the error observed by Sacrobosco by skipping 10 days, and dropping three of the century leap years in every 400-year period.
Positional notation, also known as place-value notation, positional numeral system, or simply place value, usually denotes the extension to any base of the Hindu–Arabic numeral system (or decimal system). More generally, a positional system is a numeral system in which the contribution of a digit to the value of a number is the value of the digit multiplied by a factor determined by the position of the digit. In early numeral systems, such as Roman numerals, a digit has only one value: I means one, X means ten and C a hundred (however, the values may be modified when combined). In modern positional systems, such as the decimal system, the position of the digit means that its value must be multiplied by some value: in 555, the three identical symbols represent five hundreds, five tens, and five units, respectively, due to their different positions in the digit string.
The Babylonian numeral system, base 60, was the first positional system to be developed, and its influence is present today in the way time and angles are counted in tallies related to 60, such as 60 minutes in an hour and 360 degrees in a circle. Today, the Hindu–Arabic numeral system (base ten) is the most commonly used system globally. However, the binary numeral system (base two) is used in almost all computers and electronic devices because it is easier to implement efficiently in electronic circuits.
Systems with negative base, complex base or negative digits have been described. Most of them do not require a minus sign for designating negative numbers.
The use of a radix point (decimal point in base ten) extends the system to include fractions and allows the representation of any real number with arbitrary accuracy. With positional notation, arithmetical computations are much simpler than with any older numeral system; this led to the rapid spread of the notation when it was introduced in western Europe.
Today, the base-10 (decimal) system, which is presumably motivated by counting with the ten fingers, is ubiquitous. Other bases have been used in the past, and some continue to be used today. For example, the Babylonian numeral system, credited as the first positional numeral system, was base-60. However, it lacked a real zero. Initially inferred only from context, later, by about 700 BC, zero came to be indicated by a "space" or a "punctuation symbol" (such as two slanted wedges) between numerals.[1] It was a placeholder rather than a true zero because it was not used alone or at the end of a number. Numbers like 2 and 120 (2×60) looked the same because the larger number lacked a final placeholder. Only context could differentiate them.
The polymath Archimedes (ca. 287–212 BC) invented a decimal positional system based on 10⁸ in his Sand Reckoner;[2] the 19th-century German mathematician Carl Gauss lamented how science might have progressed had Archimedes only made the leap to something akin to the modern decimal system.[3] Hellenistic and Roman astronomers used a base-60 system based on the Babylonian model (see Greek numerals § Zero).
Before positional notation became standard, simple additive systems (sign-value notation) such as Roman numerals or Chinese numerals were used, and accountants in the past used the abacus or stone counters to do arithmetic until the introduction of positional notation.[4]
Counting rods and most abacuses have been used to represent numbers in a positional numeral system. With counting rods or abacus to perform arithmetic operations, the writing of the starting, intermediate and final values of a calculation could easily be done with a simple additive system in each position or column. This approach required no memorization of tables (as does positional notation) and could produce practical results quickly.
The oldest extant positional notation system is either that of Chinese rod numerals, used from at least the early 8th century, or perhaps Khmer numerals, showing possible usages of positional numbers in the 7th century. Khmer numerals and other Indian numerals originate with the Brahmi numerals of about the 3rd century BC, whose symbols were, at the time, not used positionally. Medieval Indian numerals are positional, as are the derived Arabic numerals, recorded from the 10th century.
After the French Revolution (1789–1799), the new French government promoted the extension of the decimal system.[5] Some of those pro-decimal efforts—such as decimal time and the decimal calendar—were unsuccessful. Other French pro-decimal efforts—currency decimalisation and the metrication of weights and measures—spread widely out of France to almost the whole world.
Decimal fractions were first developed and used by the Chinese in the form of rod calculus in the 1st century BC, and then spread to the rest of the world.[6][7] J. Lennart Berggren notes that positional decimal fractions were first used in the Arab world by the mathematician Abu'l-Hasan al-Uqlidisi as early as the 10th century.[8] The Jewish mathematician Immanuel Bonfils used decimal fractions around 1350, but did not develop any notation to represent them.[9] The Persian mathematician Jamshīd al-Kāshī made the same discovery of decimal fractions in the 15th century.[8] Al Khwarizmi introduced fractions to Islamic countries in the early 9th century; his fraction presentation was similar to the traditional Chinese mathematical fractions from Sunzi Suanjing.[10] This form of fraction, with numerator on top and denominator at bottom without a horizontal bar, was also used in the 10th century by Abu'l-Hasan al-Uqlidisi and in the 15th century in Jamshīd al-Kāshī's work "Arithmetic Key".[10][11]
The adoption of the decimal representation of numbers less than one, a fraction, is often credited to Simon Stevin through his textbook De Thiende;[12] but both Stevin and E. J. Dijksterhuis indicate that Regiomontanus contributed to the European adoption of general decimals:[13]
In the estimation of Dijksterhuis, "after the publication of De Thiende only a small advance was required to establish the complete system of decimal positional fractions, and this step was taken promptly by a number of writers ... next to Stevin the most important figure in this development was Regiomontanus." Dijksterhuis noted that [Stevin] "gives full credit to Regiomontanus for his prior contribution, saying that the trigonometric tables of the German astronomer actually contain the whole theory of 'numbers of the tenth progress'."[13]: 19
In mathematical numeral systems the radix r is usually the number of unique digits, including zero, that a positional numeral system uses to represent numbers. In some cases, such as with a negative base, the radix is the absolute value r = |b| of the base b. For example, for the decimal system the radix (and base) is ten, because it uses the ten digits from 0 through 9. When a number "hits" 9, the next number will not be another different symbol, but a "1" followed by a "0". In binary, the radix is two, since after it hits "1", instead of "2" or another written symbol, it jumps straight to "10", followed by "11" and "100".
The highest symbol of a positional numeral system usually has the value one less than the value of the radix of that numeral system. The standard positional numeral systems differ from one another only in the base they use.
The radix is an integer that is greater than 1, since a radix of zero would not have any digits, and a radix of 1 would only have the zero digit. Negative bases are rarely used. In a system with more than |b| unique digits, numbers may have many different possible representations.
It is important that the radix be finite, from which it follows that the number of digits is quite low. Otherwise, the length of a numeral would not necessarily be logarithmic in its size.
(In certain non-standard positional numeral systems, including bijective numeration, the definition of the base or the allowed digits deviates from the above.)
In standard base-ten (decimal) positional notation, there are ten decimal digits and, for example, the number 2506 = 2 × 10³ + 5 × 10² + 0 × 10¹ + 6 × 10⁰.
In standard base-sixteen (hexadecimal), there are the sixteen hexadecimal digits (0–9 and A–F) and, for example, the number 171B₁₆ = 1 × 16³ + 7 × 16² + 1 × 16¹ + B × 16⁰ = 5915₁₀,
where B represents the number eleven as a single symbol.
In general, in base b, there are b digits d₁, d₂, …, d_b (write D for this set of digits) and the number

(a₃a₂a₁a₀)_b = a₃ × b³ + a₂ × b² + a₁ × b¹ + a₀ × b⁰

has aₖ ∈ D for every k. Note that a₃a₂a₁a₀ represents a sequence of digits, not multiplication.
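A minimal evaluator for this formula, assuming digits beyond 9 are written A, B, C, ... as in the hexadecimal example above (the function name is ours):

```python
# Evaluate a digit string in base b: shifting left one position and
# adding the next digit accumulates sum(digit * b**position).

DIGITS = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def value(numeral, base):
    total = 0
    for symbol in numeral:
        digit = DIGITS.index(symbol.upper())
        if digit >= base:
            raise ValueError(f"{symbol!r} is not a digit in base {base}")
        total = total * base + digit
    return total

print(value("2506", 10))    # 2506
print(value("171B", 16))    # 5915
print(value("1111011", 2))  # 123
```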
When describing base in mathematical notation, the letter b is generally used as a symbol for this concept, so, for a binary system, b equals 2. Another common way of expressing the base is writing it as a decimal subscript after the number that is being represented (this notation is used in this article). 1111011₂ implies that the number 1111011 is a base-2 number, equal to 123₁₀ (a decimal notation representation), 173₈ (octal) and 7B₁₆ (hexadecimal). In books and articles, when using initially the written abbreviations of number bases, the base is not subsequently printed: it is assumed that binary 1111011 is the same as 1111011₂.
The base b may also be indicated by the phrase "base-b". So binary numbers are "base-2"; octal numbers are "base-8"; decimal numbers are "base-10"; and so on.
To a given radix b the set of digits {0, 1, ..., b−2, b−1} is called the standard set of digits. Thus, binary numbers have digits {0, 1}; decimal numbers have digits {0, 1, 2, ..., 8, 9}; and so on. Therefore, the following are notational errors: 52₂, 2₂, 1A₉. (In all cases, one or more digits is not in the set of allowed digits for the given base.)
Positional numeral systems work using exponentiation of the base. A digit's value is the digit multiplied by the value of its place. Place values are the number of the base raised to the nth power, where n is the number of other digits between a given digit and the radix point. If a given digit is on the left hand side of the radix point (i.e. its value is an integer) then n is positive or zero; if the digit is on the right hand side of the radix point (i.e., its value is fractional) then n is negative.
As an example of usage, the number 465 in its respective base b (which must be at least base 7 because the highest digit in it is 6) is equal to:

4 × b² + 6 × b¹ + 5 × b⁰

If the number 465 was in base 10, then it would equal 4 × 10² + 6 × 10¹ + 5 × 10⁰ = 400 + 60 + 5 = 465. If, however, the number were in base 7, then it would equal 4 × 7² + 6 × 7¹ + 5 × 7⁰ = 196 + 42 + 5 = 243 (in decimal).
10_b = b for any base b, since 10_b = 1 × b¹ + 0 × b⁰. For example, 10₂ = 2; 10₃ = 3; 10₁₆ = 16₁₀. Note that the last "16" is indicated to be in base 10. The base makes no difference for one-digit numerals.
This concept can be demonstrated using a diagram. One object represents one unit. When the number of objects is equal to or greater than the base b, then a group of objects is created with b objects. When the number of these groups exceeds b, then a group of these groups of objects is created with b groups of b objects; and so on. Thus the same number in different bases will have different values.
The notation can be further augmented by allowing a leading minus sign. This allows the representation of negative numbers. For a given base, every representation corresponds to exactly one real number and every real number has at least one representation. The representations of rational numbers are those representations that are finite, use the bar notation, or end with an infinitely repeating cycle of digits.
A digit is a symbol that is used for positional notation, and a numeral consists of one or more digits used for representing a number with positional notation. Today's most common digits are the decimal digits "0", "1", "2", "3", "4", "5", "6", "7", "8", and "9". The distinction between a digit and a numeral is most pronounced in the context of a number base.
A non-zero numeral with more than one digit position will mean a different number in a different number base, but in general, the digits will mean the same.[14] For example, the base-8 numeral 23₈ contains two digits, "2" and "3", and with a base number (subscripted) "8". When converted to base-10, the 23₈ is equivalent to 19₁₀, i.e. 23₈ = 19₁₀. In our notation here, the subscript "8" of the numeral 23₈ is part of the numeral, but this may not always be the case.
Imagine the numeral "23" as havingan ambiguous basenumber. Then "23" could likely be any base, from base-4 up. In base-4, the "23" means 1110, i.e. 234= 1110. In base-60, the "23" means the number 12310, i.e. 2360= 12310. The numeral "23" then, in this case, corresponds to the set of base-10 numbers {11, 13, 15, 17, 19, 21,23, ..., 121, 123} while its digits "2" and "3" always retain their original meaning: the "2" means "two of", and the "3" means "three of".
In certain applications when a numeral with a fixed number of positions needs to represent a greater number, a higher number-base with more digits per position can be used. A three-digit, decimal numeral can represent only up to 999. But if the number-base is increased to 11, say, by adding the digit "A", then the same three positions, maximized to "AAA", can represent a number as great as 1330. We could increase the number base again and assign "B" to 11, and so on (but there is also a possible encryption between number and digit in the number-digit-numeral hierarchy). A three-digit numeral "ZZZ" in base-60 could mean 215999. If we use the entire collection of our alphanumerics we could ultimately serve a base-62 numeral system, but we remove two digits, uppercase "I" and uppercase "O", to reduce confusion with digits "1" and "0".[15] We are left with a base-60, or sexagesimal, numeral system utilizing 60 of the 62 standard alphanumerics. (But see Sexagesimal system below.) In general, the number of possible values that can be represented by a d-digit number in base r is rᵈ.
The common numeral systems in computer science are binary (radix 2), octal (radix 8), and hexadecimal (radix 16). In binary only the digits "0" and "1" are in the numerals. The octal numerals are the eight digits 0–7. Hex uses 0–9 and A–F, where the ten numerics retain their usual meaning, and the alphabetics correspond to values 10–15, for a total of sixteen digits. The numeral "10" denotes two in binary, eight in octal, and sixteen in hexadecimal.
The notation can be extended into the negative exponents of the base b. Thereby the so-called radix point, mostly ».«, is used as separator of the positions with non-negative exponent from those with negative exponent.
Numbers that are not integers use places beyond the radix point. For every position behind this point (and thus after the units digit), the exponent n of the power bⁿ decreases by 1 and the power approaches 0. For example, the number 2.35 is equal to 2 × 10⁰ + 3 × 10⁻¹ + 5 × 10⁻².
If the base and all the digits in the set of digits are non-negative, negative numbers cannot be expressed. To overcome this, a minus sign, here −, is added to the numeral system. In the usual notation it is prepended to the string of digits representing the otherwise non-negative number.
The conversion to a base b₂ of an integer n represented in base b₁ can be done by a succession of Euclidean divisions by b₂: the right-most digit in base b₂ is the remainder of the division of n by b₂; the second right-most digit is the remainder of the division of the quotient by b₂, and so on. The left-most digit is the last quotient. In general, the kth digit from the right is the remainder of the division by b₂ of the (k−1)th quotient.
For example, converting A10B (hexadecimal) to decimal (41227): the successive divisions by ten yield remainders 7, 2, 2, 1 and a final quotient 4, giving the decimal digits from right to left, so A10B₁₆ = 41227₁₀.
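A sketch of this repeated-division procedure in Python (names are ours; digits beyond 9 are written with letters):

```python
# Successive Euclidean divisions: each remainder is the next digit,
# produced right to left.

DIGITS = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def to_base(n, base):
    if n == 0:
        return "0"
    out = []
    while n:
        n, remainder = divmod(n, base)
        out.append(DIGITS[remainder])
    return "".join(reversed(out))

print(to_base(0xA10B, 10))   # 41227, the example above
print(to_base(41227, 16))    # A10B, the reverse direction
```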
When converting to a larger base (such as from binary to decimal), each remainder, computed with digits from b₁, represents a single digit of the base-b₂ result. For example, converting 11111001₂ (binary) to 249 (decimal): dividing repeatedly by 1010₂ (ten) gives the remainders 1001₂ = 9, then 100₂ = 4, then 10₂ = 2, so 11111001₂ = 249₁₀.
For the fractional part, conversion can be done by taking the digits after the radix point (the numerator) and dividing them by the implied denominator in the target radix. Approximation may be needed due to the possibility of non-terminating digits, if the reduced fraction's denominator has a prime factor other than any of the base's prime factor(s). For example, 0.1 in decimal (1/10) is 1₂/1010₂ in binary; dividing this in that radix, the result is 0.000110011...₂, with the block "0011" repeating (because one of the prime factors of 10 is 5).
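The same fractional conversion can be viewed as repeated multiplication: each multiplication by the target base peels off one digit after the radix point. A sketch (names are illustrative):

```python
# Peel digits off a fraction numerator/denominator in the given base.

def fraction_digits(numerator, denominator, base, places):
    digits = []
    for _ in range(places):
        numerator *= base
        digit, numerator = divmod(numerator, denominator)
        digits.append(str(digit))
    return "0." + "".join(digits)

print(fraction_digits(1, 10, 2, 12))   # 0.000110011001 -- "0011" repeats
```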
Alternatively, Horner's method can be used for base conversion using repeated multiplications, with the same computational complexity as repeated divisions.[16] A number in positional notation can be thought of as a polynomial, where each digit is a coefficient. Coefficients can be larger than one digit, so an efficient way to convert bases is to convert each digit, then evaluate the polynomial via Horner's method within the target base. Converting each digit is a simple lookup table, removing the need for expensive division or modulus operations; and multiplication by x becomes right-shifting. However, other polynomial evaluation algorithms would work as well, like repeated squaring for single or sparse digits. One special case of the digit-by-digit approach is sketched below.
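As a hedged illustration of the lookup-table idea, here is the special case base 16 → base 2, where 16 = 2⁴ means each source digit converts independently and "multiplication by x" is literally a 4-bit shift; the table and names are our own.

```python
# Each hexadecimal digit maps to exactly four bits, so no arithmetic
# beyond the table lookup is needed.

HEX_TO_BITS = {d: format(i, "04b") for i, d in enumerate("0123456789ABCDEF")}

def hex_to_binary(numeral):
    bits = "".join(HEX_TO_BITS[d] for d in numeral.upper())
    return bits.lstrip("0") or "0"

print(hex_to_binary("A10B"))   # 1010000100001011
```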
The numbers which have a finite representation form the semiring b^ℤ ℤ, the set of all integer multiples of integer powers of b.
More explicitly, if b = p₁^ν₁ · … · p_n^ν_n is a factorization of b into the primes p₁, …, p_n ∈ ℙ with exponents ν₁, …, ν_n ∈ ℕ,[17] then with the non-empty set of denominators S := {p₁, …, p_n} we have

b^ℤ ℤ = ⟨S⟩⁻¹ℤ =: ℤ_S,

where ⟨S⟩ is the group generated by the p ∈ S and ⟨S⟩⁻¹ℤ is the so-called localization of ℤ with respect to S.

The denominator of an element of ℤ_S, if reduced to lowest terms, contains only prime factors out of S.

This ring of all terminating fractions to base b is dense in the field of rational numbers ℚ. Its completion for the usual (Archimedean) metric is the same as for ℚ, namely the real numbers ℝ. So, if S = {p}, then ℤ_{p} is not to be confused with ℤ_(p), the discrete valuation ring for the prime p, which is equal to ℤ_T with T = ℙ ∖ {p}.

If b divides c, we have b^ℤ ℤ ⊆ c^ℤ ℤ.
The representation of non-integers can be extended to allow an infinite string of digits beyond the point. For example, 1.12112111211112... base-3 represents the sum of the infinite series 1 × 3⁰ + 1 × 3⁻¹ + 2 × 3⁻² + 1 × 3⁻³ + 1 × 3⁻⁴ + 2 × 3⁻⁵ + 1 × 3⁻⁶ + ⋯.
Since a complete infinite string of digits cannot be explicitly written, the trailing ellipsis (...) designates the omitted digits, which may or may not follow a pattern of some kind. One common pattern is when a finite sequence of digits repeats infinitely. This is designated by drawing a vinculum across the repeating block.[18]
This is the repeating decimal notation, for which there does not exist a single universally accepted notation or phrasing.
For base 10 it is called a repeating decimal or recurring decimal.
An irrational number has an infinite non-repeating representation in all integer bases. Whether a rational number has a finite representation or requires an infinite repeating representation depends on the base. For example, one third can be represented by the finite numeral 0.1 in base 3, but in base 10 only by the infinite repeating representation 0.333....
For integers p and q with gcd(p, q) = 1, the fraction p/q has a finite representation in base b if and only if each prime factor of q is also a prime factor of b.
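This criterion is easy to test mechanically: strip from q every prime factor it shares with b and check whether 1 remains. A sketch (function name ours):

```python
from math import gcd

def has_finite_representation(p, q, b):
    q //= gcd(p, q)                 # reduce the fraction to lowest terms
    while (g := gcd(q, b)) > 1:     # strip prime factors shared with the base
        while q % g == 0:
            q //= g
    return q == 1

print(has_finite_representation(1, 3, 10))   # False: 3 does not divide 10
print(has_finite_representation(1, 3, 3))    # True:  0.1 in ternary
print(has_finite_representation(1, 10, 2))   # False: 5 is not a factor of 2
```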
For a given base, any number that can be represented by a finite number of digits (without using the bar notation) will have multiple representations, including one or two infinite representations; for example, in decimal, 0.12 can also be written as the infinite representation 0.11999....
A (real) irrational number has an infinite non-repeating representation in all integer bases.[19]
Examples are the non-solvable nth roots y = x^(1/n) with yⁿ = x and y ∉ ℚ, numbers which are called algebraic, or numbers like π and e, which are transcendental. The number of transcendentals is uncountable, and the sole way to write them down with a finite number of symbols is to give them a symbol or a finite sequence of symbols.
In the decimal (base-10) Hindu–Arabic numeral system, each position starting from the right is a higher power of 10. The first position represents 10⁰ (1), the second position 10¹ (10), the third position 10² (10 × 10 or 100), the fourth position 10³ (10 × 10 × 10 or 1000), and so on.
Fractional values are indicated by a separator, which can vary in different locations. Usually this separator is a period or full stop, or a comma. Digits to the right of it are multiplied by 10 raised to a negative power or exponent. The first position to the right of the separator indicates 10⁻¹ (0.1), the second position 10⁻² (0.01), and so on for each successive position.
As an example, the number 2674 in a base-10 numeral system is

(2 × 10³) + (6 × 10²) + (7 × 10¹) + (4 × 10⁰)

or

2000 + 600 + 70 + 4.
The sexagesimal or base-60 system was used for the integral and fractional portions of Babylonian numerals and other Mesopotamian systems, by Hellenistic astronomers using Greek numerals for the fractional portion only, and is still used for modern time and angles, but only for minutes and seconds. However, not all of these uses were positional.
Modern time separates each position by a colon or a prime symbol. For example, the time might be 10:25:59 (10 hours 25 minutes 59 seconds). Angles use similar notation. For example, an angle might be 10°25′59″ (10 degrees 25 minutes 59 seconds). In both cases, only minutes and seconds use sexagesimal notation—angular degrees can be larger than 59 (one rotation around a circle is 360°, two rotations are 720°, etc.), and both time and angles use decimal fractions of a second. This contrasts with the numbers used by Hellenistic and Renaissance astronomers, who used thirds, fourths, etc. for finer increments. Where we might write 10°25′59.392″, they would have written 10°25′59″23‴31⁗12′′′′′ or 10°25ⁱ59ⁱⁱ23ⁱⁱⁱ31ⁱᵛ12ᵛ.
Using a digit set with upper- and lowercase letters allows short notation for sexagesimal numbers, e.g. 10:25:59 becomes 'ARz' (by omitting I and O, but not i and o), which is useful for use in URLs, etc., but it is not very intelligible to humans.
In the 1930s, Otto Neugebauer introduced a modern notational system for Babylonian and Hellenistic numbers that substitutes modern decimal notation from 0 to 59 in each position, while using a semicolon (;) to separate the integral and fractional portions of the number and using a comma (,) to separate the positions within each portion.[20] For example, the mean synodic month used by both Babylonian and Hellenistic astronomers and still used in the Hebrew calendar is 29;31,50,8,20 days, and the angle used in the example above would be written 10;25,59,23,31,12 degrees.
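A small parser for this notation shows the arithmetic involved; the function name and the use of Python floats (adequate here, though not exact) are our choices.

```python
# The part before ";" is the integer; each comma-separated position
# after it is a successive negative power of 60.

def from_sexagesimal(text):
    whole, _, frac = text.partition(";")
    value = float(whole)
    if frac:
        for k, position in enumerate(frac.split(","), start=1):
            value += int(position) / 60 ** k
    return value

print(from_sexagesimal("29;31,50,8,20"))   # ~29.530594, the mean synodic month in days
```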
In computing, the binary (base-2), octal (base-8) and hexadecimal (base-16) bases are most commonly used. Computers, at the most basic level, deal only with sequences of conventional zeroes and ones, thus it is easier in this sense to deal with powers of two. The hexadecimal system is used as "shorthand" for binary—every 4 binary digits (bits) relate to one and only one hexadecimal digit. In hexadecimal, the six digits after 9 are denoted by A, B, C, D, E, and F (and sometimes a, b, c, d, e, and f).
The octal numbering system is also used as another way to represent binary numbers. In this case the base is 8 and therefore only digits 0, 1, 2, 3, 4, 5, 6, and 7 are used. When converting from binary to octal every 3 bits relate to one and only one octal digit.
Hexadecimal, decimal, octal, and a wide variety of other bases have been used for binary-to-text encoding, implementations of arbitrary-precision arithmetic, and other applications.
For a list of bases and their applications, see list of numeral systems.
Base-12 systems (duodecimal or dozenal) have been popular because multiplication and division are easier than in base-10, with addition and subtraction being just as easy. Twelve is a useful base because it has many factors. It is the smallest common multiple of one, two, three, four and six. There is still a special word for "dozen" in English, and by analogy with the word for 10², hundred, commerce developed a word for 12², gross. The standard 12-hour clock and common use of 12 in English units emphasize the utility of the base. In addition, prior to its conversion to decimal, the old British currency Pound Sterling (GBP) partially used base-12; there were 12 pence (d) in a shilling (s), 20 shillings in a pound (£), and therefore 240 pence in a pound. Hence the term LSD or, more properly, £sd.
The Maya civilization and other civilizations of pre-Columbian Mesoamerica used base-20 (vigesimal), as did several North American tribes (two being in southern California). Evidence of base-20 counting systems is also found in the languages of central and western Africa.
Remnants of a Gaulish base-20 system also exist in French, as seen today in the names of the numbers from 60 through 99. For example, sixty-five is soixante-cinq (literally, "sixty [and] five"), while seventy-five is soixante-quinze (literally, "sixty [and] fifteen"). Furthermore, for any number between 80 and 99, the "tens-column" number is expressed as a multiple of twenty. For example, eighty-two is quatre-vingt-deux (literally, four twenty[s] [and] two), while ninety-two is quatre-vingt-douze (literally, four twenty[s] [and] twelve). In Old French, forty was expressed as two twenties and sixty was three twenties, so that fifty-three was expressed as two twenties [and] thirteen, and so on.
In English the same base-20 counting appears in the use of "scores". Although mostly historical, it is occasionally used colloquially. Verse 10 of Psalm 90 in the King James Version of the Bible starts: "The days of our years are threescore years and ten; and if by reason of strength they be fourscore years, yet is their strength labour and sorrow". The Gettysburg Address starts: "Four score and seven years ago".
The Irish language also used base-20 in the past, twenty being fichid, forty dhá fhichid, sixty trí fhichid and eighty ceithre fhichid. A remnant of this system may be seen in the modern word for 40, daoichead.
The Welsh language continues to use a base-20 counting system, particularly for the age of people, dates and in common phrases. 15 is also important, with 16–19 being "one on 15", "two on 15", etc. 18 is normally "two nines". A decimal system is commonly used.
The Inuit languages use a base-20 counting system. Students from Kaktovik, Alaska invented a base-20 numeral system in 1994.[21]
Danish numerals display a similar base-20 structure.
The Māori language of New Zealand also has evidence of an underlying base-20 system, as seen in the terms Te Hokowhitu a Tu, referring to a war party (literally "the seven 20s of Tu"), and Tama-hokotahi, referring to a great warrior ("the one man equal to 20").
The binary system was used in the Egyptian Old Kingdom, 3000 BC to 2050 BC. It was cursive, rounding off rational numbers smaller than 1 to 1/2 + 1/4 + 1/8 + 1/16 + 1/32 + 1/64, with a 1/64 term thrown away (the system was called the Eye of Horus).
A number of Australian Aboriginal languages employ binary or binary-like counting systems. For example, in Kala Lagaw Ya, the numbers one through six are urapon, ukasar, ukasar-urapon, ukasar-ukasar, ukasar-ukasar-urapon, ukasar-ukasar-ukasar.
North and Central American natives used base-4 (quaternary) to represent the four cardinal directions. Mesoamericans tended to add a second base-5 system to create a modified base-20 system.
A base-5 system (quinary) has been used in many cultures for counting. Plainly it is based on the number of digits on a human hand. It may also be regarded as a sub-base of other bases, such as base-10, base-20, and base-60.
A base-8 system (octal) was devised by the Yuki tribe of Northern California, who used the spaces between the fingers to count, corresponding to the digits one through eight.[22] There is also linguistic evidence which suggests that the Bronze Age Proto-Indo-Europeans (from whom most European and Indic languages descend) might have replaced a base-8 system (or a system which could only count up to 8) with a base-10 system. The evidence is that the word for 9, newm, is suggested by some to derive from the word for "new", newo-, suggesting that the number 9 had been recently invented and called the "new number".[23]
Many ancient counting systems use five as a primary base, almost surely coming from the number of fingers on a person's hand. Often these systems are supplemented with a secondary base, sometimes ten, sometimes twenty. In some African languages the word for five is the same as "hand" or "fist" (Dyola language of Guinea-Bissau, Banda language of Central Africa). Counting continues by adding 1, 2, 3, or 4 to combinations of 5, until the secondary base is reached. In the case of twenty, this word often means "man complete". This system is referred to as quinquavigesimal. It is found in many languages of the Sudan region.
The Telefol language, spoken in Papua New Guinea, is notable for possessing a base-27 numeral system.
Interesting properties exist when the base is not fixed or positive and when the digit symbol sets denote negative values. There are many more variations. These systems are of practical and theoretic value to computer scientists.
Balanced ternary[24] uses a base of 3 but the digit set is {1̄, 0, 1} instead of {0, 1, 2}. The "1̄" has an equivalent value of −1. The negation of a number is easily formed by switching the 1s and 1̄s. This system can be used to solve the balance problem, which requires finding a minimal set of known counter-weights to determine an unknown weight. Weights of 1, 3, 9, ..., 3ⁿ known units can be used to determine any unknown weight up to 1 + 3 + ... + 3ⁿ units. A weight can be used on either side of the balance or not at all. Weights used on the balance pan with the unknown weight are designated with 1̄, with 1 if used on the empty pan, and with 0 if not used. If an unknown weight W is balanced with 3 (3¹) on its pan and 1 and 27 (3⁰ and 3³) on the other, then its weight in decimal is 25, or 101̄1 in balanced base-3.
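Conversion to balanced ternary can be sketched as ordinary division with a carry whenever the remainder is 2; writing −1 as "T" instead of an overbarred 1 is our own typographical convenience.

```python
# Represent a remainder of 2 as 3 - 1: emit digit -1 and carry 1.

def to_balanced_ternary(n):
    if n == 0:
        return "0"
    digits = []
    while n:
        n, r = divmod(n, 3)
        if r == 2:
            r = -1
            n += 1
        digits.append({1: "1", 0: "0", -1: "T"}[r])
    return "".join(reversed(digits))

print(to_balanced_ternary(25))   # 10T1, i.e. 27 - 3 + 1, matching the example
```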
The factorial number system uses a varying radix, giving factorials as place values; they are related to the Chinese remainder theorem and residue number system enumerations. This system effectively enumerates permutations. A derivative of this uses the Towers of Hanoi puzzle configuration as a counting system. The configuration of the towers can be put into 1-to-1 correspondence with the decimal count of the step at which the configuration occurs and vice versa.
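A sketch of the varying radix: the kth division uses divisor k+1, so the place values are 1!, 2!, 3!, ... (the function name and list output are our own conventions).

```python
# Successive divisions with a growing divisor produce factorial-base digits.

def to_factorial_base(n):
    digits, k = [], 2
    while n:
        n, r = divmod(n, k)   # the radix grows at every position
        digits.append(r)
        k += 1
    return list(reversed(digits)) or [0]

# 463 = 3*5! + 4*4! + 1*3! + 0*2! + 1*1!
print(to_factorial_base(463))   # [3, 4, 1, 0, 1]
```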
Each position does not need to be positional itself.Babylonian sexagesimal numeralswere positional, but in each position were groups of two kinds of wedges representing ones and tens (a narrow vertical wedge | for the one and an open left pointing wedge ⟨ for the ten) — up to 5+9=14 symbols per position (i.e. 5 tens ⟨⟨⟨⟨⟨ and 9 ones ||||||||| grouped into one or two near squares containing up to three tiers of symbols, or a place holder (⑊) for the lack of a position).[25]Hellenistic astronomers used one or two alphabetic Greek numerals for each position (one chosen from 5 letters representing 10–50 and/or one chosen from 9 letters representing 1–9, or azero symbol).[26]
| https://en.wikipedia.org/wiki/Positional_notation
A ternary /ˈtɜːrnəri/ numeral system (also called base 3 or trinary[1]) has three as its base. Analogous to a bit, a ternary digit is a trit (trinary digit). One trit is equivalent to log₂ 3 (about 1.58496) bits of information.
Although ternary most often refers to a system in which the three digits are all non-negative numbers, specifically 0, 1, and 2, the adjective also lends its name to the balanced ternary system, comprising the digits −1, 0 and +1, used in comparison logic and ternary computers.
Representations of integer numbers in ternary do not get uncomfortably lengthy as quickly as in binary. For example, decimal 365₁₀ or senary 1405₆ corresponds to binary 101101101₂ (nine bits) and to ternary 111112₃ (six digits). However, they are still far less compact than the corresponding representations in bases such as decimal – see below for a compact way to codify ternary using nonary (base 9) and septemvigesimal (base 27).
As for rational numbers, ternary offers a convenient way to represent 1/3, just as senary does (as opposed to its cumbersome representation as an infinite string of recurring digits in decimal); but a major drawback is that, in turn, ternary does not offer a finite representation for 1/2 (nor for 1/4, 1/8, etc.), because 2 is not a prime factor of the base. As with base two, one-tenth (decimal 1/10, senary 1/14) is not representable exactly (that would need a base divisible by both 2 and 5, e.g. decimal); nor is one-sixth (senary 1/10, decimal 1/6).
The value of a binary number with n bits that are all 1 is 2ⁿ − 1.

Similarly, for a number N(b, d) with base b and d digits, all of which are the maximal digit value b − 1, we can write:

N(b, d) = (b − 1)bᵈ⁻¹ + (b − 1)bᵈ⁻² + ... + (b − 1)b⁰

Then

N(b, d) = (b − 1)(bᵈ⁻¹ + bᵈ⁻² + ... + b⁰) = bᵈ − 1.

For a three-digit ternary number, N(3, 3) = 3³ − 1 = 26 = 2 × 3² + 2 × 3¹ + 2 × 3⁰ = 18 + 6 + 2.
Nonary/ˈnɒnəri/(base 9, each digit is two ternary digits) orseptemvigesimal(base 27, each digit is three ternary digits) can be used for compact representation of ternary, similar to howoctalandhexadecimalsystems are used in place ofbinary.
In certain analog logic, the state of the circuit is often expressed in ternary. This is most commonly seen in CMOS circuits, and also in transistor–transistor logic with totem-pole output. The output is said to be either low (grounded), high, or open (high-Z). In the open configuration the output of the circuit is actually not connected to any voltage reference at all. Where the signal would usually be grounded to a certain reference or held at a certain voltage level, the state is instead said to be high impedance because it is open and serves as its own reference. Thus, the actual voltage level is sometimes unpredictable.
A rare "ternary point" in common use is for defensive statistics in Americanbaseball(usually just forpitchers), to denote fractional parts of an inning. Since the team on offense is allowed threeouts, each out is considered one third of a defensive inning and is denoted as.1. For example, if a player pitched all of the 4th, 5th and 6th innings, plus achieving 2 outs in the 7th inning, hisinnings pitchedcolumn for that game would be listed as3.2, the equivalent of3+2⁄3(which is sometimes used as an alternative by some record keepers). In this usage, only the fractional part of the number is written in ternary form.[2][3]
Ternary numbers can be used to convey self-similar structures like theSierpinski triangleor theCantor setconveniently. Additionally, it turns out that the ternary representation is useful for defining the Cantor set and related point sets, because of the way the Cantor set is constructed. The Cantor set consists of the points from 0 to 1 that have a ternary expression that does not contain any instance of the digit 1.[4][5]Any terminating expansion in the ternary system is equivalent to the expression that is identical up to the term preceding the last non-zero term followed by the term one less than the last non-zero term of the first expression, followed by an infinite tail of twos. For example: 0.1020 is equivalent to 0.1012222... because the expansions are the same until the "two" of the first expression, the two was decremented in the second expansion, and trailing zeros were replaced with trailing twos in the second expression.
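A short floating-point sketch of this digit test (the function name is ours, and double precision limits how many digits can be trusted; endpoints with terminating expansions rely on the tail-of-twos equivalence just described, which this simple digit walk ignores):

```c
#include <stdio.h>

/* Test whether x in [0,1] avoids the digit 1 in its first n ternary digits,
   i.e. whether x survives n steps of the Cantor set construction. */
int in_cantor_approx(double x, int n)
{
    for (int i = 0; i < n; i++) {
        x *= 3.0;
        int digit = (int)x;            /* next ternary digit of x */
        if (digit == 1) return 0;      /* landed in a removed middle third */
        x -= digit;
    }
    return 1;
}

int main(void)
{
    printf("%d %d %d\n",
           in_cantor_approx(0.25, 20),   /* 1/4 = 0.020202...(3): stays */
           in_cantor_approx(0.5,  20),   /* 0.111...(3): removed at once */
           in_cantor_approx(0.3,  20));  /* 3/10 = 0.02200220...(3): stays */
    return 0;
}
```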
Ternary is the integer base with the lowestradix economy, followed closely bybinaryandquaternary. This is due to its proximity to themathematical constante. It has been used for some computing systems because of this efficiency. It is also used to represent three-optiontrees, such as phone menu systems, which allow a simple path to any branch.
A form ofredundant binary representationcalled a binary signed-digit number system, a form ofsigned-digit representation, is sometimes used in low-level software and hardware to accomplish fast addition of integers because it can eliminatecarries.[6]
Simulation of ternary computers using binary computers, or interfacing between ternary and binary computers, can involve use of binary-coded ternary (BCT) numbers, with two or three bits used to encode each trit.[7][8]BCT encoding is analogous tobinary-coded decimal(BCD) encoding. If the trit values 0, 1 and 2 are encoded 00, 01 and 10, conversion in either direction between binary-coded ternary and binary can be done inlogarithmic time.[9]A library ofC codesupporting BCT arithmetic is available.[10]
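A minimal C sketch of BCT packing using the 00/01/10 trit encoding described above; the conversion loops here are straightforward linear-time versions, not the logarithmic-time method cited in the text:

```c
#include <stdint.h>
#include <stdio.h>

/* Binary-coded ternary: each trit 0, 1, 2 occupies a 2-bit field
   (00, 01, 10), analogous to BCD for decimal digits. */
uint32_t to_bct(uint32_t n)            /* binary -> BCT, least significant trit first */
{
    uint32_t bct = 0;
    for (int shift = 0; n != 0; shift += 2) {
        bct |= (n % 3) << shift;       /* pack one trit into its 2-bit field */
        n /= 3;
    }
    return bct;
}

uint32_t from_bct(uint32_t bct)        /* BCT -> binary */
{
    uint32_t n = 0;
    for (int shift = 30; shift >= 0; shift -= 2)
        n = 3 * n + ((bct >> shift) & 3);
    return n;
}

int main(void)
{
    uint32_t b = to_bct(11);               /* 11 = 102 in ternary */
    printf("%x -> %u\n", b, from_bct(b));  /* prints 12 -> 11 */
    return 0;
}
```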
Someternary computerssuch as theSetundefined atryteto be six trits[11]or approximately 9.5bits(holding more information than thede factobinarybyte).[12] | https://en.wikipedia.org/wiki/Binary-coded_ternary |
TheIEEE 754-2008standard includes decimal floating-point number formats in which thesignificandand the exponent (and the payloads ofNaNs) can be encoded in two ways, referred to asbinary encodinganddecimal encoding.[1]
Both formats break a number down into a sign bit s, an exponent q (between qmin and qmax), and a p-digit significand c (between 0 and 10^p − 1). The value encoded is (−1)^s × 10^q × c. In both formats the range of possible values is identical, but they differ in how the significand c is represented. In the decimal encoding, it is encoded as a series of p decimal digits (using the densely packed decimal (DPD) encoding). This makes conversion to decimal form efficient, but requires a specialized decimal ALU to process. In the binary integer decimal (BID) encoding, it is encoded as a binary number.
Using the fact that 2^10 = 1024 is only slightly more than 10^3 = 1000, 3n-digit decimal numbers can be efficiently packed into 10n binary bits. However, the IEEE formats have significands of 3n+1 digits, which would generally require 10n+4 binary bits to represent.
This would not be efficient, because only 10 of the 16 possible values of the additional four bits are needed. A more efficient encoding can be designed using the fact that the exponent range is of the form 3×2^k, so the exponent never starts with 11 in binary. Using the Decimal32 encoding (with a significand of 3×2+1 = 7 decimal digits) as an example, the two forms are as follows (e stands for an exponent bit, m for a significand bit):

  s eeeeeeee   (0)mmmmmmmmmmmmmmmmmmmmmmm
  s 11eeeeeeee (100)mmmmmmmmmmmmmmmmmmmmm
The bits shown in parentheses areimplicit: they are not included in the 32 bits of the Decimal32 encoding, but are implied by the two bits after the sign bit.
The Decimal64 and Decimal128 encodings have larger exponent and significand fields, but operate in a similar fashion.
For the Decimal128 encoding, 113 bits of significand is actually enough to encode 34 decimal digits, and the second form is never actually required.
A decimal floating-point number can be encoded in several ways, the different ways representing different precisions: for example, 100.0 is encoded as 1000×10^−1, while 100.00 is encoded as 10000×10^−2. The set of possible encodings of the same numerical value is called a cohort in the standard. If the result of a calculation is inexact, the largest amount of significant data is preserved by selecting the cohort member with the largest integer that can be stored in the significand along with the required exponent.
The proposed IEEE 754r standard limits the range of numbers to a significand of the form 10^n − 1, where n is the number of whole decimal digits that can be stored in the bits available, so that decimal rounding is effected correctly.
A binary encoding is inherently less efficient for conversions to or from decimal-encoded data, such as strings (ASCII,Unicode, etc.) andBCD. A binary encoding is therefore best chosen only when the data are binary rather than decimal. IBM has published some unverified performance data.[2] | https://en.wikipedia.org/wiki/Binary_integer_decimal |
Incomputer science, amaskorbitmaskis data that is used forbitwise operations, particularly in abit field. Using a mask, multiple bits in abyte,nibble,word, etc. can be set either on or off, or inverted from on to off (or vice versa) in a single bitwise operation. An additional use of masking involvespredicationinvector processing, where the bitmask is used to select which element operations in the vector are to be executed (mask bit is enabled) and which are not (mask bit is clear).
To turn certain bits on, thebitwiseORoperation can be used, followingthe principlethat for an individual bitY,Y OR 1 = 1andY OR 0 = Y. Therefore, to make sure a bit is on,ORcan be used with a1. To leave a bit unchanged,ORis used with a0.
Example: Maskingonthe highernibble(bits 4, 5, 6, 7) while leaving the lower nibble (bits 0, 1, 2, 3) unchanged.
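A minimal C illustration (the function name and the value 0x95 are ours):

```c
/* Force the high nibble on by ORing with 1111 0000. */
unsigned char set_high_nibble(unsigned char y)
{
    return y | 0xF0;   /* e.g. 1001 0101 -> 1111 0101 (0x95 -> 0xF5) */
}
```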
More often in practice, bits are "maskedoff" (or masked to0) than "maskedon" (or masked to1). When a bit isANDed with a 0, the result is always 0, i.e.Y AND 0 = 0. To leave the other bits as they were originally, they can beANDed with1asY AND 1 = Y
Example: Maskingoffthe highernibble(bits 4, 5, 6, 7) while leaving the lower nibble (bits 0, 1, 2, 3) unchanged.
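The corresponding C illustration (again, the name and value are ours):

```c
/* Mask off the high nibble by ANDing with 0000 1111. */
unsigned char clear_high_nibble(unsigned char y)
{
    return y & 0x0F;   /* e.g. 1001 0101 -> 0000 0101 (0x95 -> 0x05) */
}
```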
It is possible to use bitmasks to easily check the state of individual bits regardless of the other bits. To do this, turning off all the other bits using the bitwiseANDis done as discussed above and the value is compared with0. If it is equal to0, then the bit was off, but if the value is any other value, then the bit was on. What makes this convenient is that it is not necessary to figure out what the value actually is, just that it is not0.
Example: Querying the status of the 4th bit
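A minimal C illustration (counting the bits from 1, the 4th bit has mask 0000 1000; the function name is ours):

```c
/* Nonzero result means the 4th bit is on, regardless of the other bits. */
int fourth_bit_is_set(unsigned char y)
{
    return (y & 0x08) != 0;   /* e.g. 1001 1101 & 0000 1000 -> nonzero: set */
}
```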
So far the article has covered how to turn bits on and turn bits off, but not both at once. Sometimes it does not really matter what the value is, but it must be made the opposite of what it currently is. This can be achieved using theXOR(exclusive or)operation.XORreturns1if and only ifanodd numberof bits are1. Therefore, if two corresponding bits are1, the result will be a0, but if only one of them is1, the result will be1. Therefore inversion of the values of bits is done byXORing them with a1. If the original bit was1, it returns1 XOR 1 = 0. If the original bit was0it returns0 XOR 1 = 1. Also note thatXORmasking is bit-safe, meaning that it will not affect unmasked bits becauseY XOR 0 = Y, just like anOR.
Example: Toggling bit values
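A minimal C illustration (names and values are ours):

```c
/* Toggle only the bits selected by the mask; XOR with 0 leaves bits alone. */
unsigned char toggle_low_nibble(unsigned char y)
{
    return y ^ 0x0F;   /* e.g. 1001 0101 -> 1001 1010: low nibble flipped */
}

unsigned char toggle_all_bits(unsigned char y)
{
    return y ^ 0xFF;   /* e.g. 1001 0101 -> 0110 1010: every bit flipped */
}
```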
To write arbitrary 1s and 0s to a subset of bits, first write 0s to that subset, then set the high bits:
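A minimal C sketch of this clear-then-set sequence (the function name is ours):

```c
/* Write the pattern `bits` (already shifted into position) into the
   field selected by `mask`, leaving all other bits untouched. */
unsigned char write_field(unsigned char y, unsigned char mask, unsigned char bits)
{
    return (unsigned char)((y & ~mask) | bits);  /* clear the field, then set */
}
/* write_field(0x95, 0xF0, 0x30) == 0x35: high nibble rewritten to 0011. */
```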
In programming languages such asC, bit fields are a useful way to pass a set of named Boolean arguments to a function. For example, in the graphics APIOpenGL, there is a command,glClear()which clears the screen or other buffers. It can clear up to four buffers (the color, depth, accumulation, andstencil buffers), so the API authors could have had it take four arguments. But then a call to it would look like
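(hypothetically, with illustrative Boolean arguments; this four-argument form is not the real API)

```c
glClear(1, 1, 0, 0);   /* clear... the color and depth buffers? or others? */
```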
which is not very descriptive. Instead there are four defined field bits,GL_COLOR_BUFFER_BIT,GL_DEPTH_BUFFER_BIT,GL_ACCUM_BUFFER_BIT, andGL_STENCIL_BUFFER_BITandglClear()is declared as
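```c
void glClear(GLbitfield mask);
```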
Then a call to the function looks like this
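```c
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);  /* clear color and depth buffers */
```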
Internally, a function taking a bitfield like this can use binaryandto extract the individual bits. For example, an implementation ofglClear()might look like:
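One possible sketch (the clear_* helpers are hypothetical stand-ins, not real OpenGL internals):

```c
#include <GL/gl.h>

/* Hypothetical helpers standing in for the real buffer-clearing work. */
static void clear_color_buffer(void)   { /* ... */ }
static void clear_depth_buffer(void)   { /* ... */ }
static void clear_accum_buffer(void)   { /* ... */ }
static void clear_stencil_buffer(void) { /* ... */ }

void glClear(GLbitfield mask)
{
    /* Binary AND extracts each option bit from the packed bitfield. */
    if (mask & GL_COLOR_BUFFER_BIT)   clear_color_buffer();
    if (mask & GL_DEPTH_BUFFER_BIT)   clear_depth_buffer();
    if (mask & GL_ACCUM_BUFFER_BIT)   clear_accum_buffer();
    if (mask & GL_STENCIL_BUFFER_BIT) clear_stencil_buffer();
}
```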
The advantage to this approach is that function argument overhead is decreased. Since the minimum datum size is one byte, separating the options into separate arguments would be wasting seven bits per argument and would occupy more stack space. Instead, functions typically accept one or more 32-bit integers, with up to 32 option bits in each. While elegant, in the simplest implementation this solution is nottype-safe. AGLbitfieldis simply defined to be anunsigned int, so the compiler would allow a meaningless call toglClear(42)or evenglClear(GL_POINTS). InC++an alternative would be to create a class to encapsulate the set of arguments that glClear could accept and could be cleanly encapsulated in a library.
Masks are used with IP addresses in IP ACLs (Access Control Lists) to specify what should be permitted and denied. To configure IP addresses on interfaces, masks start with 255 and have the large values on the left side: for example, IP address 203.0.113.129 with a 255.255.255.224 mask. Masks for IP ACLs are the reverse: for example, mask 0.0.0.255. This is sometimes called an inverse mask or a wildcard mask. When the value of the mask is broken down into binary (0s and 1s), the results determine which address bits are to be considered in processing the traffic. A 0-bit indicates that the address bit must be considered (exact match); a 1-bit in the mask is a "don't care". The example below further explains the concept.
Mask example:
network address (traffic that is to be processed):192.0.2.0
mask:0.0.0.255
network address (binary): 11000000.00000000.00000010.00000000
mask (binary): 00000000.00000000.00000000.11111111
Based on the binary mask, it can be seen that the first three sets (octets) must match the given binary network address exactly (11000000.00000000.00000010). The last set of numbers is made of "don't cares" (.11111111). Therefore, all traffic that begins with "192.0.2." matches, since the last octet is "don't care". Therefore, with this mask, network addresses192.0.2.1through192.0.2.255(192.0.2.x) are processed.
Subtract the normal mask from255.255.255.255in order to determine the ACL inverse mask. In this example, the inverse mask is determined for network address198.51.100.0with a normal mask of255.255.255.0.
255.255.255.255−255.255.255.0(normal mask) =0.0.0.255(inverse mask)
ACL equivalents
The source/source-wildcard of0.0.0.0/255.255.255.255means "any".
The source/wildcard of198.51.100.2/0.0.0.0is the same as "host198.51.100.2"
In computer graphics, when a given image is intended to be placed over a background, the transparent areas can be specified through a binary mask.[1] This way, for each intended image there are actually two bitmaps: the actual image, in which the unused areas are given a pixel value with all bits set to 0s, and an additional mask, in which the corresponding image areas are given a pixel value of all bits set to 0s and the surrounding areas a value of all bits set to 1s. In such a mask, black pixels have the all-zero bits and white pixels have the all-one bits.
At run time, to put the image on the screen over the background, the program first masks the screen pixel's bits with the image mask at the desired coordinates using the bitwise AND operation. This preserves the background pixels of the transparent areas while resetting to zero the bits of the pixels which will be obscured by the overlapped image.
Then, the program renders the image pixel's bits by combining them with the background pixel's bits using thebitwise ORoperation. This way, the image pixels are appropriately placed while keeping the background surrounding pixels preserved. The result is a perfect compound of the image over the background.
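A minimal C sketch of this two-step compositing (the function name and the 8-bit pixel format are assumptions):

```c
#include <stdint.h>

/* Composite a sprite onto the screen with a binary mask: AND clears the
   pixels the sprite will cover, OR then paints the sprite into the hole.
   The mask is all-ones where the sprite is transparent, all-zeros where
   the sprite is opaque; the image is all-zeros in its transparent areas. */
void blit_masked(uint8_t *screen, const uint8_t *image,
                 const uint8_t *mask, int npixels)
{
    for (int i = 0; i < npixels; i++) {
        screen[i] &= mask[i];   /* keep background only where transparent */
        screen[i] |= image[i];  /* paint the sprite pixels */
    }
}
```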
This technique is used for painting pointing device cursors, in typical 2-D videogames for characters, bullets and so on (the sprites), for GUI icons, and for video titling and other image mixing applications. A faster method is to simply overwrite the background pixels with the foreground pixels if their alpha = 1.
Although related (due to being used for the same purposes),transparent colorsandalpha channelsare techniques which do not involve the image pixel mixage by binary masking.
To create a hashing function for ahash table, often a function is used that has a large domain. To create an index from the output of the function, a modulo can be taken to reduce the size of the domain to match the size of the array; however, it is often faster on many processors to restrict the size of the hash table to powers of two sizes and use a bitmask instead.
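An example of both modulo and masking in C (a minimal sketch; the function names are ours):

```c
#include <stdint.h>

/* Reduce a hash to a table index: modulo works for any table size. */
uint32_t index_mod(uint32_t hash, uint32_t size)
{
    return hash % size;
}

/* The bitmask form is equivalent when the size is a power of two,
   and is often faster because it avoids an integer division. */
uint32_t index_mask(uint32_t hash, uint32_t size_pow2)
{
    return hash & (size_pow2 - 1);   /* e.g. size 1024 -> mask 0x3FF */
}
```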
| https://en.wikipedia.org/wiki/Bitmask
Chen–Ho encodingis a memory-efficient alternate system ofbinaryencoding fordecimaldigits.
The traditional system of binary encoding for decimal digits, known asbinary-coded decimal(BCD), uses four bits to encode each digit, resulting in significant wastage of binary data bandwidth (since four bits can store 16 states and are being used to store only 10),[1]even when usingpacked BCD.
The encoding reduces the storage requirements of two decimal digits (100 states) from 8 to 7 bits, and those of three decimal digits (1000 states) from 12 to 10 bits using only simpleBooleantransformations avoiding any complex arithmetic operations like abase conversion.
In what appears to have been amultiple discovery, some of the concepts behind what later became known as Chen–Ho encoding were independently developed by Theodore M. Hertz in 1969[2]and byTien Chi Chen(陳天機) (1928–)[3][4][5][6]in 1971.
Hertz ofRockwellfiled a patent for his encoding in 1969, which was granted in 1971.[2]
Chen first discussed his ideas withIrving Tze Ho(何宜慈) (1921–2003)[7][8][9][10]in 1971. Chen and Ho were both working forIBMat the time, albeit in different locations.[11][12]Chen also consulted withFrank Chin Tung[13]to verify the results of his theories independently.[12]IBM filed a patent in their name in 1973, which was granted in 1974.[14]At least by 1973, Hertz's earlier work must have been known to them, as the patent cites his patent asprior art.[14]
With input from Joseph D. Rutledge and John C. McPherson,[15]the final version of the Chen–Ho encoding was circulated inside IBM in 1974[16]and published in 1975 in the journalCommunications of the ACM.[15][17]This version included several refinements, primarily related to the application of the encoding system. It constitutes aHuffman-likeprefix code.
The encoding was referred to asChen and Ho's schemein 1975,[18]Chen's encodingin 1982[19]and became known asChen–Ho encodingorChen–Ho algorithmsince 2000.[17]After having filed a patent for it in 2001,[20]Michael F. Cowlishawpublished a further refinement of Chen–Ho encoding known asdensely packed decimal(DPD) encoding inIEE Proceedings – Computers and Digital Techniquesin 2002.[21][22]DPD has subsequently been adopted as thedecimal encodingused in theIEEE 754-2008andISO/IEC/IEEE 60559:2011floating-pointstandards.
Chen noted that the digits zero through seven were simply encoded using three binary digits of the correspondingoctalgroup. He also postulated that one could use aflagto identify a different encoding for the digits eight and nine, which would be encoded using a single bit.
In practice, a series ofBooleantransformations are applied to the stream of input bits, compressing BCD encoded digits from 12 bits per three digits to 10 bits per three digits. Reversed transformations are used to decode the resulting coded stream to BCD. Equivalent results can also be achieved by the use of alook-up table.
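The look-up table route is easy to sketch. The following C sketch numbers the declets simply as d1·100 + d2·10 + d3 rather than using Chen and Ho's actual Boolean mapping (which needs no tables or arithmetic at run time), so it illustrates only the storage property: all 1000 three-digit combinations fit in 10 bits.

```c
#include <stdint.h>

static uint16_t encode_tab[1 << 12];   /* BCD digit triple (12 bits) -> declet */
static uint16_t decode_tab[1 << 10];   /* declet (10 bits) -> BCD digit triple */

/* Build both tables once; entries for invalid BCD patterns stay zero. */
void init_tables(void)
{
    for (int d1 = 0; d1 < 10; d1++)
        for (int d2 = 0; d2 < 10; d2++)
            for (int d3 = 0; d3 < 10; d3++) {
                uint16_t bcd    = (uint16_t)((d1 << 8) | (d2 << 4) | d3);
                uint16_t declet = (uint16_t)(d1 * 100 + d2 * 10 + d3);
                encode_tab[bcd]    = declet;
                decode_tab[declet] = bcd;
            }
}
```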
Chen–Ho encoding is limited to encoding sets of three decimal digits into groups of 10 bits (so calleddeclets).[1]Of the 1024 states possible by using 10 bits, it leaves only 24 states unused[1](withdon't carebits typically set to 0 on write and ignored on read). With only 2.34% wastage it gives a 20% more efficient encoding than BCD with one digit in 4 bits.[12][17]
Both Hertz and Chen also proposed similar, but less efficient, encoding schemes to compress sets of two decimal digits (requiring 8 bits in BCD) into groups of 7 bits.[2][12]
Larger sets of decimal digits could be divided into three- and two-digit groups.[2]
The patents also discuss the possibility of adapting the scheme to digits encoded in decimal codes other than 8-4-2-1 BCD,[2] such as Excess-3,[2] Excess-6, Jump-at-2, Jump-at-8, Gray, Glixon, O'Brien type-I and Gray–Stibitz code.[a] The same principles could also be applied to other bases.
In 1973, some form of Chen–Ho encoding appears to have been utilized in the address conversion hardware of the optionalIBM 7070/7074emulation feature for theIBM System/370 Model 165and370 Model 168computers.[23][24]
One prominent application uses a 128-bit register to store 33 decimal digits with a three digit exponent, effectively not less than what could be achieved using binary encoding (whereas BCD encoding would need 144 bits to store the same number of digits). | https://en.wikipedia.org/wiki/Chen%E2%80%93Ho_encoding |
Incomputer science, thedouble dabblealgorithmis used to convertbinary numbersintobinary-coded decimal(BCD) notation.[1][2]It is also known as theshift-and-add-3 algorithm, and can be implemented using a small number of gates in computer hardware, but at the expense of highlatency.[3]
The algorithm operates as follows:
Suppose the original number to be converted is stored in a register that is n bits wide. Reserve a scratch space wide enough to hold both the original number and its BCD representation; n + 4×⌈n/3⌉ bits will be enough. It takes a maximum of 4 bits in binary to store each decimal digit.
Then partition the scratch space into BCD digits (on the left) and the original register (on the right). For example, if the original number to be converted is eight bits wide, the scratch space would be partitioned as follows:

  hundreds | tens | ones | original register
    0010   | 0100 | 0011 |     11110011

The diagram above shows the binary representation of 243₁₀ in the original register, and the BCD representation of 243 on the left.
The scratch space is initialized to all zeros, and then the value to be converted is copied into the "original register" space on the right.
The algorithm then iteratesntimes. On each iteration, any BCD digit which is at least 5 (0101 in binary) is incremented by 3 (0011); then the entire scratch space is left-shifted one bit. The increment ensures that a value of 5, incremented and left-shifted, becomes 16 (10000), thus correctly "carrying" into the next BCD digit.
Essentially, the algorithm operates by doubling the BCD value on the left each iteration and adding either one or zero according to the original bit pattern. Shifting left accomplishes both tasks simultaneously. If any digit is five or above, three is added to ensure the value "carries" in base 10.
The double-dabble algorithm, performed on the value 243₁₀, proceeds through eight shifts (one per input bit) and then terminates. The BCD digits to the left of the "original register" space then read 0010 0100 0011, the BCD encoding of the original value 243.
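A minimal C implementation of the algorithm for 8-bit inputs (the function name is ours):

```c
#include <stdio.h>
#include <stdint.h>

/* Convert an 8-bit binary value to three BCD digits via double dabble. */
uint32_t double_dabble8(uint8_t value)
{
    uint32_t scratch = value;      /* 12 BCD bits (left) + 8 input bits (right) */
    for (int i = 0; i < 8; i++) {
        /* Add 3 to any BCD digit that is 5 or more, so the upcoming
           left shift "carries" correctly into the next decimal digit. */
        for (int shift = 8; shift <= 16; shift += 4) {
            if (((scratch >> shift) & 0xF) >= 5)
                scratch += (uint32_t)3 << shift;
        }
        scratch <<= 1;             /* shift the whole scratch space left */
    }
    return scratch >> 8;           /* discard the now-empty input field */
}

int main(void)
{
    uint32_t bcd = double_dabble8(243);
    printf("%03x\n", (unsigned)bcd);   /* prints 243: one hex digit per BCD digit */
    return 0;
}
```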
In another example, the value 65244₁₀ requires sixteen shifts, after which the algorithm terminates with the BCD digits 6-5-2-4-4. The decimal value of the BCD digits is 6×10⁴ + 5×10³ + 2×10² + 4×10¹ + 4×10⁰ = 65244.[4]
The algorithm is fully reversible. By applying the reverse double dabble algorithm a BCD number can be converted to binary. Reversing the algorithm is done by reversing its principal steps: on each iteration the entire scratch space is right-shifted one bit, and any BCD digit which is then at least 8 (1000 in binary) is decremented by 3 (0011).
The reverse double dabble algorithm, performed on the three BCD digits 2-4-3, terminates after eight shifts with the binary value 11110011 in the original register.
In the 1960s, the termdouble dabblewas also used for a different mental algorithm, used by programmers to convert a binary number to decimal. It is performed by reading the binary number from left to right, doubling if the next bit is zero, and doubling and adding one if the next bit is one.[5]In the example above, 11110011, the thought process would be: "one, three, seven, fifteen, thirty, sixty, one hundred twenty-one, two hundred forty-three", the same result as that obtained above. | https://en.wikipedia.org/wiki/Double_dabble |
The termyear 2000 problem,[1]or simplyY2K, refers to potential computer errors related to theformatting and storage of calendar datafor dates in and after the year2000. Manyprogramsrepresented four-digit years with only the final two digits, making the year 2000 indistinguishable from 1900. Computer systems' inability to distinguish dates correctly had the potential to bring down worldwide infrastructures for computer-reliant industries.
In the years leading up to the turn of the millennium, the public gradually became aware of the "Y2K scare", and individual companies predicted the global damage caused by the bug would cost anywhere between $400 million and $600 billion to rectify.[2] A lack of clarity regarding the potential dangers of the bug led some to stock up on food, water, and firearms, purchase backup generators, and withdraw large sums of money in anticipation of a computer-induced apocalypse.[3]
Contrary to published expectations, few major errors occurred in 2000. Supporters of the Y2K remediation effort argued that this was primarily due to the pre-emptive action of many computer programmers andinformation technologyexperts. Companies and organizations in some countries, but not all, had checked, fixed, and upgraded their computer systems to address the problem.[4][5]Then-U.S. presidentBill Clinton, who organized efforts to minimize the damage in theUnited States, labelled Y2K as "the first challenge of the 21st century successfully met",[6]and retrospectives on the event typically commend the programmers who worked to avert the anticipated disaster.
Critics argued that even in countries where very little had been done to fix software, problems were minimal. The same was true in sectors such as schools and small businesses where compliance with Y2K policies was patchy at best.
Y2K is anumeronymand was the common abbreviation for the year 2000 software problem. The abbreviation combines the letterYfor "year", the number 2 and a capitalized version ofkfor the SI unit prefixkilomeaning 1000; hence,2Ksignifies 2000. It was also named the "millennium bug" because it was associated with the popular (rather than literal) rollover of themillennium, even though most of the problems could have occurred at the end ofanycentury.
Computerworld's 1993 three-page "Doomsday 2000" article byPeter de Jagerwas called "the information-age equivalent of the midnight ride of Paul Revere" byThe New York Times.[7][8][9]
The problem was the subject of the early bookComputers in Crisisby Jerome and Marilyn Murray (Petrocelli, 1984; reissued byMcGraw-Hillunder the titleThe Year 2000 Computing Crisisin 1996). Its first recorded mention on aUsenetnewsgroup is from 18 January 1985 bySpencer Bolles.[10]
The acronym Y2K has been attributed toMassachusettsprogrammer David Eddy[11]in an e-mail sent on 12 June 1995. He later said, "People were calling it CDC (Century Date Change), FADL (Faulty Date Logic). There were other contenders. Y2K just came off my fingertips."[12]
The problem started because on bothmainframe computersand laterpersonal computers,memorywas expensive, from as low as $10 perkilobyteto more than US$100 per kilobyte in 1975.[13][14]It was therefore very important for programmers to minimize usage. Since computers only gained wide usage in the 20th century, programs could simply prefix "19" to the year of a date, allowing them to only store the last two digits of the year instead of four. As space on disc and tape storage was also expensive, these strategies saved money by reducing the size of stored data files and databases in exchange for becoming unusable past the year 2000.[15]
This meant that programs facing two-digit years could not distinguish between dates in 1900 and 2000. Dire warnings at times were in the mode of:
The Y2K problem is the electronic equivalent of theEl Niñoand there will be nasty surprises around the globe.
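The failure mode is easy to reproduce; a minimal C sketch (names and values are illustrative):

```c
#include <stdio.h>

/* The classic two-digit-year bug: years are stored as YY with "19"
   implicitly prefixed, so arithmetic breaks at the 2000 rollover. */
int age_in_year(int birth_yy, int current_yy)
{
    return current_yy - birth_yy;     /* assumes both years share a century */
}

int main(void)
{
    printf("%d\n", age_in_year(85, 99));  /* 14: correct in 1999 */
    printf("%d\n", age_in_year(85, 0));   /* -85: nonsense in 2000 */
    return 0;
}
```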
Options on the De Jager Year 2000 Index, "the first index enabling investors to manage risk associated with the ... computer problem linked to the year 2000" began trading mid-March 1997.[17]
Special committees were set up by governments to monitor remedial work andcontingency planning, particularly by crucial infrastructures such as telecommunications, to ensure that the most critical services had fixed their own problems and were prepared for problems with others. While some commentators and experts argued that the coverage of the problem largely amounted toscaremongering,[18]it was only the safe passing of the main event itself, 1 January 2000, that fully quelled public fears.[citation needed]
Some experts who argued that scaremongering was occurring, such asRoss Anderson, professor ofsecurity engineeringat theUniversity of Cambridge Computer Laboratory, have since claimed that despite sending out hundreds ofpress releasesabout research results suggesting that the problem was not likely to be as big as some had suggested, they were largely ignored by the media.[18]In a similar vein, theMicrosoft PressbookRunning Office 2000 Professional, published in May 1999, accurately predicted that most personal computer hardware and software would be unaffected by the year 2000 problem.[19]AuthorsMichael Halvorsonand Michael Young characterized most of the worries as popular hysteria, an opinion echoed byMicrosoft Corp.[20]
The practice of using two-digit dates for convenience predates computers, but was never a problem until stored dates were used in calculations.
I'm one of the culprits who created this problem. I used to write those programs back in the 1960s and 1970s, and was proud of the fact that I was able to squeeze a few elements of space out of my program by not having to put a 19 before the year. Back then, it was very important. We used to spend a lot of time running through various mathematical exercises before we started to write our programs so that they could be very clearly delimited with respect to space and the use of capacity. It never entered our minds that those programs would have lasted for more than a few years. As a consequence, they are very poorly documented. If I were to go back and look at some of the programs I wrote 30 years ago, I would have one terribly difficult time working my way through step-by-step.
Business data processing was done usingunit record equipmentandpunched cards, most commonly the 80-column variety employed byIBM, which dominated the industry. Many tricks were used to squeeze needed data into fixed-field 80-character records. Saving two digits for every date field was significant in this effort.
In the 1960s, computer memory and mass storage were scarce and expensive. Earlycore memorycost one dollar per bit. Popular commercial computers, such as theIBM 1401, shipped with as little as 2 kilobytes of memory.[a]Programs often mimicked card processing techniques. Commercial programming languages of the time, such asCOBOLandRPG, processed numbers in their character representations. Over time, the punched cards were converted tomagnetic tapeand then disc files, but the structure of the data usually changed very little.
Data was still input usingpunched cardsuntil the mid-1970s. Machine architectures,programming languagesand application designs were evolving rapidly. Neither managers nor programmers of that time expected their programs to remain in use for many decades, and the possibility that these programs would both remain in use and cause problems when interacting with databases – a new type of program with different characteristics – went largely uncommented upon.
The first person known to publicly address this issue wasBob Bemer, who had noticed it in 1958 as a result of work ongenealogical software. He spent the next twenty years fruitlessly trying to raise awareness of the problem with programmers,IBM, thegovernment of the United Statesand theInternational Organization for Standardization. This included the recommendation that the COBOLpicture clauseshould be used to specify four digit years for dates.[23]
In the 1980s, thebrokerageindustry began to address this issue, mostly because of bonds with maturity dates beyond the year 2000. By 1987 theNew York Stock Exchangehad reportedly spent over $20 million on Y2K, including hiring 100 programmers.[24]
Despite magazine articles on the subject from 1970 onward, the majority of programmers and managers only started recognizing Y2K as a looming problem in the mid-1990s, but even then, inertia and complacency caused it to be mostly unresolved until the last few years of the decade. In 1989,Erik Naggumwas instrumental in ensuring that internet mail used four digit representations of years by including a strong recommendation to this effect in the internet host requirements documentRFC1123.[25]OnApril Fools' Day1998, some companies set their mainframe computer dates to 2001, so that "the wrong date will be perceived as good fun instead of bad computing" while having a full day of testing.[26]
While some used 3-digit years together with 3-digit day-of-year dates, others chose to use the number of days since a fixed date, such as 1 January 1900.[27] Inaction was not an option, and risked major failure. Embedded systems with similar date logic were expected to malfunction and cause utilities and other crucial infrastructure to fail.
Saving space on stored dates persisted into the Unix era, with most systems representing dates in a single 32-bit word, typically as elapsed seconds from some fixed date, which causes the similar Y2K38 problem.[28]
Storage of a combined date and time within a fixed binary field is often considered a solution, but the possibility for software to misinterpret dates remains because such date and time representations must be relative to some known origin. Rollover of such systems is still a problem but can happen at varying dates and can fail in various ways. For example:
The date of 4 January 1975 overflowed the 12-bit field that had been used in the Decsystem 10 operating systems. There were numerous problems and crashes related to this bug while an alternative format was developed.[34]
Even before 1 January 2000 arrived, there were also some worries about 9 September 1999 (albeit less than those generated by Y2K). Because this date could also be written in the numeric format 9/9/99, it could have conflicted with the date value9999, frequently used to specify an unknown date. It was thus possible that database programs might act on the records containing unknown dates on that day. Data entry operators commonly entered 9999 into required fields for an unknown future date, (e.g., a termination date for cable television or telephone service), in order to process computer forms usingCICSsoftware.[35]Somewhat similar to this is the end-of-file code9999, used in older programming languages. While fears arose that some programs might unexpectedly terminate on that date, the bug was more likely to confuse computer operators than machines.
Normally, a year is a leap year if it is evenly divisible by four. A year divisible by 100 is not a leap year in the Gregorian calendar unless it is also divisible by 400. For example, 1600 was a leap year, but 1700, 1800 and 1900 were not. Some programs may have relied on the oversimplified rule that "a year divisible by four is a leap year". This method works fine for the year 2000 (because it is a leap year), and will not become a problem until 2100, when older legacy programs will likely have long since been replaced. Other programs contained incorrect leap year logic, assuming for instance that no year divisible by 100 could be a leap year. An assessment of thisleap year problemincluding a number of real-life code fragments appeared in 1998.[36]For information on why century years are treated differently, seeGregorian calendar.
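For illustration, here is the correct Gregorian test alongside the two faulty shortcuts described above (the function names are ours):

```c
#include <stdbool.h>

/* Correct Gregorian rule: divisible by 4, except centuries,
   except centuries divisible by 400. */
bool is_leap(int year)
{
    return (year % 4 == 0 && year % 100 != 0) || (year % 400 == 0);
}

/* Oversimplified shortcut: right for 2000, wrong for 2100. */
bool naive_leap(int year)
{
    return year % 4 == 0;
}

/* Overcorrected shortcut: forgets the 400-year exception,
   so it wrongly makes 2000 a common year. */
bool overcorrected_leap(int year)
{
    return year % 4 == 0 && year % 100 != 0;
}
```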
Some systems had problems once the year rolled over to 2010. This was dubbed by some in the media as the "Y2K+10" or "Y2.01K" problem.[37]
The main source of problems was confusion between hexadecimal number encoding andbinary-coded decimalencodings of numbers. Both hexadecimal and BCD encode the numbers 0–9 as 0x0–0x9. BCD encodes the number 10 as 0x10, while hexadecimal encodes the number 10 as 0x0A; 0x10 interpreted as a hexadecimal encoding represents the number 16.
For example, because the SMS protocol uses BCD for dates, some mobile phone software incorrectly reported dates of SMSes as 2016 instead of 2010.Windows Mobileis the first software reported to have been affected by this glitch; in some cases WM6 changes the date of any incoming SMS message sent after 1 January 2010 from the year 2010 to 2016.[38][39]
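The misinterpretation can be reproduced in a few lines of C (a sketch; the variable names are ours):

```c
#include <stdio.h>

/* The 2010 glitch in miniature: the two-digit year 10 stored as the
   BCD byte 0x10, then wrongly decoded as a plain binary value (16). */
int main(void)
{
    unsigned char year_bcd = 0x10;                     /* BCD for decimal 10 */
    int correct = ((year_bcd >> 4) & 0xF) * 10 + (year_bcd & 0xF);
    int buggy   = year_bcd;                            /* read as binary: 16 */
    printf("20%02d vs 20%02d\n", correct, buggy);      /* 2010 vs 2016 */
    return 0;
}
```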
Other systems affected includeEFTPOSterminals,[40]and thePlayStation 3(except the Slim model).[41]
The most important occurrences of such a glitch were in Germany, where up to 20 million bank cards became unusable, and withCitibank Belgium, whose Digipass customer identification chips failed.[42]
When the year 2022 began, many systems using 32-bit integers encountered problems, which are now collectively known as the Y2K22 bug. The maximum value of a signed 32-bit integer, as used in many computer systems, is 2147483647. Systems using an integer to represent a 10 character date-based field, where the leftmost two characters are the 2-digit year, ran into an issue on 1 January 2022 when the leftmost characters needed to be '22', i.e. values from 2200000001 needed to be represented.
Microsoft Exchange Serverwas one of the more significant systems affected by the Y2K22 bug. The problem caused emails to be stuck on transport queues on Exchange Server 2016 and Exchange Server 2019, reporting the following error:The FIP-FS "Microsoft" Scan Engine failed to load. PID: 23092, Error Code: 0x80004005. Error Description: Can't convert "2201010001" to long.[43]
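A sketch of the arithmetic (the YYMMDDHHMM packing is the pattern described above):

```c
#include <stdio.h>
#include <stdlib.h>

/* The Y2K22 pattern in miniature: a date-stamped value such as
   "2201010001" (YYMMDDHHMM) no longer fits once YY reaches 22. */
int main(void)
{
    long long value = atoll("2201010001");
    printf("needed: %lld, int32 max: %d\n", value, 2147483647);
    /* 2201010001 > 2147483647, so storing it in a signed 32-bit integer
       overflows; 2112312359 (31 Dec 2021, 23:59) still fit. */
    return 0;
}
```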
Many systems useUnix timeand store it in asigned 32-bit integer. This data type is only capable of representing integers between −(231) and (231)−1, treated as number of seconds since the epoch at 1 January 1970 at 00:00:00UTC. These systems can only represent times between 13 December 1901 at 20:45:52 UTC and 19 January 2038 at 03:14:07 UTC. If these systems are not updated and fixed, then dates all across the world that rely on Unix time will wrongfully display the year as 1901 beginning at 03:14:08 UTC on 19 January 2038.[citation needed]
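A minimal sketch of the wraparound (the increment is done in 64 bits to keep the C well defined):

```c
#include <stdio.h>
#include <stdint.h>

/* Y2038 in miniature: one second past the signed 32-bit maximum wraps
   to the most negative value, i.e. back to 13 December 1901. */
int main(void)
{
    int32_t t = INT32_MAX;                 /* 19 Jan 2038, 03:14:07 UTC */
    int32_t wrapped = (int32_t)((int64_t)t + 1);
    printf("%d\n", wrapped);               /* -2147483648 */
    return 0;
}
```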
Several very different approaches were used to solve the year 2000 problem in legacy systems.
Problems that occurred on 1 January 2000 were generally regarded as minor.[63] Consequences did not always appear exactly at midnight; some programs were not active at that moment, and problems would only show up when they were invoked. Not all recorded problems were directly linked to Y2K programming in a causal way; minor technological glitches occur on a regular basis.
The problems that were reported were varied but mostly minor.
Problems were reported on 29 February 2000, Y2K's first leap year day, and 1 March 2000. These were mostly minor.[96][97][98]
Some software did not correctly recognize 2000 as a leap year, and so worked on the basis of the year having 365 days. On the last day of 2000 (day 366) and first day of 2001 these systems exhibited various errors. Some computers also treated the new year 2001 as 1901, causing errors. These were generally minor.
Since 2000, various issues have occurred due to errors involvingoverflows. Anissue with time taggingcaused the destruction of theNASADeep Impactspacecraft.[107]
Some software used a process calleddate windowingto fix the issue by interpreting years 00–19 as 2000–2019 and 20–99 as 1920–1999. As a result, a new wave of problems started appearing in 2020, including parking meters in New York City refusing to accept credit cards, issues with Novituspoint of saleunits, and some utility companies printing bills listing the year 1920. The video gameWWE 2K20also began crashing when the year rolled over, although a patch was distributed later that day.[108]
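A sketch of such a windowing function (the name and the exact pivot handling are ours):

```c
/* Date windowing with the pivot described above: two-digit years
   below 20 are read as 20xx, the rest as 19xx. Fine until 2020,
   when "20" flips back to 1920. */
int window_year(int yy)
{
    return (yy < 20) ? 2000 + yy : 1900 + yy;
}
```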
Although theBulgarian national identification numberallocates only two digits for the birth year, theyear 1900 problemand subsequently the Y2K problem were addressed by the use of unused values above 12 in the month range. For all persons born before 1900, the month is stored as the calendar month plus 20, and for all persons born in or after 2000, the month is stored as the calendar month plus 40.[109]
Canadian Prime MinisterJean Chrétien's most importantcabinet ministerswere ordered to remain in the capitalOttawa, and gathered at24 Sussex Drive, the prime minister's residence, to watch the clock.[7]13,000Canadian troopswere also put on standby.[7]
The Dutch Government promoted Y2K Information Sharing and Analysis Centers (ISACs) to share readiness between industries, without threat of antitrust violations or liability based on information shared.[citation needed]
Norway and Finland changed theirnational identification numbersto indicate a person's century of birth. In both countries, the birth year was historically indicated by two digits only. This numbering system had already given rise to a similar problem, the "Year 1900 problem", which arose due to problems distinguishing between people born in the 19th and 20th centuries. Y2K fears drew attention to an older issue, while prompting a solution to a new problem. In Finland, the problem was solved by replacing the hyphen ("-") in the number with the letter "A" for people born in the 21st century (for people born before 1900, the sign was already "+").[110]In Norway, the range of the individual numbers following the birth date was altered from 0–499 to 500–999.[citation needed]
Romania also changed its national identification number in response to the Y2K problem, due to the birth year being represented by only two digits. Before 2000, the first digit, which shows the person's sex, was 1 for males and 2 for females. Individuals born since 1 January 2000 have a number starting with 5 if male or 6 if female.[citation needed]
TheUgandan governmentresponded to the Y2K threat by setting up a Y2K Task Force.[111]In August 1999 an independent international assessment by the World Bank International Y2k Cooperation Centre found that Uganda's website was in the top category as "highly informative". This put Uganda in the "top 20" out of 107 national governments, and on a par with the United States, United Kingdom, Canada, Australia and Japan, and ahead of Germany, Italy, Austria, Switzerland which were rated as only "somewhat informative". The report said that "Countries which disclose more Y2K information will be more likely to maintain public confidence in their own countries and in the international markets."[112]
In 1998, the United States government responded to the Y2K threat by passing the Year 2000 Information and Readiness Disclosure Act, by working with private sector counterparts to ensure readiness, and by creating internal continuity-of-operations plans in the event of problems. The Act also set limits to certain potential liabilities of companies with respect to disclosures about their year 2000 programs.[113][114] The effort was coordinated by the President's Council on Year 2000 Conversion, headed by John Koskinen, in coordination with the Federal Emergency Management Agency (FEMA), and an interim Critical Infrastructure Protection Group within the Department of Justice.[115][116]
The US government followed a three-part approach to the problem: (1) outreach and advocacy, (2) monitoring and assessment, and (3) contingency planning and regulation.[117]
A feature of US government outreach was Y2K websites, including y2k.gov, many of which have become inaccessible in the years since 2000. Some of these websiteshave been archivedby theNational Archives and Records Administrationor theWayback Machine.[118][119]
Each federal agency had its own Y2K task force which worked with its private sector counterparts; for example, theFCChad the FCC Year 2000 Task Force.[117][120]
Most industries had contingency plans that relied upon the internet for backup communications. As no federal agency had clear authority with regard to the internet at this time (it had passed from the Department of Defense to the National Science Foundation and then to the Department of Commerce), no agency was assessing the readiness of the internet itself. Therefore, on 30 July 1999, the White House held the White House Internet Y2K Roundtable.[121]
The U.S. government also established theCenter for Year 2000 Strategic Stabilityas a joint operation with the Russian Federation. It was a liaison operation designed to mitigate the possibility of false positive readings in each nation's nuclear attack early warning systems.[122]
The International Y2K Cooperation Center (IY2KCC) was established at the behest of national Y2K coordinators from over 120 countries when they met at the First Global Meeting of National Y2K Coordinators at the United Nations in December 1998.[123]IY2KCC established an office in Washington, D.C., in March 1999. Funding was provided by the World Bank, and Bruce W. McConnell was appointed as director.
IY2KCC's mission was to "promote increased strategic cooperation and action among governments, peoples, and the private sector to minimize adverse Y2K effects on the global society and economy." Its activities were conducted in six areas.
IY2KCC closed down in March 2000.[123]
The Y2K issue was a major topic of discussion in the late 1990s and as such showed up in much popular media. A number of "Y2K disaster" books were published such asDeadline Y2Kby Mark Joseph. Movies such asY2K: Year to Killcapitalized on the currency of Y2K, as did numerous TV shows, comic strips, and computer games.
A variety of fringe groups and individuals such as those within somefundamentalistreligious organizations,survivalists,cults, anti-social movements,self-sufficiencyenthusiasts and those attracted toconspiracy theories, called attention to Y2K fears and claimed that they provided evidence for their respective theories.End-of-the-worldscenarios andapocalypticthemes were common in their communication.
Interest in the survivalist movement peaked in 1999 in its second wave for that decade, triggered by Y2K fears. In the time before extensive efforts were made to rewrite computer programming codes to mitigate the possible impacts, some writers such asGary North,Ed Yourdon,James Howard Kunstler,[127]andEd Yardenianticipated widespread power outages, food and gasoline shortages, and other emergencies. North and others raised the alarm because they thought Y2K code fixes were not being made quickly enough. While a range of authors responded to this wave of concern, two of the most survival-focused texts to emerge wereBoston on Y2K(1998) byKenneth W. RoyceandMike Oehler'sThe Hippy Survival Guide to Y2K.
Y2K also appeared in the communication of somefundamentalistandcharismaticChristian leaders throughout the Western world, particularly in North America and Australia. Their promotion of the perceived risks of Y2K was combined withend timesthinking andapocalypticprophecies, allegedly in an attempt to influence followers.[128]TheNew York Timesreported in late 1999, "The Rev.Jerry Falwellsuggested that Y2K would be the confirmation of Christianprophecy– God's instrument to shake this nation, to humble this nation. The Y2K crisis might incite a worldwiderevivalthat would lead to theraptureof the church. Along with many survivalists, Mr. Falwell advised stocking up on food and guns".[129]Adherents in these movements were encouraged to engage in food hoarding, take lessons in self-sufficiency, and the more extreme elements planned for a total collapse of modern society. TheChicago Tribunereported that some large fundamentalist churches, motivated by Y2K, were the sites forflea market-like sales of paraphernalia designed to help people survive a social order crisis ranging from gold coins to wood-burning stoves.[130]Betsy Hartwrote in theDeseret Newsthat many of the more extreme evangelicals used Y2K to promote a political agenda in which the downfall of the government was a desired outcome in order to usher in Christ's reign. She also said, "the cold truth is that preaching chaos is profitable and calm doesn't sell many tapes or books".[131]Y2K fears were described dramatically by New Zealand-based Christian prophetic author and preacherBarry Smithin his publication "I Spy with my Little Eye," where he dedicated an entire chapter to Y2K.[132]Some expected, at times through so-called prophecies, that Y2K would be the beginning of a worldwide Christian revival.[133]
In the aftermath, it became clear that leaders of these fringe groups and churches had manufactured fears of apocalyptic outcomes to manipulate their followers into dramatic scenes of mass repentance or renewed commitment to their groups, as well as urging additional giving of funds. TheBaltimore Sunclaimed this in their article "Apocalypse Now – Y2K spurs fears", noting the increased call for repentance in the populace in order to avoid God's wrath.[134]Christian leaderCol Stringerwrote, "Fear-creating writers sold over 45 million books citing every conceivable catastrophe from civil war, planes dropping from the sky to the end of the civilized world as we know it. Reputable preachers were advocating food storage and a "head for the caves" mentality. No banks failed, no planes crashed, no wars or civil war started. And yet not one of these prophets of doom has ever apologized for their scare-mongering tactics."[133]Critics argue that some prominent North American Christian ministries and leaders generated huge personal and corporate profits through sales of Y2K preparation kits, generators, survival guides, published prophecies and a wide range of other associated merchandise, such as Christian journalistRob Bostonin his article "False Prophets, Real Profits."[128]However,Pat Robertson, founder of the global Christian Broadcasting Network, gave equal time to pessimists and optimists alike and granted that people should at least expect "serious disruptions".[135]
The total cost of the work done in preparation for Y2K likely surpassed US$300 billion ($548 billion as of January 2018, once inflation is taken into account).[136][137]IDC calculated that the US spent an estimated $134 billion ($245 billion) preparing for Y2K, and another $13 billion ($24 billion) fixing problems in 2000 and 2001. Worldwide, $308 billion ($562 billion) was estimated to have been spent on Y2K remediation.[138]
Remedial work was driven by customer demand for solutions.[139]Software suppliers, mindful of their potential legal liability,[124]responded with remedial effort. Software subcontractors were required to certify that their software components were free of date-related problems, which drove further work down the supply chain.
By 1999, many corporations required their suppliers to certify that their software was all Y2K-compliant. Some signed after accepting merely remedial updates. Many businesses or even whole countries suffered only minor problems despite spending little effort themselves.[citation needed]
There are two ways to view the events of 2000 from the perspective of its aftermath:
This view holds that the vast majority of problems were fixed correctly, and the money spent was at least partially justified. The situation was essentially one of preemptive alarm. Those who hold this view claim that the lack of problems at the date changereflects the completeness of the project, and that many computer applications would not have continued to function into the 21st century without correction or remediation.
Expected problems that were not seen by small businesses and small organizations were prevented by Y2K fixes embedded in routine updates to operating system and utility software[140]that were applied several years before 31 December 1999.
The extent to which larger industry and government fixes averted issues that would have more significant impacts had they not been fixed were typically not disclosed or widely reported.[141][unreliable source?]
It has been suggested that on11 September 2001, infrastructure in New York City (includingsubways, phone service, and financial transactions) was able to continue operation because of the redundant networks established in the event of Y2K bug impact[142]and the contingency plans devised by companies.[143]The terrorist attacks and the following prolonged blackout tolower Manhattanhad minimal effect on global banking systems.[144]Backup systems were activated at various locations around the region, many of which had been established to deal with a possible complete failure of networks in Manhattan'sFinancial Districton 31 December 1999.[145]
The contrary view asserts that there were no, or very few, critical problems to begin with. This view also asserts that there would have been only a few minor mistakes and that a "fix on failure" approach would have been the most efficient andcost-effectiveway to solve these problems as they occurred.
International Data Corporationestimated that the US might have wasted $40 billion.[146]
Skeptics of the need for a massive effort pointed to the absence of Y2K-related problems occurring before 1 January 2000, even though the 2000 financial year commenced in 1999 in many jurisdictions, and a wide range of forward-looking calculations involved dates in 2000 and later years. Estimates undertaken in the leadup to 2000 suggested that around 25% of all problems should have occurred before 2000.[147]Critics of large-scale remediation argued during 1999 that the absence of significant reported problems in non-compliant small firms was evidence that there had been, and would be, no serious problems needing to be fixed inanyfirm, and that the scale of the problem had therefore been severely overestimated.[148]
Countries such as South Korea, Italy, and Russia invested little to nothing in Y2K remediation,[129][146]yet had the same negligible Y2K problems as countries that spent enormous sums of money. Western countries anticipated such severe problems in Russia that many issued travel advisories and evacuated non-essential staff.[149]
Critics also cite the lack of Y2K-related problems in schools, many of which undertook little or no remediation effort. By 1 September 1999, only 28% of US schools had achieved compliance for mission critical systems, and a government report predicted that "Y2K failures could very well plague the computers used by schools to manage payrolls, student records, online curricula, and building safety systems".[150]
Similarly, there were few Y2K-related problems in an estimated 1.5 million small businesses that undertook no remediation effort. On 3 January 2000 (the first weekday of the year), theSmall Business Administrationreceived an estimated 40 calls from businesses with computer issues, similar to the average. None of the problems were critical.[151]
The 2024 CrowdStrike incident, a global IT system outage, was compared to the Y2K bug by several news outlets, recalling the fears surrounding it due to its scale and impact. In a smaller 2022 incident, Toyota cars equipped with a certain screen unit had their clocks roll back to 2002.[152][153]
Classificationis the activity of assigning objects to some pre-existing classes or categories. This is distinct from the task of establishing the classes themselves (for example throughcluster analysis).[1]Examples include diagnostic tests, identifying spam emails and deciding whether to give someone a driving license.
As well as 'category', synonyms or near-synonyms for 'class' include 'type', 'species', 'order', 'concept', 'taxon', 'group', 'identification' and 'division'.
The meaning of the word 'classification' (and its synonyms) may take on one of several related meanings. It may encompass both classification and the creation of classes, as for example in 'the task of categorizing pages in Wikipedia'; this overall activity is listed undertaxonomy. It may refer exclusively to the underlying scheme of classes (which otherwise may be called a taxonomy). Or it may refer to the label given to an object by the classifier.
Classification is a part of many different kinds of activities and is studied from many different points of view includingmedicine,philosophy,[2]law,anthropology,biology,taxonomy,cognition,communications,knowledge organization,psychology,statistics,machine learning,economicsandmathematics.
Methodological work aimed at improving the accuracy of a classifier is commonly divided between cases where there are exactly two classes (binary classification) and cases where there are three or more classes (multiclass classification).
Unlike in decision theory, it is assumed that a classifier repeats the classification task over and over. And unlike a lottery, it is assumed that each classification can be either right or wrong; in the theory of measurement, classification is understood as measurement against a nominal scale. Thus it is possible to try to measure the accuracy of a classifier.
Measuring the accuracy of a classifier allows a choice to be made between two alternative classifiers. This is important both when developing a classifier and in choosing which classifier to deploy. There are, however, many different methods for evaluating the accuracy of a classifier and no general method for determining which method should be used in which circumstances. Different fields have taken different approaches, even in binary classification. In pattern recognition, error rate is popular. The Gini coefficient and KS statistic are widely used in the credit scoring industry. Sensitivity and specificity are widely used in epidemiology and medicine. Precision and recall are widely used in information retrieval.[3]
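In binary classification, several of the measures named above are simple ratios of the four cells of a confusion matrix. A minimal sketch (the function name and example counts are invented for illustration):

```python
def binary_metrics(tp, fp, fn, tn):
    """Common accuracy measures for a binary classifier, computed
    from true/false positive and true/false negative counts."""
    sensitivity = tp / (tp + fn)   # recall, or true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    precision   = tp / (tp + fp)   # positive predictive value
    error_rate  = (fp + fn) / (tp + fp + fn + tn)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "precision": precision, "recall": sensitivity,
            "error_rate": error_rate}

# Example: 80 true positives, 10 false positives, 20 false negatives, 90 true negatives
print(binary_metrics(80, 10, 20, 90))
```

Which of these numbers matters most depends on the field, as the text notes: epidemiology reports sensitivity and specificity, information retrieval reports precision and recall, and pattern recognition often reports only the error rate.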
Classifier accuracy depends greatly on the characteristics of the data to be classified. There is no single classifier that works best on all given problems (a phenomenon that may be explained by the no-free-lunch theorem).
A ternary computer, also called a trinary computer, is one that uses ternary logic (i.e., base 3) instead of the more common binary system (i.e., base 2) in its calculations. Ternary computers use trits, instead of binary bits.
Ternary computing deals with three discrete states, but the ternary digits themselves can be defined differently:[1]
Ternary quantum computers use qutrits rather than trits. A qutrit is a quantum state that is a complex unit vector in three dimensions, which can be written as |Ψ⟩ = α|0⟩ + β|1⟩ + γ|2⟩ in bra-ket notation.[2] The labels given to the basis vectors (|0⟩, |1⟩, |2⟩) can be replaced with other labels, for example those given above.
I often reflect that had the Ternary instead of the denary Notation been adopted in the Infancy of Society, machines something like the present would long ere this have been common, as the transition from mental to mechanical calculation would have been so very obvious and simple.
One early calculating machine, built entirely from wood by Thomas Fowler in 1840, operated in balanced ternary.[4][5][3] The first modern, electronic ternary computer, Setun, was built in 1958 in the Soviet Union at Moscow State University by Nikolay Brusentsov,[6][7] and it had notable advantages over the binary computers that eventually replaced it, such as lower electricity consumption and lower production cost.[citation needed] In 1970 Brusentsov built an enhanced version of the computer, which he called Setun-70.[6] In the United States, the ternary computing emulator Ternac, working on a binary machine, was developed in 1973.[8]: 22
The ternary computer QTC-1 was developed in Canada.[9]
Ternary computing is commonly implemented in terms of balanced ternary, which uses the three digits −1, 0, and +1. The negative of any balanced ternary number can be obtained by replacing every + with a − and vice versa, so subtraction is as easy as inverting the digits of the subtrahend and then adding normally. Balanced ternary can express negative values as easily as positive ones, without the need for a leading negative sign as with unbalanced numbers. These advantages make some calculations more efficient in ternary than binary.[10] Because every nonzero digit has magnitude 1 and carries a mandatory sign, a notation that drops the '1's and uses only zero and the + and − signs is more concise than one that includes them.
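A short sketch (the function names are invented here, not part of any ternary system) showing how an integer converts to balanced ternary and how negation reduces to swapping the + and − digits:

```python
def to_balanced_ternary(n):
    """Convert an integer to balanced ternary, returned as a string
    over the digits '+', '0', '-' (most significant digit first)."""
    if n == 0:
        return "0"
    digits = []
    while n != 0:
        r = n % 3            # remainder in {0, 1, 2}
        if r == 2:           # a 2 becomes digit -1 with a carry of +1
            digits.append("-")
            n = n // 3 + 1
        else:
            digits.append("+" if r == 1 else "0")
            n //= 3
    return "".join(reversed(digits))

def negate(bt):
    """Negation is just swapping '+' and '-' digit-wise."""
    return bt.translate(str.maketrans("+-", "-+"))

print(to_balanced_ternary(8))          # '+0-'  (9 + 0 - 1)
print(negate(to_balanced_ternary(8)))  # '-0+'  (-9 + 0 + 1 = -8)
```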
Ternary computing can also be implemented in terms of unbalanced ternary, which uses the three digits 0, 1, 2. The 0 and 1 states behave as in an ordinary binary computer, with the additional 2 state represented physically, for example by a leakage current.
The world's first unbalanced ternary semiconductor design on a large wafer was implemented by a research team led by Kim Kyung-rok at Ulsan National Institute of Science and Technology in South Korea, a step toward low-power, high-performance microchips. The research was selected as one of the future projects funded by Samsung in 2017 and was published on July 15, 2019.[11]
With the advent of mass-produced binary components for computers, ternary computers have diminished in significance. However, Donald Knuth argues that they will be brought back into development in the future to take advantage of ternary logic's elegance and efficiency.[10] One possible way this could happen is by combining an optical computer with the ternary logic system.[12] A ternary computer using fiber optics could use dark as 0 and two orthogonal polarizations of light as +1 and −1.[13]
The Josephson junction has been proposed as a balanced ternary memory cell, using circulating superconducting currents, either clockwise, counterclockwise, or off. "The advantages of the proposed memory circuit are capability of high speed computation, low power consumption and very simple construction with fewer elements due to the ternary operation."[14]
Ternary computing shows promise for implementing fast ternary large language models (LLMs), and potentially other AI applications, in lieu of floating-point arithmetic.[15]
In Robert A. Heinlein's novel Time Enough for Love, the sapient computers of Secundus, the planet on which part of the framing story is set, including Minerva, use an unbalanced ternary system. Minerva, in reporting a calculation result, says "three hundred forty one thousand six hundred forty... the original ternary readout is unit pair pair comma unit nil nil comma unit pair pair comma unit nil nil point nil".[16]
With the emergence of carbon nanotube transistors, many researchers have shown interest in designing ternary logic gates using them. During 2020–2024, more than 1,000 papers on this subject were published on IEEE Xplore.[17]
The 12-hour clock is a time convention in which the 24 hours of the day are divided into two periods: a.m. (from Latin ante meridiem, translating to "before midday") and p.m. (from Latin post meridiem, translating to "after midday").[1][2] Each period consists of 12 hours numbered: 12 (acting as 0),[3] 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, and 11. The 12-hour clock has been developed since the second millennium BC and reached its modern form in the 16th century.
The 12-hour time convention is common in several English-speaking nations and former British colonies, as well as a few other countries. In English-speaking countries, "12 p.m." usually indicates noon, while "12 a.m." means midnight, but the reverse convention has also been used (see § Confusion at noon and midnight).[4][5][6] "Noon" and "midnight" are unambiguous.
The natural day-and-night division of a calendar day forms the fundamental basis as to why each day is split into two cycles. Originally there were two cycles: one cycle which could be tracked by the position of the Sun (day), followed by one cycle which could be tracked by the Moon and stars (night). This eventually evolved into the two 12-hour periods which are used today, one called "a.m." starting at midnight and another called "p.m." starting at noon.[1]
The 12-hour clock can be traced back as far as Mesopotamia and ancient Egypt.[7] Both an Egyptian sundial for daytime use[8] and an Egyptian water clock for night-time use were found in the tomb of Pharaoh Amenhotep I.[9] Dating to c. 1500 BC, these clocks divided their respective times of use into 12 hours each.
The ancient Romans also used a 12-hour clock: daylight and nighttime were each divided into 12 equal intervals (of varying duration according to the season).[10] The nighttime hours were grouped into four watches (vigiliae).[11]
The first mechanical clocks in the 14th century, if they had dials at all, showed all 24 hours using the 24-hour analog dial, influenced by astronomers' familiarity with the astrolabe and sundial and by their desire to model the Earth's apparent motion around the Sun. In Northern Europe these dials generally used the 12-hour numbering scheme in Roman numerals but showed both a.m. and p.m. periods in sequence. This is known as the double-XII system and can be seen on many surviving clock faces, such as those at Wells and Exeter.
Elsewhere in Europe, numbering was more likely to be based on the 24-hour system (I to XXIV). The 12-hour clock was used throughout the British Empire.
During the 15th and 16th centuries, the 12-hour analog dial and time system gradually became established as standard throughout Northern Europe for general public use. The 24-hour analog dial was reserved for more specialized applications, such as astronomical clocks and chronometers.
Most analog clocks and watches today use the 12-hour dial, on which the shorter hour hand rotates once every 12 hours and twice in one day. Some analog clock dials have an inner ring of numbers along with the standard 1-to-12 numbered ring. The number 12 is paired either with a 00 or a 24, while the numbers 1 through 11 are paired with the numbers 13 through 23, respectively. This modification allows the clock to also be read in 24-hour notation. This kind of 12-hour clock can be found in countries where the 24-hour clock is preferred.
In several countries the 12-hour clock is the dominant written and spoken system of time, predominantly in nations that were part of the former British Empire, for example, the United Kingdom, Republic of Ireland, the United States, Canada (excluding Quebec), Australia, New Zealand, South Africa, India, Pakistan, and Bangladesh; others, such as Mexico and the former American colony of the Philippines, follow this convention as well. Even in those countries where the 12-hour clock is predominant, there are frequently contexts (such as science, medicine, the military or transport) in which the 24-hour clock is preferred. In most countries, however, the 24-hour clock is the standard system used, especially in writing. Some nations in Europe and Latin America use a combination of the two, preferring the 12-hour system in colloquial speech but using the 24-hour system in written form and in formal contexts.
The 12-hour clock in speech often uses phrases such as ... in the morning, ... in the afternoon, ... in the evening, and ... at night. Rider's British Merlin almanac for 1795 and a similar almanac for 1773 published in London used them.[12] Other than in English-speaking countries and some Spanish-speaking countries, the terms a.m. and p.m. are seldom used and often unknown.[α]
In most countries, computers by default show the time in 24-hour notation. Most operating systems, including Microsoft Windows and Unix-like systems such as Linux and macOS, activate the 12-hour notation by default for a limited number of language and region settings. This behaviour can be changed by the user, such as with the Windows operating system's "Region and Language" settings.[13]
The Latin abbreviations a.m. and p.m. (often written "am" and "pm", "AM" and "PM", or "A.M." and "P.M.") are used in English (and Spanish).[14][α] 'Noon' is not abbreviated.
When abbreviations and phrases are omitted, one may rely on sentence context and societal norms to reduce ambiguity. For example, if one commutes to work at "9:00", 9:00 a.m. may be implied, but if a social dance is scheduled to begin at "9:00", it may begin at 9:00 p.m.
The terms "a.m." and "p.m." are abbreviations of the Latin ante meridiem (before midday) and post meridiem (after midday). Depending on the style guide referenced, the abbreviations "a.m." and "p.m." are variously written in small capitals ("am" and "pm"),[16][17] uppercase letters without a period ("AM" and "PM"), uppercase letters with periods, or lowercase letters ("am" and "pm"[18] or "a.m." and "p.m."[17]). With the advent of computer-generated and printed schedules, especially in airlines, advertising, and television promotions, the "M" character is often omitted as providing no additional information, as in "9:30A" or "10:00P".[19]
Some style guides suggest the use of a space between the number and the a.m. or p.m. abbreviation.[citation needed] Style guides recommend not using a.m. and p.m. without a time preceding them.[20]
The hour/minute separator varies between countries: some use a colon, others use a period (full stop),[18] and still others use the letter h.[citation needed] (In some usages, particularly "military time", of the 24-hour clock, there is no separator between hours and minutes.[21] This style is not generally seen when the 12-hour clock is used.)
Unicode specifies codepoints for a.m. and p.m. as precomposed characters, which are intended to be used only with Chinese-Japanese-Korean (CJK) character sets, as they take up exactly the same space as one CJK character:
In speaking, it is common to round the time to the nearest five minutes and/or express the time as the past (or to) the closest hour; for example, "five past five" (5:05). Minutes past the hour means those minutes are added to the hour; "ten past five" means 5:10. Minutes to, 'til and of the hour mean those minutes are subtracted; "ten of five", "ten 'til five", and "ten to five" all mean 4:50.
Fifteen minutes is often called a quarter hour, and thirty minutes is often known as a half hour. For example, 5:15 can be phrased "(a) quarter past five" or "five-fifteen"; 5:30 can be "half past five", "five-thirty" or simply "half five". The time 8:45 may be spoken as "eight forty-five" or "(a) quarter to nine".[22] In some languages, e.g. Polish, rounding off is mandatory when using the (spoken) 12-hour clock, but disallowed when using 24-hour notation; i.e., 15:12 might be pronounced as "quarter past three" or "fifteen-twelve", but not "three-twelve" or "quarter past fifteen".[23]
In older English, it was common for the number 25 to be expressed as "five-and-twenty".[24] In this way the time 8:35 might have been phrased as "five-and-twenty to 9",[25] although this styling fell out of fashion in the later part of the 1900s and is now rarely used.[26]
Instead of meaning 5:30, the "half five" expression is sometimes used to mean 4:30, or "halfway to five", especially for regions such as the American Midwest and other areas that have been particularly influenced by German culture.[citation needed] This meaning follows the pattern choices of many Germanic and Slavic languages, including Serbo-Croatian, Dutch, Danish, Russian, Norwegian, and Swedish, as well as Hungarian, Finnish, and the languages of the Baltic States.
Moreover, in situations where the relevant hour is obvious or has been recently mentioned, a speaker might omit the hour and just say "quarter to (the hour)", "half past" or "ten 'til" to avoid an elaborate sentence in informal conversations. These forms are commonly used in television and radio broadcasts that cover multiple time zones at one-hour intervals.[27]
In describing a vague time of day, a speaker might say the phrase "seven-thirty, eight" to mean sometime around 7:30 or 8:00. Such phrasing can be misinterpreted as a specific time of day (here 7:38), especially by a listener not expecting an estimation. The phrase "about seven-thirty or eight" clarifies this.
Some more ambiguous phrasing might be avoided. Within five minutes of the hour, the phrase "five of seven" (6:55) can be heard as "five-oh-seven" (5:07). "Five to seven" or even "six fifty-five" clarifies this.
Minutes may be expressed as an exact number of minutes past the hour specifying the time of day (e.g., 6:32 p.m. is "six thirty-two"). Additionally, when expressing the time using the "past (after)" or "to (before)" formula, it is conventional to choose the number of minutes below 30 (e.g., 6:32 p.m. is conventionally "twenty-eight minutes to seven" rather than "thirty-two minutes past six").
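The "past"/"to" convention just described is easy to mechanize. A minimal sketch (the function name is invented, and the output uses numerals rather than spelled-out words, a simplification for this illustration) that keeps the stated number of minutes at or below 30:

```python
def spoken_form(hour, minute):
    """Render a 12-hour clock time in the conventional 'past'/'to' style,
    always keeping the number of minutes at or below 30."""
    if minute == 0:
        return f"{hour} o'clock"
    if minute <= 30:                       # e.g. 5:10 -> '10 minutes past 5'
        return f"{minute} minutes past {hour}"
    next_hour = 1 if hour == 12 else hour + 1
    return f"{60 - minute} minutes to {next_hour}"   # e.g. 6:32 -> '28 minutes to 7'

print(spoken_form(6, 32))   # 28 minutes to 7
print(spoken_form(4, 50))   # 10 minutes to 5 ('ten to five')
```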
In spoken English, full hours are often represented by the numbered hour followed by o'clock (10:00 as ten o'clock, 2:00 as two o'clock). This may be followed by the "a.m." or "p.m." designator, though some phrases such as in the morning, in the afternoon, in the evening, or at night more commonly follow analog-style terms such as o'clock, half past three, and quarter to four. O'clock itself may be omitted, telling a time as four a.m. or four p.m. Minutes ":01" to ":09" are usually pronounced as oh one to oh nine (nought or zero can also be used instead of oh). Minutes ":10" to ":59" are pronounced as their usual number-words. For instance, 6:02 a.m. can be pronounced six oh two a.m., whereas 6:32 a.m. could be told as six thirty-two a.m.
It is not always clear what times "12:00 a.m." and "12:00 p.m." denote. In Latin, ante meridiem (a.m.) means "before midday" and post meridiem (p.m.) means "after midday". Since noon is neither before nor after itself, the terms a.m. and p.m. do not apply.[2] Although noon could be denoted "12 m.", this is seldom done[37] and also does not resolve the question of how to indicate midnight.
By convention, "12 a.m." denotes midnight and "12 p.m." denotes noon.[38] However, many style guides recommend against using either because of the potential for confusion. Many recommend instead using the unambiguous terms "12 noon" and "12 midnight", or simply "noon" and "midnight". These include The American Heritage Dictionary of the English Language,[38] The Canadian Press Stylebook,[34] and the NIST's "Frequently asked questions (FAQ)" web page.[2]
Alternatively, some recommend referring to one minute before or after 12:00, especially when referring to midnight (for example, "11:59 p.m." or "12:01 a.m."). These include the UK's National Physical Laboratory "FAQ-Time" web page.[35] That has become common in the United States in legal contracts and for airplane, bus, or train schedules, though some schedules use other conventions. Occasionally, when trains run at regular intervals, the pattern may be broken at midnight by displacing the midnight departure one or more minutes, such as to 11:59 p.m. or 12:01 a.m.[39]
Some authors have been known to use the reverse of the normal convention. E. G. Richards in his book Mapping Time (1999) provided a diagram in which 12 a.m. means noon and 12 p.m. means midnight.[40] Historically, the style manual of the United States Government Printing Office used 12 a.m. for noon and 12 p.m. for midnight, though this was reversed in its 2008 editions.[30][31]
In Japanese usage, midnight is written as 午前0時 (0 a.m.) and noon is written as 午後0時 (0 p.m.), making the hours numbered sequentially from 0 to 11 in both halves of the day. Alternatively, noon may be written as 午前12時 (12 a.m.) and midnight at the end of the day as 午後12時 (12 p.m.), as opposed to 午前0時 (0 a.m.) for the start of the day, making the Japanese convention the opposite of the English usage of 12 a.m. and 12 p.m.[33]
The modern 24-hour clock is the convention of timekeeping in which the day runs from midnight to midnight and is divided into 24 hours. This is indicated by the hours (and minutes) passed since midnight, from 00(:00) to 23(:59), with 24(:00) as an option to indicate the end of the day. This system, as opposed to the 12-hour clock, is the most commonly used time notation in the world today,[A] and is used by the international standard ISO 8601.[1]
A number of countries, particularly English-speaking ones, use the 12-hour clock, or a mixture of the 24- and 12-hour time systems. In countries where the 12-hour clock is dominant, some professions prefer to use the 24-hour clock. For example, in the practice of medicine, the 24-hour clock is generally used in documentation of care as it prevents any ambiguity as to when events occurred in a patient's medical history.[2]
A time of day is written in the 24-hour notation in the form hh:mm (for example 01:23) or hh:mm:ss (for example, 01:23:45), where hh (00 to 23) is the number of full hours that have passed since midnight, mm (00 to 59) is the number of full minutes that have passed since the last full hour, and ss (00 to 59) is the number of seconds since the last full minute. To indicate the exact end of the day, hh may take the value 24, with mm and ss taking the value 00. In the case of a leap second, the value of ss may extend to 60. A leading zero is added for numbers under 10, but it is optional for the hours. The leading zero is very commonly used in computer applications, and always used when a specification requires it (for example, ISO 8601).
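These rules, including the 24:00 end-of-day marker and the leap-second allowance, can be expressed as a short validator. A sketch (the function name is invented for illustration):

```python
import re

def is_valid_24h(timestr):
    """Check a time string in hh:mm:ss 24-hour notation as described above:
    hours 00-23 (or 24:00:00 for the exact end of the day), minutes 00-59,
    and seconds 00-60, where 60 allows for a leap second."""
    m = re.fullmatch(r"(\d{1,2}):(\d{2}):(\d{2})", timestr)
    if not m:
        return False
    hh, mm, ss = (int(g) for g in m.groups())
    if hh == 24:                      # end-of-day marker: only 24:00:00
        return mm == 0 and ss == 0
    return hh <= 23 and mm <= 59 and ss <= 60   # ss == 60 only for a leap second

print(is_valid_24h("23:59:60"))  # True (leap second)
print(is_valid_24h("24:00:00"))  # True (end of day)
print(is_valid_24h("24:00:01"))  # False
```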
Where subsecond resolution is required, the seconds can be a decimal fraction; that is, the fractional part follows a decimal dot or comma, as in 01:23:45.678. The most commonly used separator symbol between hours, minutes and seconds is the colon, which is also the symbol used in ISO 8601. In the past, some European countries used the dot on the line as a separator, but most national standards on time notation have since then been changed to the international standard colon. In some contexts (including some computer protocols and military time), no separator is used and times are written as, for example, "2359".
In the 24-hour time notation, the day begins at midnight, 00:00 or 0:00, and the last minute of the day begins at 23:59. Where convenient, the notation 24:00 may also be used to refer to midnight at the end of a given date[3]— that is, 24:00 of one day is the same time as 00:00 of the following day.
The notation 24:00 mainly serves to refer to the exact end of a day in a time interval. A typical usage is giving opening hours ending at midnight (e.g. "00:00–24:00", "07:00–24:00"). Similarly, some bus and train timetables show 00:00 as departure time and 24:00 as arrival time. Legal contracts often run from the start date at 00:00 until the end date at 24:00.
While the 24-hour notation unambiguously distinguishes between midnight at the start (00:00) and end (24:00) of any given date, there is no commonly accepted distinction among users of the 12-hour notation. Style guides and military communication regulations in some English-speaking countries discourage the use of 24:00 even in the 24-hour notation, and recommend reporting times near midnight as 23:59 or 00:01 instead.[4] Sometimes the use of 00:00 is also avoided.[4] In variance with this, an older version of the correspondence manual for the United States Navy and United States Marine Corps specified 0001 to 2400.[5] The manual was updated in June 2015 to use 0000 to 2359.[6]
Time-of-day notations beyond 24:00 (such as 24:01 or 25:00 instead of 00:01 or 01:00) are not commonly used and not covered by the relevant standards. However, they have been used occasionally in some special contexts in the United Kingdom, France, Spain, Canada, Japan, South Korea, Hong Kong, and China, where business hours extend beyond midnight, such as broadcast television production and scheduling. The GTFS public transport schedule listings file format has the concept of service days and expects times beyond 24:00 for trips that run after midnight.[7]
In most countries, computers by default show the time in 24-hour notation. For example, Microsoft Windows and macOS activate the 12-hour notation by default only if a computer is in a handful of specific language and region settings. The 24-hour system is commonly used in text-based interfaces. POSIX programs such as ls default to displaying timestamps in 24-hour format.
In American English, the term military time is a synonym for the 24-hour clock.[8] In the US, the time of day is customarily given almost exclusively using the 12-hour clock notation, which counts the hours of the day as 12, 1, ..., 11 with suffixes a.m. and p.m. distinguishing the two diurnal repetitions of this sequence. The 24-hour clock is commonly used there only in some specialist areas (military, aviation, navigation, tourism, meteorology, astronomy, computing, logistics, emergency services, hospitals), where the ambiguities of the 12-hour notation are deemed too inconvenient, cumbersome, or dangerous.
Military usage, as agreed between the United States and allied English-speaking military forces,[9] differs in some respects from other twenty-four-hour time systems:
The first mechanical public clocks introduced in Italy were mechanical 24-hour clocks which counted the 24 hours of the day from one-half hour after sunset to the evening of the following day. The 24th hour was the last hour of day time.[11]
From the 14th to the 17th century, two systems of time measurement competed in Europe:[12][13]
The modern 24-hour system is a late-19th-century adaptation of the German midnight-starting system, which then prevailed throughout the world with the exception of some Anglophone countries.
Striking clocks had to produce 300 strokes each day, which required a lot of rope, and wore out the mechanism quickly, so some localities switched to ringing sequences of 1 to 12 twice (156 strokes), or even 1 to 6 repeated four times (84 strokes).[11]
Sandford Fleming, the engineer-in-chief of the Canadian Intercolonial Railway, was an early proponent of using the 24-hour clock as part of a programme to reform timekeeping, which also included establishing time zones and a standard prime meridian.[15][16] At the International Meridian Conference in 1884, the following resolution was adopted by the conference:[17]
That this universal day is to be a mean solar day; is to begin for all the world at the moment of midnight of the initial meridian coinciding with the beginning of the civil day and date of that meridian, and is to be counted from zero up to twenty-four hours.[17]
The Canadian Pacific Railway was among the first organisations to adopt the 24-hour clock, at midsummer 1886.[15][18] A report by a government committee in the United Kingdom noted Italy as the first country among those mentioned to adopt 24-hour time nationally, in 1893.[19] Other European countries followed: France adopted it in 1912 (the French army in 1909), followed by Denmark (1916), and Greece (1917). By 1920, Spain, Portugal, Belgium, and Switzerland had switched, followed by Turkey (1925), and Germany (1927). By the early 1920s, many countries in Latin America had also adopted the 24-hour clock.[20] Some of the railways in India had switched before the outbreak of the war.[19]
During World War I, the British Royal Navy adopted the 24-hour clock in 1915, and the Allied armed forces followed soon after,[19] with the British Army switching officially in 1918.[21] The Canadian armed forces first started to use the 24-hour clock in late 1917.[22] In 1920, the United States Navy was the first United States organisation to adopt the system; the United States Army, however, did not officially adopt the 24-hour clock until 1 July 1942.[23][24]
The use of the 24-hour clock in the United Kingdom has grown steadily since the beginning of the 20th century, although attempts to make the system official failed more than once.[25] In 1934, the British Broadcasting Corporation (BBC) switched to the 24-hour clock for broadcast announcements and programme listings. The experiment was halted after five months following a lack of enthusiasm from the public, and the BBC continued using the 12-hour clock.[25] In the same year, Pan American World Airways Corporation and Western Airlines in the United States both adopted the 24-hour clock.[26] In modern times, the BBC uses a mixture of both the 12-hour and the 24-hour clock.[25] British Rail, London Transport, and the London Underground switched to the 24-hour clock for timetables in 1964.[25] A mixture of the 12- and 24-hour clocks similarly prevails in other English-speaking Commonwealth countries: French speakers have adopted the 24-hour clock in Canada much more broadly than English speakers, and Australia as well as New Zealand also use both systems.
In trigonometry, the gradian – also known as the gon (from Ancient Greek γωνία (gōnía) 'angle'), grad, or grade[1] – is a unit of measurement of an angle, defined as one-hundredth of the right angle; in other words, 100 gradians is equal to 90 degrees.[2][3][4] It is equivalent to 1⁄400 of a turn,[5] 9⁄10 of a degree, or π⁄200 of a radian. Measuring angles in gradians (gons) is said to employ the centesimal system of angular measurement, initiated as part of metrication and decimalisation efforts.[6][7][8][a]
In continental Europe, the French word centigrade, also known as centesimal minute of arc, was in use for one hundredth of a grade; similarly, the centesimal second of arc was defined as one hundredth of a centesimal arc-minute, analogous to decimal time and the sexagesimal minutes and seconds of arc.[12] The chance of confusion was one reason for the adoption of the term Celsius to replace centigrade as the name of the temperature scale.[13][14]
Gradians (gons) are principally used in surveying (especially in Europe),[15][7][16] and to a lesser extent in mining[17] and geology.[18][19]
The gon (gradian) is a legally recognised unit of measurement in the European Union[20]: 9 and in Switzerland.[21] However, this unit is not part of the International System of Units (SI).[22][20]: 9–10
The unit originated in France in connection with the French Revolution as the grade, along with the metric system, hence it is occasionally referred to as a metric degree. Due to confusion with the existing term grad(e) in some northern European countries (meaning a standard degree, 1⁄360 of a turn), the name gon was later adopted, first in those regions, and later as the international standard.[which?] In France, it was also called grade nouveau. In German, the unit was formerly also called Neugrad (new degree) (whereas the standard degree was referred to as Altgrad (old degree)), likewise nygrad in Danish, Swedish and Norwegian (also gradian), and nýgráða in Icelandic.
Although attempts at a general introduction were made, the unit was adopted only in some countries, and for specialised areas such as surveying,[15][7][16] mining[17] and geology.[18][19] Today, the degree, 1⁄360 of a turn, or the mathematically more convenient radian, 1⁄2π of a turn (used in the SI system of units), is generally used instead.
In the 1970s–1990s, most scientific calculators offered the gon (gradian), as well as radians and degrees, for their trigonometric functions.[23] In the 2010s, some scientific calculators lack support for gradians.[24]
The international standard symbol for this unit is "gon" (see ISO 31-1, Annex B).[needs update] Other symbols used in the past include "gr", "grd", and "g", the last sometimes written as a superscript, similarly to a degree sign: 50ᵍ = 45°.
A metric prefix is sometimes used, as in "dgon", "cgon", "mgon", denoting respectively 0.1 gon, 0.01 gon, 0.001 gon.
Centesimal arc-minutes and centesimal arc-seconds were also denoted with superscripts c and cc, respectively.
Each quadrant is assigned a range of 100 gon, which eases recognition of the four quadrants, as well as arithmetic involving perpendicular or opposite angles.
One advantage of this unit is that right angles to a given angle are easily determined. If one is sighting down a compass course of 117 gon, the direction to one's left is 17 gon, to one's right 217 gon, and behind one 317 gon. A disadvantage is that the common angles of 30° and 60° in geometry must be expressed in fractions (as 33+1⁄3 gon and 66+2⁄3 gon respectively).
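The compass-course arithmetic above is just addition and subtraction of 100 gon modulo a 400 gon turn. A sketch (the function names are invented; the example course of 117 gon is from the text):

```python
import math

def gon_to_degrees(g):
    return g * 0.9                 # 100 gon = 90 degrees

def gon_to_radians(g):
    return g * math.pi / 200       # 200 gon = pi radians

def relative_bearings(course_gon):
    """Perpendicular and opposite directions, which in gons are simply
    +/-100 and +200 modulo 400 (one full turn)."""
    return {"left":   (course_gon - 100) % 400,
            "right":  (course_gon + 100) % 400,
            "behind": (course_gon + 200) % 400}

print(relative_bearings(117))   # {'left': 17, 'right': 217, 'behind': 317}
print(gon_to_degrees(117))      # 105.3 degrees
```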
In the 18th century, the metre was defined as the 10-millionth part of a quarter meridian.
Thus, 1 gon corresponds to an arc length along the Earth's surface of approximately 100 kilometres; 1 centigon to 1 kilometre; 10 microgons to 1 metre.[25] (The metre has been redefined with increasing precision since then.)
The gradian is not part of the International System of Units (SI). The EU directive on the units of measurement[20]: 9–10 notes that the gradian "does not appear in the lists drawn up by the CGPM, CIPM or BIPM." The most recent, 9th edition of the SI Brochure does not mention the gradian at all.[22] The previous edition mentioned it only in the following footnote:[26]
The gon (or grad, where grad is an alternative name for the gon) is an alternative unit of plane angle to the degree, defined as (π/200) rad. Thus there are 100 gon in a right angle. The potential value of the gon in navigation is that because the distance from the pole to the equator of the Earth is approximately 10,000 km, 1 km on the surface of the Earth subtends an angle of one centigon at the centre of the Earth. However the gon is rarely used.
A decimal calendar is a calendar which includes units of time based on the decimal system. For example, a "decimal month" would consist of a year with 10 months and 36.52422 days per month.
The ancient Egyptian calendar consisted of twelve months, each divided into three weeks of ten days, with five intercalary days.[1]
The original Roman calendar consisted of ten months; however, the calendar year only lasted 304 days, with 61 days during winter not assigned to any month.[2] The months of Ianuarius and Februarius were added to the calendar by Numa Pompilius in 700 BCE.[2]
The French Republican Calendar was introduced (along with decimal time) in 1793, and was similar to the ancient Egyptian calendar.[3] It consisted of twelve months, each divided into three décades of ten days, with five or six intercalary days called sansculottides.[3] The calendar was abolished by Napoleon on January 1, 1806.[3]
The modern Gregorian calendar does not use decimal units of time; however, several proposed calendar systems do. None of these have achieved widespread use.[example needed]
An unusual unit of measurement is a unit of measurement that does not form part of a coherent system of measurement, especially because its exact quantity may not be well known or because it may be an inconvenient multiple or fraction of a base unit.
Many of the unusual units of measurements listed here are colloquial measurements, units devised to compare a measurement to common and familiar objects.
Horizontal pitch (HP) is a unit of length defined by the Eurocard printed circuit board standard used to measure the horizontal width of rack-mounted electronic equipment, similar to the rack unit (U) used to measure vertical heights of rack-mounted equipment. One HP is 0.2 inches (1⁄5″) or 5.08 millimetres wide.
Valve's Source game engine uses the Hammer unit as its base unit of length, named after Source's official map creation software, Hammer.[1] The exact definition varies from game to game, but a Hammer unit is usually defined as a sixteenth of a foot (16 Hammer units = 1 foot). This means that 1 Hammer unit is equal to exactly 19.05 millimetres or 0.75 inches (3⁄4″).
Button sizes are typically measured in ligne, which can be abbreviated as L. The measurement refers to the button diameter, or the largest diameter of irregular button shapes. There are 40 ligne in 1 inch.[2][3]
One rack unit (U) is 1.75 inches (44.45 mm) and is used to measure rack-mountable audiovisual, computing and industrial equipment. Rack units are typically denoted without a space between the number of units and the 'U'. Thus, a 4U server enclosure (case) is seven inches (177.8 mm) high, or more practically, built to occupy a vertical space seven inches high, with sufficient clearance to allow movement of adjacent hardware.
The hand is a non-SI unit of length equal to exactly 4 inches (101.6 mm). It is normally used to measure the height of horses in some English-speaking countries, including Australia,[4] Canada, Ireland, the United Kingdom, and the United States. It is customary when measuring in hands to use a point to indicate inches (quarter-hands) and not tenths of a hand. For example, 15.1 hands normally means 15 hands, 1 inch (5 ft 1 in), rather than 15+1⁄10 hands.[5]
The light-nanosecond is defined as exactly 29.9792458 cm. It was popularized in information technology as a unit of distance by Grace Hopper as the distance which a photon could travel in one billionth of a second (roughly 30 cm or one foot): "The speed of light is one foot per nanosecond."[6][7]
A metric foot, defined as 300 millimetres (approximately 11.8 inches), has been used occasionally in the UK but has never been an official unit.[8]
The corresponding metric inch of 25 millimetres (0.984 in) was used for pin spacing in Soviet microchips, which were often cloned from Western designs but scaled down slightly from US customary inches to metric inches. This led to incompatibility issues in the Soviet computer market.[9]
A Chinese foot is around one third of a metre, with the exact definition depending on jurisdiction.
Horses are used to measure distances in horse racing – a horse length (shortened to merely a length when the context makes it obvious) equals roughly 8 feet or 2.4 metres. Shorter distances are measured in fractions of a horse length; also common are measurements of a full or fraction of a head, a neck, or a nose.[10]
In rowing races such as the Oxford and Cambridge Boat Race, the margin of victory and of defeat is expressed in fractions and multiples of boat lengths. The length of a rowing eight is about 62 feet (19 m). A shorter distance is the canvas, which is the length of the covered part of the boat between the bow and the bow oarsman. The Racing Rules of Sailing also makes heavy use of boat lengths.
A football field is often used as a comparative measurement of length when talking about distances that may be hard to comprehend when stated in terms of standard units.
An American football field is usually understood to be 100 yards (91 m) long, though it is technically 120 yards (110 m) when including the two 10 yd (9.1 m) long end zones. The field is 160 ft (53 yd; 49 m) wide.[11]
An association football pitch may vary within limits of 90–120 m (98–131 yd) in length and 45–90 m (49–98 yd) in width. The recommended field size is 105 m × 68 m (115 yd × 74 yd) for major competitions such as the FIFA World Cup, UEFA European Championship and UEFA Champions League.
A Canadian football field is 65 yd (59 m) wide and 150 yd (140 m) long, including two 20 yd (18 m) long end zones.
In most US cities, a city block is between 1⁄16 and 1⁄8 mi (100 and 200 m). In Manhattan, the measurement "block" usually refers to a north–south block, which is 1⁄20 mi (80 m). Sometimes people living in places (like Manhattan) with a regularly spaced street grid will speak of long blocks and short blocks. Within a typical large North American city, it is often only possible to travel along east–west and north–south streets, so travel distance between two points is often given in the number of blocks east–west plus the number north–south (known to mathematicians as the Manhattan metric).[12]
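The block-counting just described is exactly the Manhattan metric: the distance between two grid points is the sum of the absolute east–west and north–south differences. A tiny illustrative sketch (the function name and coordinates are invented here):

```python
def manhattan_blocks(x1, y1, x2, y2):
    """Travel distance on a rectangular street grid, in blocks:
    east-west difference plus north-south difference."""
    return abs(x2 - x1) + abs(y2 - y1)

# From (3rd Ave, 10th St) to (7th Ave, 14th St): 4 avenues + 4 streets = 8 blocks
print(manhattan_blocks(3, 10, 7, 14))   # 8
```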
The globally averaged radius of Earth, generally given as 6,371 kilometres (3,959 miles), is often employed as a unit of measure to intuitively compare objects of planetary size.
Lunar distance (LD), the distance from the centre of Earth to the centre of the Moon, is a unit of measure in astronomy. The lunar distance is approximately 384,400 km (238,900 mi), or 1.28 light-seconds; this is roughly 30 times Earth's diameter. A little less than 400 lunar distances make up an astronomical unit.
The siriometer is an obsolete astronomical measure equal to one million astronomical units, i.e., one million times the average distance between the Sun and Earth.[13] This distance is equal to about 15.8 light-years, 149.6 Pm, or 4.8 parsecs, and is about twice the distance from Earth to the star Sirius.[14]
The cubit is, among others, a unit used in the Bible for measuring the size of Noah's Ark and of the Ark of the Covenant. Cubits of various lengths were used in Antiquity by various peoples, not only the Hebrews. One cubit is originally the length from someone's elbow to the tip of their middle finger; it usually translates to approximately half a metre ±10%, though an ancient Roman cubit was as long as 120 cm.
One cubit was equal to 6–7 palms, one palm being the width of a hand not including the thumb.
In groff/troff, and specifically in the included traditional manuscript macro set ms, the vee (v) is a unit of vertical distance often—but not always—corresponding to the height of an ordinary line of text.[15]
One barn is 10⁻²⁸ square metres, about the cross-sectional area of a uranium nucleus. The name probably derives from early neutron-deflection experiments, when the uranium nucleus was described, and the phrases "big as a barn" and "hit a barn door" were used. Barns are typically used for cross sections in nuclear and particle physics. Additional units include the microbarn (or "outhouse")[16] and the yoctobarn (or "shed").[17][18]
A Kuang is a traditional Chinese unit of area used in sampling,[clarification needed] equal to 0.11 square metres or one square Chinese foot.[19]
One brass is exactly 100 square feet (9.29 m²) in area (used in measurement of work done or to be done, such as plastering, painting, etc.). The same word is used, however, for 100 cubic feet (2.83 m³) of estimated or supplied loose material, such as sand, gravel, rubble, etc. This unit is prevalent in the construction industry in India.[20][21]
The same area is called a square in the construction industry in North America,[22] and was historically used in Australia by real estate agents. A roof's area may be calculated in square feet, then converted to squares.
In Ireland, before the 19th century, a "cow's grass" was a measurement used by farmers to indicate the size of their fields. A cow's grass was equal to the amount of land that could produce enough grass to support a cow.[23][24]
A football pitch, or field, can be used as a man-in-the-street unit of area.[25][26] The standard FIFA football pitch for international matches is 105 m (344 ft) long by 68 m (223 ft) wide (7,140 m² or 0.714 ha or 1.76 acres); FIFA allows for a variance of up to 5 m (16.4 ft) in length in either direction and 7 m (23.0 ft) more or 4 m (13.1 ft) less in width (and larger departures if the pitch is not used for international competition), which means the association football pitch is generally only used for order-of-magnitude comparisons.[27]
An American football field, including both end zones, is 360 by 160 ft (120.0 by 53.3 yd; 109.7 by 48.8 m), or 57,600 square feet (5,350 m²) (0.535 hectares or 1.32 acres). A Canadian football field is 65 yards (59 m) wide and 110 yards (100 m) long with end zones adding a combined 40 yards (37 m) to the length, making it 87,750 square feet (8,152 m²) or 0.8215 ha (2.030 acres).
An Australian rules football field may be approximately 150 metres (160 yd) (or more) long goal to goal and 135 metres (148 yd) (or more) wide, although the field's elliptical nature reduces its area to a certain extent. A 150-by-135-metre (164 by 148 yd) football field has an area of approximately 15,900 m² (1.59 ha; 3.9 acres), twice the area of a Canadian football field and three times that of an American football field.
A morgen ("morning" in Dutch and German) was approximately the amount of land tillable by one man behind an ox in the morning hours of a day. This was an official unit of measurement in South Africa until the 1970s, and was defined in November 2007 by the South African Law Society as having a conversion factor of 1 morgen = 0.856532 hectares.[28] This unit of measure was also used in the Dutch colonial province of New Netherland (later New York and parts of New England).[29][30]
The area of a familiar country, state or city is often used as a size reference, especially in journalism. Usually the region is used to describe something of similar size to the reference region, but in some cases such references become common enough that multiples of the area start to be used, as in "twice the area of Wales".[31][32][33] Besides Wales (20,779 km² (8,023 sq mi)), other regions that have been used this way include Belgium (30,528 km² or 11,787 sq mi),[34] the German state of Saarland (2,569.69 km² or 992.16 sq mi),[35] and Washington, D.C. (61.4 sq mi or 159 km²).[36]
A metric ounce is an approximation of the imperial ounce, US dry ounce, or US fluid ounce. These three customary units vary. However, the metric ounce is usually taken as 25 or 30 ml (0.88 or 1.06 imp fl oz; 0.85 or 1.01 US fl oz) when volume is being measured, or in grams when mass is being measured.
The US Food and Drug Administration (FDA) defines the "food labeling ounce" as 30 ml (1.1 imp fl oz; 1.0 US fl oz), slightly larger than the 29.6 ml (1.04 imp fl oz; 1.00 US fl oz) fluid ounce.[37]
Several Dutch units of measurement have been replaced with informal metric equivalents, including the ons or ounce. It originally meant 1⁄16 of a pound, or a little over 30 g (1.1 oz) depending on which definition of the pound was used, but was redefined as 100 g (3.5 oz) when the country metricated.[38]
The shot is a liquid volume measure that varies from country to country and state to state depending on legislation. It is routinely used for measuring strong liquor or spirits when the amount served and consumed is smaller than the more common measures of alcoholic "drink" and "pint". There is a legally defined maximum size of a serving in some jurisdictions. The size of a "single" shot is 20–60 ml (0.70–2.11 imp fl oz; 0.68–2.03 US fl oz). The smaller "pony" shot is 20–30 ml (0.70–1.06 imp fl oz; 0.68–1.01 US fl oz). According to Encyclopædia Britannica Almanac 2009, a pony is 0.75 fluid ounces[clarification needed] of liquor.[39] According to Wolfram Alpha, one pony is 1 U.S. fluid ounce.[40] "Double" shots (surprisingly not always the size of two single shots, even in the same place) are 40–100 ml (1.4–3.5 imp fl oz; 1.4–3.4 US fl oz). In the UK, spirits are sold in shots of either 25 ml (0.88 imp fl oz; 0.85 US fl oz) (approximating the old fluid ounce) or 35 ml (1.2 imp fl oz; 1.2 US fl oz).[41]
A board foot is a United States and Canadian unit of approximate volume, used for lumber. It is equivalent to 1 inch × 1 foot × 1 foot (144 cu in or 2,360 cm³). It is also found in the unit of density pounds per board foot. In Australia and New Zealand the terms super foot or superficial foot were formerly used for this unit. The exact volume of wood specified is variable and depends on the type of lumber. For planed lumber the dimensions used to calculate board feet are nominal dimensions, which are larger than the actual size of the planed boards. See Dimensional lumber for more information on this.
The hoppus system is a measure for timber in the round (standing or felled), now largely superseded by the metric system except in measuring hardwoods in certain countries. Its purpose is to estimate the value of sawn timber in a log, by measuring the unsawn log and allowing for wastage in the mill. Following the so-called "quarter-girth formula" (the square of one quarter of the circumference in inches multiplied by 1⁄144 of the length in feet), the notional log is four feet in circumference, one inch of which yields the hoppus board foot, 1 foot yields the hoppus foot, and 50 feet yields a hoppus ton. This translates to a hoppus foot being equal to 1.273 cubic feet (2,200 in³; 0.0360 m³). The hoppus board foot, when milled, yields about one board foot. The volume yielded by the quarter-girth formula is 78.54% of cubic measure (i.e. 1 ft³ = 0.7854 h ft; 1 h ft = 1.273 ft³).[42]
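The quarter-girth formula is simple enough to state directly in code. A sketch (the function name is invented; girth is the log's circumference in inches, length is in feet):

```python
def hoppus_feet(girth_inches, length_feet):
    """Quarter-girth formula from the text:
    (girth / 4, in inches) squared, times length in feet, divided by 144."""
    return (girth_inches / 4) ** 2 * length_feet / 144

# The notional log: 4 ft (48 in) in circumference, 1 foot long -> 1 hoppus foot
print(hoppus_feet(48, 1))    # 1.0
# The same log, 50 feet long -> a hoppus ton (50 hoppus feet)
print(hoppus_feet(48, 50))   # 50.0
```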
A cubic ton is an antiquated measure of volume, varying based on the commodity from about 16 to 45 cu ft (0.45 to 1.27 m³). It is now only used for lumber, for which one cubic ton is equivalent to 40 cu ft (1.1 m³).
The cord is a unit of measure of dry volume used in Canada and the United States to measure firewood and pulpwood. A cord is the amount of wood that, when "ranked and well stowed" (arranged so pieces are aligned, parallel, touching and compact), occupies a volume of 128 cubic feet (3.62 m³).[43] This corresponds to a well-stacked woodpile, 4 feet deep by 4 feet high by 8 feet wide (122 cm × 122 cm × 244 cm), or any other arrangement of linear measurements that yields the same volume. A more unusual measurement for firewood is the "rick" or face cord. It is stacked 16 inches (40.6 cm) deep with the other measurements kept the same as a cord, making it 1⁄3 of a cord; however, regional variations mean that its precise definition is non-standardized.[44]
The twenty-foot equivalent unit is the volume of the smallest standard shipping container. It is equivalent to 1,360 cubic feet (39 m³). Larger intermodal containers are commonly described in multiples of TEU, as are container ship capacities.
An acre-foot is a unit of volume commonly used in the United States in reference to large-scale water resources, such as reservoirs, aqueducts, canals, sewer flow capacity, irrigation water[45] and river flows. It is defined as the volume of one acre of surface area to a depth of one foot: 43,560 cu ft (1,233 m³; 325,851 US gal; 271,328 imp gal).
Many well-known objects are regularly used as casual units of volume. They include:
The volume of water which flows in one unit of time through an orifice of one square inch area. The size of the unit varies from one place to another.
It is common in particle physics to use eV/c² as a unit of mass. Here, eV (electronvolt) is a unit of energy (the kinetic energy of an electron accelerated over one volt, 1.6×10⁻¹⁹ joules), and c is the speed of light in vacuum. Energy and mass are related through E = mc². This definition is useful for a linear particle accelerator when accelerating electrons.
In many systems of natural units c = 1, so the c is dropped and eV itself becomes a unit of mass.
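Converting such a mass back to kilograms is a direct application of E = mc². A sketch (the constant and function names are invented for illustration; the electronvolt value is the exact SI definition):

```python
EV_IN_JOULES = 1.602176634e-19    # 1 eV, exact by SI definition
C = 299_792_458.0                 # speed of light in m/s, exact

def ev_per_c2_to_kg(mass_ev):
    """m = E / c^2, with the energy E given in electronvolts."""
    return mass_ev * EV_IN_JOULES / C**2

# The electron mass is about 0.511 MeV/c^2:
print(ev_per_c2_to_kg(0.511e6))   # ~9.1e-31 kg
```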
The mass of an old bag of cement was one hundredweight (112 pounds, 51 kg). The amount of material that an aircraft could carry into the air is often visualised as the number of bags of cement that it could lift.[citation needed] In the concrete and petroleum industry, however, a bag of cement is defined as 94 lb (43 kg) because it has an apparent volume close to 1 cubic foot (28 litres).[61] When ready-mix concrete is specified, a "bag mix" unit is used as if the batching company mixes 5 literal bags of cement per cubic yard (or cubic metre) when a "5 bag mix" is ordered.[citation needed]
In 1793, the French term "grave" (from "gravity") was suggested as the base unit of mass for the metric system. In 1795, however, the name "kilogramme" was adopted instead.
When reporting on the masses of extrasolar planets, astronomers often discuss them in terms of multiples of Jupiter's mass (MJ = 1.9×10²⁷ kg).[62] For example, "Astronomers recently discovered a planet outside our Solar System with a mass of approximately 3 Jupiters." Furthermore, the mass of Jupiter is nearly equal to one thousandth of the mass of the Sun.
Solar mass (M☉ = 2.0×10³⁰ kg) is also often used in astronomy when talking about masses of stars or galaxies; for example, Alpha Centauri A has the mass of 1.1 suns, and the Milky Way has a mass of approximately 6×10¹¹ M☉.
Solar mass also has a special use when estimating orbital periods and distances of two bodies using Kepler's laws: a³ = Mtotal T², where a is the length of the semi-major axis in AU, T is the orbital period in years, and Mtotal is the combined mass of the objects in M☉. In the case of a planet orbiting a star, Mtotal can be approximated by the mass of the central object. More specifically, in the case of the Sun and Earth the numbers reduce to Mtotal ≈ 1, a ≈ 1 and T ≈ 1.
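In these units the relation solves directly for the period. A sketch (the function name is invented; Jupiter's semi-major axis of about 5.2 AU is used as a familiar check):

```python
def orbital_period_years(a_au, m_total_solar=1.0):
    """Kepler's third law in the units given in the text:
    a^3 = M_total * T^2, with a in AU, T in years, M in solar masses."""
    return (a_au ** 3 / m_total_solar) ** 0.5

print(orbital_period_years(1.0))   # ~1.0 year (Earth)
print(orbital_period_years(5.2))   # ~11.9 years (roughly Jupiter's period)
```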
George Gamow discussed measurements of time such as the "light-mile" and "light-foot", the time taken for light to travel the specified unit distance, defined by "reversing the procedure" used in defining a light-year.[63] A light-foot is roughly one nanosecond, and one light-mile is approximately five microseconds.[64]
In nuclear engineering and astrophysics contexts, the shake is sometimes used as a conveniently short period of time. 1 shake is defined as 10 nanoseconds.[65]
In computing, the jiffy is the duration of one tick of the system timer interrupt. Typically, this time is 0.01 seconds, though in some earlier systems (such as the Commodore 8-bit machines) the jiffy was defined as 1⁄60 of a second, roughly equal to the vertical refresh period (i.e. the field rate) on NTSC video hardware (and the period of AC electric power in North America).
One unit derived from the FFF system of units is the microfortnight, one millionth of the fundamental time unit of FFF, which equals 1.2096 seconds. This is a fairly representative example of "hacker humor",[66] and is occasionally used in operating systems; for example, the OpenVMS TIMEPROMPTWAIT parameter is measured in microfortnights.[67]
The sidereal day is based on the Earth's rotation rate relative to fixed stars, rather than the Sun. A sidereal day is approximately 23 hours, 56 minutes, 4.0905 SI seconds.
The measurement of time is unique in SI in that while the second is the base unit, and measurements of time smaller than a second use prefixed units smaller than a second (e.g. microsecond, nanosecond, etc.), measurements larger than a second instead use traditional divisions, including the sexagesimal-based minute and hour as well as the less regular day and year units.
SI allows for the use of larger prefixed units based on the second, a system known as metric time, but this is seldom used, since the number of seconds in a day (86,400 or, in rare cases, 86,401) negates one of the metric system's primary advantages: easy conversion by multiplying or dividing by powers of ten.
There have been numerous proposals and usage of decimal time, most of which were based on the day as the base unit, such that the number of units between any two events that happen at the same time of day would be equal to the number of days between them multiplied by some integer power of ten. In dynastic China, the kè was a unit that represented 1⁄100 of a day (it has since been redefined to 1⁄96 of a day, or 15 minutes). In France, a decimal time system in place from 1793 to 1805 divided the day into 10 hours, each divided into 100 minutes, in turn each divided into 100 seconds; the French Republican Calendar further extended this by assembling days into ten-day "weeks".
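Conversion between the French decimal scheme and standard time is pure arithmetic on the fraction of the day elapsed. A sketch (the function name is invented for illustration):

```python
def to_french_decimal(h, m, s):
    """Convert a standard time of day to French decimal time
    (10 hours of 100 minutes of 100 seconds; 100,000 decimal seconds per day)."""
    day_fraction = (h * 3600 + m * 60 + s) / 86400
    decimal_seconds = round(day_fraction * 100_000)
    dh, rem = divmod(decimal_seconds, 10_000)   # 100 * 100 decimal seconds per decimal hour
    dm, ds = divmod(rem, 100)
    return f"{dh}:{dm:02d}:{ds:02d}"

print(to_french_decimal(12, 0, 0))   # 5:00:00 (noon is five decimal hours)
print(to_french_decimal(18, 0, 0))   # 7:50:00
```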
Ordinal dates and Julian days, the latter of which has seen use in astronomy as it is not subject to leap year complications, allow for the expression of a decimal portion of the day.[68] In the mid-1960s, to defeat the advantage of the recently introduced computers for the then popular rally racing in the Midwest, competition lag times in a few events were given in centids (1⁄100 day, 864 seconds, 14.4 minutes), millids (1⁄1,000 day, 86.4 seconds), and centims (1⁄100 minute, 0.6 seconds), the latter two looking and sounding a bit like the related units of minutes and seconds.[verification needed] Decimal time proposals are frequently used in fiction, often in futuristic works.
In addition to decimal time, there also exist binary clocks and hexadecimal time.
The Swatch Internet Time system is based on decimal time.
Many mechanical stopwatches are of the 'decimal minute' type. These split one minute into 100 units of 0.6s each. This makes addition and subtraction of times easier than using regular seconds.
The United States-based NASA, when conducting missions to the planet Mars, has typically used a time of day system calibrated to the mean solar day on that planet (known as a "sol"), training those involved on those missions to acclimate to that length of day, which is 88,775 SI seconds, or 2,375 seconds (about 39 minutes) longer than the mean solar day on Earth. NASA's Martian timekeeping system (instead of breaking down the sol into 25×53×67 or 25×67×53 SI second divisions) slows down clocks so that the 24-hour day is stretched to the length of that on Mars; Martian hours, minutes and seconds are thus 2.75% longer than their SI-compatible counterparts.[69][70]
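The stretched clock amounts to rescaling elapsed SI seconds by the ratio of the two day lengths. A sketch (the constant and function names are invented; the sol length of 88,775 SI seconds is from the text):

```python
MARS_SOL_SECONDS = 88775          # mean solar day on Mars, in SI seconds
EARTH_DAY_SECONDS = 86400

def mars_clock(elapsed_si_seconds):
    """Mission-style Mars time: stretch the 24-hour clock over one sol,
    so each Martian 'second' is 88775/86400 (about 1.0275) SI seconds."""
    mars_seconds = elapsed_si_seconds * EARTH_DAY_SECONDS / MARS_SOL_SECONDS
    h, rem = divmod(int(mars_seconds), 3600)
    m, s = divmod(rem, 60)
    return f"{h:02d}:{m:02d}:{s:02d}"

# Half a sol of elapsed SI seconds reads as Martian noon:
print(mars_clock(MARS_SOL_SECONDS / 2))   # 12:00:00
```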
The Darian calendar is an arrangement of sols into a Martian year. It maintains a seven-sol week (retaining Sunday through Saturday naming customs), with four weeks to a month and 24 months to a Martian year, which contains 668 or 669 sols depending on leap years. The last Saturday of every six months is skipped over in the Darian calendar.
There are two diametrically opposed definitions of thedog year, primarily used to approximate the equivalent age ofdogsand other animals with similar life spans. Both are based upon apopular mythregarding theaging of dogsthat states that a dog ages seven years in the time it takes a human to age one year.
In fact, the aging of a dog varies by breed (larger breeds tend to have shorter lifespans than small and medium-sized breeds); dogs also develop faster and have longer adulthoods relative to their total life span than humans. Most dogs are sexually mature by 1 year old, which corresponds to perhaps 13 years old in humans.[citation needed]Giant dog breedsandbulldogstend to have the strongest linear correspondence to human aging, with longer adolescences and shorter overall lifespans; such breeds typically age about nine times as fast as humans throughout their lives.[73]
The galactic year, GY, is the time it takes the Solar System to revolve once around the galactic core: approximately 250 million years (250 megaannum, or Ma). It is a convenient unit for long-term measurements. For example, on this scale, oceans appeared on Earth after 4 GY, life is detectable at 5 GY, and multicellular organisms first appeared at 15 GY. The age of the Earth is estimated at 20 GY.[74] This use of GY is not to be confused with Gyr for gigayear or Gy for the gray (unit).
A moment was a medieval unit of time. The movement of a shadow on a sundial covered 40 moments in a solar hour. An hour in this case meant one twelfth of the period between sunrise and sunset. The length of a solar hour depended on the length of the day, which in turn varied with the season, so the length of a moment in modern seconds was not fixed, but on average, a moment corresponded to 90 seconds.
The term "minute" usually means1⁄60of anhour, coming from "a minute division of an hour". The term "second" comes from "thesecond minute divisionof an hour", as it is1⁄60of a minute, or1⁄60of1⁄60of an hour. While usually sub-second units are represented withSIprefixes on the second (e.g.milliseconds), this system can be extrapolated further, such that a "Third" would mean1⁄60of a second (16.7 milliseconds), and a "Fourth" would mean1⁄60of a third (278 microseconds), etc. These units are occasionally used in astronomy to denote angles.[75]
The Furman is a unit of angular measure equal to 1⁄65,536 of a circle, or just under 20 arcseconds. It is named for Alan T. Furman, the American mathematician who adapted the CORDIC algorithm for 16-bit fixed-point arithmetic sometime around 1980.[76] 16 bits give a resolution of 2^16 = 65,536 distinct angles.
A related unit of angular measure equal to 1⁄256 of a circle, represented by 8 bits, has found some use in machinery control where fine precision is not required, most notably for crankshaft and camshaft position in internal combustion engine controllers, and in video game programming. There is no consensus as to its name, but it has been called the 8-bit Furman, the Small Furman, the Furboy and, more recently, the miFurman (milli-binary-Furman). These units are convenient because binary integer overflow resembles angular arithmetic: the value of an 8-bit integer overflows from 255 to 0 when a full circle has been traversed, so binary addition and subtraction work as expected for angular arithmetic. Measures are often made using a Gray code, which is trivially converted into more conventional notation. Its value is equivalent to τ/256 radians, or about 0.0245436926 radians.
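A short sketch of why these binary angle units are convenient; the helper below is illustrative, not any standard library's API:

```python
# 8-bit binary angles: 256 units per turn, so unsigned 8-bit overflow
# wraps exactly at a full circle and ordinary addition does angular math.

import math

UNITS_PER_TURN = 256

def to_radians(brad: int) -> float:
    """Convert an 8-bit binary angle to radians."""
    return (brad % UNITS_PER_TURN) * math.tau / UNITS_PER_TURN

# Adding past 255 wraps to a small value, just as adding past 360 degrees
# wraps past zero; masking with 0xFF emulates 8-bit integer overflow.
print((250 + 10) & 0xFF)              # 4
print(math.degrees(to_radians(1)))    # 1.40625 degrees per unit
```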
Coordinates were measured in grades on official French terrestrial ordnance charts from the French Revolution well into the 20th century. 1 grade (or, in modern symbology, 1 gon) = 0.9° = 0.01 right angle. One advantage of this measure is that the distance between latitude lines 0.01 gon apart at the equator is almost exactly 1 kilometre; it would be exactly 1 km if the original definition of 1 metre = 1⁄10,000 quarter-meridian had been adhered to. One disadvantage is that common angles like 30° and 60° are expressed by fractional values (33+1⁄3 and 66+2⁄3 respectively), so this "decimal" unit failed to displace the "sexagesimal" units (equilateral vertex, degree, minute, second) invented by Babylonian astronomers.[neutrality is disputed]
Mils and strecks are small units of angle used by various military organizations for range estimation and for translating map coordinates used in directing artillery fire.[77] The exact size varies between different organizations: there are 6400 NATO mils per turn (1 NATO mil = 0.982 mrad), or 6000 Warsaw Pact mils per turn (1 Warsaw Pact mil = 1.047 mrad). In the Swedish military, there are 6300 strecks per turn (1 streck = 0.997 mrad).
The MERU, or Milli Earth Rate Unit, is an angular velocity equal to 1/1000 of Earth's rotation rate, i.e. 0.015 degrees per hour. It was introduced by MIT's Instrumentation Laboratory (now Draper Labs) to measure the performance of inertial navigation systems, including their angular drift rate.[78] One MERU = 7.292115×10^−8 radians per second,[79] or about 0.2625 milliradians per hour.
In 2011, the United States Environmental Protection Agency introduced the gallon gasoline equivalent as a unit of energy because their research showed most U.S. citizens do not understand the standard units. The gallon gasoline equivalent is defined as 33.7 kWh,[80] or about 1.213×10^8 joules.
Efficiency or fuel economy can then be given as miles per gallon gasoline equivalent.
The energy of various amounts of the explosive TNT (kiloton, megaton, gigaton) is often used as a unit of explosion energy, and sometimes of asteroid impacts and violent explosive volcanic eruptions. One ton of TNT produces 4.184×10^9 joules, or (by arbitrary definition) exactly 10^9 thermochemical calories (approximately 3.964×10^6 BTU). This definition is only loosely based on the actual physical properties of TNT.
The energy released by the Hiroshima bomb explosion (about 15 kt TNT equivalent, or 6×10^13 J) is often used by geologists as a unit when describing the energy of earthquakes, volcanic eruptions, and asteroid impacts.
Prior to the detonation of the Hiroshima bomb, the size of the Halifax Explosion (about 3 kt TNT equivalent, or 1.26×10^13 J) was the standard for this type of relative measurement. Each explosion had been the largest known artificial detonation to date.[81]
A quad is a unit of energy equal to 10^15 BTU, or approximately 1.055×10^18 J (slightly over one exajoule). It is suitably large to quantify the energy usage of nations or of the planet as a whole using everyday numbers. For example, in 2004, US energy consumption was about 100 Q/year, while demand worldwide was about 400 Q/year.[82]
A foe is a unit of energy equal to 10^44 joules (≈ 9.478×10^40 BTU); the name was coined by physicist Gerry Brown of Stony Brook University. To measure the staggeringly immense amount of energy produced by a supernova, specialists occasionally use the foe, an acronym derived from the phrase "[ten to the power of] fifty-one ergs", i.e. 10^51 ergs. This unit of measure is convenient because a supernova typically releases about one foe of observable energy in a very short period of time (which can be measured in seconds).
The rate at which heat is removed by melting one short ton (about 907 kg) of ice in 24 hours is called a ton of refrigeration, also known as a ton of cooling. This unit of refrigeration capacity came from the days when large blocks of ice were used for cooling, and it is still used to describe the heat-removal capability of refrigerators and chillers today. One ton of refrigeration is defined as exactly 12,000 BTU/h, or about 3.517 kW.
With the phaseout of the incandescent lamp in the United States and European Union[globalize] in the early 21st century, manufacturers and sellers of more energy-efficient lamps have compared the visible light output of their lamps to commonly used incandescent lamp sizes with the watt equivalent or watt incandescent replacement (usually with a lowercase w as a unit symbol, as opposed to capital W for the actual wattage). 1 watt incandescent replacement corresponds to 15 lumens. Thus, a 72-watt halogen lamp, a 23-watt compact fluorescent lamp and a 14-watt light-emitting diode lamp, all of which emit 1500 lumens of visible light, are all marketed as "100 watt incandescent replacement" (100w).
The volume of discharge of the Amazon River is sometimes used to describe large volumes of water flow, such as ocean currents. The unit is equivalent to 216,000 m³/s.[83]
One sverdrup (Sv) is equal to 1,000,000 cubic metres per second (264,000,000 US gal/s). It is used almost exclusively in oceanography to measure the volumetric rate of transport of ocean currents.
The Bubnoff unit is defined as 1 micrometre per year (3.169×10^−14 m/s), or one millimetre per 1,000 years. It is employed in geology to measure rates of lowering of earth surfaces due to erosion.
The langley (symbol Ly) is used to measure solar radiation or insolation. It is equal to one thermochemical calorie per square centimetre (4.184×10^4 J/m², or ≈ 3.684 BTU/sq ft) and was named after Samuel Pierpont Langley. Its symbol should not be confused with that for the light-year, ly.
One of the few CGS units to see wider use, one stokes (symbol S or St) is a unit of kinematic viscosity, defined as 1 cm²/s, i.e., 10^−4 m²/s (≈ 1.08×10^−3 sq ft/s).
In radio astronomy, the unit of electromagnetic flux density is the jansky (symbol Jy), equivalent to 10^−26 watts per square metre per hertz (= 10^−26 kg/s² in base units, about 8.8×10^−31 BTU/ft²). It is named after the pioneering radio astronomer Karl Jansky. The brightest natural radio sources have flux densities of the order of one to one hundred jansky.
The metre water equivalent (mwe) is a material-dependent unit used in nuclear and particle physics and engineering to measure the thickness of shielding, for example around a nuclear reactor, particle accelerator, or radiation or particle detector. 1 mwe of a material is the thickness of that material that provides the equivalent shielding of one metre (≈ 39.4 in) of water.
This unit is commonly used in underground science to express the extent to which the overburden (usually rock) shields an underground space or laboratory from cosmic rays. The actual thickness of overburden that cosmic rays must traverse to reach the underground space varies as a function of direction due to the shape of the overburden, which may be a mountain, a flat plain, or something more complex like a cliff side. To express the depth of an underground space in mwe (or kmwe for deep sites) as a single number, the convention is to use the depth beneath a flat overburden at sea level that gives the same overall cosmic-ray muon flux in the underground location.
The strontium unit, formerly known as the sunshine unit (symbol S.U.), is a unit of biological contamination by radioactive substances (specifically strontium-90). It is equal to one picocurie of Sr-90 per gram of body calcium. Since about 2% of the human body mass is calcium, and Sr-90 has a half-life of 28.78 years, releasing 0.546 + 2.282 MeV per disintegration (the second figure from the decay of its short-lived daughter yttrium-90), this works out to about 1.065×10^−12 grays per second. The permissible body burden was established at 1,000 S.U.
Bananas, like most organic material, naturally contain a certain amount of radioactive isotopes, even in the absence of any artificial pollution or contamination. The banana equivalent dose, defined as the additional dose a person will absorb from eating one banana, expresses the severity of exposure to radiation, such as that resulting from nuclear weapons or medical procedures, in terms that would make sense to most people. This is approximately 78 nanosieverts; in informal publications one often sees this estimate rounded up to 0.1 μSv.
Natural background radiation typically increases with altitude above the earth's surface. Utilizing this phenomenon, the dose resulting from radiological exposures can be expressed in units of flight time. Flight-time equivalent dose is defined as the time spent in an aircraft at cruising altitude required to receive a radiological dose approximately equivalent to a given radiological exposure, such as a medical x-ray. One hour of flight time is approximately equivalent to a dose of 0.004 millisieverts.
In the pulp and paper industry, molar mass is traditionally measured with a method in which the intrinsic viscosity (dL/g) of the pulp sample is measured in cupriethylenediamine (Cuen). The intrinsic viscosity [η] is related to the weight-average molar mass (in daltons) by the Mark-Houwink equation: [η] = 0.070 Mw^0.70.[85] However, it is typical to cite [η] values directly in dL/g as the "viscosity" of the cellulose, which is confusing since the quantity is not actually a viscosity.
In measuring the unsaturation of fatty acids, the traditional method is the iodine number. Iodine adds stoichiometrically to double bonds, so their amount is reported in grams of iodine spent per 100 grams of oil. The corresponding standard unit is a dimensionless stoichiometric ratio of moles of double bonds to moles of fatty acid. A similar quantity, the bromine number, is used in gasoline analysis.
In the pulp and paper industry, a similar kappa number is used to measure how much bleaching a pulp requires. Potassium permanganate is added to react with the unsaturated compounds (lignin and uronic acids) in the pulp and is then back-titrated. Originally, with chlorine bleaching, the required quantity of chlorine could then be calculated, although modern processes use multiple bleaching stages. Since the oxidizable compounds are not exclusively lignin, and the partially pulped lignin does not have a single stoichiometry, the relation between the kappa number and the precise amount of lignin is inexact.
Gas Mark is a temperature scale, predominantly found on British ovens, that scales linearly with temperature above 135 °C (Gas Mark 1) and with the logarithm of the Celsius temperature below 135 °C.
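A hedged sketch of the linear part of the scale, assuming the common convention of 275 °F at Gas Mark 1 plus 25 °F per mark (equivalent to the 135 °C anchor above); the logarithmic behaviour below Gas Mark 1 is not modelled here, and the function name is invented:

```python
# Gas Mark to Celsius for marks >= 1, assuming 275 degF at mark 1 and
# 25 degF per additional mark. Marks below 1 follow a logarithmic rule
# that this sketch deliberately omits.

def gas_mark_to_celsius(mark: float) -> float:
    fahrenheit = 250 + 25 * mark        # Gas Mark 1 -> 275 degF
    return (fahrenheit - 32) * 5 / 9

print(round(gas_mark_to_celsius(1)))    # 135
print(round(gas_mark_to_celsius(4)))    # 177 (a "moderate" oven)
```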
Demography and quantitative epidemiology are statistical fields that deal with counts or proportions of people, or rates of change in these. Counts and proportions are technically dimensionless and so have no units of measurement, although identifiers such as "people", "births", "infections" and the like are used for clarity. Rates of change are counts per unit of time and strictly have inverse time dimensions (per unit of time). In demography and epidemiology, expressions such as "deaths per year" are used to clarify what is being measured.
Prevalence, a common measure in epidemiology, is strictly a type of denominator data, a dimensionless ratio or proportion. Prevalence may be expressed as a fraction, a percentage, or as the number of cases per 1,000, 10,000, or 100,000 in the population of interest.
A micromort is a unit of risk measuring a one-in-a-million probability of death (from micro- and mortality). Micromorts can be used to measure the riskiness of various day-to-day activities. A microprobability is a one-in-a-million chance of some event; thus a micromort is the microprobability of death. For example, smoking 1.4 cigarettes increases one's risk of death by one micromort, as does traveling 370 km (230 miles) by car.
The large numbers of people involved in demography are often difficult to comprehend. A useful visualisation tool is the audience capacity of large sports stadiums (often about 100,000). Often the capacity of the largest stadium in a region serves as a unit for a large number of people. For example, Uruguay's Estadio Centenario is often used in Uruguay,[86][87] while in parts of the United States, Michigan Stadium is used in this manner.[88] In Australia, the capacity of the Melbourne Cricket Ground (about 100,000) is often cited in this manner; hence the Melbourne Cricket Ground serves as both a measure of people and a unit of volume.[89][90][91]
The growth of computing has necessitated the creation of many new units, several of which are based on unusual foundations.
Volume or capacity of data is often compared to works of literature or large collections of writing. Popular units include the Bible, the Encyclopædia Britannica, phone books, the complete works of Shakespeare, and the Library of Congress.
When the Compact Disc began to be used as a data storage device, the CD-ROM, journalists often described the disc capacity (650 megabytes) in terms of the number of Christian Bibles it could store. The King James Version of the Bible in uncompressed plain 8-bit text contains about 4.5 million characters,[92] so a CD-ROM can store about 150 Bibles.
The print version of the Encyclopædia Britannica is another common data volume metric. It contains approximately 300 million characters,[93] so two copies would fit onto a CD-ROM and still leave 50 megabytes (or about 11 Bibles) of space.
The term Library of Congress, referring to the US Library of Congress, is also often used. Information researchers have estimated that the entire print collections of the Library of Congress represent roughly 10 terabytes of uncompressed textual data.[94]
A measure of quantity of data or information, the "nibble" (sometimes spelled "nybble" or "nybl") is normally equal to 4 bits, or one half of the common 8-bit byte. The nibble is used to describe the amount of memory used to store a digit of a number stored in binary-coded decimal format, or to represent a single hexadecimal digit. Less commonly, "nibble" may be used for any contiguous portion of a byte of specified length, e.g. a "6-bit nibble"; this usage is most likely to be encountered in connection with a hardware architecture in which the word length is not a multiple of 8, such as older 36-bit minicomputers.
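A minimal sketch of nibble manipulation (the helper name is invented for illustration):

```python
# Splitting a byte into its two nibbles, as used for binary-coded decimal
# and for hexadecimal digits.

def nibbles(byte: int) -> tuple[int, int]:
    high = (byte >> 4) & 0xF   # upper 4 bits
    low = byte & 0xF           # lower 4 bits
    return high, low

# 0x4F -> high nibble 0x4, low nibble 0xF
print(nibbles(0x4F))           # (4, 15)

# In packed BCD, the decimal number 42 is stored as the byte 0x42:
print(nibbles(0x42))           # (4, 2)
```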
In computing, FLOPS (FLoating-point Operations Per Second) is a measure of a computer's computing power; kiloFLOPS, megaFLOPS, gigaFLOPS, and teraFLOPS are also commonly seen. FLOPS figures are also used to compare the performance of computers in practice.[95]
BogoMips is a measure of CPU speed. It was invented by Linus Torvalds and is nowadays present on every Linux operating system. However, it is not a meaningful measure of actual CPU performance.
A computer programming expression, the K-LOC or KLOC, pronounced kay-lok, stands for "kilo-lines of code", i.e., a thousand lines of code. The unit was used, especially by IBM managers,[96] to express the amount of work required to develop a piece of software. Given that estimates of 20 lines of functional code per day per programmer were often used, 1 K-LOC could take one programmer as long as 50 working days, or 10 working weeks. This measure is no longer in widespread use because different computer languages require different numbers of lines to achieve the same result (occasionally the measure "assembly equivalent lines of code" is used, with appropriate conversion factors from the language actually used to assembly language).
Error rates in programming are also measured in errors per K-LOC, which is called the defect density. NASA's SATC is one of the few organizations to claim zero defects in a large (>500 K-LOC) project: the space shuttle software.
An alternative measurement was defined by Pegasus Mail author David Harris: the "WaP" is equivalent to 71,500 lines of program code, because that number of lines is the length of one edition of Leo Tolstoy's War and Peace.[97]
The "tick" is the amount of time between timerinterruptsgenerated by the timer circuit of a CPU. The amount of time is processor-dependent.[98][99]The word "tick" is also used to describe steps of processing in apps and video games, for example, Minecraft servers process the simulation at a rate of 20 ticks per second,[100][better source needed]while other games commonly use tickrates of 30, 60, 64, or 128 ticks per second.
In genetics, a centimorgan (abbreviated cM) or map unit (m.u.) is a unit for measuring genetic linkage. It is defined as the distance between chromosome positions (also termed loci or markers) for which the expected average number of intervening chromosomal crossovers in a single generation is 0.01. It is often used to infer distance along a chromosome. One centimorgan corresponds to about 1 million base pairs in humans on average.
Chess software frequently uses centipawns, internally or externally, as a unit measuring how strong each player's position is, and hence also by how much one player is beating the other, and how strong a possible move is.[101] 100 centipawns = the value of 1 pawn; more specifically, something like the average value of the pawns at the start of the game, as the actual value of pawns depends on their position. Loss of a pawn will therefore typically cost that player 100 centipawns. The centipawn is often used for comparing possible moves, since in a given position chess software will often rate the better candidate moves within a few centipawns of each other.
The garn is NASA's unit of measure for symptoms resulting from space adaptation syndrome, the response of the human body to weightlessness in space, named after US Senator Jake Garn, who became exceptionally spacesick during an orbital flight in 1985. An astronaut who is completely incapacitated by space adaptation syndrome is under the effect of one garn of symptoms.[102]
A unit formerly used in real estate transactions in the American Southwest was the number of pregnant cows an acre of a given plot of land could support. It acted as a proxy for the agricultural quality, natural resource availability, and arability of a parcel of land.[103]
Numbers very close to, but below, one are often expressed in "nines" (N, not to be confused with the symbol for the unit newton), that is, in the number of nines following the decimal separator when the number is written out. For example, "three nines" or "3N" indicates 0.999 or 99.9%, while "four nines five" or "4N5" is the expression for the number 0.99995 or 99.995%.[104][105][106]
Typical areas of usage include the availability or reliability of technical systems and the purity of materials; a sketch of the notation follows.
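A small sketch of the notation, assuming the modest precisions used in practice (the helper name is invented; values rounded beyond ten decimal digits would need a different approach):

```python
# Convert a proportion just below one into "nines" notation: count the
# leading nines after the decimal point, then append the first remaining
# significant digit if there is one (0.99995 -> "4N5").

def to_nines(x: float) -> str:
    digits = f"{x:.10f}".split(".")[1]           # fractional digits of x
    n = len(digits) - len(digits.lstrip("9"))    # number of leading nines
    rest = digits[n:].rstrip("0")
    return f"{n}N{rest[:1]}" if rest else f"{n}N"

print(to_nines(0.999))     # 3N
print(to_nines(0.99995))   # 4N5
```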
The dol (from the Latin word for pain, dolor) is a unit of measurement for pain. James D. Hardy, Herbert G. Wolff, and Helen Goodell of Cornell University proposed the unit based on their studies of pain during the 1940s and 1950s. They defined one dol to equal a just-noticeable difference in pain. The unit never came into widespread use, and other methods are now used to assess the level of pain experienced by patients.
The Schmidt sting pain index and the Starr sting pain index are pain scales rating the relative pain caused by different hymenopteran stings. Schmidt has refined his pain index (with a 1–4 scale) with extensive anecdotal experience, culminating in a paper published in 1990 which classifies the stings of 78 species and 41 genera of Hymenoptera. The Starr sting pain scale uses the same 1–4 scale.
The ASTA (American Spice Trade Association) pungency unit is based on a scientific method of measuring chili pepper "heat". The technique utilizes high-performance liquid chromatography to identify and measure the concentrations of the various compounds that produce a heat sensation. When measuring capsaicin, Scoville units are roughly 1⁄15 the size of pungency units, so a rough conversion is to multiply pungency by 15 to obtain Scoville heat units.[107]
The Scoville scale is a measure of the hotness of a chili pepper. It is the degree of dilution in sugar water of a specific chili pepper extract at which a panel of five tasters can no longer detect its "heat".[108] Pure capsaicin (the chemical responsible for the "heat") rates 16 million Scoville heat units.
The widely read MAD Magazine made extensive use, as a running gag, of the Potrzebie system of units, which included units of length, mass, etc.
Up to the 20th century, alcoholic spirits were assessed in the UK by mixing them with gunpowder and testing the mixture to see whether it would still burn; spirit that just passed the test was said to be at 100° proof. The UK now uses percentage alcohol by volume (ABV) at 20 °C (68.0 °F), where spirit at 100° proof is approximately 57.15% ABV. In the US, "proof number" is defined as twice the ABV at 60 °F (15.6 °C).[109]
The savart is an 18th-century unit for measuring the frequency ratio of two sounds. It is equal to 1⁄1000 of a decade (not to be confused with the time period equal to 10 years). The cent is preferred for musical use.
The erlang, named after A. K. Erlang, is a dimensionless unit used in telephony as a statistical measure of the offered intensity of telecommunications traffic on a group of resources. Traffic of one erlang refers to a single resource being in continuous use, or two channels each being at fifty percent use, and so on, pro rata. Much telecommunications management and forecasting software uses this unit.
The crab is defined as the intensity of X-rays emitted from the Crab Nebula at a given photon energy up to 30 kiloelectronvolts. The Crab Nebula is often used for calibration of X-ray telescopes. For measuring the X-ray intensity of a less energetic source, the millicrab (mCrab) may be used.
One crab is approximately 24 pW/m².
Metric time is the measure of time intervals using the metric system. The modern SI system defines the second as the base unit of time, and forms multiples and submultiples with metric prefixes such as kiloseconds and milliseconds. Other units of time (the minute, hour, and day) are accepted for use with SI, but are not part of it. Metric time is a measure of time intervals, while decimal time is a means of recording time of day.
The second is derived from the sexagesimal system, which originated with the Sumerians and Babylonians. This system divides a base unit into sixty minutes, each minute into sixty seconds, and each second into sixty tierces. The word "minute" comes from the Latin pars minuta prima, meaning "first small part", and "second" from pars minuta secunda, or "second small part". Angular measure also uses sexagesimal units; there, it is the degree that is subdivided into minutes and seconds, while in time, it is the hour.
In 1790, the French diplomat Charles Maurice de Talleyrand-Périgord proposed that the fundamental unit of length for the metric system should be the length of a pendulum with a one-second period, measured at sea level on the 45th parallel (50 grades in the new angular measures), thus basing the metric system on the value of the second. A Commission of Weights and Measures was formed within the French Academy of Sciences to develop the system. The commission rejected the seconds-pendulum definition of the metre the following year because the second of time was an arbitrary period equal to 1/86,400 of a day, rather than a decimal fraction of a natural unit. Instead, the metre would be defined as a decimal fraction of the length of the Paris Meridian between the equator and the North Pole.[1][2][3][4][5]
The commission initially proposed the decimal time units later enacted as part of the new Republican calendar. In January 1791, Jean-Charles de Borda commissioned Louis Berthoud to manufacture a decimal chronometer displaying these units. On March 28, 1794, the commission's president, Joseph Louis Lagrange, proposed using the day (French jour) as the base unit of time, with divisions déci-jour and centi-jour, and suggested representing 4 déci-jours and 5 centi-jours as "4,5", "4/5", or just "45".[6] The final system, as introduced in 1795, included units for length, area, dry volume, liquid capacity, weight or mass, and currency, but not time. Decimal time of day had been introduced in France two years earlier, but mandatory use was suspended at the same time the metric system was inaugurated, and it did not follow the metric pattern of a base unit and prefixed units.
Base units equivalent to decimal divisions of the day, such as 1/10, 1/100, 1/1,000, or 1/100,000 of a day, or other divisions of the day, such as 1/20 or 1/40 of a day, have also been proposed, with various names. Such alternative units did not gain any notable acceptance. In China, during the Song dynasty, a day was divided into smaller units, called kè (刻). One kè was usually defined as 1⁄100 of a day until 1628, though there were short periods before then when days had 96, 108 or 120 kè.[7] A kè is about 14.4 minutes, or 14 minutes 24 seconds. In the 19th century, Joseph Charles François de Rey-Pailhade endorsed Lagrange's proposal of using centijours, but abbreviated cé, and divided into 10 decicés, 100 centicés, 1,000 millicés,[8] and 10,000 dimicés.[9][10]
James Clerk Maxwell and William Thomson (through the British Association for the Advancement of Science, or BAAS) introduced the centimetre-gram-second system of units in 1874 to derive electric and magnetic metric units, following the recommendation of Carl Friedrich Gauss in 1832.
In 1897, the Commission de décimalisation du temps was created by the French Bureau of Longitude, with the mathematician Henri Poincaré as secretary. The commission proposed making the standard hour the base unit of metric time, but the proposal did not gain acceptance and was eventually abandoned.[11]
When the modern SI system was defined at the 10th General Conference on Weights and Measures (CGPM) in 1954, the second (then defined as 1/86,400 of a mean solar day) was made one of the system's base units. Because the Earth's rotation is slowly decelerating at an irregular rate and was thus unsuitable as a reference point for precise measurements, the SI second was later redefined more precisely as the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium-133 atom. International standard atomic clocks use caesium-133 measurements as their main benchmark.
In computing, at least internally, metric time gained widespread use for ease of computation. Unix time gives date and time as the number of seconds since January 1, 1970, and Microsoft's NTFS FILETIME as multiples of 100 ns since January 1, 1601. VAX/VMS uses the number of 100 ns intervals since November 17, 1858, and RISC OS the number of centiseconds since January 1, 1900. Microsoft Excel uses the number of days (with decimals, in floating point) since January 1, 1900.
All these systems present time to the user in traditional units. None of them is strictly linear, as each has discontinuities at leap seconds.
Metric prefixes for subdivisions of a second are commonly used in science and technology; milliseconds and microseconds are particularly common. Prefixes for multiples of a second are rarely used: a kilosecond is 16 minutes 40 seconds, and a megasecond is about 11.6 days.
A stardate is a fictional system of time measurement developed for the television and film series Star Trek. In the series, use of this date system is commonly heard at the beginning of a voice-over log entry, such as "Captain's log, stardate 41153.7. Our destination is planet Deneb IV …". While the original method was inspired by the Modified Julian date[1][2][3] system currently used by astronomers, the writers and producers have selected numbers using different methods over the years, some more arbitrary than others. This makes it impossible to convert all stardates into equivalent calendar dates, especially since stardates were originally intended to avoid specifying exactly when Star Trek takes place.[4]
The original 1967 Star Trek Guide (April 17, 1967, p. 25) instructed writers for the original Star Trek TV series on how to select stardates for their scripts. Writers could pick any combination of four numbers plus a decimal point, and aim for consistency within a single script, but not necessarily between different scripts. This was to "avoid continually mentioning Star Trek's century" and avoid "arguments about whether this or that would have developed by then".[5] Though the guide sets the series "about two hundred years from now", the few references within the show itself were contradictory, and later productions and reference materials eventually placed the series between the years 2265 and 2269. The second pilot begins on stardate 1312.4 and the last-produced episode on stardate 5928.5.[6] Though the dating system was revised for Star Trek: The Next Generation, the pilot of Star Trek: Discovery follows the original series' dating system, starting on stardate 1207.3, which is stated precisely to be Sunday, May 11, 2256.[7]
Subsequent Star Trek series followed a new numerical convention. Star Trek: The Next Generation (TNG) revised the stardate system in the 1987 Star Trek: The Next Generation Writer's/Director's Guide to five digits and one decimal place. According to the guide, the first digit "4" should represent the 24th century, with the second digit representing the television season. The remaining digits can progress unevenly, with the decimal representing the time as a fractional day. Stardates of Star Trek: Deep Space Nine began with 46379.1, corresponding to the sixth season of TNG, which was also set in the year 2369. Star Trek: Voyager began with stardate 48315.6 (2371), one season after TNG had finished its seventh and final season. As in TNG, the second digit would increase by one every season, while the initial two digits eventually rolled over from 49 to 50, despite the year 2373 still being in the 24th century. Star Trek: Nemesis was set around stardate 56844.9. Star Trek: Discovery traveled to the year 3188, giving a stardate of 865211.3, corresponding to that year in this system of stardates.
On March 9, 2023, Star Trek: Picard gave a stardate of 78183.10, indicating continuity with TNG: each stardate increment represents one milliyear, with 78,000 units corresponding to the 78 years from 2323 to 2401, and the decimal representing a fractional day. Stardates are thus a composition of two types of decimal time. (By the same arithmetic applied to the twenty-first century, the episode's air date of 2023 falls 78 years after 1945.)
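A rough, unofficial sketch of this convention (the function is invented for illustration; on-screen stardates were often chosen far less systematically, and leap years are ignored here):

```python
# TNG-era stardates per the description above: 1,000 units per year counted
# from 2323, with the digit after the decimal point read separately as a
# fraction of the day.

def stardate(year: int, day_of_year: int, day_fraction: float = 0.0) -> str:
    milliyears = (year - 2323) * 1000 + int((day_of_year - 1) / 365 * 1000)
    return f"{milliyears}.{int(day_fraction * 10)}"

# 9 March 2401 is day 68 of the year; compare the aired stardate 78183.10.
print(stardate(2401, 68))   # "78183.0"
```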
Stardates usually are expressed with a single decimal digit, but sometimes with more than one. For instance, The Next Generation episode "The Child" displays the stardate 42073.1435. Both The Star Trek Guide, the official writers' guide for the original series, and page 32 of the 1988 Star Trek: The Next Generation Writer's/Director's Guide for season two treat the digits after the decimal point as counting fractions of a day.
This was demonstrated by the ship's chronometer in the TOS-Remastered episode "The Naked Time", and by Captain Varley's video logs in the TNG episode "Contagion". The latter displays several stardates with two decimal digits next to corresponding times.
Additional Star Trek media have generated their own numbering systems. The 2009 MMORPG Star Trek Online began on stardate 86088.58, in the in-game year 2409, counting 1000 stardates per year from May 25, 1922.[8] Writer Roberto Orci revised the system for the 2009 film Star Trek so that the first four digits correspond to the year, while the remainder was intended to stand for the day of the year, in effect representing an ordinal date.[9][10][11] In the first installment of the movie trilogy, Spock makes his log of the destruction of Vulcan on stardate 2258.42, or February 11, 2258. Star Trek Into Darkness begins on stardate 2259.55, or February 24, 2259.[12] Star Trek Beyond begins on stardate 2263.02, or January 2, 2263. In The Big Bang Theory episode "The Adhesive Duck Deficiency", Sheldon Cooper gives the stardate 63345.3, corresponding with the date of the Leonid meteor shower that year, November 17, 2009.[13]
Unix time[a] is a date and time representation widely used in computing. It measures time by the number of non-leap seconds that have elapsed since 00:00:00 UTC on 1 January 1970, the Unix epoch. For example, at midnight on 1 January 2010, Unix time was 1262304000.
Unix time originated as the system time of Unix operating systems. It has come to be widely used in other computer operating systems, file systems, programming languages, and databases. In modern computing, values are sometimes stored with higher granularity, such as microseconds or nanoseconds.
Unix time is currently defined as the number of non-leap seconds which have passed since 00:00:00 UTC on Thursday, 1 January 1970, which is referred to as the Unix epoch.[3] Unix time is typically encoded as a signed integer.
The Unix time 0 is exactly midnight UTC on 1 January 1970, with Unix time incrementing by 1 for every non-leap second after this. For example, 00:00:00 UTC on 1 January 1971 is represented in Unix time as 31536000. Negative values, on systems that support them, indicate times before the Unix epoch, with the value decreasing by 1 for every non-leap second before the epoch. For example, 00:00:00 UTC on 1 January 1969 is represented in Unix time as −31536000. Every day in Unix time consists of exactly 86400 seconds.
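These examples can be checked with Python's standard library; its datetime arithmetic uses the same non-leap-second model as Unix time:

```python
# Reproducing the Unix time values quoted above.

from datetime import datetime, timezone

epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)

for date in (datetime(1971, 1, 1, tzinfo=timezone.utc),
             datetime(1969, 1, 1, tzinfo=timezone.utc),
             datetime(2010, 1, 1, tzinfo=timezone.utc)):
    print(int((date - epoch).total_seconds()))
# 31536000, -31536000, 1262304000

# And the reverse conversion:
print(datetime.fromtimestamp(31536000, tz=timezone.utc))
# 1971-01-01 00:00:00+00:00
```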
Unix time is sometimes referred to as Epoch time. This can be misleading, since Unix time is not the only time system based on an epoch and the Unix epoch is not the only epoch used by other time systems.[5]
Unix time differs from both Coordinated Universal Time (UTC) and International Atomic Time (TAI) in its handling of leap seconds. UTC includes leap seconds that adjust for the discrepancy between precise time, as measured by atomic clocks, and solar time, relating to the position of the Earth in relation to the Sun. International Atomic Time, in which every day is precisely 86400 seconds long, ignores solar time and gradually loses synchronization with the Earth's rotation at a rate of roughly one second per year. In Unix time, every day also contains exactly 86400 seconds; each leap second uses the timestamp of a second that immediately precedes or follows it.[3]
On a normal UTC day, which has a duration of 86400 seconds, the Unix time number changes in a continuous manner across midnight. For example, at the end of 31 December 1970, 23:59:59 UTC corresponds to Unix time 31535999, and one second later, 00:00:00 UTC on 1 January 1971 corresponds to 31536000.
When a leap second occurs, the UTC day is not exactly 86400 seconds long and the Unix time number (which always increases by exactly 86400 each day) experiences a discontinuity. Leap seconds may be positive or negative. No negative leap second has ever been declared, but if one were to be, then at the end of a day with a negative leap second, the Unix time number would jump up by 1 to the start of the next day. During a positive leap second at the end of a day, which occurs about every year and a half on average, the Unix time number increases continuously into the next day during the leap second and then at the end of the leap second jumps back by 1 (returning to the start of the next day). For example, on strictly conforming POSIX.1 systems, the leap second at the end of 1998 played out as follows: Unix time reached 915148799.50 at 23:59:59.50 UTC on 31 December 1998, continued rising through 915148800.50 halfway through the inserted leap second 23:59:60, and then jumped back so that 00:00:00.00 UTC on 1 January 1999 was again 915148800.00.
Unix time numbers are repeated in the second immediately following a positive leap second. The Unix time number 1483228800 is thus ambiguous: it can refer either to the start of the leap second (2016-12-31 23:59:60) or to the end of it, one second later (2017-01-01 00:00:00). In the theoretical case when a negative leap second occurs, no ambiguity is caused, but instead there is a range of Unix time numbers that do not refer to any point in UTC time at all.
A Unix clock is often implemented with a different type of positive leap second handling associated with the Network Time Protocol (NTP). This yields a system that does not conform to the POSIX standard. See the section below concerning NTP for details.
When dealing with periods that do not encompass a UTC leap second, the difference between two Unix time numbers is equal to the duration in seconds of the period between the corresponding points in time. This is a common computational technique. However, where leap seconds occur, such calculations give the wrong answer. In applications where this level of accuracy is required, it is necessary to consult a table of leap seconds when dealing with Unix times, and it is often preferable to use a different time encoding that does not suffer from this problem.
A Unix time number is easily converted back into a UTC time by taking the quotient and remainder of the Unix time number divided by 86400. The quotient is the number of days since the epoch, and the remainder is the number of seconds since midnight UTC on that day; a sketch follows. If given a Unix time number that is ambiguous due to a positive leap second, this algorithm interprets it as the time just after midnight. It never generates a time that is during a leap second. If given a Unix time number that is invalid due to a negative leap second, it generates an equally invalid UTC time. If these conditions are significant, it is necessary to consult a table of leap seconds to detect them.
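The decomposition in one line (the helper name is invented for illustration):

```python
# Split a Unix time number into days since the epoch and seconds since
# midnight UTC, per the quotient/remainder rule described above.

def split_unix_time(t: int) -> tuple[int, int]:
    days, seconds = divmod(t, 86_400)   # Python's divmod floors, so this
    return days, seconds                # also behaves for negative times

print(split_unix_time(1_262_304_000))   # (14610, 0): midnight, 1 Jan 2010
print(split_unix_time(915_148_800))     # (10592, 0): midnight, 1 Jan 1999
```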
Commonly, a Mills-style Unix clock is implemented with leap second handling that is not synchronous with the change of the Unix time number. The time number initially decreases where a leap should have occurred, and then it leaps to the correct time 1 second after the leap, so the clock effectively replays one second of time across a positive leap second. This makes implementation easier, and is described by Mills' paper.[6]
This can be decoded properly by paying attention to the leap second state variable, which unambiguously indicates whether the leap has been performed yet. The state variable change is synchronous with the leap.
A similar situation arises with a negative leap second, where the second that is skipped is slightly too late. Very briefly the system shows a nominally impossible time number, but this can be detected by the TIME_DEL state and corrected.
In this type of system the Unix time number violates POSIX around both types of leap second. Collecting the leap second state variable along with the time number allows for unambiguous decoding, so the correct POSIX time number can be generated if desired, or the full UTC time can be stored in a more suitable format.
The decoding logic required to cope with this style of Unix clock would also correctly decode a hypothetical POSIX-conforming clock using the same interface. This would be achieved by indicating the TIME_INS state during the entirety of an inserted leap second, then indicating TIME_WAIT during the entirety of the following second while repeating the seconds count. This requires synchronous leap second handling. This is probably the best way to express UTC time in Unix clock form, via a Unix interface, when the underlying clock is fundamentally untroubled by leap seconds.
Another, much rarer, non-conforming variant of Unix time keeping involves incrementing the value for all seconds, including leap seconds;[7] some Linux systems are configured this way.[8] Time kept in this fashion is sometimes referred to as "TAI" (although timestamps can be converted to UTC if the value corresponds to a time when the difference between TAI and UTC is known), as opposed to "UTC" (although not all UTC time values have a unique reference in systems that do not count leap seconds).[8]
Because TAI has no leap seconds, and every TAI day is exactly 86400 seconds long, this encoding is actually a pure linear count of seconds elapsed since 1970-01-01T00:00:10 TAI. This makes time interval arithmetic much easier. Time values from these systems do not suffer the ambiguity that strictly conforming POSIX systems or NTP-driven systems have.
In these systems it is necessary to consult a table of leap seconds to correctly convert between UTC and the pseudo-Unix-time representation. This resembles the manner in which time zone tables must be consulted to convert to and from civil time; the IANA time zone database includes leap second information, and the sample code available from the same source uses that information to convert between TAI-based timestamps and local time. Conversion also runs into definitional problems prior to the 1972 commencement of the current form of UTC (see the section UTC basis below).
This system, despite its superficial resemblance, is not Unix time. It encodes times with values that differ by several seconds from the POSIX time values. A version of this system, in which the epoch was 1970-01-01T00:00:00 TAI rather than 1970-01-01T00:00:10 TAI, was proposed for inclusion in ISO C's time.h, but only the UTC part was accepted in 2011.[9] A tai_clock does, however, exist in C++20.
A Unix time number can be represented in any form capable of representing numbers. In some applications the number is simply represented textually as a string of decimal digits, raising only trivial additional problems. However, certain binary representations of Unix times are particularly significant.
The Unix time_t data type that represents a point in time is, on many platforms, a signed integer, traditionally of 32 bits (but see below), directly encoding the Unix time number as described in the preceding section. A signed 32-bit value covers about 68 years before and after the 1970-01-01 epoch. The minimum representable date is Friday 1901-12-13, and the maximum representable date is Tuesday 2038-01-19. One second after 2038-01-19T03:14:07Z this representation will overflow, in what is known as the year 2038 problem.
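The wraparound can be demonstrated by emulating 32-bit two's-complement arithmetic (a sketch; the helper name is invented, and negative timestamps may not be accepted by fromtimestamp on all platforms, e.g. Windows):

```python
# Emulate signed 32-bit overflow to show the year 2038 wraparound.

from datetime import datetime, timezone

def as_int32(n: int) -> int:
    """Wrap an integer into signed 32-bit range, as C overflow would."""
    return (n + 2**31) % 2**32 - 2**31

t = 2**31 - 1                       # the maximum signed 32-bit time_t
for step in range(2):
    print(datetime.fromtimestamp(as_int32(t + step), tz=timezone.utc))
# 2038-01-19 03:14:07+00:00
# 1901-12-13 20:45:52+00:00   <- one second later, wrapped back to 1901
```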
UUIDv7 encodes the Unix epoch timestamp (in milliseconds) in an unsigned 48-bit field. This representation is valid until the year 10889 AD.[10]
In some newer operating systems, time_t has been widened to 64 bits. This expands the representable times to about 292.3 billion years in both directions, which is over twenty times the present age of the universe.
There was originally some controversy over whether the Unix time_t should be signed or unsigned. If unsigned, its range in the future would be doubled, postponing the 32-bit overflow (by 68 years). However, it would then be incapable of representing times prior to the epoch. The consensus is for time_t to be signed, and this is the usual practice. The software development platform for version 6 of the QNX operating system has an unsigned 32-bit time_t, though older releases used a signed type.
The POSIX and Open Group Unix specifications include the C standard library, which includes the time types and functions defined in the <time.h> header file. The ISO C standard states that time_t must be an arithmetic type, but does not mandate any specific type or encoding for it. POSIX requires time_t to be an integer type, but does not mandate that it be signed or unsigned.
Unix has no tradition of directly representing non-integer Unix time numbers as binary fractions. Instead, times with sub-second precision are represented using composite data types that consist of two integers, the first being a time_t (the integral part of the Unix time), and the second being the fractional part of the time number in millionths (in struct timeval) or billionths (in struct timespec).[11][12] These structures provide a decimal-based fixed-point data format, which is useful for some applications, and trivial to convert for others.
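The same two-integer split is easy to reproduce from a single high-resolution counter (a sketch using Python's standard library, which exposes Unix time in integer nanoseconds):

```python
# Split a nanosecond Unix timestamp into the timespec-style pair
# (whole seconds, nanoseconds) and the timeval-style pair
# (whole seconds, microseconds).

import time

ns = time.time_ns()                              # ns since the Unix epoch
tv_sec, tv_nsec = divmod(ns, 1_000_000_000)      # like struct timespec
tv_sec2, tv_usec = divmod(ns // 1_000, 1_000_000)  # like struct timeval

print(tv_sec, tv_nsec)    # seconds, plus 0..999999999 nanoseconds
print(tv_sec2, tv_usec)   # seconds, plus 0..999999 microseconds
```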
The present form of UTC, with leap seconds, is defined only starting from 1 January 1972. Prior to that, since 1 January 1961 there was an older form of UTC in which not only were there occasional time steps, which were by non-integer numbers of seconds, but also the UTC second was slightly longer than the SI second, and periodically changed to continuously approximate the Earth's rotation. Prior to 1961 there was no UTC, and prior to 1958 there was no widespread atomic timekeeping; in these eras, some approximation of GMT (based directly on the Earth's rotation) was used instead of an atomic timescale.[citation needed]
The precise definition of Unix time as an encoding of UTC is only uncontroversial when applied to the present form of UTC. The Unix epoch predating the start of this form of UTC does not affect its use in this era: the number of days from 1 January 1970 (the Unix epoch) to 1 January 1972 (the start of UTC) is not in question, and the number of days is all that is significant to Unix time.
The meaning of Unix time values below +63072000 (i.e., prior to 1 January 1972) is not precisely defined. The basis of such Unix times is best understood to be an unspecified approximation of UTC. Computers of that era rarely had clocks set sufficiently accurately to provide meaningful sub-second timestamps in any case. Unix time is not a suitable way to represent times prior to 1972 in applications requiring sub-second precision; such applications must, at least, define which form of UT or GMT they use.
As of 2009[update], the possibility of ending the use of leap seconds in civil time is being considered.[13] A likely means to execute this change is to define a new time scale, called International Time,[citation needed] that initially matches UTC but thereafter has no leap seconds, thus remaining at a constant offset from TAI. If this happens, it is likely that Unix time will be prospectively defined in terms of this new time scale, instead of UTC. Uncertainty about whether this will occur makes prospective Unix time no less predictable than it already is: if UTC were simply to have no further leap seconds, the result would be the same.
The earliest versions of Unix time had a 32-bit integer incrementing at a rate of 60 Hz, which was the rate of the system clock on the hardware of the early Unix systems. Timestamps stored this way could only represent a range of a little over two and a quarter years. The epoch being counted from was changed with Unix releases to prevent overflow, with midnight on 1 January 1971 and 1 January 1972 both being used as epochs during Unix's early development. Early definitions of Unix time also lacked timezones.[14][15]
The current epoch of 1 January 1970 00:00:00 UTC was selected arbitrarily by Unix engineers because it was considered a convenient date to work with. The precision was changed to count in seconds in order to avoid short-term overflow.[1]
When POSIX.1 was written, the question arose of how to precisely define time_t in the face of leap seconds. The POSIX committee considered whether Unix time should remain, as intended, a linear count of seconds since the epoch (at the expense of complexity in conversions with civil time) or become a representation of civil time (at the expense of inconsistency around leap seconds). Computer clocks of the era were not sufficiently precisely set to form a precedent one way or the other.
The POSIX committee was swayed by arguments against complexity in the library functions,[citation needed] and firmly defined Unix time in a simple manner in terms of the elements of UTC time. This definition was so simple that it did not even encompass the entire leap year rule of the Gregorian calendar, and would make 2100 a leap year.
The 2001 edition of POSIX.1 rectified the faulty leap year rule in the definition of Unix time, but retained the essential definition of Unix time as an encoding of UTC rather than a linear time scale. Since the mid-1990s, computer clocks have been routinely set with sufficient precision for this to matter, and they have most commonly been set using the UTC-based definition of Unix time. This has resulted in considerable complexity in Unix implementations, and in the Network Time Protocol, to execute steps in the Unix time number whenever leap seconds occur.[citation needed]
Unix time is widely adopted in computing beyond its original application as the system time for Unix. Unix time is available in almost all system programming APIs, including those provided by both Unix-based and non-Unix operating systems. Almost all modern programming languages provide APIs for working with Unix time or converting it to another data structure. Unix time is also used as a mechanism for storing timestamps in a number of file systems, file formats, and databases.
The C standard library uses Unix time for all date and time functions, and Unix time is sometimes referred to as time_t, the name of the data type used for timestamps in C and C++. C's Unix time functions are defined as the system time API in the POSIX specification.[16] The C standard library is used extensively in all modern desktop operating systems, including Microsoft Windows and Unix-like systems such as macOS and Linux, where it is a standard programming interface.[17][18][19]
iOS provides a Swift API which defaults to using an epoch of 1 January 2001 but can also be used with Unix timestamps.[20] Android uses Unix time alongside a timezone for its system time API.[21]
Windows does not use Unix time for storing time internally but does use it in system APIs, which are provided in C++ and implement the C standard library specification.[17] Unix time is also used in the PE format for Windows executables.[22]
Unix time is typically available in major programming languages and is widely used in desktop, mobile, and web application programming. Java provides an Instant object which holds a Unix timestamp in both seconds and nanoseconds.[23] Python provides a time library which uses Unix time.[24] JavaScript provides a Date library which provides and stores timestamps in milliseconds since the Unix epoch, and it is implemented in all modern desktop and mobile web browsers as well as in JavaScript server environments like Node.js.[25]
Free Pascal implements Unix time with the GetTickCount (deprecated, unsigned 32-bit) and GetTickCount64 (unsigned 64-bit) functions, with a resolution of 1 ms on Unix-like platforms.
Filesystems designed for use with Unix-based operating systems tend to use Unix time. APFS, the file system used by default across all Apple devices, and ext4, which is widely used on Linux and Android devices, both use Unix time in nanoseconds for file timestamps.[26][27] Several archive file formats can store timestamps in Unix time, including RAR and tar.[28][29] Unix time is also commonly used to store timestamps in databases, including in MySQL and PostgreSQL.[30][31]
Unix time was designed to encode calendar dates and times in a compact manner intended for use by computers internally. It is not intended to be easily read by humans or to store timezone-dependent values. It is also limited by default to representing time in seconds, making it unsuited for use when a more precise measurement of time is needed, such as when measuring the execution time of programs.[32]
Unix time by design does not require a specific size for the storage, but most common implementations of Unix time use a signed integer with the same size as the word size of the underlying hardware. As the majority of modern computers are 32-bit or 64-bit, and a large number of programs are still written in 32-bit compatibility mode, this means that many programs using Unix time are using signed 32-bit integer fields. The maximum value of a signed 32-bit integer is 2^31 − 1, and the minimum value is −2^31, making it impossible to represent dates before 13 December 1901 (at 20:45:52 UTC) or after 19 January 2038 (at 03:14:07 UTC). The early cutoff can have an impact on databases that are storing historical information; in some databases where 32-bit Unix time is used for timestamps, it may be necessary to store time in a different form of field, such as a string, to represent dates before 1901. The late cutoff is known as the Year 2038 problem and has the potential to cause issues as the date approaches, as dates beyond the 2038 cutoff would wrap back around to the start of the representable range in 1901.[32]: 60
Date range cutoffs are not an issue with 64-bit representations of Unix time, as the effective range of dates representable with Unix time stored in a signed 64-bit integer is over 584 billion years, or 292 billion years in either direction of the 1970 epoch.[32]: 60-61[33]
Unix time is not the only standard for time that counts away from an epoch. On Windows, the FILETIME type stores time as a count of 100-nanosecond intervals that have elapsed since 0:00 GMT on 1 January 1601.[34] Windows epoch time is used to store timestamps for files[35] and in protocols such as the Active Directory Time Service[36] and Server Message Block.
The Network Time Protocol used to coordinate time between computers uses an epoch of 1 January 1900, counted in an unsigned 32-bit integer for seconds and another unsigned 32-bit integer for fractional seconds, which rolls over every 2^32 seconds (about once every 136 years).[37]
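Converting between the two epochs is a fixed-offset shift (a sketch; the constant is the well-known 70-year span between 1900 and 1970, and NTP era rollover is deliberately ignored here):

```python
# NTP <-> Unix timestamp conversion. 2,208,988,800 seconds is the span
# from the NTP epoch (1 January 1900) to the Unix epoch (1 January 1970):
# 70 years including 17 leap days, i.e. 25567 days of 86400 seconds.

NTP_TO_UNIX_OFFSET = 2_208_988_800

def ntp_to_unix(ntp_seconds: int) -> int:
    return ntp_seconds - NTP_TO_UNIX_OFFSET

def unix_to_ntp(unix_seconds: int) -> int:
    return unix_seconds + NTP_TO_UNIX_OFFSET

print(ntp_to_unix(NTP_TO_UNIX_OFFSET))   # 0, the Unix epoch
```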
Many applications and programming languages provide methods for storing time with an explicit timezone.[38] There are also a number of time format standards which exist to be readable by both humans and computers, such as ISO 8601.
Unix enthusiasts have a history of holding "time_t parties" (pronounced "time tea parties") to celebrate significant values of the Unix time number.[39][40] These are directly analogous to the new year celebrations that occur at the change of year in many calendars. As the use of Unix time has spread, so has the practice of celebrating its milestones. Usually it is time values that are round numbers in decimal that are celebrated, following the Unix convention of viewing time_t values in decimal. Among some groups round binary numbers are also celebrated,[citation needed] such as +2^30, which occurred at 13:37:04 UTC on Saturday, 10 January 2004.
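That binary milestone is easy to verify:

```python
# Check the +2**30 milestone quoted above.

from datetime import datetime, timezone

print(datetime.fromtimestamp(2**30, tz=timezone.utc))
# 2004-01-10 13:37:04+00:00
```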
The events that these celebrate are typically described as "N seconds since the Unix epoch", but this is inaccurate; as discussed above, due to the handling of leap seconds in Unix time, the number of seconds elapsed since the Unix epoch is slightly greater than the Unix time number for times later than the epoch.
Vernor Vinge's novel A Deepness in the Sky describes a spacefaring trading civilization thousands of years in the future that still uses the Unix epoch. The "programmer-archaeologist" responsible for finding and maintaining usable code in mature computer systems first believes that the epoch refers to the time when man first walked on the Moon, but then realizes that it is "the 0-second of one of humankind's first computer operating systems".[48]
Simon Stevin (Dutch: [ˈsimɔn steːˈvɪn]; 1548–1620), sometimes called Stevinus, was a Flemish mathematician, scientist and music theorist.[1] He made various contributions in many areas of science and engineering, both theoretical and practical. He also translated various mathematical terms into Dutch, making it one of the few European languages in which the word for mathematics, wiskunde (wis and kunde, i.e., "the knowledge of what is certain"), was not a loanword from Greek but a calque via Latin. He also replaced the word chemie, the Dutch for chemistry, by scheikunde ("the art of separating"), made in analogy with wiskunde.
Very little is known with certainty about Simon Stevin's life, and what we know is mostly inferred from other recorded facts.[2] The exact birth date and the date and place of his death are uncertain. It is assumed he was born in Bruges, since he enrolled at Leiden University under the name Simon Stevinus Brugensis (meaning "Simon Stevin from Bruges"). His name is usually written as Stevin, but some documents regarding his father use the spelling Stevijn (pronunciation [ˈste:vεɪn]); this was a common spelling shift in 16th-century Dutch.[3] Simon Stevin's mother, Cathelijne (or Catelyne), was the daughter of a wealthy family from Ypres; her father Hubert was a poorter of Bruges. Cathelijne would later marry Joost Sayon, who was involved in the carpet and silk trade and was a member of the schuttersgilde Sint-Sebastiaan. Through her marriage, Cathelijne became a member of a family of Calvinists; it is thought that Simon Stevin was likely brought up in the Calvinist faith.[4]
It is believed that Stevin grew up in a relatively affluent environment and enjoyed a good education. He was likely educated at a Latin school in his hometown.[5]
Stevin left Bruges in 1571, apparently without a particular destination. Stevin was most likely a Calvinist, since a Catholic would likely not have risen to the position of trust he later occupied with Maurice, Prince of Orange. It is assumed that he left Bruges to escape the religious persecution of Protestants by the Spanish rulers. Based on references in his work Wisconstighe Ghedaechtenissen (Mathematical Memoirs), it has been inferred that he must have moved first to Antwerp, where he began his career as a merchant's clerk.[6] Some biographers mention that he travelled to Prussia, Poland, Denmark, Norway, Sweden and other parts of Northern Europe between 1571 and 1577. It is possible that he completed these travels over a longer period of time. In 1577 Simon Stevin returned to Bruges and was appointed city clerk by the aldermen of Bruges, a function he occupied from 1577 to 1581. He worked in the office of Jan de Brune of the Brugse Vrije, the castellany of Bruges.
Why he had returned to Bruges in 1577 is not clear. It may have been related to the political events of that period. Bruges was the scene of intense religious conflict. Catholics and Calvinists alternately controlled the government of the city. They usually opposed each other but would occasionally collaborate in order to counteract the dictates of King Philip II of Spain. In 1576 a certain level of official religious tolerance was decreed. This could explain why Stevin returned to Bruges in 1577. Later the Calvinists seized power in many Flemish cities and incarcerated Catholic clerics and secular governors supportive of the Spanish rulers. Between 1578 and 1584 Bruges was ruled by Calvinists.
In 1581 Stevin again left his native Bruges and moved to Leiden, where he attended the Latin school.[5] On 16 February 1583 he enrolled, under the name Simon Stevinus Brugensis (meaning "Simon Stevin from Bruges"), at Leiden University, which had been founded by William the Silent in 1575. Here he befriended William the Silent's second son and heir, Prince Maurice, the Count of Nassau.[4] Stevin is listed in the university's registers until 1590 and apparently never graduated.
Following William the Silent's assassination and Prince Maurice's assumption of his father's office, Stevin became the principal advisor and tutor of Prince Maurice. Prince Maurice asked his advice on many occasions, and made him a public officer – at first director of the so-called "waterstaet"[7] (the government authority for public works, especially water management) from 1592, and later quartermaster-general of the army of the States-General.[8] Prince Maurice also asked Stevin to found an engineering school within the University of Leiden.
Stevin moved to The Hague, where he bought a house in 1612. He married in 1610 or 1614 and had four children. It is known that he left a widow with two children at his death in Leiden or The Hague in 1620.[4]
Stevin is responsible for many discoveries and inventions. He wrote numerous bestselling books, and he was a pioneer of the development and practical application of (engineering-related) sciences such as mathematics and physics, and of applied disciplines such as hydraulic engineering and surveying. He was thought to have invented decimal fractions until the middle of the 20th century, when researchers discovered that decimal fractions had previously been introduced by the medieval Islamic scholar al-Uqlidisi in a book written in 952. Moreover, a systematic development of decimal fractions was given well before Stevin in the book Miftah al-Hisab, written in 1427 by Al-Kashi.
His contemporaries were most struck by his invention of a so-called land yacht, a carriage with sails, of which a model was preserved in Scheveningen until 1802. The carriage itself had been lost long before. Around the year 1600 Stevin, with Prince Maurice of Orange and twenty-six others, used the carriage on the beach between Scheveningen and Petten. The carriage was propelled solely by the force of wind and acquired a speed which exceeded that of horses.[7]
Stevin's work in the waterstaet involved improvements to the sluices and spillways to control flooding, exercises in hydraulic engineering. Windmills were already in use to pump the water out, but in Van de Molens (On mills) he suggested improvements, including the idea that the wheels should move slowly, with a better system for meshing of the gear teeth. These improved threefold the efficiency of the windmills used in pumping water out of the polders.[9] He received a patent on his innovation in 1586.[8]
Stevin's aim was to bring about a second age of wisdom, in which mankind would have recovered all of its earlier knowledge. He deduced that the language spoken in this age would have to be Dutch, because, as he showed empirically, in that language more concepts could be indicated with monosyllabic words than in any of the (European) languages he had compared it with.[7] This was one of the reasons why he wrote all of his works in Dutch and left their translation to others. The other reason was that he wanted his works to be practically useful to people who had not mastered the common scientific language of the time, Latin. Thanks to Simon Stevin the Dutch language got a proper scientific vocabulary, such as "wiskunde" ("kunst van het gewisse of zekere", the art of what is known or what is certain) for mathematics, "natuurkunde" (the "art of nature") for physics, "scheikunde" (the "art of separation") for chemistry, "sterrenkunde" (the "art of stars") for astronomy, and "meetkunde" (the "art of measuring") for geometry.
Stevin was the first to show how to model regular and semiregular polyhedra by delineating their frames in a plane. He also distinguished stable from unstable equilibria.[7]
Stevin contributed to trigonometry with his book De Driehouckhandel.
In The First Book of the Elements of the Art of Weighing (second part: Of the Propositions [The Properties of Oblique Weights], page 41, Theorem XI, Proposition XIX),[10] he derived the condition for the balance of forces on inclined planes using a diagram with a "wreath" containing evenly spaced round masses resting on the planes of a triangular prism (see the illustration on the side). He concluded that the weights required were proportional to the lengths of the sides on which they rested, assuming the third side was horizontal, and that the effect of a weight was reduced in a similar manner. It is implicit that the reduction factor is the height of the triangle divided by the side (the sine of the angle of the side with respect to the horizontal). The proof diagram of this concept is known as the "Epitaph of Stevinus". As noted by E. J. Dijksterhuis, Stevin's proof of the equilibrium on an inclined plane can be faulted for using perpetual motion to imply a reductio ad absurdum. Dijksterhuis says Stevin "intuitively made use of the principle of conservation of energy ... long before it was formulated explicitly".[2]: 54
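In modern notation (not Stevin's own), the balance condition and the implied force reduction can be written as:

```latex
% W_1, W_2: weights resting on sides of lengths L_1, L_2 (third side horizontal);
% h: height of the prism; \theta: inclination of a side to the horizontal.
\[
  \frac{W_1}{W_2} = \frac{L_1}{L_2},
  \qquad
  F = W\,\frac{h}{L} = W \sin\theta .
\]
```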
He demonstrated the resolution of forces before Pierre Varignon, which had not been remarked on previously, even though it is a simple consequence of the law of their composition.[7]
Stevin discovered the hydrostatic paradox, which states that the pressure in a liquid is independent of the shape of the vessel and the area of the base, but depends solely on the liquid's height.[7]
He also gave the measure for the pressure on any given portion of the side of a vessel.[7]
He was the first to explain the tides using the attraction of the moon.[7]
In 1586, he demonstrated that two objects of different weight fall with the same acceleration.[11][12]
The first mention in the West of equal temperament related to the twelfth root of two appeared in Simon Stevin's unfinished manuscript Van de Spiegheling der singconst (c. 1605), published posthumously nearly three hundred years later, in 1884.[13] However, due to insufficient accuracy of his calculation, many of the numbers (for string length) he obtained were off by one or two units from the correct values.[14] He appears to have been inspired by the writings of the Italian lutenist and musical theorist Vincenzo Galilei (father of Galileo Galilei), a onetime pupil of Gioseffo Zarlino.
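The computation Stevin attempted is easy to reproduce today. A short sketch, assuming an arbitrary reference string length (his published figures, as noted, contained small arithmetic errors):

```python
# Twelve-tone equal temperament: the octave is divided into twelve equal
# frequency ratios, so successive string lengths shrink by a factor of 2**(1/12).
ratio = 2 ** (1 / 12)

reference = 10_000.0  # arbitrary reference length
for semitone in range(13):
    print(f"semitone {semitone:2d}: length {reference / ratio**semitone:9.2f}")
```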
Double-entry bookkeeping may have been known to Stevin, who was a clerk in Antwerp in his younger years, either practically or via the medium of the works of Italian authors such as Luca Pacioli and Gerolamo Cardano. However, Stevin was the first to recommend the use of impersonal accounts in the national household. He brought it into practice for Prince Maurice, and recommended it to the French statesman Sully.[15][7]
Stevin wrote a 35-page booklet called De Thiende ("the art of tenths"), first published in Dutch in 1585 and translated into French as La Disme. The full title of the English translation was Decimal arithmetic: Teaching how to perform all computations whatsoever by whole numbers without fractions, by the four principles of common arithmetic: namely, addition, subtraction, multiplication, and division. The concepts referred to in the booklet included unit fractions and Egyptian fractions. Muslim mathematicians were the first to utilize decimals instead of fractions on a large scale. Al-Kashi's book, Key to Arithmetic, was written at the beginning of the 15th century and was the stimulus for the systematic application of decimals to whole numbers and fractions thereof.[16][17] But nobody established their daily use before Stevin. He felt that this innovation was so significant that he declared the universal introduction of decimal coinage, measures and weights to be merely a question of time.[18][7]
His notation is rather unwieldy. The point separating the integers from the decimal fractions seems to be the invention of Bartholomaeus Pitiscus, in whose trigonometrical tables (1612) it occurs, and it was accepted by John Napier in his logarithmic papers (1614 and 1619).[7]
Stevin printed little circles around the exponents of the different powers of one-tenth. That Stevin intended these encircled numerals to denote mere exponents is clear from the fact that he employed the same symbol for powers of algebraic quantities. He did not avoid fractional exponents; only negative exponents do not appear in his work.[7]
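As an illustration (in modern typography, not a facsimile of Stevin's print), the quantity written today as 32.57 would appear with each group of digits followed by an encircled exponent of one-tenth – roughly 32⓪ 5① 7② – read as 32 units, 5 first parts (tenths), and 7 second parts (hundredths).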
Stevin wrote on other scientific subjects – for instance optics, geography, and astronomy – and a number of his writings were translated into Latin by W. Snellius (Willebrord Snell). There are two complete editions in French of his works, both printed in Leiden, one in 1608, the other in 1634.[7]
Stevin wrote his Arithmetic in 1594. The work brought to the western world for the first time a general solution of the quadratic equation, originally documented nearly a millennium previously by Brahmagupta in India.
According to Van der Waerden, Stevin eliminated "the classical restriction of 'numbers' to integers (Euclid) or to rational fractions (Diophantos) ... the real numbers formed a continuum. His general notion of a real number was accepted, tacitly or explicitly, by all later scientists".[19] A recent study attributes a greater role to Stevin in developing the real numbers than has been acknowledged by Weierstrass's followers.[20] Stevin proved the intermediate value theorem for polynomials, anticipating Cauchy's proof thereof. Stevin used a divide and conquer procedure, subdividing the interval into ten equal parts.[21] Stevin's decimals were the inspiration for Isaac Newton's work on infinite series.[22]
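Stevin's ten-fold subdivision translates directly into a root-finding sketch: each pass pins down one further decimal digit of the root. A minimal version (the function name and interface are illustrative):

```python
def stevin_root(p, lo, hi, digits=6):
    """Narrow a root of p by splitting the bracketing interval into ten equal
    parts and keeping the subinterval where p changes sign -- one decimal
    digit of the root per pass, as in Stevin's procedure."""
    assert p(lo) * p(hi) <= 0, "interval must bracket a root"
    for _ in range(digits):
        step = (hi - lo) / 10
        for k in range(10):
            a, b = lo + k * step, lo + (k + 1) * step
            if p(a) * p(b) <= 0:  # sign change: a root lies in [a, b]
                lo, hi = a, b
                break
    return lo

# Example: x**3 - 2 has a root at 2**(1/3) = 1.259921...
print(stevin_root(lambda x: x**3 - 2, 1.0, 2.0))
```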
Stevin thought the Dutch language to be excellent for scientific writing, and he translated many mathematical terms into Dutch. As a result, Dutch is one of the few Western European languages that have many mathematical terms that do not stem from Greek or Latin. This includes the very name wiskunde (mathematics).
His eye for the importance of having the scientific language be the same as the language of the craftsmen shows in the dedication of his book De Thiende ('The Disme' or 'The Tenth'): 'Simon Stevin wishes the stargazers, surveyors, carpet measurers, body measurers in general, coin measurers and tradespeople good luck.' Further on in the same pamphlet, he writes: "[this text] teaches us all calculations that are needed by the people without using fractions. One can reduce all operations to adding, subtracting, multiplying and dividing with integers."
Some of the words he invented evolved: 'aftrekken' (subtract) and 'delen' (divide) stayed the same, but over time 'menigvuldigen' became 'vermenigvuldigen' (multiply; the added 'ver' emphasizes that it is an action). 'Vergaderen' (gathering) became 'optellen' (add, lit. 'count up').
Another example is the Dutch word for diameter: 'middellijn', lit.: line through the middle.
The word 'zomenigmaal' (quotient, lit. 'that many times') has been replaced by 'quotiënt' in modern-day Dutch.
Other terms did not make it into modern-day mathematical Dutch, like 'teerling' (die, although still used in the sense of a gaming die), which was used instead of cube.
Following his life, Belgium and the city of Bruges have continued to name places, statues and other things in Stevin's honor.
Amongst others, he published: | https://en.wikipedia.org/wiki/Simon_Stevin#Decimal_fractions |
The inverted pyramid is a metaphor used by journalists and other writers to illustrate how information should be prioritised and structured in prose (e.g., a news report). It is a common method for writing news stories and has wide adaptability to other kinds of texts, such as blogs, editorial columns and marketing factsheets. It is a way to communicate the basics about a topic in the initial sentences. The inverted pyramid is taught to mass communication and journalism students, and is systematically used in English-language media.[1]
The inverted or upside-down pyramid can be thought of as a triangle pointing down. The widest part at the top represents the most substantial, interesting, and important information that the writer means to convey, illustrating that this kind of material should head the article, while the tapering lower portion illustrates that other material should follow in order of diminishing importance.
It is sometimes called a summary news lead style,[2] or bottom line up front (BLUF).[3] The opposite, the failure to mention the most important, interesting or attention-grabbing elements of a story in the opening paragraphs, is called burying the lead.
Other styles are also used in news writing, including the "anecdotal lead", which begins the story with an eye-catching tale or anecdote rather than the central facts, and the Q&A, or question-and-answer format. The inverted pyramid may also include a "hook" as a kind of prologue, typically a provocative quote, question, or image, to entice the reader into committing to reading the full story.
This format is valued for two reasons. First, readers can leave the story at any point and understand it, even if they do not have all the details. Second, it conducts readers through the details of the story by the end.[citation needed]
This system also means that information less vital to the reader's understanding comes later in the story, where it is easier to edit out for space or other reasons. This is called "cutting from the bottom".[4] Rather than petering out, a story may end with a "kicker" – a conclusion, perhaps a call to action – which comes after the pyramid. This is particularly common in feature style articles.
Historians disagree about when the form was created. Many say the invention of the telegraph sparked its development by encouraging reporters to condense material, to reduce costs,[5] or to hedge against the unreliability of the telegraph network.[6] Studies of 19th-century news stories in American newspapers, however, suggest that the form spread several decades later than the telegraph, possibly because the reform era's social and educational forces encouraged factual reporting rather than more interpretive narrative styles.[2]
Chip Scanlan's essay on the form[7] includes this frequently cited example of telegraphic reporting:
This evening at about 9:30 p.m. at Ford's Theatre, the President, while sitting in his private box with Mrs. Lincoln, Mrs. Harris and Major Rathburn, was shot by an assassin, who suddenly entered the box and approached behind the President.
The assassin then leaped upon the stage, brandishing a large dagger or knife, and made his escape in the rear of the theatre.
The pistol ball entered the back of the President's head and penetrated nearly through the head. The wound is mortal.
The President has been insensible ever since it was inflicted, and is now dying.
About the same hour an assassin, whether the same or not, entered Mr. Seward's apartment and under pretense of having a prescription was shown to the Secretary's sick chamber. The assassin immediately rushed to the bed and inflicted two or three stabs on the chest and two on the face.
It is hoped the wounds may not be mortal. My apprehension is that they will prove fatal.
The nurse alarmed Mr. Frederick Seward, who was in an adjoining room, and he hastened to the door of his father's room, when he met the assassin, who inflicted upon him one or more dangerous wounds. The recovery of Frederick Seward is doubtful.
It is not probable that the President will live through the night.
General Grant and his wife were advertised to be at the theatre...
Who, when, where, why, what, and how are addressed in the first paragraph. As the article continues, the less important details are presented. An even more pyramid-conscious reporter or editor would move two additional details into the first two sentences: that the shot was to the head, and that it was expected to prove fatal. The transitional sentence about the Grants signals that less-important facts are being added to the rest of the story.
Other news outlets such as the Associated Press did not use this format when covering the assassination, instead adopting a chronological organization.[8] | https://en.wikipedia.org/wiki/Inverted_pyramid_(journalism)
The dollar sign, also known as the peso sign, is a currency symbol consisting of a capital ⟨S⟩ crossed with one or two vertical strokes ($ or a double-barred variant, depending on typeface), used to indicate the unit of various currencies around the world, including most currencies denominated "dollar" or "peso". The explicitly double-barred sign is called cifrão in the Portuguese language.
The sign is also used in several compound currency symbols, such as the Brazilian real (R$) and the United States dollar (US$); in local use, the nationality prefix is usually omitted. In countries that have other currency symbols, the US dollar is often assumed and the "US" prefix omitted.
The one- and two-stroke versions are often considered mere stylistic (typeface) variants, although in some places and epochs one of them may have been specifically assigned, by law or custom, to a specific currency. The Unicode computer encoding standard defines a single code for both.
In most English-speaking countries that use the symbol, it is placed to the left of the amount specified, e.g. "$1", read as "one dollar".
The symbol appears in business correspondence in the 1770s from Spanish America, the early independent U.S., British America and Britain, referring to the Spanish American peso,[1][2] also known as the "Spanish dollar" or "piece of eight" in British America. Those coins provided the model for the currency that the United States adopted in 1792, and for the larger coins of the new Spanish American republics, such as the Mexican peso, Argentine peso, Peruvian real, and Bolivian sol coins.
With the Coinage Act of 1792, the United States Congress created the U.S. dollar, defining it to have "the value of a Spanish milled dollar as the same is now current",[3][4] but a variety of foreign coins were deemed to be legal tender until the Coinage Act of 1857 ended this status.[5]
The earliest U.S. dollar coins did not have any dollar symbol. The first occurrence in print is claimed to be from the 1790s, by Philadelphia printer Archibald Binny, creator of the Monticello typeface.[6] The $1 United States Note issued by the United States in 1869 included a large symbol consisting of a "U" with the right bar overlapping an "S" like a single-bar dollar sign, as well as a very small double-stroke dollar sign in the legal warning against forgery.[7]
It is still uncertain, however, how the dollar sign came to represent the Spanish American peso. There are currently several competing hypotheses:
The following theories seem to have been discredited or contradicted by documentary evidence:
The numerous currencies called "dollar" use the dollar sign to express money amounts. The sign is also generally used for the many currencies called "peso" (except the Philippine peso, which uses the symbol "₱"). Within a country the dollar/peso sign may be used alone. In other cases, and to avoid ambiguity in international usage, it is usually combined with other glyphs, e.g. CA$ or Can$ for the Canadian dollar. Particularly in professional contexts, the unambiguous ISO 4217 three-letter code (AUD, MXN, USD, etc.) is preferred.
The dollar sign, alone or in combination with other glyphs, is or was used to denote several currencies with other names, including:
In the United States, Mexico, Australia, Argentina, Chile, Colombia, New Zealand, Hong Kong, Pacific Island nations, and English-speaking Canada, the sign is written before the number ("$5"), even though the word is written or spoken after it ("five dollars", "cinco pesos"). In French-speaking Canada, exceptionally, the dollar symbol usually appears after the number,[25] e.g., "5$". (The cent symbol is written after the number in most countries that use it, e.g., "5¢".)
In Portugal, Brazil, and other parts of the Portuguese Empire, the two-stroke variant of the sign, named cifrão (Portuguese pronunciation: [siˈfɾɐ̃w]), was used as the thousands separator in the national currency, the real (plural "réis", abbreviated "Rs."). For instance, 123$500 would be equivalent to 123,500 réis. This usage is attested in 1775, but may be older by a century or more.[14] The cifrão is always written with two vertical strokes, and is the official sign of the Cape Verdean escudo (ISO 4217: CVE).
In 1911, Portugal redefined the national currency as the escudo, worth 1000 réis and divided into 100 centavos; but the cifrão continued to be used as the decimal separator,[26] so that 123$50 meant 123.50 escudos, or 123 escudos and 50 centavos. This usage ended in 2002, when the country switched to the euro. (A similar scheme of using a letter symbol instead of a decimal point has been used by the RKM code in electrical engineering since 1952.)
Cape Verde, a republic and former Portuguese colony, similarly switched from the real to its local escudo and centavos in 1914, and retains the cifrão usage as decimal separator as of 2021. Local versions of the Portuguese escudo were for a time created also for other overseas colonies, including East Timor (1958–1975), Portuguese India (1958–1961), Angola (1914–1928 and 1958–1977), Mozambique (1914–1980), Portuguese Guinea (1914–1975), and São Tomé and Príncipe (1914–1977), all using the cifrão as decimal separator.[citation needed]
Brazil retained the real and the cifrão as thousands separator until 1942, when it switched to the Brazilian cruzeiro, with the comma as the decimal separator. The dollar sign, officially with one stroke but often rendered with two, was retained as part of the currency symbol "Cr$", so one would write Cr$13,50 for 13 cruzeiros and 50 centavos.[27]
The cifrão was formerly used by the Portuguese escudo (ISO: PTE) before its replacement by the euro, and by the Portuguese Timor escudo (ISO: TPE) before its replacement by the Indonesian rupiah and the US dollar.[28] In Portuguese and Cape Verdean usage, the cifrão is placed as a decimal point between the escudo and centavo values.[29] The name originates in the Arabic ṣifr (صِفْر), meaning 'zero'.[30]
Outside the Portuguese cultural sphere, the South Vietnamese đồng before 1975 used a method similar to the cifrão to separate values of đồng from its decimal subunit xu. For example, 7$50 meant 7 đồng and 50 xu.
In some places and at some times, the one- and two-stroke variants have been used in the same contexts to distinguish between the U.S. dollar and other local currency, such as the former Portuguese escudo.[26]
However, such usage is not standardized, and the Unicode specification considers the two versions as graphic variants of the same symbol – a typeface design choice.[31] Computer and typewriter keyboards usually have a single key for the sign, and many character encodings (including ASCII and Unicode) reserve a single numeric code for it. Indeed, dollar signs in the same digital document may be rendered with one or two strokes if different computer fonts are used, but the underlying code point U+0024 (ASCII 36 in decimal) remains unchanged.
When a specific variant is not mandated by law or custom, the choice is usually a matter of expediency or aesthetic preference. Both versions were used in the US in the 18th century. (An 1861 Civil War-era advertisement depicts the two-stroked symbol as a snake.[13]) The two-stroke version seems to be generally less popular today, though used in some "old-style" fonts like Baskerville.
Because of its use in early American computer applications such as business accounting, the dollar sign is almost universally present in computer character sets, and thus has been appropriated for many purposes unrelated to money in programming languages and command languages.
The dollar sign "$" has Unicode code point U+0024 (inherited fromASCIIviaLatin-1).[31]
There are no separate encodings for one- and two-line variants; the choice is typeface-dependent, and they are allographs. However, there are three other code points that originate from other East Asian standards: the Taiwanese small form variant, the CJK fullwidth form, and the Japanese emoji. The glyphs for these code points are typically larger or smaller than the primary code point, but the difference is mostly aesthetic or typographic, and the meanings of the symbols are the same.
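These code points are easy to inspect; a quick sketch:

```python
import unicodedata

for cp in (0x0024, 0xFE69, 0xFF04, 0x1F4B2):
    print(f"U+{cp:04X}  {chr(cp)}  {unicodedata.name(chr(cp))}")
# U+0024  $   DOLLAR SIGN
# U+FE69  ﹩  SMALL DOLLAR SIGN
# U+FF04  ＄  FULLWIDTH DOLLAR SIGN
# U+1F4B2 💲  HEAVY DOLLAR SIGN
```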
However, for usage as the special character in various computing applications (see following sections), U+0024 is typically the only code that is recognized.
Support for the two-line variant varies. As of 2019, the Unicode standard considers the distinction between one- and two-bar dollar signs a stylistic distinction between fonts, and has no separate code point for the cifrão. The symbol was not in the October 2019 "pipeline",[34] though it has been requested formally.[26]
Among others, the following fonts display a double-bar dollar sign for code point 0024:[citation needed] regular-weight Baskerville, Big Caslon, Bodoni MT, and Garamond ($).
In LaTeX, with the textcomp package installed, the cifrão can be input using the command \textdollaroldstyle. However, because of font substitution and the lack of a dedicated code point, the author of an electronic document who uses one of these fonts intending to represent a cifrão cannot be sure that every reader will see a double-bar glyph rather than the single-barred version. Because of the continued lack of support in Unicode, a single-bar dollar sign is frequently employed in its place even for official purposes.[29][35] Where there is any risk of misunderstanding, the ISO 4217 three-letter acronym is used.
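A minimal LaTeX document exercising the command (whether the rendered glyph actually shows two bars depends on the font in use, as noted above):

```latex
\documentclass{article}
\usepackage{textcomp}
\begin{document}
Single-bar dollar: \$ \quad old-style (cifr\~ao-like): \textdollaroldstyle
\end{document}
```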
The symbol is sometimes used derisively, in place of the letter S, to indicate greed or excess money, as in "Micro$oft", "Di$ney", "Chel$ea" and "GW$"; or supposed overt Americanisation, as in "$ky". The dollar sign is also used intentionally to stylize names such as A$AP Rocky, Ke$ha, and Ty Dolla $ign, or words such as ¥€$. In 1872, Ambrose Bierce referred to California governor Leland Stanford as $tealand Landford.[38]
In Scrabble notation, a dollar sign is placed after a word to indicate that it is valid according to the North American word lists, but not according to the British word lists.[39]
A dollar symbol is used as a unit of reactivity for a nuclear reactor, 0 $ being the threshold of slow criticality, meaning a steady reaction rate, while 1 $ is the threshold of prompt criticality, which means a nuclear excursion or explosion.[40][41]
In the 1993 version of the Turkmen Latin alphabet, $ was used as a transliteration of the Cyrillic letter Ш; in 1999 it was replaced by the letter Ş. | https://en.wikipedia.org/wiki/Cifr%C3%A3o
Significant figures, also referred to as significant digits, are specific digits within a number written in positional notation that carry both reliability and necessity in conveying a particular quantity. When presenting the outcome of a measurement (such as length, pressure, volume, or mass), if the number of digits exceeds what the measurement instrument can resolve, only the digits that are determined by the resolution are dependable and therefore considered significant.
For instance, if a length measurement yields 114.8 mm, using a ruler with the smallest interval between marks at 1 mm, the first three digits (1, 1, and 4, representing 114 mm) are certain and constitute significant figures. Further, digits that are uncertain yet meaningful are also included in the significant figures. In this example, the last digit (8, contributing 0.8 mm) is likewise considered significant despite its uncertainty.[1] Therefore, this measurement contains four significant figures.
Another example involves a volume measurement of 2.98 L with an uncertainty of ± 0.05 L. The actual volume falls between 2.93 L and 3.03 L. Even if certain digits are not completely known, they are still significant if they are meaningful, as they indicate the actual volume within an acceptable range of uncertainty. In this case, the actual volume might be 2.94 L or possibly 3.02 L, so all three digits are considered significant.[1] Thus, there are three significant figures in this example.
The following types of digits are not considered significant:[2]
A zero after a decimal point (e.g., 1.0) is significant, and care should be used when appending such a trailing zero. Thus, in the case of 1.0, there are two significant figures, whereas 1 (without a decimal) has one significant figure.
Among a number's significant digits, the most significant digit is the one with the greatest exponent value (the leftmost significant digit/figure), while the least significant digit is the one with the lowest exponent value (the rightmost significant digit/figure). For example, in the number "123" the "1" is the most significant digit, representing hundreds (10²), while the "3" is the least significant digit, representing ones (10⁰).
To avoid conveying a misleading level of precision, numbers are often rounded. For instance, it would create false precision to present a measurement as 12.34525 kg when the measuring instrument only provides accuracy to the nearest gram (0.001 kg). In this case, the significant figures are the first five digits (1, 2, 3, 4, and 5) from the leftmost digit, and the number should be rounded to these significant figures, resulting in 12.345 kg as the accurate value. The rounding error (in this example, 0.00025 kg = 0.25 g) approximates the numerical resolution or precision. Numbers can also be rounded for simplicity, not necessarily to indicate measurement precision, such as for the sake of expediency in news broadcasts.
Significance arithmetic encompasses a set of approximate rules for preserving significance through calculations. More advanced scientific rules are known as the propagation of uncertainty.
Radix 10 (base-10, decimal numbers) is assumed in the following. (See Unit in the last place for extending these concepts to other bases.)
Identifying the significant figures in a number requires knowing which digits are meaningful, which requires knowing the resolution with which the number is measured, obtained, or processed. For example, if the smallest measurable mass is 0.001 g, then in a measurement given as 0.00234 g the "4" is not useful and should be discarded, while the "3" is useful and should often be retained.[3]
The significance of trailing zeros in a number not containing a decimal point can be ambiguous. For example, it may not always be clear if the number 1300 is precise to the nearest unit (just happens coincidentally to be an exact multiple of a hundred) or if it is only shown to the nearest hundreds due to rounding or uncertainty. Many conventions exist to address this issue. However, these are not universally used and would only be effective if the reader is familiar with the convention:
As the conventions above are not in general use, the following more widely recognized options are available for indicating the significance of a number with trailing zeros:
Rounding to significant figures is a more general-purpose technique than rounding to n digits, since it handles numbers of different scales in a uniform way. For example, the population of a city might only be known to the nearest thousand and be stated as 52,000, while the population of a country might only be known to the nearest million and be stated as 52,000,000. The former might be in error by hundreds, and the latter might be in error by hundreds of thousands, but both have two significant figures (5 and 2). This reflects the fact that the significance of the error is the same in both cases, relative to the size of the quantity being measured.
To round a number to n significant figures:[8][9]
In financial calculations, a number is often rounded to a given number of places. For example, to two places after the decimal separator for many world currencies. This is done because greater precision is immaterial, and usually it is not possible to settle a debt of less than the smallest currency unit.
In UK personal tax returns, income is rounded down to the nearest pound, whilst tax paid is calculated to the nearest penny.
As an illustration, the decimal quantity 12.345 can be expressed with various numbers of significant figures or decimal places. If insufficient precision is available then the number is rounded in some manner to fit the available precision. The following table shows the results for various total precision at two rounding ways (N/A stands for Not Applicable).
Another example for 0.012345. (Remember that the leading zeros are not significant.)
The representation of a non-zero number x to a precision of p significant digits has a numerical value that is given by the formula:[citation needed]
$10^{n} \cdot \operatorname{round}\left(\dfrac{x}{10^{n}}\right)$
where
$n = \lfloor \log_{10}(|x|) \rfloor + 1 - p$
which may need to be written with a specific marking as detailed above to specify the number of significant trailing zeros.
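The formula transcribes directly into code. A minimal sketch (note that Python's built-in round ties to even, which may differ from round-half-up at exact midpoints):

```python
import math

def round_sig(x: float, p: int) -> float:
    """Round x to p significant figures, per the formula above."""
    if x == 0:
        return 0.0
    n = math.floor(math.log10(abs(x))) + 1 - p
    return 10**n * round(x / 10**n)

print(round_sig(12.34525, 5))    # 12.345
print(round_sig(0.012345, 2))    # 0.012
print(round_sig(52_000_000, 2))  # 52000000.0 (two significant figures: 5 and 2)
```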
It is recommended for a measurement result to include the measurement uncertainty, such as $x_\text{best} \pm \sigma_x$, where $x_\text{best}$ and $\sigma_x$ are the best estimate and the uncertainty in the measurement, respectively.[10] $x_\text{best}$ can be the average of measured values and $\sigma_x$ can be the standard deviation or a multiple of the measurement deviation. The rules for writing $x_\text{best} \pm \sigma_x$ are:[11]
Uncertainty may be implied by the last significant figure if it is not explicitly expressed.[1] The implied uncertainty is ± half of the minimum scale at the last significant figure position. For example, if the mass of an object is reported as 3.78 kg without mentioning uncertainty, then ± 0.005 kg measurement uncertainty may be implied. If the mass of an object is estimated as 3.78 ± 0.07 kg, then the actual mass is probably somewhere in the range 3.71 to 3.85 kg, and if it is desired to report it with a single number, then 3.8 kg is the best number to report, since its implied uncertainty ± 0.05 kg gives a mass range of 3.75 to 3.85 kg, which is close to the measurement range. If the uncertainty is a bit larger, i.e. 3.78 ± 0.09 kg, then 3.8 kg is still the best single number to quote, since if "4 kg" was reported then a lot of information would be lost.
If there is a need to write the implied uncertainty of a number, then it can be written as $x \pm \sigma_x$, explicitly stating that it is the implied uncertainty (to prevent readers from mistaking it for the measurement uncertainty), where $x$ and $\sigma_x$ are the number with an extra zero digit (to follow the rules for writing uncertainty above) and its implied uncertainty, respectively. For example, 6 kg with the implied uncertainty ± 0.5 kg can be stated as 6.0 ± 0.5 kg.
As there are rules to determine the significant figures in directly measured quantities, there are also guidelines (not rules) to determine the significant figures in quantities calculated from these measured quantities.
Significant figures in measured quantities are most important in the determination of significant figures in quantities calculated with them. A mathematical or physical constant (e.g., π in the formula for the area of a circle with radius r as πr²) has no effect on the determination of the significant figures in the result of a calculation with it if its known digits are equal to or more than the significant figures in the measured quantities used in the calculation. An exact number, such as ½ in the formula for the kinetic energy of a mass m with velocity v as ½mv², has no bearing on the significant figures in the calculated kinetic energy, since its number of significant figures is infinite (0.500000...).
The guidelines described below are intended to avoid a calculation result more precise than the measured quantities, but they do not ensure that the resulting implied uncertainty is close enough to the measured uncertainties. This problem can be seen in unit conversion. If the guidelines give an implied uncertainty too far from the measured one, then it may be necessary to choose significant digits that give a comparable uncertainty.
For quantities created from measured quantities via multiplication and division, the calculated result should have as many significant figures as the least number of significant figures among the measured quantities used in the calculation.[12] For example,
1.234 × 2 = 2.468 ≈ 2, 1.234 × 2.0 = 2.468 ≈ 2.5, and 0.01234 × 2 = 0.02468 ≈ 0.02, with one, two, and one significant figures respectively. (2 here is assumed not to be an exact number.) For the first example, the first multiplication factor has four significant figures and the second has one significant figure. The factor with the fewest or least significant figures is the second one with only one, so the final calculated result should also have one significant figure.
For unit conversion, the implied uncertainty of the result can be unsatisfactorily higher than that in the previous unit if this rounding guideline is followed. For example, 8 inches has the implied uncertainty of ± 0.5 inch = ± 1.27 cm. If it is converted to the centimeter scale and the rounding guideline for multiplication and division is followed, then 20.32 cm ≈ 20 cm with the implied uncertainty of ± 5 cm. If this implied uncertainty is considered too overestimated, then more proper significant digits in the unit conversion result may be 20.32 cm ≈ 20. cm with the implied uncertainty of ± 0.5 cm.
Another exception to the above rounding guideline is multiplying a number by an integer, such as 1.234 × 9. If the guideline is followed, the result is rounded as 1.234 × 9.000... = 11.106 ≈ 11.11. However, this multiplication is essentially adding 1.234 to itself 9 times, i.e. 1.234 + 1.234 + … + 1.234, so the rounding guideline for addition and subtraction described below is the more proper rounding approach.[13] As a result, the final answer is 1.234 + 1.234 + … + 1.234 = 11.106 (one significant digit increase).
For quantities created from measured quantities via addition and subtraction, the last significant figure position (e.g., hundreds, tens, ones, tenths, hundredths, and so forth) in the calculated result should be the same as the leftmost or largest digit position among the last significant figures of the measured quantities in the calculation. For example,
1.234 + 2 = 3.234 ≈ 3, 1.234 + 2.0 = 3.234 ≈ 3.2, 0.01234 + 2 = 2.01234 ≈ 2, and 12000 + 77 = 12077 ≈ 12000, with the last significant figures in the ones place, tenths place, ones place, and thousands place respectively. (2 here is assumed not to be an exact number.) For the first example, the first term has its last significant figure in the thousandths place and the second term has its last significant figure in the ones place. The leftmost or largest digit position among the last significant figures of these terms is the ones place, so the calculated result should also have its last significant figure in the ones place.
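A sketch of this last-place rule, using Python's decimal module to track the place value of each term's last significant digit (the helper names are illustrative; note that trailing zeros in integers such as 1300 remain ambiguous, as discussed above):

```python
from decimal import Decimal

def last_place(t: str) -> int:
    """Place-value exponent of the last significant digit of a decimal literal."""
    return Decimal(t).as_tuple().exponent

def add_measured(terms):
    """Add measured quantities, rounding to the leftmost last-significant place."""
    place = max(last_place(t) for t in terms)
    total = sum(Decimal(t) for t in terms)
    return total.quantize(Decimal(1).scaleb(place))

print(add_measured(["1.234", "2"]))    # 3 (ones place wins)
print(add_measured(["12.0", "3.46"]))  # 15.5 (tenths place wins)
```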
The rules for calculating significant figures for multiplication and division are not the same as the rules for addition and subtraction. For multiplication and division, only the total number of significant figures in each of the factors in the calculation matters; the digit position of the last significant figure in each factor is irrelevant. For addition and subtraction, only the digit position of the last significant figure in each of the terms in the calculation matters; the total number of significant figures in each term is irrelevant.[citation needed] However, greater accuracy will often be obtained if some non-significant digits are maintained in intermediate results which are used in subsequent calculations.[citation needed]
The base-10 logarithm of a normalized number (i.e., $a \times 10^b$ with $1 \le a < 10$ and $b$ an integer) is rounded such that its decimal part (called the mantissa) has as many significant figures as the significant figures in the normalized number.
When taking the antilogarithm of a normalized number, the result is rounded to have as many significant figures as the significant figures in the decimal part of the number whose antilogarithm is taken.
If a transcendental function $f(x)$ (e.g., the exponential function, the logarithm, or the trigonometric functions) is differentiable at its domain element x, then its number of significant figures (denoted as "significant figures of $f(x)$") is approximately related to the number of significant figures in x (denoted as "significant figures of x") by the formula
$(\text{significant figures of } f(x)) \approx (\text{significant figures of } x) - \log_{10}\left(\left|\frac{df(x)}{dx}\,\frac{x}{f(x)}\right|\right),$
where $\left|\frac{df(x)}{dx}\,\frac{x}{f(x)}\right|$ is the condition number.
When performing multiple-stage calculations, do not round intermediate-stage calculation results; keep as many digits as is practical (at least one more digit than the rounding rule allows per stage) until the end of all the calculations, to avoid cumulative rounding errors while tracking or recording the significant figures in each intermediate result. Then, round the final result, for example, to the fewest number of significant figures (for multiplication or division) or the leftmost last significant digit position (for addition or subtraction) among the inputs in the final calculation.[14]
When using a ruler, initially use the smallest mark as the first estimated digit. For example, if a ruler's smallest mark is 0.1 cm, and 4.5 cm is read, then it is 4.5 (± 0.1 cm) or 4.4 cm to 4.6 cm by the smallest-mark interval. However, in practice a measurement can usually be estimated by eye to closer than the interval between the ruler's smallest marks, e.g. in the above case it might be estimated as between 4.51 cm and 4.53 cm.[15]
It is also possible that the overall length of a ruler may not be accurate to the degree of the smallest mark, and the marks may be imperfectly spaced within each unit. However, assuming a normal good-quality ruler, it should be possible to estimate tenths between the nearest two marks to achieve an extra decimal place of accuracy.[16] Failing to do this adds the error in reading the ruler to any error in the calibration of the ruler.
When estimating the proportion of individuals carrying some particular characteristic in a population, from a random sample of that population, the number of significant figures should not exceed the maximum precision allowed by that sample size.
Traditionally, in various technical fields, "accuracy" refers to the closeness of a given measurement to its true value; "precision" refers to the stability of that measurement when repeated many times. Thus, it is possible to be "precisely wrong". Hoping to reflect the way in which the term "accuracy" is actually used in the scientific community, there is a recent standard, ISO 5725, which keeps the same definition of precision but defines the term "trueness" as the closeness of a given measurement to its true value and uses the term "accuracy" as the combination of trueness and precision. (See theaccuracy and precisionarticle for a full discussion.) In either case, the number of significant figures roughly corresponds toprecision, not to accuracy or the newer concept of trueness.
Computer representations of floating-point numbers use a form of rounding to significant figures (while usually not keeping track of how many), in general with binary numbers. The number of correct significant figures is closely related to the notion of relative error (which has the advantage of being a more accurate measure of precision, and is independent of the radix, also known as the base, of the number system used).
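For example, in Python one can query the guarantees of the underlying 64-bit IEEE 754 format directly:

```python
import sys

print(sys.float_info.dig)      # 15: decimal digits that always survive a round trip
print(sys.float_info.epsilon)  # 2.220446049250313e-16: relative spacing near 1.0
print(0.1 + 0.2)               # 0.30000000000000004: binary rounding made visible
```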
Electronic calculators supporting a dedicated significant-figures display mode are relatively rare.
Among the calculators to support related features are the Commodore M55 Mathematician (1976)[17] and the S61 Statistician (1976),[18] which support two display modes, where DISP+n will give n significant digits in total, while DISP+.+n will give n decimal places.
The Texas Instruments TI-83 Plus (1999) and TI-84 Plus (2004) families of graphical calculators support a Sig-Fig Calculator mode in which the calculator will evaluate the count of significant digits of entered numbers and display it in square brackets behind the corresponding number. The results of calculations will be adjusted to only show the significant digits as well.[19]
For the HP 20b/30b-based community-developed WP 34S (2011) and WP 31S (2014) calculators, significant-figures display modes SIG+n and SIG0+n (with zero padding) are available as a compile-time option.[20][21] The SwissMicros DM42-based community-developed calculators WP 43C (2019)[22] / C43 (2022) / C47 (2023) support a significant-figures display mode as well. | https://en.wikipedia.org/wiki/Decimal_place
Scriptio continua (Latin for 'continuous script'), also known as scriptura continua or scripta continua, is a style of writing without spaces or other marks between the words or sentences. The form also lacks punctuation, diacritics, or distinguished letter case.
In the West, the oldest Greek and Latin inscriptions used word dividers to separate words in sentences; however, Classical Greek and late Classical Latin both employed scriptio continua as the norm.[1][2] Scriptio continua is also known as Latin skeleton script.
Although scriptio continua is evidenced in most Classical Greek and Classical Latin manuscripts, different writing styles are depicted in documents that date back even further. Classical Latin often used the interpunct, especially in monuments and inscriptions.
The earliest texts in Classical Greek that used the Greek alphabet, as opposed to Linear B, were formatted in a constant string of capital letters from right to left. Later, that evolved to boustrophedon, which included lines written in alternating directions.
The Latin language and the related Italic languages first came to be written using alphabetic scripts adapted from the Etruscan alphabet (itself ultimately derived from the Greek alphabet). Initially, Latin texts commonly marked word divisions by points, but later on the Romans came to follow the Greek practice of scriptio continua.[3]
Before and after the advent of the codex, Latin and Greek script was written on scrolls by slave scribes. The role of the scribes was simply to record everything they heard to create documentation. Because speech is continuous, there was no need to add spaces.[citation needed] Typically, the reader of the text was a trained performer, who would have already memorised the content and breaks of the script.[citation needed] During reading performances, the scroll acted as a cue sheet and therefore did not require in-depth reading.[citation needed]
The lack of word parsing forced the reader to distinguish elements of the script without a visual aid, but it also presented the reader with more freedom to interpret the text. The reader had the liberty to insert pauses and dictate tone, which made the act of reading a significantly more subjective activity than it is today. However, the lack of spacing also led to some ambiguity, because a minor discrepancy in word parsing could give the text a different meaning. For example, a phrase written in scriptio continua as collectamexiliopubem may be interpreted as collectam ex Ilio pubem, meaning 'a people gathered from Troy', or collectam exilio pubem, 'a people gathered for exile'. Thus, readers had to be much more cognisant of the context to which the text referred.[4]
Over time, the current system of rapid silent reading for information replaced the older, slower, and more dramatic performance-based reading,[5]: 113–115 and word dividers and punctuation became more beneficial to text.[6] Though paleographers disagree about the chronological decline of scriptio continua throughout the world, it is generally accepted that the addition of spaces first appeared in Irish and Anglo-Saxon Bibles and Gospels from the seventh and eighth centuries.[7]: 21 Subsequently, an increasing number of European texts adopted conventional spacing, and within the thirteenth and fourteenth centuries, all European texts were written with word separation.[7]: 120–121
When word separation became the standard system, it was seen as a simplification of Roman culture, because it undermined the metric and rhythmic fluency generated through scriptio continua. In contrast, paleographers today identify the extinction of scriptio continua as a critical factor in augmenting the widespread absorption of knowledge in the pre-Modern Era. By saving the reader the taxing process of interpreting pauses and breaks, the inclusion of spaces enables the brain to comprehend written text more rapidly. Furthermore, the brain has a greater capacity to profoundly synthesize text and commit a greater portion of information to memory.[7]: 16–17
Scriptio continua is still in use in the Thai script, in other Southeast Asian abugidas (Burmese, Lao, Khmer, Javanese, Balinese, Sundanese script), and in languages that use Chinese characters (Chinese and Japanese). However, modern vernacular Chinese differentiates itself from ancient scriptio continua through its use of punctuation, although this method of separation was borrowed from the West only in the 19th and 20th centuries. Before this, the only forms of punctuation found in Chinese writings were marks to denote quotes, proper nouns, and emphasis. Modern Tibetic languages also employ a form of scriptio continua; while they punctuate syllables, they do not use spacing between units of meaning.
Latin text in scriptio continua with typical capital letters, taken from Cicero's De finibus bonorum et malorum:
Which in modern punctuation is:
With ancient Latin punctuation it is: NEQVE·PORRO·QVISQVAM·EST·QVI·DOLOREM·IPSVM·QVIA·DOLOR·SIT·AMET·CONSECTETVR·ADIPISCI·VELIT
Greek text in scriptio continua with typical capital letters, taken from Hesiod's Theogony:
Which in modern punctuation is:
Hebrew text is well known for lacking punctuation for many centuries. Modern versions of the language gradually amended those features.
The entire Swedish Rök runestone is written in scriptio continua, which poses problems for scholars attempting to translate it. One example is a phrase repeated several times, sakumukmini. Interpretations proposed include sagum Ygg minni ('let us say the memory to Yggr'), sagum mógminni ('let us say the folk-memory'), and sagum ungmenni ('let us say to the group of young men').
A form of scriptio continua has become common in internet e-mail addresses and domain names where, because the "space" character is invalid, the address for a website for "Example Fake Website" is written as examplefakewebsite.com – without spaces between the separate words. However, the "underscore" or "dash" characters are often used as stand-ins for the "space" character when its use would be invalid and their use would not be.
As another example, so-called camel case – in which the first letter of each word is capitalized – has become part of the culture of many computer programming languages. In this context, names of variables and subroutines as well as other identifiers are rendered easier to read, as in MaxDataRate. Camel case can also eliminate ambiguity: CharTable might name a table of characters, whereas Chartable could ask or answer the question, "Can (something) be charted?"
Chinese does not encounter the problem of incorporating spaces into text because, unlike most writing systems, Chinese characters represent morphemes and not phonemes.[3] Chinese is therefore readable without spaces.
Western punctuation was first used in China in the 20th century as a result of interaction with Western culture.[10]
However, sentences can still be ambiguous due to a lack of punctuation and/or word breaks. One Chinese joke[11] concerns a contract between a landlord and a poor scholar, which was written without punctuation and thus was interpreted in two different ways:
Japanese implements extensive use of Chinese characters, called kanji in Japanese. However, due to the radical differences between the Chinese and Japanese languages, writing Japanese exclusively in kanji would make it extremely difficult to read.[12] This can be seen in texts that predate the modern kana system, in which Japanese was written entirely in kanji and man'yōgana, the latter of which are written solely to indicate a word's pronunciation as opposed to its meaning. For that reason, different syllabary systems called kana were developed to differentiate phonetic graphemes from ideographic ones.
Modern Japanese is typically written using three different types of graphemes, the first being kanji and the latter two being kana systems: the cursive hiragana and the angular katakana. While spaces are not normally used in writing, boundaries between words are often quickly perceived by Japanese speakers, since kana are usually visually distinct from kanji. Japanese speakers also know that certain words, morphemes, and parts of speech are typically written using one of the three systems. Kanji is typically used for words of Japanese and Chinese origin, as well as content words (e.g. nouns, verbs, adjectives, adverbs). Hiragana is typically used for native Japanese words, commonly known words, phrases, and grammatical particles, as well as inflections of content words like verbs, adjectives, and adverbs. Katakana is typically used for loanwords from languages other than Chinese, onomatopoeia, and emphasized words.
Like Chinese, Japanese lacked any sort of punctuation until interaction with Western civilizations became more common. Punctuation was adopted during the Meiji period.
Modern Thai script, which is said to have been created by King Ram Khamhaeng in 1283, does not contain any spaces between words. Spaces indicate only the clear endings of clauses or sentences.[citation needed]
Below is a sample sentence of Thai written first without spaces between words (with Thai romanization in parentheses), second in Thai with spaces between words (also with Thai romanization in parentheses), and then finally translated into English.
For example, "ในน้ำมีปลา ในนามีข้าว" (pronounced "nai nam mi phla nai na mi khao", meaning "In the water there are fish; in the paddy fields there is rice.") can also be written as "ใน น้ำ มี ปลา ใน นา มี ข้าว".[13]
This example shows the first line of the Universal Declaration of Human Rights in Javanese script, and a case of the text being divided, as in some modern writing, by spaces and dash signs, which look different.
Because of the absence of spaces, in computer typography line breaks have to be inserted manually; otherwise a long sentence will not break into new lines. Some computer input methods instead insert a zero-width space at word breaks, which allows long sentences to wrap across multiple lines, but the drawback of that method is that the writing may not render correctly.
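A sketch of the zero-width-space approach (the word boundaries here are supplied by hand; a real input method or segmenter must determine them):

```python
ZWSP = "\u200b"  # ZERO WIDTH SPACE: invisible, but a legal line-break opportunity

# Joining words with ZWSP keeps the text visually continuous while letting a
# renderer wrap it; rendering quality still depends on the font and engine.
words = ["somelong", "unspaced", "sentence"]
breakable = ZWSP.join(words)
print(len(breakable))  # 2 ZWSP characters longer than the plain concatenation
```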
Before typewriters, computers and smartphones changed the way of writing, Arabic was written continuously.[citation needed] That is easy because 22 letters in Arabic have final, medial and initial forms, which is comparable to the initial, or capital, form for the Latin alphabet since the Renaissance. Six or seven letters in Arabic have only a final form (namely ا, د, ذ, ر, ز and و, as well as ء), and whenever they occur in a word they are followed by a space that was originally as wide as the space between words, creating a clear visual break. There was also no hyphenation. In the early Quranic manuscripts, all diacritics in the Arabic script were also omitted, because pointing and other diacritics did not exist in the Arabic script until the early 2nd millennium; this form is called rasm. Rasm is also written continuously, without spacing. In all early manuscripts, words were finished on the next line or, in many Quranic manuscripts, even on the next page. The letter hamza is the only letter of the Arabic alphabet that lacks a final, initial or medial form, having only an alone or isolated form, as it is an unlinked letter.
Before the late 1960s and the early 1970s, Gurbani and other Sikh scriptures were written in the traditional method of writing the Gurmukhi script known as larivār, in which there was no spacing between words in the texts (interpuncts in the form of a dot were used by some writers, such as Guru Arjan, to differentiate between words). This contrasts with the comparatively more recent method of writing in Gurmukhi known as pad ched, which breaks the words by inserting spacing between them.[14]
Before the invention of delimiters and other punctuation to set off groups of three digits, large numbers (e.g. numbers greater than 999) were written continuously. Today, only numbers with fewer than four digits are written with no delimiter or other punctuation; this is somewhat similar to the way words are separated within a sentence.
While a delimiter every three digits is generally recommended for numbers above four digits, some languages do not use one within four-digit numbers; these include most Slavic languages, Spanish, Hungarian and Swiss German. English sometimes follows this practice.
Rasm (Arabic: رَسْم [ræsm]) is an Arabic writing script often used in the early centuries of Classical Arabic literature (7th century – early 11th century AD). It is the same as today's Arabic script, except that the Arabic diacritics are omitted. These diacritics include consonant pointing, or ʾiʿjām (إِعْجَام), and supplementary diacritics, or taškīl (تَشْكِيل). The latter include the ḥarakāt (حَرَكَات) short vowel marks, whose singular is ḥarakah (حَرَكَة). As an example, in rasm, the two distinct letters ص ض are indistinguishable because the ʾiʿjām is omitted; letters similar in shape, such as ک ك, may likewise become indistinguishable. Rasm is also known as Arabic skeleton script. The concept is somewhat similar to scriptio continua in the Latin script, where all spaces and other punctuation are omitted. The rasm form was common for writing Arabic until the early 2nd millennium.
In the early Arabic manuscripts that survive today (physical manuscripts dated to the 7th and 8th centuries AD), one finds dots, but "putting dots was in no case compulsory".[1] The very earliest manuscripts have some consonantal diacritics, though they use them only sparingly.[2] Signs indicating short vowels and the hamza are largely absent from Arabic orthography until the 2nd/8th century. One might assume that scribes would write these few diacritics in the most textually ambiguous places of the rasm, so as to make the Arabic text easier to read. However, many scholars have noticed that this is not the case. By focusing on the few diacritics that do appear in early manuscripts, Adam Bursi "situates early Qurʾān manuscripts within the context of other Arabic documents of the first/seventh century that exhibit similarly infrequent diacritics. Shared patterns in the usages of diacritics indicate that early Qurʾān manuscripts were produced by scribes relying upon very similar orthographic traditions to those that produced Arabic papyri and inscriptions of the first/seventh century." He concludes that Quranic scribes "neither 'left out' diacritics to leave the text open, nor 'added' more to clarify it, but in most cases simply wrote diacritics where they were accustomed to writing them by habit or convention."[3]
Rasm means 'drawing', 'outline', or 'pattern' in Arabic. When speaking of the Qur'an, it stands for the basic text made of the 18 letters without the Arabic diacritics which mark vowels (taškīl) and disambiguate consonants (ʾiʿjām).
The rasm is the oldest part of the Arabic script; it has 18 elements, excluding the ligature of lām and alif. When isolated and in the final position, the 18 letters are visually distinct. However, in the initial and medial positions, certain letters that are distinct otherwise are not differentiated visually. This results in only 15 visually distinct glyphs each in the initial and medial positions.
At the time when the ʾiʿjām was optional, letters deliberately lacking the points of ʾiʿjām (⟨ح⟩ /ħ/, ⟨د⟩ /d/, ⟨ر⟩ /r/, ⟨س⟩ /s/, ⟨ص⟩ /sˤ/, ⟨ط⟩ /tˤ/, ⟨ع⟩ /ʕ/, ⟨ل⟩ /l/, ⟨ه⟩ /h/) could be marked with a small v-shaped sign above or below the letter, a semicircle, a miniature of the letter itself (e.g. a small س to indicate that the letter in question is س and not ش), one or several subscript dots, a superscript hamza, or a superscript stroke.[4] These signs, collectively known as ‘alāmātu-l-ihmāl, are still occasionally used in modern Arabic calligraphy, either for their original purpose (i.e. marking letters without ʾiʿjām) or, often, as purely decorative space-fillers. The small ک above the kāf in its final and isolated forms ⟨ك ـك⟩ was originally an ‘alāmātu-l-ihmāl sign, but became a permanent part of the letter. Previously this sign could also appear above the medial form of kāf, instead of the stroke on its ascender.[5]
Among the historical examples of rasm script are the Kufic Blue Qur'an and the Samarkand Qurʾan. The latter is written almost entirely in Kufic rasm.
The following is an example of rasm from Surah Al-A'raf (7), āyah 86 and 87, in the Samarkand Qur'an, shown alongside its digital equivalent rasm, rasm with normal spacing, and finally the fully vocalized text with all diacritics:
Compare the Basmala (Arabic: بَسْمَلَة), the beginning verse of the Qurʾān, with all diacritics and with the rasm only. Note that when rasm is written with spaces, the spaces do not occur only between words: within a word, spaces also appear between adjacent letters that are not connected. This type of rasm is old and no longer in common use.
^c. The sentence may not display correctly in some fonts. It appears as it should if the full Arabic character set from the Arial font is installed, or one of the SIL International[6] fonts Scheherazade[7] or Lateef,[8] or Katibeh.[9]
In writing, a space ( ) is a blank area that separates words, sentences, and other written or printed glyphs (characters). Conventions for spacing vary among languages, and in some languages the spacing rules are complex.[citation needed] Inter-word spaces ease the reader's task of identifying words, and avoid outright ambiguities such as "now here" vs. "nowhere". They also provide convenient guides for where a human or program may start new lines.
Typesetting can use spaces of varying widths, just as it can use graphic characters of varying widths. Unlike graphic characters, typeset spaces are commonly stretched in order to align text. A typewriter, on the other hand, typically has only one width for all characters, including spaces. Following widespread acceptance of the typewriter, some typewriter conventions influenced typography and the design of printed works.[citation needed]
Computer representation of text facilitates getting around mechanical and physical limitations such as character widths in at least two ways:
Modern English uses a space to separate words, but not all languages follow this practice. According to Paul Saenger in Space Between Words: The Origins of Silent Reading, Ancient Hebrew and Arabic did use spaces, partly to compensate in clarity for the lack of written vowels when no mater lectionis was used for a vowel, though in the Middle Ages they sometimes omitted spaces when vowel points were marked.[1] The earliest Greek script also used interpuncts to divide words rather than spacing, although this practice was soon displaced by scriptio continua. In Latin, spaces and interpuncts often came to be dropped in favor of scriptio continua, and spaces were not used to separate words again until roughly AD 600–800.
Word spacing was later used by Irish and Anglo-Saxon scribes, beginning after the creation of the Carolingian minuscule by Alcuin of York and the scribes' adoption of it. Spacing would become standard in Renaissance Italy and France, and then in Byzantium by the end of the 16th century; it then entered the Slavic languages in Cyrillic in the 17th century, and only in modern times entered modern Sanskrit.[2][dubious–discuss]
CJK languages do not use spaces when dealing with text containing mostly Chinese characters and kana. In Japanese, spaces may occasionally be used to separate people's family names from given names, to denote omitted particles (especially the topic particle wa), and for certain literary or artistic effects. Modern Korean, however, has spaces as an essential part of its writing system (because of Western influence), given the phonetic nature of the hangul script, which requires word dividers to avoid ambiguity, as opposed to Chinese characters, which are mostly very distinguishable from each other. In Korean, spaces are used to separate chunks of nouns, nouns and particles, adjectives, and verbs; for certain compounds or phrases, spaces may be used or not, as in the phrase for "Republic of Korea", usually spelled without a space as 대한민국 rather than with a space as 대한 민국.
Runic texts use either an interpunct-like or a colon-like punctuation mark to separate words. There are two Unicode characters dedicated for this: U+16EB ᛫ RUNIC SINGLE PUNCTUATION and U+16EC ᛬ RUNIC MULTIPLE PUNCTUATION.
Languages with a Latin-derived alphabet have used various methods of sentence spacing since the advent of movable type in the 15th century.
There has been some controversy regarding the proper amount of sentence spacing in typeset material. The Elements of Typographic Style states that only a single word space is required for sentence spacing.[21] Psychological studies suggest "readers benefit from having two spaces after periods".[22]
The International System of Units (SI) prescribes inserting a space between a number and a unit of measurement (the space being regarded as an implied multiplication sign) but never between a prefix and a base unit; a space (or a multiplication dot) should also be used between units in compound units.[23]
The only exception to this rule is the traditional symbolic notation of angles: degree (e.g., 30°), minute of arc (e.g., 22′), and second of arc (e.g., 8″).
The SI also prescribes the use of a space[24] (often typographically a thin space) as a thousands separator where required. Both the point and the comma are reserved as decimal markers.
Sometimes a narrow non-breaking space or a non-breaking space, respectively, is recommended (as in, for example, IEEE Standards[25] and IEC standards[26]) to avoid separating units from values, or parts of compound units from each other, through automatic line wrap and word wrap.
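As an illustrative sketch (not taken from any standard's reference code), a number can be formatted with the SI-style space separator in Python by using the narrow no-break space U+202F, which also prevents line wrapping inside the digit groups:

```python
# Sketch: replace Python's comma digit grouping with a narrow
# no-break space (U+202F), the SI-style thousands separator.
NNBSP = "\u202f"

def si_group(value):
    return f"{value:,}".replace(",", NNBSP)

print(si_group(299792458))  # '299 792 458', with narrow no-break spaces
```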
Unicode defines many variants of the whitespace character, with various properties; the more commonly encountered variations include:
In URLs, spaces are percent-encoded using their ASCII/UTF-8 representation, %20.
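For example, with Python's standard library (urllib.parse), the encoding and decoding round-trip looks like this:

```python
# Percent-encoding of spaces in URLs, using the standard library.
from urllib.parse import quote, unquote

print(quote("now here"))      # 'now%20here'
print(unquote("now%20here"))  # 'now here'
```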
Dot-decimal notation is a presentation format for numerical data. It consists of a string of decimal numbers, using the full stop (dot) as a separation character.[1]
A common use of dot-decimal notation is in information technology, where it is a method of writing numbers in octet-grouped base-10 (decimal) form.[2] In computer networking, Internet Protocol version 4 (IPv4) addresses are commonly written using the quad-dotted notation of four decimal integers, each ranging from 0 to 255.[3]
In computer networking, the notation is associated with the specific use of quad-dotted notation to represent IPv4 addresses[4] and is used as a synonym for dotted-quad notation.[5] Dot-decimal notation is a presentation format for numerical data expressed as a string of decimal numbers each separated by a full stop. For example, the hexadecimal number 0xFF000000 may be expressed in dot-decimal notation as 255.0.0.0.
An IPv4 address has 32 bits. For purposes of representation, the bits may be divided into four octets written as decimal numbers, ranging from 0 to 255, concatenated as a character string with full stop delimiters between each number.[3] This octet-grouped dotted-decimal format may more specifically be called "dotted octet" format,[6] or a "dotted quad address".[7]
For example, the address of the loopback interface, usually assigned the host name localhost, is 127.0.0.1. It consists of four octets, written in binary notation as 01111111, 00000000, 00000000, and 00000001. The 32-bit number is represented in hexadecimal notation as 0x7F000001.
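A small sketch of the octet-splitting arithmetic this paragraph describes; the helper names are illustrative:

```python
# Convert between a 32-bit integer and quad-dotted notation by
# splitting the value into four octets.

def to_dotted_quad(n):
    # Extract the four octets from most to least significant.
    return ".".join(str((n >> shift) & 0xFF) for shift in (24, 16, 8, 0))

def from_dotted_quad(s):
    octets = [int(part) for part in s.split(".")]
    assert len(octets) == 4 and all(0 <= o <= 255 for o in octets)
    return (octets[0] << 24) | (octets[1] << 16) | (octets[2] << 8) | octets[3]

print(to_dotted_quad(0x7F000001))          # '127.0.0.1'
print(hex(from_dotted_quad("255.0.0.0")))  # '0xff000000'
```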
No formal specification of this textual IP address representation exists.[6] The first mention of this format in RFC documents was in RFC 780 for the Mail Transfer Protocol, published in May 1981, in which the IP address was supposed to be enclosed in brackets or represented as a 32-bit decimal integer prefixed by a pound sign. A table in RFC 790 (Assigned Numbers) used the dotted decimal format, zero-padding each number to three digits.[6] RFC 1123 (Requirements for Internet Hosts – Application and Support) of October 1989 mentions a requirement for host software to accept "IP address in dotted-decimal ("#.#.#.#") form", although it notes "[t]his last requirement is not intended to specify the complete syntactic form for entering a dotted-decimal host number".[8] An IETF draft intended to define the textual representation of IP addresses expired without further activity.[6]
A popular implementation of IP networking, originating in 4.2BSD, contains a function inet_aton() for converting IP addresses in character string representation to internal binary storage. In addition to the basic four-decimals format and 32-bit numbers, it also supported intermediate syntax forms of octet.24bits (e.g. 10.1234567; for Class A addresses) and octet.octet.16bits (e.g. 172.16.12345; for Class B addresses). It also allowed the numbers to be written in hexadecimal and octal representations, by prefixing them with 0x and 0, respectively. These features continue to be supported in some software, even though they are considered non-standard.[6] This means addresses with a component written with a leading zero digit may be interpreted differently in programs that do or do not recognize such formats.[9]
A POSIX-conforming variant of inet_aton, the inet_pton() function, supports only the four-decimal variant of IP addresses.[10]
IP addresses in dot-decimal notation are also presented in CIDR notation, in which the IP address is suffixed with a slash and a number specifying the length of the associated routing prefix. For example, 127.0.0.1/8 specifies that the IP address has an eight-bit routing prefix, and therefore the subnet mask 255.0.0.0.
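Python's standard ipaddress module performs this prefix-to-mask derivation:

```python
# Derive the subnet mask from the prefix length in CIDR notation.
import ipaddress

net = ipaddress.ip_network("127.0.0.0/8")
print(net.netmask)    # 255.0.0.0
print(net.prefixlen)  # 8
```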
Object identifiers use a style of dot-decimal notation to represent an arbitrarily deep hierarchy of objects identified by decimal numbers. They may also use textual words separated by dots, like some computer languages (see inheritance).
Software releases are often given version numbers in dot-decimal notation, with the first digit designating major revisions and the subsequent digits designating progressively more minor releases. Version numbers with a leading zero, say "0.1.8", conventionally indicate that the software is still in beta and does not yet have complete features.
Libraries use notation systems consisting of decimal numbers separated by dots, such as the older Dewey Decimal Classification and the Universal Decimal Classification, to classify books and other works by subject. The UDC additionally codes works with multiple dot-decimal topics, separated by colons.[11]
Dot-decimal notation is also used to describe illnesses in a language-neutral way. For instance, the AO Foundation/Orthopaedic Trauma Association (AO/OTA) classification generates numeric codes for describing broken toes.[12] They run 88 [meaning a fracture of the phalanges].[number-code of toe, with the big toe = 1 and the little toe = 5].[number-code of phalanx, counting 1–3 outwards from the foot].[number-code of location on the bone, with 1 being the inner end, 3 the outer, and 2 in between].[12] So, for instance, 88.5.3.2 means a fracture to the little toe's outermost bone, in the center.[12] There are other classifications for other fractures and dislocations.[13]
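A hedged sketch of decoding such a code, using only the mappings described above; the anatomical labels for the phalanx positions are my own illustrative naming, not part of the cited classification text:

```python
# Sketch of decoding an AO/OTA-style toe-fracture code from the
# field meanings described above; the label strings are illustrative.
TOES = {1: "big toe", 2: "second toe", 3: "third toe",
        4: "fourth toe", 5: "little toe"}
PHALANGES = {1: "proximal", 2: "middle", 3: "distal"}  # counting outwards from the foot
LOCATIONS = {1: "inner end", 2: "center", 3: "outer end"}

def describe(code):
    region, toe, phalanx, location = (int(p) for p in code.split("."))
    assert region == 88, "88 denotes a fracture of the phalanges"
    return f"{LOCATIONS[location]} of the {PHALANGES[phalanx]} phalanx, {TOES[toe]}"

print(describe("88.5.3.2"))  # center of the distal phalanx, little toe
```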
The International System of Units, internationally known by the abbreviation SI (from French Système international d'unités), is the modern form of the metric system and the world's most widely used system of measurement. It is the only system of measurement with official status in nearly every country in the world, employed in science, technology, industry, and everyday commerce. The SI is coordinated by the International Bureau of Weights and Measures, abbreviated BIPM from French: Bureau international des poids et mesures.
The SI comprises a coherent system of units of measurement starting with seven base units, which are the second (symbol s, the unit of time), metre (m, length), kilogram (kg, mass), ampere (A, electric current), kelvin (K, thermodynamic temperature), mole (mol, amount of substance), and candela (cd, luminous intensity). The system can accommodate coherent units for an unlimited number of additional quantities. These are called coherent derived units, which can always be represented as products of powers of the base units. Twenty-two coherent derived units have been provided with special names and symbols.
The seven base units and the 22 coherent derived units with special names and symbols may be used in combination to express other coherent derived units. Since the sizes of coherent units will be convenient for only some applications and not for others, the SI provides twenty-four prefixes which, when added to the name and symbol of a coherent unit, produce twenty-four additional (non-coherent) SI units for the same quantity; these non-coherent units are always decimal (i.e. power-of-ten) multiples and sub-multiples of the coherent unit.
The current way of defining the SI is the result of a decades-long move towards increasingly abstract and idealised formulation in which the realisations of the units are separated conceptually from the definitions. A consequence is that as science and technology develop, new and superior realisations may be introduced without the need to redefine the unit. One problem with artefacts is that they can be lost, damaged, or changed; another is that they introduce uncertainties that cannot be reduced by advancements in science and technology.
The original motivation for the development of the SI was the diversity of units that had sprung up within the centimetre–gram–second (CGS) systems (specifically the inconsistency between the systems of electrostatic units and electromagnetic units) and the lack of coordination between the various disciplines that used them. The General Conference on Weights and Measures (French: Conférence générale des poids et mesures – CGPM), which was established by the Metre Convention of 1875, brought together many international organisations to establish the definitions and standards of a new system and to standardise the rules for writing and presenting measurements. The system was published in 1960 as a result of an initiative that began in 1948, and is based on the metre–kilogram–second system of units (MKS) combined with ideas from the development of the CGS system.
The International System of Units consists of a set of seven defining constants with seven corresponding base units, derived units, and a set of decimal-based multipliers that are used as prefixes.[1]: 125
The seven defining constants are the most fundamental feature of the definition of the system of units.[1]: 125 The magnitudes of all SI units are defined by declaring that seven constants have certain exact numerical values when expressed in terms of their SI units. These defining constants are the speed of light in vacuum c, the hyperfine transition frequency of caesium ΔνCs, the Planck constant h, the elementary charge e, the Boltzmann constant k, the Avogadro constant NA, and the luminous efficacy Kcd. The nature of the defining constants ranges from fundamental constants of nature such as c to the purely technical constant Kcd. The values assigned to these constants were fixed to ensure continuity with previous definitions of the base units.[1]: 128
The SI selects seven units to serve as base units, corresponding to seven base physical quantities. They are the second, with the symbol s, which is the SI unit of the physical quantity of time; the metre, symbol m, the SI unit of length; kilogram (kg, the unit of mass); ampere (A, electric current); kelvin (K, thermodynamic temperature); mole (mol, amount of substance); and candela (cd, luminous intensity).[1] The base units are defined in terms of the defining constants. For example, the kilogram is defined by taking the Planck constant h to be 6.62607015×10⁻³⁴ J⋅s, which yields an expression for the kilogram in terms of the defining constants.[1]: 131
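The displayed equation itself did not survive here; the following is a reconstruction from the stated value of h (matching, to the best of my knowledge, the form used in the SI Brochure):

```latex
% Reconstruction: the kilogram expressed via the Planck constant.
% Since h = 6.62607015e-34 kg m^2 s^-1 exactly, solving for kg gives:
\[
  1\,\mathrm{kg}
    = \left(\frac{h}{6.626\,070\,15\times 10^{-34}}\right)\mathrm{m^{-2}\,s}
\]
```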
All units in the SI can be expressed in terms of the base units, and the base units serve as a preferred set for expressing or analysing the relationships between units. The choice of which and even how many quantities to use as base quantities is not fundamental or even unique – it is a matter of convention.[1]: 126
The system allows for an unlimited number of additional units, called derived units, which can always be represented as products of powers of the base units, possibly with a nontrivial numeric multiplier. When that multiplier is one, the unit is called a coherent derived unit. For example, the coherent derived SI unit of velocity is the metre per second, with the symbol m/s.[1]: 139 The base and coherent derived units of the SI together form a coherent system of units (the set of coherent SI units). A useful property of a coherent system is that when the numerical values of physical quantities are expressed in terms of the units of the system, then the equations between the numerical values have exactly the same form, including numerical factors, as the corresponding equations between the physical quantities.[3]: 6
Twenty-two coherent derived units have been provided with special names and symbols as shown in the table below. The radian and steradian have no base units but are treated as derived units for historical reasons.[1]: 137
The derived units in the SI are formed by powers, products, or quotients of the base units and are unlimited in number.[1]: 138[4]: 14, 16
Derived units apply to some derived quantities, which may by definition be expressed in terms of base quantities, and thus are not independent; for example, electrical conductance is the inverse of electrical resistance, with the consequence that the siemens is the inverse of the ohm, and similarly, the ohm and siemens can be replaced with a ratio of an ampere and a volt, because those quantities bear a defined relationship to each other.[b] Other useful derived quantities can be specified in terms of the SI base and derived units but have no named units in the SI, such as acceleration, which has the SI unit m/s².[1]: 139
A combination of base and derived units may be used to express a derived unit. For example, the SI unit of force is the newton (N), and the SI unit of pressure is the pascal (Pa), which can be defined as one newton per square metre (N/m²).[5]
Like all metric systems, the SI uses metric prefixes to systematically construct, for the same physical quantity, a set of units that are decimal multiples of each other over a wide range. For example, driving distances are normally given in kilometres (symbol km) rather than in metres. Here the metric prefix 'kilo-' (symbol 'k') stands for a factor of 1000; thus, 1 km = 1000 m.
The SI provides twenty-four metric prefixes that signify decimal powers ranging from 10⁻³⁰ to 10³⁰, the most recent being adopted in 2022.[1]: 143–144[6][7][8] Most prefixes correspond to integer powers of 1000; the only ones that do not are those for 10, 1/10, 100, and 1/100.
The conversion between different SI units for one and the same physical quantity is always through a power of ten. This is why the SI (and metric systems more generally) are called decimal systems of measurement units.[9]
The grouping formed by a prefix symbol attached to a unit symbol (e.g. 'km', 'cm') constitutes a new inseparable unit symbol. This new symbol can be raised to a positive or negative power. It can also be combined with other unit symbols to form compound unit symbols.[1]: 143 For example, g/cm³ is an SI unit of density, where cm³ is to be interpreted as (cm)³.
Prefixes are added to unit names to produce multiples and submultiples of the original unit. All of these are integer powers of ten, and above a hundred or below a hundredth all are integer powers of a thousand. For example, kilo- denotes a multiple of a thousand and milli- denotes a multiple of a thousandth, so there are one thousand millimetres to the metre and one thousand metres to the kilometre. The prefixes are never combined, so for example a millionth of a metre is a micrometre, not a millimillimetre. Multiples of the kilogram are named as if the gram were the base unit, so a millionth of a kilogram is a milligram, not a microkilogram.[10]: 122[11]: 14
The BIPM specifies 24 prefixes for the International System of Units (SI):
The base units and the derived units formed as the product of powers of the base units with a numerical factor of one form a coherent system of units. Every physical quantity has exactly one coherent SI unit. For example, 1 m/s = (1 m) / (1 s) is the coherent derived unit for velocity.[1]: 139 With the exception of the kilogram (for which the prefix kilo- is required for a coherent unit), when prefixes are used with the coherent SI units, the resulting units are no longer coherent, because the prefix introduces a numerical factor other than one.[1]: 137 For example, the metre, kilometre, centimetre, nanometre, etc. are all SI units of length, though only the metre is a coherent SI unit. The complete set of SI units consists of both the coherent set and the multiples and sub-multiples of coherent units formed by using the SI prefixes.[1]: 138
The kilogram is the only coherent SI unit whose name and symbol include a prefix. For historical reasons, the names and symbols for multiples and sub-multiples of the unit of mass are formed as if the gram were the base unit. Prefix names and symbols are attached to the unit name gram and the unit symbol g respectively. For example, 10⁻⁶ kg is written milligram and mg, not microkilogram and μkg.[1]: 144
Several different quantities may share the same coherent SI unit. For example, the joule per kelvin (symbol J/K) is the coherent SI unit for two distinct quantities: heat capacity and entropy; another example is the ampere, which is the coherent SI unit for both electric current and magnetomotive force. This illustrates why it is important not to use the unit alone to specify the quantity. As the SI Brochure states,[1]: 140 "this applies not only to technical texts, but also, for example, to measuring instruments (i.e. the instrument read-out needs to indicate both the unit and the quantity measured)".
Furthermore, the same coherent SI unit may be a base unit in one context, but a coherent derived unit in another. For example, the ampere is a base unit when it is a unit of electric current, but a coherent derived unit when it is a unit of magnetomotive force.[1]: 140
According to the SI Brochure,[1]: 148 unit names should be treated as common nouns of the context language. This means that they should be typeset in the same character set as other common nouns (e.g. Latin alphabet in English, Cyrillic script in Russian, etc.), following the usual grammatical and orthographical rules of the context language. For example, in English and French, even when the unit is named after a person and its symbol begins with a capital letter, the unit name in running text should start with a lowercase letter (e.g., newton, hertz, pascal) and is capitalised only at the beginning of a sentence and in headings and publication titles. As a nontrivial application of this rule, the SI Brochure notes[1]: 148 that the name of the unit with the symbol °C is correctly spelled as 'degree Celsius': the first letter of the name of the unit, 'd', is in lowercase, while the modifier 'Celsius' is capitalised because it is a proper name.[1]: 148
The English spelling and even names for certain SI units, prefixes and non-SI units depend on the variety of English used. US English uses the spellings deka-, meter, and liter, while International English uses deca-, metre, and litre. The name of the unit whose symbol is t and which is defined by 1 t = 10³ kg is 'metric ton' in US English and 'tonne' in International English.[4]: iii
Symbols of SI units are intended to be unique and universal, independent of the context language.[10]: 130–135 The SI Brochure has specific rules for writing them.[10]: 130–135
In addition, the SI Brochure provides style conventions covering, among other aspects of displaying quantities and units, the quantity symbols, the formatting of numbers and the decimal marker, the expression of measurement uncertainty, the multiplication and division of quantity symbols, and the use of pure numbers and various angles.[1]: 147
In the United States, the guideline produced by the National Institute of Standards and Technology (NIST)[11]: 37 clarifies language-specific details for American English that were left unclear by the SI Brochure, but is otherwise identical to the SI Brochure.[14] For example, since 1979, the litre may exceptionally be written using either an uppercase "L" or a lowercase "l", a decision prompted by the similarity of the lowercase letter "l" to the numeral "1", especially with certain typefaces or English-style handwriting. NIST recommends that within the United States, "L" be used rather than "l".[11]
Metrologists carefully distinguish between the definition of a unit and its realisation. The SI units are defined by declaring that seven defining constants[1]: 125–129 have certain exact numerical values when expressed in terms of their SI units. The realisation of the definition of a unit is the procedure by which the definition may be used to establish the value and associated uncertainty of a quantity of the same kind as the unit.[1]: 135
For each base unit the BIPM publishes a mise en pratique (French for 'putting into practice; implementation'[16]) describing the current best practical realisations of the unit.[17] The separation of the defining constants from the definitions of units means that improved measurements can be developed, leading to changes in the mises en pratique as science and technology develop, without having to revise the definitions.
The published mise en pratique is not the only way in which a base unit can be determined: the SI Brochure states that "any method consistent with the laws of physics could be used to realise any SI unit".[10]: 111 Various consultative committees of the CIPM decided in 2016 that more than one mise en pratique would be developed for determining the value of each unit.[18] These methods include the following:
The International System of Units, or SI,[1]: 123 is a decimal and metric system of units established in 1960 and periodically updated since then. The SI has an official status in most countries, including the United States, Canada, and the United Kingdom, although these three countries are among the handful of nations that, to various degrees, also continue to use their customary systems. Nevertheless, with this nearly universal level of acceptance, the SI "has been used around the world as the preferred system of units, the basic language for science, technology, industry, and trade."[1]: 123, 126
The only other types of measurement system that still have widespread use across the world are the imperial and US customary measurement systems. The international yard and pound are defined in terms of the SI.[22]
The quantities and equations that provide the context in which the SI units are defined are now referred to as the International System of Quantities (ISQ).
The ISQ is based on the quantities underlying each of the seven base units of the SI. Other quantities, such as area, pressure, and electrical resistance, are derived from these base quantities by clear, non-contradictory equations. The ISQ defines the quantities that are measured with the SI units.[23] The ISQ is formalised, in part, in the international standard ISO/IEC 80000, which was completed in 2009 with the publication of ISO 80000-1,[24] and was largely revised in 2019–2020.[25]
The SI is regulated and continually developed by three international organisations that were established in 1875 under the terms of the Metre Convention. They are the General Conference on Weights and Measures (CGPM[c]),[26] the International Committee for Weights and Measures (CIPM[d]), and the International Bureau of Weights and Measures (BIPM[e]). All the decisions and recommendations concerning units are collected in a brochure called The International System of Units (SI),[1] which is published in French and English by the BIPM and periodically updated. The writing and maintenance of the brochure is carried out by one of the committees of the CIPM. The definitions of the terms "quantity", "unit", "dimension", etc. that are used in the SI Brochure are those given in the international vocabulary of metrology.[27] The brochure leaves some scope for local variations, particularly regarding unit names and terms in different languages. For example, the United States' National Institute of Standards and Technology (NIST) has produced a version of the CGPM document (NIST SP 330), which clarifies usage for English-language publications that use American English.[4]
The concept of a system of units emerged a hundred years before the SI.
In the 1860s, James Clerk Maxwell, William Thomson (later Lord Kelvin), and others working under the auspices of the British Association for the Advancement of Science, building on previous work of Carl Gauss, developed the centimetre–gram–second system of units, or CGS system, in 1874. The system formalised the concept of a collection of related units called a coherent system of units. In a coherent system, base units combine to define derived units without extra factors.[4]: 2 For example, using the metre per second is coherent in a system that uses the metre for length and the second for time, but the kilometre per hour is not coherent. The principle of coherence was successfully used to define a number of units of measure based on the CGS, including the erg for energy, the dyne for force, the barye for pressure, the poise for dynamic viscosity and the stokes for kinematic viscosity.[29]
A French-inspired initiative for international cooperation in metrology led to the signing in 1875 of the Metre Convention, also called the Treaty of the Metre, by 17 nations.[f][30]: 353–354 The General Conference on Weights and Measures (French: Conférence générale des poids et mesures – CGPM), which was established by the Metre Convention,[29] brought together many international organisations to establish the definitions and standards of a new system and to standardise the rules for writing and presenting measurements.[31]: 37[32] Initially the convention only covered standards for the metre and the kilogram. This became the foundation of the MKS system of units.[4]: 2
At the close of the 19th century three different systems of units of measure existed for electrical measurements: a CGS-based system for electrostatic units, also known as the Gaussian or ESU system, a CGS-based system for electromechanical units (EMU), and an International system based on units defined by the Metre Convention[33] for electrical distribution systems. Attempts to resolve the electrical units in terms of length, mass, and time using dimensional analysis were beset with difficulties – the dimensions depended on whether one used the ESU or EMU system.[34] This anomaly was resolved in 1901 when Giovanni Giorgi published a paper in which he advocated using a fourth base unit alongside the existing three base units. The fourth unit could be chosen to be electric current, voltage, or electrical resistance.[35]
Electric current, with the named unit 'ampere', was chosen as the base unit, and the other electrical quantities were derived from it according to the laws of physics. When combined with the MKS system, the new system, known as MKSA, was approved in 1946.[4]
In 1948, the 9th CGPM commissioned a study to assess the measurement needs of the scientific, technical, and educational communities and "to make recommendations for a single practical system of units of measurement, suitable for adoption by all countries adhering to the Metre Convention".[36] This working document was Practical system of units of measurement. Based on this study, the 10th CGPM in 1954 defined an international system derived from six base units: the metre, kilogram, second, ampere, degree Kelvin, and candela.
The 9th CGPM also approved the first formal recommendation for the writing of symbols in the metric system when the basis of the rules as they are now known was laid down.[37]These rules were subsequently extended and now cover unit symbols and names, prefix symbols and names, how quantity symbols should be written and used, and how the values of quantities should be expressed.[10]: 104, 130
The 10th CGPM in 1954 resolved to create an international system of units,[31]: 41 and in 1960 the 11th CGPM adopted the International System of Units, abbreviated SI from the French name Le Système international d'unités, which included a specification for units of measurement.[10]: 110
The International Bureau of Weights and Measures (BIPM) has described SI as "the modern form of metric system".[10]: 95 In 1971 the mole became the seventh base unit of the SI.[4]: 2
After the metre was redefined in 1960, the International Prototype of the Kilogram (IPK) was the only physical artefact upon which base units (directly the kilogram and indirectly the ampere, mole and candela) depended for their definition, making these units subject to periodic comparisons of national standard kilograms with the IPK.[38] During the 2nd and 3rd Periodic Verification of National Prototypes of the Kilogram, a significant divergence had occurred between the mass of the IPK and all of its official copies stored around the world: the copies had all noticeably increased in mass with respect to the IPK. During extraordinary verifications carried out in 2014 preparatory to the redefinition of metric standards, continuing divergence was not confirmed. Nonetheless, the residual and irreducible instability of a physical IPK undermined the reliability of the entire metric system to precision measurement from small (atomic) to large (astrophysical) scales.[39] By avoiding the use of an artefact to define units, all issues with the loss, damage, and change of the artefact are avoided.[1]: 125
A proposal was made that:[40]
The new definitions were adopted at the 26th CGPM on 16 November 2018, and came into effect on 20 May 2019.[41]The change was adopted by the European Union through Directive (EU) 2019/1258.[42]
Prior to its redefinition in 2019, the SI was defined through the seven base units from which the derived units were constructed as products of powers of the base units. After the redefinition, the SI is defined by fixing the numerical values of seven defining constants. This has the effect that the distinction between the base units and derived units is, in principle, not needed, since all units, base as well as derived, may be constructed directly from the defining constants. Nevertheless, the distinction is retained because "it is useful and historically well established", and also because the ISO/IEC 80000 series of standards, which define the International System of Quantities (ISQ), specifies base and derived quantities that necessarily have the corresponding SI units.[1]: 129
Many non-SI units continue to be used in the scientific, technical, and commercial literature. Some units are deeply embedded in history and culture, and their use has not been entirely replaced by their SI alternatives. The CIPM recognised and acknowledged such traditions by compiling a list of non-SI units accepted for use with the SI,[10] including the hour, minute, degree of angle, litre, and decibel.
This is a list of units that are not defined as part of the International System of Units (SI) but are otherwise mentioned in the SI Brochure,[43] listed as being accepted for use alongside SI units, or for explanatory purposes.
The SI prefixes can be used with several of these units, but not, for example, with the non-SI units of time.
Others, in order to be converted to the corresponding SI unit, require conversion factors that are not powers of ten. Some common examples of such units are the customary units of time, namely the minute (conversion factor of 60 s/min, since 1 min = 60 s), the hour (3600 s), and the day (86400 s); the degree (for measuring plane angles, 1° = (π/180) rad); and the electronvolt (a unit of energy, 1 eV = 1.602176634×10⁻¹⁹ J).[43]
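These conversion factors can be checked with a few lines of Python; the constant names are illustrative, not drawn from any standard library:

```python
# Quick numeric check of the non-SI conversion factors listed above.
import math

MINUTE_S = 60             # 1 min = 60 s
HOUR_S = 3600             # 1 h = 3600 s
DAY_S = 86_400            # 1 d = 86 400 s
EV_J = 1.602_176_634e-19  # 1 eV in joules (exact since 2019)

def deg_to_rad(degrees):
    # 1 degree = (pi / 180) rad
    return degrees * math.pi / 180

print(24 * HOUR_S == DAY_S)  # True
print(deg_to_rad(180))       # 3.141592653589793 (= pi)
print(5000 * EV_J)           # energy of 5 keV expressed in joules
```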
Although the term metric system is often used as an informal alternative name for the International System of Units,[46] other metric systems exist, some of which were in widespread use in the past or are even still used in particular areas. There are also individual metric units such as the sverdrup and the darcy that exist outside of any system of units. Most of the units of the other metric systems are not recognised by the SI.
Sometimes, SI unit name variations are introduced, mixing information about the corresponding physical quantity or the conditions of its measurement; however, this practice is unacceptable with the SI. "Unacceptability of mixing information with units: When one gives the value of a quantity, any information concerning the quantity or its conditions of measurement must be presented in such a way as not to be associated with the unit."[10]Instances include: "watt-peak" and "watt RMS"; "geopotential metre" and "vertical metre"; "standard cubic metre"; "atomic second", "ephemeris second", and "sidereal second".
[1] This article incorporates text from this source, which is available under the CC BY 3.0 license.
International standard ISO 2145 defines a typographic convention for the "numbering of divisions and subdivisions in written documents". It applies to any kind of document, including manuscripts, books, journal articles, and standards.
The ISO 2145 numbering scheme is defined by the following rules:
A table of contents might look like:
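The example itself is missing here; an illustration consistent with the convention described (Arabic numerals separated by full stops, with no terminal punctuation) might look like:

1 Introduction
1.1 Background
1.2 Scope
2 Method
2.1 Apparatus
2.1.1 Calibration
2.2 Procedure
3 Results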
Division and subdivision numbers are cited in written text as in:
In spoken language, the full stops are omitted.
The RKM code,[1] also referred to as the "letter and numeral code for resistance and capacitance values and tolerances",[1] the "letter and digit code for resistance and capacitance values and tolerances",[2][3] or informally as "R notation",[4][5][6][7][8][9] is a notation to specify resistor and capacitor values defined in the international standard IEC 60062 (formerly IEC 62) since 1952. Other standards, including DIN 40825 (1973), BS 1852 (1975),[10] IS 8186 (1976), and EN 60062 (1993), have also accepted it. The updated IEC 60062:2016,[1] amended in 2019, comprises the most recent release of the standard.
Originally meant also as a part marking code, this shorthand notation is widely used in electrical engineering to denote the values of resistors and capacitors in circuit diagrams and in the production of electronic circuits (for example in bills of material and in silk screens). This method avoids overlooking the decimal separator, which may not be rendered reliably on components or when duplicating documents.
The standards also define a color code for fixed resistors.
For brevity, the notation does not always specify the unit (ohm or farad) explicitly; instead it relies on implicit knowledge drawn from the use of specific letters either only for resistors or only for capacitors,[nb 2] the case used (uppercase letters are typically used for resistors, lowercase letters for capacitors),[nb 3] a part's appearance, and the context.
The notation also avoids using a decimal separator and replaces it by a letter associated with the prefix symbol for the particular value.[nb 4]
This is not only for brevity (for example when printed on the part or PCB), but also to circumvent the problem that decimal separators tend to "disappear" when photocopying printed circuit diagrams.
Another advantage is the easier sortability of values, which helps to optimize the bill of materials by combining similar part values to improve maintainability and reduce costs.[nb 5]
The code letters are loosely related to the corresponding SI prefix, but there are several exceptions, where the capitalization differs or alternative letters are used.
For example, 8K2 indicates a resistor value of 8.2 kΩ. Additional zeros imply tighter tolerance, for example 15M0.[12]
When the value can be expressed without the need for a prefix, an R or F is used instead of the decimal separator. For example, 1R2 indicates 1.2 Ω, and 18R indicates 18 Ω.
For resistances, the standard dictates that the uppercase letters L (for 10⁻³), R (for 10⁰ = 1), K (for 10³), M (for 10⁶), and G (for 10⁹) be used instead of the decimal point.[12]
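A sketch of a parser for such resistance codes, covering only the five multiplier letters named above (later additions such as T are omitted); this is an illustration, not a normative implementation:

```python
# Sketch of an RKM-code parser for resistances. The letter both
# selects the multiplier and stands in for the decimal separator:
# '8K2' means 8.2 * 1000 ohms.
MULTIPLIERS = {"L": 1e-3, "R": 1.0, "K": 1e3, "M": 1e6, "G": 1e9}

def parse_rkm(code):
    for letter, factor in MULTIPLIERS.items():
        if letter in code:
            whole, _, frac = code.partition(letter)
            return float((whole or "0") + "." + (frac or "0")) * factor
    raise ValueError(f"no RKM multiplier letter in {code!r}")

print(parse_rkm("8K2"))   # 8.2 kΩ, i.e. about 8200 ohms
print(parse_rkm("1R2"))   # 1.2 ohms
print(parse_rkm("18R"))   # 18.0 ohms
```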
The usage of the letter R instead of the SI unit symbol Ω for ohms stems from the fact that the Greek letter Ω is absent from most older character encodings (though it is present in the now-ubiquitous Unicode) and therefore is sometimes impossible to reproduce, in particular in some CAD/CAM environments. The letter R was chosen because visually it loosely resembles the Ω glyph, and also because it works nicely as a mnemonic for resistance in many languages.[citation needed]
The letters G and T were not part of the first issue of the standard, which pre-dates the introduction of the SI system (hence the name "RKM code"), but were added after the adoption of the corresponding SI prefixes.
The introduction of the letter L in more recent issues of the standard (instead of the SI prefix m for milli) is justified by the rule of using only uppercase letters for resistances (the otherwise resulting M was already in use for mega).
Similarly, the standard prescribes the following lowercase letters for capacitances to be used instead of the decimal point: p (for 10⁻¹²), n (for 10⁻⁹), μ (for 10⁻⁶), and m (for 10⁻³), but uppercase F (for 10⁰ = 1) for farad.
The letters p and n were not part of the first issue of the standard, but were added after the adoption of the corresponding SI prefixes.
In cases where the Greek letter μ is not available, the standard allows it to be replaced by u (or U, when only uppercase letters are available). This usage of u instead of μ is also in line with ISO 2955 (1974,[14] 1983[15]), DIN 66030 (Vornorm 1973;[16] 1980,[17][18] 2002[19]), BS 6430 (1983) and Health Level 7 (HL7),[20] which allow the prefix μ to be substituted by the letter u (or U) in circumstances in which only the Latin alphabet is available.
Several manufacturers of resistors utilize the RKM code as part of the components' manufacturer's part numbers (MPNs).[21][22]
Though non-standard, some manufacturers also use the RKM code to mark inductors, with R indicating the decimal point in microhenry (e.g. 4R7 for 4.7 μH).[23][24]
A similar non-standard notation using the unit symbol instead of a decimal separator is sometimes used to indicate voltages (e.g. 0V8 for 0.8 V, 1V8 for 1.8 V, 3V3 for 3.3 V, or 5V0 for 5.0 V[25][26][27][28][29][30]) in contexts where a decimal separator would be impossible to use or inappropriate (e.g. in signal or pin names, in variable names, in file names, or in labels or subscripts). Alternatively, the letter P (presumably standing for "positive voltage" or "power supply rail")[nb 6] is sometimes used instead of the V in device models and net names (e.g. 1P8 for 1.8 V, 3P3 for 3.3 V).[31][32][33][34][35][36] Both variants are also used as part of the MPN codes of zener diodes[27][37] and voltage regulators[36] by some manufacturers.
Letter code for resistance and capacitance tolerances:
Before the introduction of the RKM code, some of the letters for symmetrical tolerances (viz. G, J, K, M) were already used in US military contexts following the American War Standard (AWS) and Joint Army-Navy Specifications (JAN) since the mid-1940s.[38]
Letter codes for the temperature coefficient of resistance (TCR):
Example: J8 = August 2017 (or August 1997)
Some manufacturers also used the production date code as a stand-alone code to indicate the production date of integrated circuits.[44]
Some manufacturers specify a three-character date code with a two-digit week number following the year letter.[45]
IEC 60062 also specifies a two-character year/month code.
Example: 78 = August 2017
IEC 60062 also specifies a four-character year/week code.
IEC 60062 also specifies a single-character four-year cycle year/month code.[nb 11]
For resistances following the (E48 or) E96 series of preferred values, the former EIA-96 as well as IEC 60062:2016 define a special three-character marking code for resistors to be used on small parts. The code consists of two digits denoting one of the "positions" in the series of E96 values, followed by a letter indicating the multiplier.[12]
For capacitances following the (E3, E6, E12 or) E24 series of preferred values, the former ANSI/EIA-198-D:1991, ANSI/EIA-198-1-E:1998 and ANSI/EIA-198-1-F:2002 as well as the amendment IEC 60062:2016/AMD1:2019 to IEC 60062 define a special two-character marking code for capacitors for very small parts which leave no room to print any longer codes onto them. The code consists of an uppercase letter denoting the two significant digits of the value, followed by a digit indicating the multiplier. The EIA standard also defines a number of lowercase letters to specify a number of values not found in E24.[46]
Software versioning is the process of assigning either unique version names or unique version numbers to unique states of computer software. Within a given version number category (e.g., major or minor), these numbers are generally assigned in increasing order and correspond to new developments in the software. At a fine-grained level, revision control is used for keeping track of incrementally different versions of information, whether or not this information is computer software, in order to be able to roll any changes back.
Modern computer software is often tracked using two different software versioning schemes: an internal version number that may be incremented many times in a single day, such as a revision control number, and a release version that typically changes far less often, such as semantic versioning[1] or a project code name.
File numbers were used especially in public administration, as well as in companies, to uniquely identify files or cases. For computer files this practice was introduced for the first time with MIT's ITS file system, and later with the TENEX file system for the PDP-10 in 1972.[2]
Later, lists of files including their versions were added, along with the dependencies among them. Linux distributions like Debian, with its dpkg, created package management software early on that could resolve dependencies between their packages. Debian's first approach was for a package to record the other packages that depended on it. From 1994 onward this idea was inverted, so that a package instead recorded the packages it needed. When installing a package, dependency resolution was used to automatically calculate the packages needed as well, and to install them with the desired package. To facilitate upgrades, minimum package versions were introduced; thus the numbering scheme needed to make clear which version was newer than the required one.[3][4][5]
A variety of version numbering schemes have been created to keep track of different versions of a piece of software. The ubiquity of computers has also led to these schemes being used in contexts outside computing.
In sequence-based software versioning schemes, each software release is assigned a unique identifier that consists of one or more sequences of numbers or letters.[6] This is the extent of the commonality; schemes vary widely in areas such as the number of sequences, the attribution of meaning to individual sequences, and the means of incrementing the sequences.
In some schemes, sequence-based identifiers are used to convey the significance of changes between releases. Changes are classified by significance level, and the decision of which sequence to change between releases is based on the significance of the changes from the previous release, whereby the first sequence is changed for the most significant changes, and changes to sequences after the first represent changes of decreasing significance.
Depending on the scheme, significance may be assessed by lines of code changed, function points added or removed, the potential impact on customers in terms of work required to adopt a new version, risk of bugs or undeclared breaking changes, degree of changes in visual layout, the number of new features, or almost anything the product developers or marketers deem to be significant, including marketing desire to stress the "relative goodness" of the new version.
Semantic versioning (aka SemVer)[1] is a widely adopted version scheme[7] that encodes a version by a three-part version number (Major.Minor.Patch), an optional pre-release tag, and an optional build meta tag. In this scheme, risk and functionality are the measures of significance. Breaking changes are indicated by increasing the major number (high risk); new, non-breaking features increment the minor number (medium risk); and all other non-breaking changes increment the patch number (lowest risk). The presence of a pre-release tag (-alpha, -beta) indicates substantial risk, as does a major number of zero (0.y.z), which is used to indicate a work in progress that may contain any level of potentially breaking changes (highest risk). As an example of inferring compatibility from a SemVer version, software which relies on version 2.1.5 of an API is compatible with version 2.2.3, but not necessarily with 3.2.4.
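A minimal sketch of this compatibility inference; it handles only the three-part core plus optional tags and is not a full SemVer implementation:

```python
# Sketch of SemVer-style compatibility checking: equal (and nonzero)
# major versions imply compatibility, provided the offered version is
# at least the required one.

def parse_semver(version):
    # Strip optional pre-release (-...) and build metadata (+...) tags.
    core = version.split("+")[0].split("-")[0]
    major, minor, patch = (int(x) for x in core.split("."))
    return major, minor, patch

def compatible(required, offered):
    req, off = parse_semver(required), parse_semver(offered)
    return req[0] == off[0] and req[0] != 0 and off >= req

print(compatible("2.1.5", "2.2.3"))  # True
print(compatible("2.1.5", "3.2.4"))  # False (major version changed)
```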
Developers may choose to jump multiple minor versions at a time to indicate that significant features have been added, but not enough to warrant incrementing a major version number; for example, Internet Explorer 5 went from 5.1 to 5.5, and Adobe Photoshop from 5 to 5.5. This may be done to emphasize the value of the upgrade to the software user or, as in Adobe's case, to represent a release halfway between major versions (although levels of sequence-based versioning are not necessarily limited to a single digit, as in Blender version 2.91 or Minecraft Java Edition starting from 1.7.10).
A different approach is to use the major and minor numbers along with an alphanumeric string denoting the release type, e.g. "alpha" (a), "beta" (b), or "release candidate" (rc). A software release train using this approach might look like 0.5, 0.6, 0.7, 0.8, 0.9 → 1.0b1, 1.0b2 (with some fixes), 1.0b3 (with more fixes) → 1.0rc1 (which, if it is stable enough, becomes the release), 1.0rc2 (if more bugs are found) → 1.0. It is a common practice in this scheme to lock out new features and breaking changes during the release candidate phases and, for some teams, even betas are locked down to bug fixes only, to ensure convergence on the target release.
Other schemes impart meaning on individual sequences:
Again, in these examples, the definition of what constitutes a "major" as opposed to a "minor" change is entirely subjective and up to the author, as is what defines a "build", or how a "revision" differs from a "minor" change.
Shared libraries in Solaris and Linux may use the current.revision.age format, where:[8][9]
A similar problem of relative change significance and versioning nomenclature exists in book publishing, where edition numbers or names can be chosen based on varying criteria.
In most proprietary software, the first released version of a software product has version 1.[citation needed]
Some projects use the major version number to indicate incompatible releases. Two examples are Apache Portable Runtime (APR)[10] and the FarCry CMS.[11]
Often programmers write new software to be backward compatible, i.e., the new software is designed to interact correctly with older versions of the software (using old protocols and file formats) and with the most recent version (using the latest protocols and file formats). For example, IBM z/OS is designed to work properly with 3 consecutive major versions of the operating system running in the same sysplex.
This enables people who run a high-availability computer cluster to keep most of the computers up and running while one machine at a time is shut down, upgraded, and restored to service.[12]
Often packet headers and file formats include a version number – sometimes the same as the version number of the software that wrote them, and sometimes a "protocol version number" independent of the software version number.
The code to handle old deprecated protocols and file formats is often seen as cruft.
Software in the experimental stage (alpha or beta) often uses a zero in the first ("major") position of the sequence to designate its status. However, this scheme is only useful for the early stages, not for upcoming releases with established software where the version number has already progressed past 0.[1]
A number of schemes are used to denote the status of a newer release:
The two purely numeric forms remove the special logic required to handle the comparison of "alpha < beta < rc < no prefix" as found in semantic versioning, at the cost of clarity.
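As a sketch of that special logic, the following Python snippet implements one possible "alpha < beta < rc < no prefix" ordering; the rank table and the simple version syntax accepted here are assumptions for illustration, not the full semantic versioning grammar.

```python
# One way to implement the "alpha < beta < rc < no prefix" ordering
# required for pre-release tags; the rank table is an illustrative choice.
import re

_RANK = {"alpha": 0, "beta": 1, "rc": 2, None: 3}  # no prefix sorts last

def sort_key(version: str):
    """Turn e.g. '1.0-rc2' into a tuple that sorts correctly."""
    match = re.fullmatch(r"([0-9.]+)(?:-(alpha|beta|rc)(\d*))?", version)
    release, stage, number = match.groups()
    numeric = tuple(int(p) for p in release.split("."))
    return numeric, _RANK[stage], int(number or 0)

versions = ["1.0", "1.0-alpha", "1.0-rc1", "1.0-beta2", "0.9"]
print(sorted(versions, key=sort_key))
# ['0.9', '1.0-alpha', '1.0-beta2', '1.0-rc1', '1.0']
```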
There are two schools of thought regarding how numeric version numbers are incremented. Most free and open-source software packages, including MediaWiki, treat versions as a series of individual numbers, separated by periods, with a progression such as 1.7.0, 1.8.0, 1.8.1, 1.9.0, 1.10.0, 1.11.0, 1.11.1, 1.11.2, and so on.
On the other hand, some software packages identify releases by decimal numbers: 1.7, 1.8, 1.81, 1.82, 1.9, etc. Decimal versions were common in the 1980s, for example with NetWare, DOS, and Microsoft Windows, but were still used in the 2000s by, for example, Opera[13] and Movable Type.[14] In the decimal scheme, 1.81 is the minor version following 1.8, while maintenance releases (i.e. bug fixes only) may be denoted with an alphabetic suffix, such as 1.81a or 1.81b.
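The two interpretations can order the very same strings differently, as this small Python illustration shows (the sort keys here are a simplification that ignores alphabetic suffixes):

```python
# Decimal reading vs. dotted-sequence reading of the same version strings.
versions = ["1.7", "1.81", "1.9", "1.10"]

as_decimal  = sorted(versions, key=float)
as_sequence = sorted(versions, key=lambda v: [int(p) for p in v.split(".")])

print(as_decimal)   # ['1.10', '1.7', '1.81', '1.9']  (1.10 reads as 1.1)
print(as_sequence)  # ['1.7', '1.9', '1.10', '1.81']  (10 < 81)
```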
The standard GNU version numbering scheme is major.minor.revision,[15] but Emacs is a notable example of another scheme, in which the major number (1) was dropped and a user site revision was added; this is always zero in original Emacs packages but is increased by distributors.[16] Similarly, Debian package numbers are prefixed with an optional "epoch", which is used to allow the versioning scheme to be changed.[17]
In some cases, developers may decide to reset the major version number. This is sometimes used to denote a new development phase being released. For example, Minecraft Alpha ran from version 1.0.0 to 1.2.6, and when Beta was released, it reset the major version number and ran from 1.0 to 1.8. Once the game was fully released, the major version number again reset to 1.0.0.[18]
When printed, the sequences may be separated with characters. The choice of characters and their usage varies by scheme. The following list shows hypothetical examples of separation schemes for the same release (the thirteenth third-level revision to the fourth second-level revision to the second first-level revision):
When a period is used to separate sequences, it may or may not represent a decimal point; see the "Incrementing sequences" section for various interpretation styles.
There is sometimes a fourth, unpublished number which denotes the software build (as used by Microsoft). Adobe Flash is a notable case where a four-part version number is indicated publicly, as in 10.1.53.64. Some companies also include the build date. Version numbers may also include letters and other characters, such as Lotus 1-2-3 Release 1a.
Some projects use negative version numbers. One example is the SmartEiffel compiler, which started from −1.0 and counted upwards to 0.0.[16]
Many projects use a date-based versioning scheme called Calendar Versioning (aka CalVer[19]).
Ubuntu is one example of a project using calendar versioning; Ubuntu 18.04, for example, was released in April 2018. This has the advantage of being easily relatable to development schedules and support timelines. Some video games also use dates as versions, for example the arcade game Street Fighter EX. At startup it displays the version number as a date plus a region code, for example 961219 ASIA.
When using dates in versioning, for instance in file names, it is common to use the ISO 8601 scheme[20] YYYY-MM-DD, as this is easily string-sorted in increasing or decreasing order. The hyphens are sometimes omitted. The Wine project formerly used a date versioning scheme, which used the year followed by the month followed by the day of the release; for example, "Wine 20040505". Minecraft had a similar version format, but instead used DDHHMM; for example, rd-132211, where 13 is the 13th of May and 2211 is 22:11.
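The sorting property is easy to demonstrate: ISO 8601 strings sort chronologically under plain lexicographic comparison, while other date orders do not. A short Python check:

```python
# ISO 8601 (YYYY-MM-DD) dates sort chronologically as plain strings,
# which is why the scheme suits file names and version identifiers.
releases = ["2004-05-05", "2003-12-01", "2004-01-20"]
print(sorted(releases))   # ['2003-12-01', '2004-01-20', '2004-05-05']

# The same dates written MM-DD-YYYY do not sort chronologically:
print(sorted(["05-05-2004", "12-01-2003", "01-20-2004"]))
# ['01-20-2004', '05-05-2004', '12-01-2003']
```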
Microsoft Office build numbers are an encoded date:[21] the first two digits indicate the number of months that have passed from the January of the year in which the project started (with each major Office release being a different project), while the last two digits indicate the day of that month. So 3419 is the 19th day of the 34th month after the January of the year the project started.
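A sketch of decoding that scheme, following the article's literal reading ("months that have passed from January"); the project start year varies per Office release, so start_year is a parameter here and 1998 below is purely hypothetical.

```python
# Decode an Office-style build number (MMDD: months after January of
# the start year, then day of month). Interpretation per the text above.
from datetime import date

def decode_build(build: int, start_year: int) -> date:
    months_after_jan, day = divmod(build, 100)
    years, months = divmod(months_after_jan, 12)
    return date(start_year + years, 1 + months, day)

# Build 3419: 34 months after January of a hypothetical 1998 start year
# is November 2000, so this prints 2000-11-19.
print(decode_build(3419, 1998))
```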
Other examples that identify versions by year include Adobe Illustrator 88 and WordPerfect Office 2003. When a year is used to denote version, it is generally for marketing purposes, and an actual version number also exists. For example, Windows 95 is internally versioned as MS-DOS 7.00 and Windows 4.00; likewise, Windows 2000 is internally versioned as NT 5.0.[22]
The Python Software Foundation has published PEP 440 – Version Identification and Dependency Specification,[23] outlining its own flexible scheme, which defines an epoch segment, a release segment, pre-release and post-release segments, and a development release segment.
TeX has an idiosyncratic version numbering system, an unusual feature invented by its developer Donald Knuth. Since version 3.1, updates have been indicated by adding an extra digit at the end, so that the version number asymptotically approaches the number π; 3.14, having two decimal digits, is in effect version 3.2. (This is a form of unary numbering; the version number is the number of digits.) Since 2021, the version number has been 3.141592653 (in effect, 3.9). This is a reflection of TeX being very stable, and only minor updates are anticipated. TeX developer Donald Knuth has stated that the "absolutely final change (to be made after [his] death)" will be to change the version number to π, at which point all remaining bugs will become permanent features.[24]
In a similar way, the version number of Metafont asymptotically approaches Euler's number, e.[24] As of February 2021, the version number is 2.71828182 (in effect, 2.8). Metafont was also devised by Donald Knuth, as a companion to his TeX typesetting system.
During the era of the classic Mac OS, minor version numbers rarely went beyond ".1". When they did, they usually jumped straight to ".5", suggesting the release was "more significant".[a] Thus, "8.5" was marketed as its own release, representing "Mac OS 8 and a half", and 8.6 effectively meant "8.5.1".
Mac OS X departed from this trend, in large part because "X" (the Roman numeral for 10) was in the name of the product. As a result, all versions of OS X began with the number 10. The first major release of OS X was given the version number 10.0, but the next major release was not 11.0. Instead, it was numbered 10.1, followed by 10.2, 10.3, and so on for each subsequent major release. Thus the 11th major version of OS X was labeled "10.10". Even though the "X" was dropped from the name as of macOS 10.12, this numbering scheme continued through macOS 10.15. Under the "X"-based versioning scheme, the third number (instead of the second) denoted a minor release, and additional updates below this level, as well as updates to a given major version of OS X coming after the release of a new major version, were titled Supplemental Updates.[25]
The Roman numeral X was concurrently leveraged for marketing purposes across multiple product lines. Both QuickTime and Final Cut Pro jumped from version 7 directly to version 10, as QuickTime X and Final Cut Pro X. Like Mac OS X itself, the products were not upgrades to previous versions, but brand-new programs. As with OS X, major releases for these programs incremented the second digit and minor releases were denoted using a third digit. The "X" was dropped from Final Cut's name with the release of macOS 11.0 (see below), and QuickTime's branding became moot when the framework was deprecated in favour of AVFoundation in 2011 (the application for playing QuickTime video had been named simply QuickTime Player from the start).
Apple's next macOS release, provisionally numbered 10.16,[26] was officially announced as macOS 11 at WWDC in June 2020, and released in November 2020.[27] The following macOS version, macOS Monterey, was released in October 2021 and bumped its major version number to 12.[28]
The Microsoft Windows operating system was first labelled with standard version numbers for Windows 1.0 through Windows 3.11. After this, Microsoft excluded the version number from the product name. For Windows 95 (version 4.0), Windows 98 (4.10) and Windows 2000 (5.0), the year of release was included in the product title. After Windows 2000, Microsoft created the Windows Server family, which continued the year-based style with a difference: for minor releases, Microsoft suffixed "R2" to the title, e.g., Windows Server 2008 R2 (version 6.1). This style has remained consistent since. The client versions of Windows, however, did not adopt a consistent style. First, they received names with arbitrary alphanumeric suffixes, as with Windows Me (4.90), Windows XP (5.1), and Windows Vista (6.0). Then, once again, Microsoft adopted incremental numbers in the title, but this time they were not version numbers; the version numbers of Windows 7, Windows 8 and Windows 8.1 are respectively 6.1, 6.2 and 6.3. In Windows 10, the version number leaped to 10.0[29] and subsequent updates to the OS only incremented the build number and update build revision (UBR) number.
The successor of Windows 10, Windows 11, was released on October 5, 2021. Despite being named "11", the new Windows release did not bump its major version number to 11. Instead, it stayed at the same version number of 10.0 used by Windows 10.[30]
Some software producers use different schemes to denote releases of their software. The Debian project uses a major/minor versioning scheme for releases of its operating system, but uses code names from the movie Toy Story during development to refer to stable, unstable, and testing releases.[31]
BLAG Linux and GNU features very large version numbers: major releases have numbers such as 50000 and 60000, while minor releases increase the number by 1 (e.g. 50001, 50002). Alpha and beta releases are given decimal version numbers slightly less than the major release number, such as 19999.00071 for alpha 1 of version 20000, and 29999.50000 for beta 2 of version 30000. Numbering started at 9001 in 2003, and the most recent version as of 2011 is 140000.[32][33][34]
Urbit uses Kelvin versioning (named after the absolute Kelvin temperature scale): software versions start at a high number and count down to version 0, at which point the software is considered finished and no further modifications are made.[35][36]
Software may have an "internal" version number which differs from the version number shown in the product name (and which typically follows version numbering rules more consistently). Java SE 5.0, for example, has the internal version number of 1.5.0, and versions of Windows from NT 4 on have continued the standard numerical versions internally: Windows 2000 is NT 5.0, XP is Windows NT 5.1, Windows Server 2003 and Windows XP Professional x64 Edition are NT 5.2, Windows Server 2008 and Vista are NT 6.0, Windows Server 2008 R2 and Windows 7 are NT 6.1, Windows Server 2012 and Windows 8 are NT 6.2, and Windows Server 2012 R2 and Windows 8.1 are NT 6.3. Windows 10 was initially intended to be NT 6.4, as the earliest Technical Preview build shared with the public was numbered 6.4.9841. However, the version of Windows 10 was quickly artificially increased to 10.0[37] to align with the commercial name, resulting in the first released version of the operating system being numbered 10.0.10240. Note, however, that Windows NT is only on its fifth major revision, as its first release was numbered 3.1 (to match the then-current Windows release number) and the Windows 10 launch made a version leap from 6.3 to 10.0.
In conjunction with the various versioning schemes listed above, a system for denoting pre-release versions is generally used, as the program makes its way through the stages of the software release life cycle.
Programs that are in an early stage are often called "alpha" software, after the first letter in the Greek alphabet. After they mature but are not yet ready for release, they may be called "beta" software, after the second letter in the Greek alphabet. Generally alpha software is tested by developers only, while beta software is distributed for community testing.
Some systems use numerical versions less than 1 (such as 0.9) to suggest that they are approaching a final "1.0" release. This is a common convention in open source software.[38][39] However, if the pre-release version is for an existing software package (e.g. version 2.5), then an "a" or "alpha" may be appended to the version number, so the alpha version of the 2.5 release might be identified as 2.5a or 2.5.a.
An alternative is to refer to pre-release versions as "release candidates", so that software packages which are soon to be released as a particular version may carry that version tag followed by "rc-#", indicating the number of the release candidate; when the final version is released, the "rc" tag is removed.
A software release train is a form of software release schedule in which a number of distinct series of versioned software releases for multiple products are released as a number of different "trains" on a regular schedule. Generally, for each product line, several release trains are running at a given time, with each train moving from initial release to eventual maturity and retirement on a planned schedule. Users may experiment with a newer release train before adopting it for production, allowing them to try newer, "raw" releases early while continuing to follow the previous train's point releases for their production systems, prior to moving to the new release train as it becomes mature.
Cisco's IOS software platform used a release train schedule with many distinct trains for many years. More recently, a number of other platforms including Firefox and Fenix for Android,[40] Eclipse,[41] LibreOffice,[42] Ubuntu,[43] Fedora,[44] Python,[45] digiKam[46] and VMware[47] have adopted the release train model.
Between the 1.0 and the 2.6.x series, the Linux kernel used odd minor version numbers to denote development releases and even minor version numbers to denote stable releases. For example, Linux 2.3 was a development family of the second major design of the Linux kernel, and Linux 2.4 was the stable release family that Linux 2.3 matured into. After the minor version number in the Linux kernel is the release number, in ascending order; for example, Linux 2.4.0 → Linux 2.4.22. Since the 2004 release of the 2.6 kernel, Linux no longer uses this system, and has a much shorter release cycle.
The same odd–even system is used by some other software with long release cycles, such as Node.js up to version 0.12, as well as WineHQ.[48]
Sun's Java has at times had a hybrid system, where the internal version number has always been 1.x but it has been marketed by reference only to the x:
Sun also dropped the first digit for Solaris, where Solaris 2.8 (or 2.9) is referred to as Solaris 8 (or 9) in marketing materials.
A similar jump took place with the Asterisk open-source PBX construction kit in the early 2010s, whose project leads announced that the current version 1.8.x would soon be followed by version 10.[49]
This approach, panned by many because it breaks the semantic significance of the sections of the version number, has been adopted by an increasing number of vendors including Mozilla (for Firefox).
Version numbers very quickly evolve from simple integers (1, 2, ...) to rational numbers (2.08, 2.09, 2.10) and then to non-numeric "numbers" such as 4:3.4.3-2. These complex version numbers are therefore better treated as character strings. Operating systems that include package management facilities (such as all non-trivial Linux or BSD distributions) will use a distribution-specific algorithm for comparing version numbers of different software packages. For example, the ordering algorithms of Red Hat and derived distributions differ from those of the Debian-like distributions.
As an example of surprising version number ordering behaviour: in Debian, leading zeroes are ignored in chunks, so that 5.0005 and 5.5 are considered equal, and 5.5 < 5.0006. This can confuse users; string-matching tools may fail to find a given version number; and it can cause subtle bugs in package management if the programmers use string-indexed data structures such as version-number-indexed hash tables.
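A toy Python illustration of comparing versions chunk by chunk as integers reproduces this behaviour; it is a deliberate simplification, not Debian's full comparison algorithm:

```python
# Compare versions as lists of integer chunks, ignoring leading zeroes.

def chunks(version: str) -> list[int]:
    return [int(part) for part in version.split(".")]

print(chunks("5.0005") == chunks("5.5"))    # True: 0005 == 5
print(chunks("5.5") < chunks("5.0006"))     # True: 5 < 6
# Plain string comparison disagrees on both counts:
print("5.0005" == "5.5", "5.5" < "5.0006")  # False False
```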
To ease sorting, some software packages represent each component of the major.minor.release scheme with a fixed width. Perl represents its version numbers as a floating-point number; for example, Perl's 5.8.7 release can also be represented as 5.008007. This allows a theoretical version of 5.8.10 to be represented as 5.008010. Other software packages pack each segment into a fixed bit width; for example, on Microsoft Windows, version number 6.3.9600.16384 would be represented as hexadecimal 0x0006000325804000. The floating-point scheme breaks down if any segment of the version number exceeds 999; a packed-binary scheme employing 16 bits apiece breaks down after 65535.
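Sketches of both fixed-width encodings, under the limits just described; the helper names are illustrative:

```python
# Perl-style floating point: each segment after the first is packed
# into three decimal digits, so 5.8.7 -> 5.008007.
def to_float(major: int, minor: int, patch: int) -> float:
    assert minor < 1000 and patch < 1000          # scheme breaks past 999
    return major + minor / 1_000 + patch / 1_000_000

# Windows-style packed binary: four segments, 16 bits apiece, so
# 6.3.9600.16384 -> 0x0006_0003_2580_4000.
def to_packed(a: int, b: int, c: int, d: int) -> int:
    assert all(x < 65536 for x in (a, b, c, d))   # breaks past 65535
    return (a << 48) | (b << 32) | (c << 16) | d

print(to_float(5, 8, 7))                  # 5.008007
print(hex(to_packed(6, 3, 9600, 16384)))  # 0x6000325804000
```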
The free-software and open source communities tend to release software early and often. Initial versions are numbers less than 1, with these 0.x versions used to convey that the software is incomplete and not reliable enough for general release, or not usable in its current state. Backward-incompatible changes are common with 0.x versions.
Version 1.0 is used as a major milestone, indicating that the software has at least all the major features and functions the developers wanted to get into that version, and is considered reliable enough for general release.[38][39] A good example of this is the Linux kernel, which was first released as version 0.01 in 1991,[50] and took until 1994 to reach version 1.0.0.[51]
The developers of the arcade game emulator MAME never intend to release a version 1.0 of the program, because there will always be more arcade games to emulate and thus the project can never be truly completed. Accordingly, version 0.99 was followed by version 0.100.[52]
Since the Internet became widespread, most commercial software vendors no longer follow the maxim that a major version should be "complete", and instead rely on patches with bug fixes to sort out the known issues for which a solution has been found.
A relatively common practice is to make major jumps in version numbers for marketing reasons. Sometimes software vendors bypass the 1.0 release entirely, or quickly follow it with a release bearing a higher version number, because 1.0 software is considered by many customers too immature to trust with production deployments. For example, dBase II was launched with a version number implying that it was more mature than it was.
Other times version numbers are increased to match those of competitors. This can be seen in many examples of product version numbering by Microsoft, America Online, Sun Solaris, Java Virtual Machine, SCO Unix, and WordPerfect. Microsoft Access jumped from version 2.0 to version 7.0 to match the version number of Microsoft Word.
Microsoft has also been the target of "catch-up" versioning, with the Netscape browsers skipping version 5 and going straight to 6, in line with Microsoft's Internet Explorer, but also because the Mozilla application suite inherited version 5 in its user agent string during pre-1.0 development and Netscape 6.x was built upon Mozilla's code base.
Another example of keeping up with competitors is when Slackware Linux jumped from version 4 to version 7 in 1999.[53]
In the mid-1990s, the rapidly growing CMMS Maximo moved from Maximo Series 3 directly to Series 5, skipping Series 4 due to that number's perceived marketing difficulties in the Chinese market, where the number 4 is associated with "death" (see tetraphobia). This did not stop Maximo Series 5 version 4.0 from being released. (The "Series" versioning has since been dropped, effectively resetting version numbers after Series 5 version 1.0's release.)
Version numbers are used in practical terms by the consumer, or client, to identify or compare their copy of the software product against another copy, such as the newest version released by the developer. For the programmer or company, versioning is often used on a revision-by-revision basis, where individual parts of the software are compared and contrasted with newer or older revisions of those same parts, often in a collaborative version control system.
In the 21st century, more programmers started to use a formalized version policy, such as the semantic versioning policy.[1] The purpose of such policies is to make it easier for other programmers to know when code changes are likely to break things they have written. Such policies are especially important for software libraries and frameworks, but may also be very useful for command-line applications (which may be called from other applications) and for other applications (which may be scripted and/or extended by third parties).
Versioning is also required practice to enable many schemes of patching and upgrading software, especially for deciding automatically what to upgrade, and to which version.
Version numbers allow people providing support to ascertain exactly which code a user is running, so that they can rule out bugs that have already been fixed as a cause of an issue, and the like. This is especially important when a program has a substantial user community, especially when that community is large enough that the people providing technical support are not the people who wrote the code. The semantic meaning[1] of version.revision.change style numbering is also important to information technology staff, who often use it to determine how much attention and research they need to pay to a new release before deploying it in their facility. As a rule of thumb, the bigger the changes, the larger the chances that something might break (although examining the changelog, if any, may reveal only superficial or irrelevant changes). This is one reason for some of the distaste expressed at the "drop the major release" approach taken by Asterisk et alia: now, staff must (or at least should) do a full regression test for every update.
Some computer file systems, such as the OpenVMS Filesystem, also keep versions for files.
Versioning amongst documents is broadly similar to the routine used in software engineering: with each small change in the structure, contents, or conditions, the version number is incremented by 1, or by a smaller or larger value, depending on the personal preference of the author and the size or importance of the changes made.
Software-style version numbers can be found in other media.
In some cases, the use is a direct analogy (for example: Jackass 2.5, a version of Jackass Number Two with additional special features; the second album by Garbage, titled Version 2.0; or Dungeons & Dragons 3.5, where the rules were revised from the third edition, but not so much as to be considered the fourth).
More often it is used to play on an association with high technology, and does not literally indicate a "version" (e.g., Tron 2.0, a video game follow-up to the film Tron, or the television series The IT Crowd, which refers to its second season as Version 2.0). A particularly notable usage is Web 2.0, referring to websites from the early 2000s that emphasized user-generated content, usability and interoperability.
Technical drawing and CAD software files may also use some kind of primitive versioning number to keep track of changes.
The standard circulating coinage of the United Kingdom, British Crown Dependencies and British Overseas Territories is denominated in pennies and pounds sterling (symbol "£"; currency code GBP), and ranges in value from one penny sterling to two pounds. Since decimalisation, on 15 February 1971, the pound has been divided into 100 pence (shown on coins as "new pence" until 1981). Before decimalisation, twelve pence made a shilling, and twenty shillings made a pound.
British coins are minted by the Royal Mint in Llantrisant, Wales. The Royal Mint also commissions the coins' designs; however, the designs must also be accepted by the reigning monarch.
In addition to the circulating coinage, the UK also mints commemorative decimal coins (crowns) in the denomination of five pounds, ceremonial Maundy money in denominations of 1, 2, 3 and 4 pence in sterling (.925) silver, and bullion coinage of gold sovereigns, half sovereigns, and gold and silver Britannia coins. Some territories outside the United Kingdom which use the pound sterling produce their own coinage, with the same denominations and specifications as the UK coinage but with local designs; these coins are not legal tender in the mainland United Kingdom.
The current decimal coins consist of:
All circulating coins have an effigy of one of two monarchs on the obverse; various national, regional and commemorative designs on the reverse; and the denomination in numbers or words.
All genuine UK coins are produced by the Royal Mint. The same coinage is used across the United Kingdom: unlike banknotes, local issues of coins are not produced for different parts of the UK. The pound coin until 2016 was produced in regional designs, but these circulate equally in all parts of the UK (see UK designs, below).
Every year, newly minted coins are checked for size, weight, and composition at a Trial of the Pyx. Essentially the same procedure has been used since the 13th century. Assaying is now done by the Worshipful Company of Goldsmiths on behalf of HM Treasury.
The 1p and 2p coins from 1971 are the oldest standard-issue coins still in circulation. Pre-decimal crowns are the oldest coins in general that are still legal tender, although they are in practice never encountered in general circulation.[4]
Coins from the British dependencies and territories that use sterling as their currency are sometimes found in change in other jurisdictions. Strictly, they are not legal tender in the United Kingdom; however, since they have the same specifications as UK coins, they are sometimes tolerated in commerce, and can readily be used in vending machines.
UK-issued coins are, on the other hand, generally fully accepted and freely mixed in other British dependencies and territories that use the pound.
An extensive coinage redesign was commissioned by the Royal Mint in 2005, and new designs were gradually introduced into the circulating British coinage from summer 2008. Except for the £1 coin, the pre-2008 coins remain legal tender and are expected to stay in circulation for the foreseeable future.
The estimated volume in circulation as of March 2016 is:[5]
Because of trade links with Charlemagne's Frankish Empire, the Anglo-Saxon kingdoms copied the Frankish currency system of 12 deniers ("d", pennies) to the sou (shilling) and 240 deniers or 20 sous to the libra ("£", pound), the origin of the name of the current British currency. It referred to the literal weight of 240 penny coins, which, at 30 grains each, weighed one tower pound of sterling (0.925 fine) silver. At this point and for centuries afterwards, pennies were the only coins struck; shillings and pounds were only units of account.[6]
The English silver penny first appeared in the 8th century CE, in adoption of Western Europe's Carolingian monetary system wherein 12 pence made a shilling and 20 shillings made a pound. The weight of the English penny was fixed at 22+1⁄2 troy grains (about 1.46 grams) by Offa of Mercia, an 8th-century contemporary of Charlemagne; 240 pennies weighed 5,400 grains or a tower pound (different from the troy pound of 5,760 grains). The silver penny was the only coin minted for 500 years, from c. 780 to 1280.
From the time of Charlemagne until the 12th century, the silver currency of England was made from the highest purity silver available. But there were disadvantages to minting currency of fine silver, notably the level of wear it suffered, and the ease with which coins could be "clipped", or trimmed. In 1158 a new standard for English coinage was established by Henry II with the "Tealby Penny" – the sterling silver standard of 92.5% silver and 7.5% copper. This was a harder-wearing alloy, yet it was still a rather high grade of silver. It went some way towards discouraging the practice of "clipping", though this practice was further discouraged and largely eliminated with the introduction of the milled edge seen on coins today.
The weight of a silver penny stayed constant at above 22 grains until 1344; afterwards its weight was reduced to 18 grains in 1351, to 15 grains in 1412, to 12 grains in 1464, and to 10+1⁄2 grains in 1527.
The history of the Royal Mint stretches back to AD 886.[7] For many centuries production was in London, initially at the Tower of London, and then at premises nearby in Tower Hill, in what is today known as Royal Mint Court. In the 1970s production was transferred to Llantrisant in South Wales.[8] Historically, Scotland and England had separate coinage; the last Scottish coins were struck in 1709, shortly after union with England.[9]
During the reign of Henry VIII, the silver content was gradually debased, reaching a low of one-third silver. However, in Edward VI's reign in 1551, this debased coinage was discontinued in favour of a return to sterling silver, with the penny weighing 8 grains. The first crowns and half-crowns were produced that year. From this point until 1920, sterling was the rule.
Coins were originally hand-hammered – an ancient technique in which two dies are struck together with a blank coin between them. This was the traditional method of manufacturing coins in the Western world from the classical Greek era onwards, in contrast with Asia, where coins were traditionally cast. Milled (that is, machine-made) coins were produced first during the reign of Elizabeth I (1558–1603) and periodically during the subsequent reigns of James I and Charles I, but there was initially opposition to mechanisation from the moneyers, who ensured that most coins continued to be produced by hammering. All British coins produced since 1662 have been milled.
By 1601 it was decreed that one troy ounce, or 480 grains, of sterling silver be minted into 62 pennies (i.e. each penny weighed 7.742 grains). By 1696, the currency had been seriously weakened by an increase in clipping during the Nine Years' War,[10] to the extent that it was decided to recall and replace all hammered silver coinage in circulation.[11] The exercise came close to disaster due to fraud and mismanagement,[12] but was saved by the personal intervention of Isaac Newton after his appointment as Warden of the Mint, a post which was intended to be a sinecure, but which he took seriously.[11] Newton was subsequently given the post of Master of the Mint in 1699. Following the 1707 union between the Kingdom of England and the Kingdom of Scotland, Newton used his previous experience to direct the 1707–1710 Scottish recoinage, resulting in a common currency for the new Kingdom of Great Britain. After 15 September 1709 no further silver coins were ever struck in Scotland.[13]
As a result of a report written by Newton on 21 September 1717 to the Lords Commissioners of His Majesty's Treasury,[14] the bimetallic relationship between gold coins and silver coins was changed by royal proclamation on 22 December 1717, forbidding the exchange of gold guineas for more than 21 silver shillings.[15] Due to differing valuations in other European countries this unintentionally resulted in a silver shortage, as silver coins were used to pay for imports while exports were paid for in gold, effectively moving Britain from the silver standard to its first gold standard, rather than the bimetallic standard implied by the proclamation.
The coinage reform of 1816 set up a weight/value ratio and physical sizes for silver coins. Each troy ounce of sterling silver was henceforth minted into 66 pence, or 5+1⁄2 shillings.
In 1920, the silver content of all British coins was reduced from 92.5% to 50%, with some of the remainder consisting of manganese, which caused the coins to tarnish to a very dark colour after they had been in circulation for a long time. Silver was eliminated altogether in 1947, except for Maundy coinage, which returned to the pre-1920 92.5% silver composition.
The 1816 weight/value ratio and size system survived the debasement of silver in 1920, and the adoption of token coins of cupronickel in 1947. It even persisted after decimalisation for those coins which had equivalents and continued to be minted with their values in new pence. The UK finally abandoned it in 1992, when smaller, more convenient "silver" coins were introduced.
Since decimalisation on 15 February 1971, the pound (symbol "£") has been divided into 100 pence. (Prior to decimalisation the pound was divided into 20 shillings, each of 12 [old] pence; thus, there were 240 [old] pence to the pound.) The pound remained Britain's currency unit after decimalisation (unlike in many other Commonwealth countries, which dropped the pound upon decimalisation by introducing dollars or new units worth 10 shillings, or 1⁄2 pound). The following coins were introduced with these reverse designs:
The first decimal coins – the five pence (5p) and ten pence (10p) – were introduced in 1968 in the run-up to decimalisation, in order to familiarise the public with the new system. These initially circulated alongside the pre-decimal coinage and had the same size and value as the existing one shilling and two shilling coins respectively. The fifty pence (50p) coin followed in 1969, replacing the old ten shilling note. The remaining decimal coins – at the time, the half penny (1⁄2p), penny (1p) and two pence (2p) – were issued in 1971 at decimalisation. A quarter-penny coin, to be struck in aluminium, was proposed at the time decimalisation was being planned, but was never minted.
The new coins were initially marked with the wording NEW PENNY (singular) or NEW PENCE (plural). The word "new" was dropped in 1982. The symbol "p" was adopted to distinguish the new pennies from the old, which used the symbol "d" (from the Latin denarius, a coin used in the Roman Empire).
In the years since decimalisation, a number of changes have been made to the coinage; these new denominations were introduced with the following designs:
Additionally:
The twenty pence (20p) coin was introduced in 1982 to fill the gap between the 10p and 50p coins. The pound coin (£1) was introduced in 1983 to replace the Bank of England £1 banknote, which was discontinued in 1984 (although the Scottish banks continued producing them for some time afterwards; the last of them, the Royal Bank of Scotland £1 note, was still issued in a small volume as of 2021). The designs on the £1 coin changed annually in a largely five-year cycle, until the introduction of the new 12-sided £1 coin in 2017.
The decimal halfpenny coin was demonetised in 1984, as its value was by then too small to be useful. The pre-decimal sixpence, shilling and two shilling coins, which had continued to circulate alongside the decimal coinage with values of 2+1⁄2p, 5p and 10p respectively, were finally withdrawn in 1980, 1990 and 1993 respectively. The double florin and crown, with values of 20p and 25p respectively, have technically not been withdrawn, but in practice are never seen in general circulation.
In the 1990s, the Royal Mint reduced the sizes of the 5p, 10p, and 50p coins. As a consequence, the oldest 5p coins in circulation date from 1990, the oldest 10p coins from 1992 and the oldest 50p coins from 1997. Since 1997, many special commemorative designs of the 50p have been issued. Some of these are found fairly frequently in circulation and some are rare. They are all legal tender.
In 1992 the composition of the 1p and 2p coins was changed from bronze to copper-plated steel. Due to their high copper content (97%), the intrinsic value of pre-1992 1p and 2p coins increased with the surge in metal prices of the mid-2000s, until by 2006 the coins would, if melted down, have been worth about 50% more than their face value.[16]
A circulating bimetallic two pound (£2) coin was introduced in 1998 (first minted in, and dated, 1997). There had previously been unimetallic commemorative £2 coins, which did not normally circulate. This tendency to use the two pound coin for commemorative issues has continued since the introduction of the bimetallic coin, and a few of the older unimetallic coins have since entered circulation.
There are also commemorative issues of crowns. Until 1981, these had a face value of twenty-five pence (25p), equivalent to the five shilling crown used in pre-decimal Britain. However, in 1990 crowns were redenominated with a face value of five pounds (£5),[17] as the previous value was considered not sufficient for such a high-status coin. The size and weight of the coin remained exactly the same. Decimal crowns are generally not found in circulation, as their market value is likely to be higher than their face value, but they remain legal tender.
All modern British coins feature a profile of the current monarch's head on the obverse. Until 2022, there had been only one monarch since decimalisation, Queen Elizabeth II, and her head appeared on all decimal coins minted up to that date, facing to the right (see also Monarch's profile, below). Five different effigies were used, reflecting the Queen's changing appearance as she aged. They were created by Mary Gillick (for coins minted until 1968), Arnold Machin (1968–1984), Raphael Maklouf (1985–1997), Ian Rank-Broadley (1998–2015), and Jody Clark (from 2015).[18] In September 2022, the first portrait of Charles III was revealed, designed by Martin Jennings.[19]
Most current coins carry a Latin inscription whose full form is ELIZABETH II DEI GRATIA REGINA FIDEI DEFENSATRIX, meaning "Elizabeth II, by the grace of God, Queen and Defender of the Faith". The inscription appears in any of several abbreviated forms, typically ELIZABETH II D G REG F D. Those minted and circulated after the accession of Charles III are inscribed with CHARLES III DEI GRATIA REX FIDEI DEFENSOR, typically abbreviated as CHARLES III D G REX F D or CHARLES III DEI GRA REX FID DEF.
In 2008, UK coins underwent an extensive redesign, which eventually changed the reverse designs of all coins – the first wholesale change to British coinage since the first decimal coins were introduced in April 1968.[20] The major design feature was the introduction of a reverse design shared across six coins (1p, 2p, 5p, 10p, 20p, 50p) that can be pieced together to form an image of the Royal Shield. This was the first time a coin design had been featured across multiple coins in this way.[20] To summarise the reverse design changes made in 2008 and afterwards:
The original intention was to exclude both the £1 and £2 coins from the redesign because they were "relatively new additions" to the coinage, but it was later decided to include a £1 coin with a complete Royal Shield design from 2008 to 2016,[21] and the 2015 redesign of the £2 coin occurred due to complaints over the disappearance of Britannia's image from the 50p coin in 2008.[22]
On all coins, the beading (ring of small dots) around the edge of the obverses has been removed. The obverse of the 20p coin has also been amended to incorporate the year, which had been on the reverse of the coin since its introduction in 1982 (giving rise to an unusual issue of a mule version without any date at all). The orientation of both sides of the 50p coin has been rotated through 180 degrees, meaning the bottom of the coin is now a corner rather than a flat edge. The numerals showing the decimal value of each coin, previously present on all coins except the £1 and £2, have been removed, leaving the values spelled out in words only.
The redesign was the result of a competition launched by the Royal Mint in August 2005, which closed on 14 November 2005. The competition was open to the public and received over 4,000 entries.[20] The winning entry was unveiled on 2 April 2008, designed by Matthew Dent.[20] The Royal Mint stated that the new designs were "reflecting a twenty-first century Britain". An advisor to the Royal Mint described the new coins as "post-modern" and said that this was something that could not have been done 50 years previously.[23]
The redesign was criticised by some for having no specifically Welsh symbol (such as the Welsh Dragon), because the Royal Shield does not include one. Wrexham Member of Parliament (MP) Ian Lucas, who was also campaigning to have the Welsh Dragon included on the Union Flag, called the omission "disappointing", and stated that he would be writing to the Queen to request that the Royal Standard be changed to include Wales.[24] The Royal Mint stated that "the Shield of the Royal Arms is symbolic of the whole of the United Kingdom and as such, represents Wales, Scotland, England and Northern Ireland."[24] Designer Dent stated: "I am a Welshman and proud of it, but I never thought about the fact we did not have a dragon or another representation of Wales on the design because as far as I am concerned Wales is represented on the Royal Arms. This was never an issue for me."[24]
The Royal Mint's choice of an inexperienced coin designer to produce the new coinage was criticised by Virginia Ironside, daughter of Christopher Ironside, who designed the previous UK coins. She stated that the new designs were "totally unworkable as actual coins", due to the loss of a numerical currency identifier and the smaller typeface used.[25]
The German news magazine Der Spiegel claimed that the redesign signalled the UK's intention "not to join the euro any time soon".[26]
As of 2012, 5p and 10p coins have been issued in nickel-plated steel, with many of the older cupronickel coins withdrawn in order to recover the more expensive metal. The new coins are 11% thicker to maintain the same weight.[27][28] There are heightened nickel allergy concerns over the new coins. Studies commissioned by the Royal Mint found no increased discharge of nickel from the coins when immersed in artificial sweat. However, an independent study found that the friction from handling results in four times as much nickel exposure as from the older-style coins. Sweden had already planned to stop using nickel in coins from 2015.[29]
In 2016, the £1 coin was changed from a single-metal round coin to a 12-sided bi-metallic design with a slightly larger diameter, and the multiple past designs were discontinued in favour of a single, unchanging design. Production of the new coins started in 2016,[30] with the first, dated 2016, entering circulation on 28 March 2017.[31]
In February 2015, the Royal Mint announced a new design for the £2 coin featuring Britannia by Antony Dufort, with no change to its bimetallic composition.[32]
Edge inscriptions on British coins were commonly encountered on the round £1 coins of 1983–2016, but are nowadays found only on £2 coins. The standard-issue £2 coin from 1997 to 2015 carried the edge inscription STANDING ON THE SHOULDERS OF GIANTS. The redesigned coin since 2015 has a new edge inscription, QUATUOR MARIA VINDICO, Latin for "I will claim the four seas", an inscription previously found on coins bearing the image of Britannia. Other commemorative £2 coins have their own unique edge inscriptions or designs.
In October 2023 the Royal Mint announced new designs for the circulating coinage, to be released by the end of the year.[33][34] The new designs feature a portrait of King Charles III facing left on the obverse, with a small Tudor Crown privy mark behind the King's neck.
The reverses are divided vertically, the leftmost third comprising a background of three interlocking "C"s, reminiscent of the interlocking "C"s on the coins of King Charles II, and a large numeral indicating the value, countering criticism of the 2008 redesign's lack of numeric values. The rightmost two-thirds of each design features an animal or plant representing each of the four nations:
The following decimal coins have been withdrawn from circulation and have ceased to be legal tender.
* The specifications and dates of 5p, 10p, and 50p coins refer to the larger sizes issued since 1968.
† The specification refers to the round coin issued from 1983 to 2016. Although obsolete, this coin is still redeemable at banks and on the British railway system.
Circulating fifty pence and two pound coins have been issued with various commemorative reverse designs, typically to mark the anniversaries of historical events or the births of notable people.
Three commemorative designs were issued of the large version of the 50p: in 1973 (the EEC), 1992–93 (EC presidency) and 1994 (D-Day anniversary). Commemorative designs of the smaller 50p coin have been issued (alongside the Britannia standard issue) in 1998 (two designs), 2000, and from 2003 to 2007 yearly (two designs in 2006). For a complete list, see Fifty pence (British decimal coin).
Prior to 1997, the two pound coin was minted in commemorative issues only – in 1986, 1989, 1994, 1995 and 1996. Commemorative £2 coins have been regularly issued since 1999, alongside the standard-issue bi-metallic coins which were introduced in 1997. One or two designs have been minted each year, with the exceptions of 2000 (no design) and 2002 (four regional issues marking the 2002 Commonwealth Games in Manchester). As well as a distinct reverse design, these coins have an edge inscription relevant to the subject. The anniversary themes continued until at least 2009, with two designs announced. For a complete list, see Two pounds (British decimal coin).
From 2018 to 2019 a series of 10p coins with 26 different designs was put in circulation "celebrating Great Britain with The Royal Mint's Quintessentially British A to Z series of coins".[35]
Coins are sometimes issued as special collectible commemorative versions, sold at a value higher than their face value. They are usually legal tender, but worth only their face value when paying debts. For example, in 2023 a 50 pence piece was announced, the first coin depicting King Charles III, celebrating the fictional wizard Harry Potter. The standard version sells for £11 and a colour version for £20. Other versions range up to a gold coin of £200 face value, selling for £5,215.[36]
The following are special-issue commemorative coins, seldom encountered in normal circulation due to their precious metal content or collectible value, but still considered legal tender.
The prolific issuance since 2013 of silver commemorative £20, £50 and £100 coins at face value has led to attempts to spend or deposit these coins, prompting the Royal Mint to clarify the legal tender status of these silver coins, as well as of the cupronickel £5 coin.[37][38][39] Legal tender has a very narrow legal meaning, related to paying into a court to satisfy a debt, and nobody is obliged to accept any particular form of payment (whether legal tender or not), including commemorative coins. Royal Mint guidelines advise that, although these coins were approved as legal tender, they are considered limited-edition collectables not intended for general circulation.
Maundy money is a ceremonial coinage traditionally given to the poor, and nowadays awarded annually to deserving senior citizens. There are Maundy coins in denominations of one, two, three and four pence. They bear dates from 1822 to the present and are minted in very small quantities. Though they are legal tender in the UK, they are rarely or never encountered in circulation. The pre-decimal Maundy pieces have the same legal tender status and value as post-decimal ones, and effectively increased in face value by 140% upon decimalisation. Their numismatic value is much greater.
Maundy coins still bear the original portrait of the Queen as used in the circulating coins of the first years of her reign.
The traditional bullion coin issued by Britain is the gold sovereign, formerly a circulating coin worth 20 shillings (or one pound) and containing 0.23542 troy ounces (7.322 g) of fine gold, but now with a nominal value of one pound. The Royal Mint continues to produce sovereigns, as well as quarter sovereigns (introduced in 2009), half sovereigns, double sovereigns and quintuple sovereigns.
Between 1987 and 2012 a series of bullion coins, the Britannia, was issued, containing 1 troy ounce (31.1 g), 1⁄2 ounce, 1⁄4 ounce and 1⁄10 ounce of fine gold at a millesimal fineness of 916 (22 carat) and with face values of £100, £50, £25, and £10.
Since 2013, Britannia bullion has contained 1 troy ounce of fine gold at a millesimal fineness of 999 (24 carat).
Between 1997 and 2012 silver bullion coins were also produced under the name "Britannias". The alloy used was Britannia silver (millesimal fineness 958). The silver coins were available in 1 troy ounce (31.1 g), 1⁄2 ounce, 1⁄4 ounce and 1⁄10 ounce sizes. Since 2013 the alloy used is silver at a millesimal fineness of 999.
In 2016 the Royal Mint launched a series of 10 Queen's Beasts bullion coins,[40] one for each beast, available in both gold and silver.
The Royal Mint also issues silver, gold and platinum proof sets of the circulating coins, as well as gift products such as gold coins set into jewellery.
Outside the United Kingdom, the British Crown Dependencies of Jersey and Guernsey use the pound sterling as their currencies. However, they produce local issues of coinage in the same denominations and specifications, but with different designs. These circulate freely alongside UK coinage and English, Northern Irish, and Scottish banknotes within these territories, but must be converted in order to be used in the UK. The island of Alderney also produces occasional commemorative coins. (See coins of the Jersey pound, coins of the Guernsey pound, and Alderney pound for details.) The Isle of Man is a unique case among the Crown Dependencies, issuing its own currency, the Manx pound. While the Isle of Man recognises the pound sterling as a secondary currency, coins of the Manx pound are not legal tender in the UK.
The pound sterling is also the official currency of the British overseas territories of South Georgia and the South Sandwich Islands,[41] British Antarctic Territory[42] and Tristan da Cunha.[43] South Georgia and the South Sandwich Islands produces occasional special collectors' sets of coins.[44] In 2008, British Antarctic Territory issued a £2 coin commemorating the centenary of Britain's claim to the region.[45]
The currencies of the British overseas territories of Gibraltar, the Falkland Islands and Saint Helena/Ascension – namely the Gibraltar pound, Falkland Islands pound and Saint Helena pound – are pegged one-to-one to the pound sterling but are technically separate currencies. These territories issue their own coinage, again with the same denominations and specifications as the UK coinage but with local designs, as coins of the Gibraltar pound, coins of the Falkland Islands pound and coins of the Saint Helena pound.
The other British overseas territories do not use sterling as their official currency.
Before decimalisation in 1971, the pound was divided into 240 pence rather than 100, though it was rarely expressed in this way. Rather, it was expressed in terms of pounds, shillings and pence, where:
Thus: £1 = 240d. The penny was further subdivided at various times, though these divisions vanished as inflation made them irrelevant:
Using the example of five shillings and sixpence, the standard ways of writing shillings and pence were:
The sum of 5/6 would be spoken as "five shillings and sixpence" or "five and six".
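The arithmetic of the old system is easy to mechanise; here is a small Python sketch (the helper name to_lsd is purely illustrative) converting a quantity of old pence into pounds, shillings and pence:

```python
# Pre-decimal arithmetic: 12 pence to the shilling, 20 shillings
# (i.e. 240 pence) to the pound.

def to_lsd(total_pence: int) -> tuple[int, int, int]:
    pounds, rest = divmod(total_pence, 240)
    shillings, pence = divmod(rest, 12)
    return pounds, shillings, pence

# Five shillings and sixpence ("5/6") is 66 old pence:
print(to_lsd(66))    # (0, 5, 6)
print(to_lsd(500))   # (2, 1, 8), i.e. £2 1s 8d
```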
The abbreviation for the old penny, d, was derived from the Roman denarius, and the abbreviation for the shilling, s, from the Roman solidus. The shilling was also denoted by the slash symbol, also called a solidus for this reason, which was originally an adaptation of the long s.[46] The symbol "£", for the pound, is derived from the first letter of the Latin word for pound, libra.[47]
A similar pre-decimal system operated in France, also based on the Roman currency, consisting of the livre (L), sol or sou (s) and denier (d). Until 1816 another similar system was used in the Netherlands, consisting of the gulden (G), stuiver (s; 1⁄20 G) and duit (d; 1⁄8 s or 1⁄160 G).
The metal composition varied, not just between different denominations but also over time. The crown, half crown, florin, shilling, and sixpence were made from sterling silver (925 fine) until 1920; debased silver (500 fine) from 1920 until 1946; and cupronickel from 1947 onwards.[48]
The penny, halfpenny, and farthing were made from copper until 1860, after which bronze was used. The bronze alloy initially consisted of 95% copper, 4% tin and 1% zinc, but in 1923 was altered to 95.5% copper, 3% tin and 1.5% zinc.[49]
The threepence introduced in 1937 was a twelve-sided nickel-brass coin, but the previous threepence, a small silver coin of diameter 16 mm (0.630 in), continued to be made until 1945. Like the higher-value silver coins, this was changed from sterling silver to debased silver in 1920.[50]
In the years just prior to decimalisation, the circulating British coins were:
The farthing (1⁄4d) had been demonetised on 1 January 1961, whilst the crown (5/-) was issued periodically as a commemorative coin but rarely found in circulation.
Some of the pre-decimalisation coins with exact decimal equivalent values continued in use after 1971 alongside the new coins, albeit with new names (the shilling became equivalent to the 5p coin, with the florin equating to 10p), and the others were withdrawn almost immediately. The use of florins and shillings as legal tender in this way ended in 1991 and 1993, when the 5p and 10p coins were replaced with smaller versions. Indeed, while pre-decimalisation shillings were used as 5p coins, for a while after decimalisation many people continued to call the new 5p coin a shilling, since it remained 1⁄20 of a pound, but was now counted as 5p (five new pence) instead of 12d (twelve old pennies). The pre-decimalisation sixpence, also known as a sixpenny bit or sixpenny piece, was equivalent to 2+1⁄2p, but was demonetised in 1980.
Some pre-decimalisation coins or denominations became commonly known by colloquial and slang terms, perhaps the best known being bob for a shilling and quid for a pound. A farthing was a mag, a silver threepence was a joey, and the later nickel-brass threepence was called a threepenny bit (/ˈθrʌpni/ or /ˈθrɛpni/ bit, i.e. thrup'ny or threp'ny bit – the apostrophe was pronounced on a scale from full "e" down to complete omission); a sixpence was a tanner, and the two-shilling coin or florin was a two-bob bit. Bob is still used in phrases such as "earn/worth a bob or two"[51] and "bob-a-job week". The two shillings and sixpence coin or half-crown was a half-dollar, also sometimes referred to as two and a kick. A value of two pence was universally pronounced /ˈtʌpəns/ tuppence, a usage which is still heard today, especially among older people. The unaccented suffix "-pence", pronounced /pəns/, was similarly appended to the other numbers up to twelve; thus "fourpence", "sixpence-three-farthings", "twelvepence-ha'penny", but "eighteen pence" would usually be said "one-and-six".
Quid remains popular slang for one or more pounds to this day in Britain, in the form "a quid", "two quid", and so on. Similarly, in some parts of the country, bob continued to represent one-twentieth of a pound, that is five new pence, and two bob is 10p.[52]
The introduction of decimal currency caused a new casual usage to emerge, in which any value in pence is spoken using the suffix pee: e.g. "twenty-three pee" or, in the early years, "two-and-a-half pee" rather than the previous "tuppence-ha'penny". Amounts over a pound are normally spoken thus: "five pounds forty". A value with less than ten pence over the pound is sometimes spoken like this: "one pound and a penny", "three pounds and fourpence". The slang term "bit" has almost completely disappeared from use, although in Scotland a fifty pence is sometimes referred to as a "ten bob bit". Decimal denomination coins are generally described using the terms piece or coin, for example "a fifty-pee piece" or "a ten-pence coin".
All coins since the late 17th century[53] have featured a profile of the current monarch's head. The direction in which they face changes with each successive monarch, a pattern that began with the Stuarts, as shown in the table below:
For the Tudors and the Stuarts up to and including Charles II,[55][56] both left- and right-facing portrait images were minted within the reign of a single monarch (left-facing images were more common), together with equestrian portraits on certain coins and (earlier) full-face portrait images.[53] In the Middle Ages, portrait images tended to be full face.
There was a small quirk in this alternating pattern when Edward VIII became king in January 1936 and was portrayed facing left, the same as his predecessor George V. This was because Edward thought his left side to be better than his right.[57] However, Edward VIII abdicated in December 1936 and his coins were never put into general circulation. When George VI came to the throne, he had his coins struck with him facing left, as if Edward VIII's coins had faced right (as they should have done according to tradition). Thus, in a timeline of circulating British coins, George V's and George VI's coins both feature left-facing portraits, although they follow directly chronologically.[58]
From a very early date, British coins have been inscribed with the name of the ruler of the kingdom in which they were produced, and a longer or shorter title, always in Latin; among the earliest distinctive English coins are the silver pennies of Offa of Mercia, which were inscribed with the legend OFFA REX ("King Offa"). As the legends became longer, words in the inscriptions were often abbreviated so that they could fit on the coin; identical legends have often been abbreviated in different ways depending upon the size and decoration of the coin. Inscriptions which go around the edge of the coin generally start at the centre of the top edge and proceed in a clockwise direction. A very lengthy legend would be continued on the reverse side of the coin. All monarchs used Latinised names, save Edward III and Edward VI,[59] both Elizabeths, and Charles III (which would have been EDWARDUS, ELIZABETHA, and CAROLUS respectively).
Some coins made for circulation in the British colonies are considered part of British coinage because they bear no indication of the country for which they were minted and were made in the same style as contemporary coins circulating in the United Kingdom.
A three halfpence (1½ pence, 1/160 of a pound) coin circulated mainly in the West Indies and Ceylon, starting in 1834. Jamaicans referred to the coin as a "quatty".[63]
The half farthing (1/8 of a penny, 1/1920 of a pound) coin was initially minted in 1828 for use in Ceylon, but was declared legal tender in the United Kingdom in 1842.[64]
The third farthing (1/12 of a penny, 1/2880 of a pound) coin was minted for use in Malta, starting in 1827.[64]
The quarter farthing (1/16 of a penny, 1/3840 of a pound) coin was minted for use in Ceylon, starting in 1839.[64]
In addition to the title, a Latin or French motto might be included, generally on the reverse side of the coin. These varied between denominations and issues; some were personal to the monarch, others were more general.
Coins with errors in the minting process that reach circulation are often seen as valuable items by coin collectors.
In 1983, the Royal Mint mistakenly produced some two pence pieces with the old wording "New Pence" on the reverse (tails) side, the design having been changed in 1982 to "Two Pence".
In 2016, a batch of double-dated £1 coins was released into circulation. These coins had the main date on the obverse as '2016', but micro-engraving on the reverse dated '2017'. It is not known how many exist in circulation, but the number is fewer than half a million.
In June 2009, the Royal Mint estimated that between 50,000 and 200,000 dateless 20 pence coins had entered circulation, the first undated British coin to enter circulation in more than 300 years. It resulted from the accidental combination of old and new face tooling in a production batch, creating what is known as a mule, following the 2008 redesign which moved the date from the reverse (tails) to the obverse (heads) side.[65] | https://en.wikipedia.org/wiki/British_coinage
Decimal Day (Irish: Lá Deachúil)[1] in the United Kingdom and in Ireland was Monday 15 February 1971, the day on which each country decimalised its respective £sd currency of pounds, shillings, and pence.
Before this date, both the British pound sterling and the Irish pound (symbol "£") were subdivided into 20 shillings, each of 12 (old) pence, a total of 240 pence. With decimalisation, the pound kept its old value and name in each currency, but the shilling was abolished and the pound was divided into 100 new pence (abbreviated "p"). In the UK, the new coins initially featured the word "new", but in due course this was dropped. Each new penny was worth 2.4 old pence ("d.") in each currency.
Coins of half a new penny were introduced in the UK and in Ireland to maintain the approximate granularity of the old penny, but these were dropped in the UK in 1984 and in Ireland on 1 January 1987 as inflation reduced their value. An old value of 7 pounds, 10 shillings, and sixpence, abbreviated £7 10/6 or £7.10s.6d, became £7.52½. Amounts with a number of old pence which was not 0 or 6 did not convert into a round number of new pence.
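The worked example above can be checked mechanically. A minimal sketch, using exact fractions (the function name is illustrative):

```python
from fractions import Fraction

def lsd_to_decimal_pounds(pounds, shillings, pence):
    """Convert £sd (20 shillings to the pound, 12 pence to the shilling)
    into decimal pounds, exactly."""
    return pounds + Fraction(shillings, 20) + Fraction(pence, 240)

value = lsd_to_decimal_pounds(7, 10, 6)   # £7 10s 6d
print(float(value))                       # 7.525, i.e. £7.52½ as above
```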
The Russian ruble was the first decimal currency to be used in Europe, dating to 1704, though China had been using a decimal system for at least 2,000 years.[2] Elsewhere, the Coinage Act of 1792 introduced decimal currency to the United States, the first English-speaking country to adopt a decimalised currency. In France, the decimal French franc was introduced in 1795.
Before the 1970s, earlier efforts in the United Kingdom to introduce decimalised currency had failed; in 1824, the United Kingdom Parliament rejected Sir John Wrottesley's proposals to decimalise sterling, which had been prompted by the introduction of the French franc three decades earlier. Following this, little progress towards decimalisation was made in the United Kingdom for over a century, with the exception of the two-shilling silver florin, first issued in 1849 and worth 1/10 of a pound. A double florin or four-shilling piece, introduced in 1887, was a further step towards decimalisation, but failed to gain acceptance and was struck only between 1887 and 1890.
Though little further progress was made, the Decimal Association, founded in 1841 to promote decimalisation and metrication, saw interest in both causes boosted by a growing national realisation of the importance of ease in international trade, following the 1851 Great Exhibition; it was as a result of this growing interest that the florin was issued. A preliminary report issued in 1857 by the Royal Commission on Decimal Coinage considered the benefits and drawbacks of decimalisation, but failed to draw any conclusions on the adoption of a change in currency.[3] A final report in 1859 from the two remaining commissioners, Lord Overstone and the Governor of the Bank of England, John Hubbard, came out against the idea, claiming that it had "few merits".[4]
In 1862, the Select Committee on Weights and Measures favoured the introduction of decimalisation to accompany the introduction of metric weights and measures.[5]
The Royal Commission on Decimal Coinage (1918–1920), chaired by Lord Emmott, reported in 1920 that the only feasible scheme was to divide the pound into 1,000 mills (the pound and mill system, first proposed in 1824), but that it would be too inconvenient to introduce. A minority of four members said that the disruption would be worthwhile. A further three members recommended that the pound should be replaced by the royal, consisting of 100 halfpennies, with there then being 4.8 royals to the former pound.[6]
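The arithmetic behind the two rejected schemes is easy to check; a minimal sketch (the variable names are my own):

```python
OLD_PENCE_PER_POUND = 240            # 20 shillings of 12 old pence each

# Pound-and-mill scheme: divide the pound into 1,000 mills.
print(1000 / OLD_PENCE_PER_POUND)    # ~4.17 mills to the old penny

# Royal scheme: a royal of 100 halfpennies, i.e. 50 old pence.
print(OLD_PENCE_PER_POUND / 50)      # 4.8 royals to the former pound
```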
In 1960, a report prepared jointly by the British Association for the Advancement of Science and the Association of British Chambers of Commerce, followed by the success of decimalisation in South Africa, prompted the Government to set up the Committee of the Inquiry on Decimal Currency (Halsbury Committee) in 1961, which reported in 1963.[7] The adoption of the changes suggested in the report was announced on 1 March 1966.[8] The Decimal Currency Board (DCB) was created to manage the transition, although the plans were only approved by Parliament with the Decimal Currency Act 1967 (c. 47). The former Greater London Council leader Bill Fiske was named chairman of the Decimal Currency Board.
Consideration was given to introducing a new major unit of currency worth ten shillings in the old currency, with suggested names including the new pound, the royal and the noble. This would have made the "decimal penny" worth only slightly more than the old penny, an approach adopted in South Africa, Australia and New Zealand in the 1960s, which introduced the South African rand, Australian dollar and New Zealand dollar respectively, each equal in value to 10 shillings. However, Halsbury decided that the pound sterling's importance as a reserve currency meant that the pound should remain unchanged.
Under the new system, the pound was retained, but was divided into 100 new pence, denoted by the symbol p. New coinage was issued alongside the old coins. The 5p and 10p coins were introduced in April 1968 and were the same size, composition and value as the shilling and two shilling coins in circulation with them. In October 1969, the 50p coin was introduced, with the 10s. note withdrawn on 20 November 1970. This reduced the number of new coins required to be introduced on Decimal Day, meaning that the British public would already be familiar with three of the six new coins. Small booklets were made available, containing some or all of the new denominations.
The old halfpenny was withdrawn from circulation on 31 July 1969, and the half-crown (2s. 6d.) followed on 31 December, to ease the transition.[9] The farthing, last minted in 1956, had already ceased to be legal tender in 1961.
A substantial publicity campaign took place in the weeks before Decimal Day, including a song by Max Bygraves called "Decimalisation".[10] The BBC broadcast a series of five-minute programmes, titled "Decimal Five", to which The Scaffold contributed some specially written tunes.[11] ITV repeatedly broadcast a short drama called Granny Gets The Point starring Doris Hare, in which an elderly woman who does not understand the new system is taught to use it by her grandson.[12] At 10 a.m. on 15 February and again the following week, BBC 1 broadcast 'New Money Day', a Merry-Go Round schools' programme in which puppet maker Peter Firmin and his small friend Muskit encountered different prices and new coins when they visited the shops.[13][14]
Banks received stocks of the new coins in advance, which were issued to retailers shortly before Decimal Day to enable them to give change immediately after the changeover. Banks were closed from 3:30 p.m. on Wednesday 10 February 1971 to 10:00 a.m. on Monday 15 February to enable all outstanding cheques and credits in the clearing system to be processed and customers' account balances to be converted from £sd to decimal. In many banks, the conversion was done manually, as few bank branches were then computerised. February had been chosen for Decimal Day because it was the quietest time of the year for the banks, shops and transport organisations.
Many items were priced in both currencies for some time before and after the change. Prior to Decimal Day, items priced in both currencies displayed the pre-decimal price first, with the decimal price last, in parentheses. From Decimal Day onwards, this order was reversed, with the decimal price presented first and the pre-decimal price last, in parentheses; for example, 1s (5p) became 5p (1s). The latter order was used on most football programmes during the 1970–71 season. High-denomination (10p, 20p and 50p) stamps were issued on 17 June 1970.[9] Post offices were issued with simplified training stamps in the same colours as the upcoming decimal stamps.[15]
Exceptions to the 15 February introduction of decimalisation were British Rail and London Transport, which had gone decimal one day early, the former urging customers who chose to pay in pennies or threepenny pieces to do so in multiples of 6d (2½p, the lowest common multiple of the two systems).
Conversion tables were provided, showing how prices in £sd rounded to the new currency;[16] these included, for example, equating 3d to 1p and 9d to 4p. This led to some anomalies: school meals were charged at 1s 9d a day or 8s 9d for a five-day week; these became 9p a day or 45p a week, despite the conversion tables suggesting 8s 9d should be 44p.[17]
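The school-meals anomaly can be reproduced by converting before or after multiplying. A minimal sketch, assuming the table rounded to the nearest whole new penny (the text confirms only the entries 3d → 1p and 9d → 4p):

```python
def table_conversion(old_pence):
    """Old pence to new pence, rounded to the nearest whole penny
    (an assumed rule; it reproduces the 3d -> 1p and 9d -> 4p entries)."""
    return round(old_pence * 100 / 240)

daily = 1 * 12 + 9                  # 1s 9d in old pence
weekly = 8 * 12 + 9                 # 8s 9d in old pence
print(table_conversion(daily) * 5)  # 45p: five daily charges of 9p
print(table_conversion(weekly))     # 44p: the table's figure for 8s 9d
```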
Because of extensive preparations and the publicity campaigns organised by the British government, Decimal Day itself went smoothly. Some criticisms were levelled – for example, that the new halfpenny coin was relatively small, and that some traders had taken advantage of the transition to raise their prices – although in the latter case overall price adjustments slightly favoured the consumer.[citation needed] Some used new pennies as sixpences in vending machines.[18] After 15 February, shops continued to accept payment in old coins but always issued change in new coins. The old coins were then returned to banks, and so most of them were quickly taken out of circulation.
The new halfpenny, penny, and twopence coins were introduced on 15 February 1971. Within two weeks of Decimal Day, the old penny (1d) and old threepence (3d) coins had left circulation, and old sixpences had become somewhat rare.[18] On 31 August 1971, the 1d and 3d were officially withdrawn from circulation, ending the transition period to decimal currency.[19]
The government intended that in speech the new units would be called "new pence"; however, the British public quickly began to refer to pennies as "pee" when shortened, with "10p" pronounced "ten pee" rather than "ten new pence". Other previously common shortenings, such as "tuppence", were now rarely heard, and terms such as "tanner" (used for the silver sixpence), which previously designated amounts of money, were no longer used.[citation needed] However, some slang terms, such as "quid" and "bob", previously used for pounds and shillings respectively, survived from pre-decimal times. Amounts denominated in guineas (21s or £1.05) were still reserved for specialist transactions, and continued to be used in the sale of horses and at some auctions, amongst others.
The public information campaign over the preceding two years helped, as did the trick of getting a rough conversion from new pence into old shillings and pence by simply doubling the number of new pence and placing a solidus, or slash, between the digits: 17p multiplied by 2 gives 34, read as 3/4 ("three and four", or three shillings and four pence), with a similar process for the reverse conversion.[citation needed] The willingness of Britain's younger population to embrace decimalisation also helped, with elderly people having greater difficulty adapting; the phrase "How much is that in old money?", or even "How much is that in real money?", became associated with those who struggled with the change, before coming in the following decades to refer to conversions between metric and imperial weights and measures.[20][21] In shops from Decimal Day onwards, new stock would be universally priced in 'new money', though in smaller shops such as newsagents it was still possible to find stock priced in £sd for several years after 1971; however, remaining stock priced in £sd would still be charged at its equivalent in decimal currency.
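The doubling trick is simple enough to code directly; a minimal sketch (the function name is mine):

```python
def rough_old_money(new_pence):
    """Approximate old shillings/pence: double the new pence and put a
    solidus before the final digit, e.g. 17 -> '3/4'. Works for amounts
    of 5p and above, where the doubled value has at least two digits."""
    doubled = str(new_pence * 2)
    return doubled[:-1] + "/" + doubled[-1]

print(rough_old_money(17))   # 3/4 -- three shillings and four pence
```

It is an approximation: reading the last digit as pence treats one old penny as a tenth of a shilling rather than a twelfth, so 17p is really 3s 4.8d, not 3s 4d.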
Around Decimal Day, "Decimal Adders" and other converters were available to help people convert between the old and new coins.
In response to the change, some new coins were stamped with phrases such as "DUD".[22]
All pre-decimal coins, except for certain non-circulating coins such as crowns, sovereigns and double florins,[23] which were explicitly excluded from demonetisation, are now no longer legal tender. Several other pre-decimal coins remained in circulation beyond 1971 (see below), but have now all been withdrawn following changes to the standards and specifications of circulating coinage.
The sixpence (6d), worth exactly 2½p, was withdrawn in June 1980. This enabled the withdrawal of the decimal halfpenny coin in 1984.
Shillings and florins, together with their same-sized 5p and 10p coin equivalents, coexisted in circulation as valid currency until the early 1990s. In theory, this would have included coins dating back to 1816, but in practice the oldest were dated 1947, as older coins contained silver and their metal content was worth more than their nominal value.
The coins were withdrawn when smaller 5p and 10p coins were introduced in 1990 and 1992 respectively. The demonetisation of the larger-size 50p in 1998 means that there are now no sterling coins in everyday circulation dated earlier than 1971.
The face values of Maundy money coins were retained in new pence, increasing their worth by a factor of 2.4, as the coins continued to be legal tender as new pence.[24] The numismatic value of each coin, though, greatly exceeds its face value.
Commemorative 'decimal' Crowns dated 1972, 1977, 1980 and 1981 remain legal tender (with a face value of 25p), as do the £5 coins issued from 1990 onwards.[25]
The decimal halfpenny (½p), which had been introduced in 1971, remained in circulation until 1984, when its value had been greatly reduced by inflation. It was not struck after 1983, save for collectors' sets, with those dated 1984 struck only as proofs or in uncirculated mint sets. The decimal halfpenny was demonetised on 31 December 1984. The 50p piece was reduced in size in 1997, following the reduction in size of the 5p in 1990 and the 10p in 1992 (the large versions of all three have been demonetised). The 1p and 2p underwent a compositional change from bronze to plated steel in 1992. However, both denominations remain valid back to 1971, and they are the only coins circulating on Decimal Day that are still valid.
In 1982, the word "new" in "new penny" or "new pence" was removed from the inscriptions on coins, replaced by the number of pence in the denomination (for example, "ten pence" or "fifty pence"). This coincided with the introduction of a new 20p coin, which from the outset bore simply the legend "twenty pence".
A £1 coin was introduced into circulation in 1983,[26] and a £2 coin in 1998 (although a series of commemorative uni-metallic £2 coins had been issued between 1986 and 1996 to celebrate special occasions).
When the old £sd system (consisting of pounds, shillings, and pence) was in operation, the United Kingdom and Ireland operated within the sterling area, effectively a single monetary area. The Irish pound was created as a separate currency in 1927 with distinct coins and notes, but the terms of the Currency Act 1927 obliged the Irish currency commissioners to redeem Irish pounds on a fixed 1:1 basis, and so day-to-day banking operations continued exactly as they had been before the creation of the Irish pound.[27] The Irish pound was decimalised on 15 February 1971, the same date as the British pound.[28]
This arrangement continued until 1979, when Irish obligations to the European Monetary System led to Ireland breaking the historic link with sterling.[29]
In Ireland, all pre-decimal coins, except the 1s., 2s. and 10s. coins, were called in during the initial process between 1969 and 1972; the ten shilling coin, which had been recently issued and was in any event equivalent to 50p, was permitted to remain outstanding (though due to its silver content the coin did not circulate widely). The 1s. and 2s. were recalled in 1993 and 1994, respectively. Pre-decimal Irish coins may still be redeemed at their face value equivalent in euros at the Central Bank in Dublin.
Pre-decimal Irish coins were denoted with s for shillings and d for pence, abbreviations derived from the Latin solidi and denarii, in contrast to stamps, which instead bore Irish-language abbreviations (scilling ("shilling", abbreviated "s") and pingin ("penny", abbreviated "p")). After decimalisation, coins were marked with the Irish-language abbreviations. While British stamps switched from 'd' to 'p', Irish stamps (unlike the coins) printed the number with no accompanying letter; so a stamp worth 2 new pence was marked '2p' in the UK and simply '2' in Ireland.
The Irish conversion table was similar to the British one, except for the higher-value coins.
Ireland's new decimal coinage had face values of ½p, 1p, 2p, 5p, 10p and 50p.
The old shilling coin continued to circulate with a value of 5 new pence, and the old florin with a value of 10 new pence.[30] Unlike in the UK, where the sixpence continued to circulate at a value of 2½p, the Irish sixpence was withdrawn from circulation after decimalisation. The ten-shilling note was withdrawn from circulation, but the other Series A banknotes continued in use.[31]
A twenty-pence coin was introduced in 1986.[32] The decimal halfpenny (½p) remained in circulation until 1987, when its value had been greatly reduced by inflation. Very few were produced after the initial minting.[33]
In 1990, the pound coin was introduced,[34] and in 1992 the 5p and 10p coins were reduced in size. The old shilling and florin coins ceased to be legal tender at the same time.[35]
The Irish pound coins were withdrawn from circulation in 2002, to be replaced by the euro.[36] | https://en.wikipedia.org/wiki/Decimal_Day
Metrication or metrification is the act or process of converting to the metric system of measurement.[1] Countries all over the world have transitioned from local and traditional units of measurement to the metric system. This process began in France during the 1790s and has advanced steadily over two centuries, culminating in 95% of the world officially using only the modern metric system.[2] Nonetheless, certain countries and sectors are either still transitioning or have chosen not to adopt the metric system fully.
The process of metrication is typically initiated and overseen by a country's government, generally motivated by the need for a uniform measurement system for effective international cooperation in fields such as trade and science. Governments achieve metrication either through mandatory changes to existing units within a specified timeframe or through voluntary adoption.
While metric use is mandatory in some countries and voluntary in others, all countries, including the United States, have recognised and adopted the SI, albeit to different degrees. As of 2011, ninety-five percent of the world's population lived in countries where the metric system is the only legal system of measurement.[3]: p. 49, ch 2
According to the National Institute of Standards and Technology (NIST), only three countries do not have mandatory metric laws: Liberia, Myanmar, and the United States.[4][5] However, a research paper by Vera (2011) stated that in practice there were four additional such countries, namely the United States COFA countries (Federated States of Micronesia, Marshall Islands and Palau) and Samoa.
Samoa has since mandated metric trade.[6][3]: 60, 494–496
In 2018, the Liberian government pledged to adopt the metric system.[7] In 2013, the Myanmar Ministry of Commerce announced that Myanmar was preparing to adopt the metric system as the country's official system of measurement; metrication in Myanmar began and some progress has been made (road signs and temperatures are legislated to be in metric), but there has been very little progress in local trade.[8][9]
As of 2023, the United States has a national policy of adopting the metric system, based on the Metric Conversion Act of 1975, amended by the Omnibus Trade and Competitiveness Act of 1988 and Presidential Executive Order 12770 of 1991, and all United States government agencies are required to adopt it.[10]
The metrication process can take years to implement and complete: for instance, Guyana adopted the metric system in 2002 and was only able to make it mandatory in local trade in 2017, after the metric system had been fully adopted in schools.[11] Antigua and Barbuda, also officially metric, is moving slowly in its metrication process, with a new push in 2011 for all government agencies to convert by 2013 and the entire country to use the metric system by the first quarter of 2015.[12] Other metric Caribbean countries, such as Saint Lucia (officially metric since 2000), are still in the process of full conversion.[13]
The United Kingdom has officially embraced a dual measurement system. As of 2007, the United Kingdom has halted its metrication process, retaining imperial units: the mile and yard in road markings, pints for returnable milk containers, and (with Ireland) the pint for draught beer and cider sold in pubs.[14] Throughout the 1990s, the European Commission helped accelerate the metrication process for member states, implementing the Units of Measure Directive to promote trade. This acceleration caused public backlash in the United Kingdom, and in 2007 the United Kingdom announced that it had secured permanent exemptions for the uses listed above and, to appease British public opinion and to facilitate trade with the United States, the option to include imperial units alongside metric units indefinitely.[14][15]
The United Kingdom and the United States face ongoing resistance to metrication, which may be partially rooted in a belief that their cultural identity is intertwined with the traditional measurement systems they have historically used.[16] This resulted in a UK government review of the mandatory use of metric units in sales and trade. The outcome of this review, which drew over 100,000 responses, was that a majority had limited or no appetite for increased use of imperial measures.[17]
The metre was adopted as the exclusive measure in 1801 under the French Consulate, then the First French Empire until 1812, when Napoleon decreed the introduction of the mesures usuelles, which remained in use in France up to 1840, in the reign of Louis Philippe.[18] Meanwhile, the metre was adopted by the Republic of Geneva.[19] After the canton of Geneva joined Switzerland in 1815, Guillaume Henri Dufour published the first official Swiss map, for which the metre was adopted as the unit of length.[20][21] A Swiss-French binational officer, Louis Napoléon Bonaparte, was present when a baseline was measured near Zürich for the Dufour map, which would win the gold medal for the national map at the Exposition Universelle of 1855.[22][23][24]

Among the scientific instruments calibrated against the metre that were displayed at the Exposition Universelle was the Brunner apparatus, a geodetic instrument devised for measuring the central baseline of Spain, whose designer, Carlos Ibáñez e Ibáñez de Ibero, would represent Spain at the International Statistical Institute. In addition to the Exposition Universelle and the second Statistical Congress held in Paris, an International Association for obtaining a uniform decimal system of measures, weights, and coins was created there in 1855.[25][26][27][28] Copies of the Spanish standard would be made for Egypt, France and Germany.[29][30] These standards were compared to each other and with the Borda apparatus, which was the main reference for measuring all geodetic baselines in France.[31][32][33] These comparisons were essential because solid materials expand as the temperature rises. Indeed, one fact had constantly dominated all the fluctuations of ideas on the measurement of geodesic bases: the constant concern to accurately assess the temperature of standards in the field. The determination of this variable, on which the length of the measuring instrument depended, had always been considered by geodesists as so difficult and so important that one could almost say that the history of measuring instruments is almost identical with that of the precautions taken to avoid temperature errors.[34]

In 1867, the second general conference of the European Arc Measurement recommended the adoption of the metre in replacement of the toise. In 1869, the Saint Petersburg Academy of Sciences sent the French Academy of Sciences a report, drafted by Otto Wilhelm von Struve, Heinrich von Wild and Moritz von Jacobi, inviting it to undertake joint action to ensure the universal use of the metric system in all scientific work.[35] The same year, Napoleon III convened the International Metre Commission, which was to meet in Paris in 1870. The Franco-Prussian War broke out and the Second French Empire collapsed, but the metre survived.[36][37]
During the nineteenth century, the metric system of weights and measures proved a convenient political compromise during the unification processes in the Netherlands, Germany and Italy. In 1814, Portugal became the second country not part of the French Empire to officially adopt the metric system. Spain found it expedient in 1849 to follow the French example, and within a decade Latin America had also adopted the metric system, or had already adopted it, as in the case of Chile by 1848. There was considerable resistance to metrication in the United Kingdom and in the United States. Despite this, they were actually the first countries in the world to use a metric standard for cartography.[38][39][40][33][41]
The introduction of the metric system into France in 1795 was done on a district-by-district basis, with Paris being the first district. By modern standards the transition was poorly managed. Although thousands of pamphlets were distributed, the Agency of Weights and Measures, which oversaw the introduction, underestimated the work involved. Paris alone needed 500,000 metre sticks, yet one month after the metre became the sole legal unit of measure, the agency had only 25,000 in store.[42]: 269 This, combined with the excesses of the Revolution and the high level of illiteracy in 18th-century France, made the metric system unpopular.
Napoleon himself ridiculed the metric system but, as an able administrator, recognised the value of a sound basis for a system of measurement. Under the décret impérial du 12 février 1812 (imperial decree of 12 February 1812), a new system of measures – the mesures usuelles ("customary measures") – was introduced for use in small retail businesses; all government, legal and similar works still had to use the metric system, and the metric system continued to be taught at all levels of education.[43] That system reintroduced the names of many units used during the ancien régime, but their values were redefined in terms of metric units. Thus the toise was defined as being two metres, with six pieds making up one toise, twelve pouces making up one pied and twelve lignes making up one pouce. Likewise the livre was defined as being 500 g, each livre comprising sixteen onces and each once eight gros, and the aune as 120 centimetres.[44] This intermediate step eased the transition to a metric-based system.
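The 1812 redefinitions chain together, so each customary unit's metric value follows from the previous one. A minimal sketch tabulating them (the layout is mine; the values come from the text):

```python
# Length, in metres: the toise was defined as two metres.
toise = 2.0
pied  = toise / 6     # six pieds to the toise     -> ~0.333 m
pouce = pied / 12     # twelve pouces to the pied  -> ~0.0278 m
ligne = pouce / 12    # twelve lignes to the pouce -> ~0.0023 m

# Mass, in grams: the livre was defined as 500 g.
livre = 500.0
once  = livre / 16    # sixteen onces to the livre -> 31.25 g
gros  = once / 8      # eight gros to the once     -> ~3.9 g

aune = 1.20           # metres: the aune was defined as 120 cm

print(f"1 pouce = {pouce * 100:.2f} cm, 1 once = {once} g")
```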
By the Loi du 4 juillet 1837 (the law of 4 July 1837), Louis Philippe I effectively revoked the use of mesures usuelles by reaffirming the laws of measurement of 1795 and 1799, to be used from 1 May 1840.[45][46] However, many units of measure, such as the livre (for half a kilogram), remained in everyday use for many years,[46][47] and to a residual extent up to this day.
At the outbreak of the French Revolution, much of modern-day Germany and Austria were part of the Holy Roman Empire, which had become a loose federation of kingdoms, principalities, free cities, bishoprics and other fiefdoms, each with its own system of measurement, though in most cases the systems were loosely derived from the Carolingian system instituted by Charlemagne a thousand years earlier.
During the Napoleonic era, some of the German states moved to reform their systems of measurement using the prototype metre and kilogram as the basis of the new units. Baden, in 1810, for example, redefined the Ruthe (rod) as being 3.0 m exactly and defined its subunits as 1 Ruthe = 10 Fuß (feet) = 100 Zoll (inches) = 1,000 Linie (lines) = 10,000 Punkt (points) (for simplicity at the expense of grammar, these are the singular forms of each name), while the Pfund was defined as 500 g, divided into 30 Loth, each of 16.67 g.[48][49] Bavaria, in its reform of 1811, trimmed the Bavarian Pfund from 561.288 g to 560 g exactly, consisting of 32 Loth, each of 17.5 g,[50] while the Prussian Pfund remained at 467.711 g.[51]
After the Congress of Vienna there was a degree of commercial cooperation between the various German states, resulting in the German Customs Union (Zollverein). There were, however, still many barriers to trade until Bavaria took the lead in establishing the General German Commercial Code in 1856. As part of the code, the Zollverein introduced the Zollpfund (customs pound), which was defined as exactly 500 g and could be split into 30 Loth.[52] This unit was used for inter-state movement of goods, but was not applied in all states for internal use.
In 1832, Carl Friedrich Gauss studied the Earth's magnetic field and proposed adding the second to the basic units of the metre and the kilogram in the form of the CGS system (centimetre, gram, second). In 1836, he founded the Magnetischer Verein, the first international scientific association, in collaboration with Alexander von Humboldt and Wilhelm Eduard Weber. Geophysics (the study of the Earth by the means of physics) preceded physics[citation needed] and contributed to the development of its methods. It was primarily a natural philosophy whose object was the study of natural phenomena such as the Earth's magnetic field, lightning and gravity. The coordination of the observation of geophysical phenomena at different points of the globe was of paramount importance and was at the origin of the creation of the first international scientific associations. The foundation of the Magnetischer Verein would be followed by that of the Central European Arc Measurement (German: Mitteleuropäische Gradmessung) on the initiative of Johann Jacob Baeyer in 1863, and by that of the International Meteorological Organisation, whose second president, the Swiss meteorologist and physicist Heinrich von Wild, represented Russia at the International Committee for Weights and Measures (CIPM).[53][54][55] In 1867, the European Arc Measurement (German: Europäische Gradmessung) called for the creation of a new international prototype metre (IPM) and an arrangement whereby national standards could be compared with it. The French government gave practical support to the creation of an International Metre Commission, which met in Paris in 1870 and again in 1872 with the participation of about thirty countries. The Metre Convention was signed on 20 May 1875 in Paris, and the International Bureau of Weights and Measures was created under the supervision of the CIPM.
Although the Zollverein collapsed after the Austro-Prussian War of 1866, the metric system became the official system of measurement in the newly formed German Empire in 1872[42]: 350 and of Austria in 1875.[56] The Zollpfund ceased to be legal in Germany after 1877.[57]
The Cisalpine Republic, a North Italian republic set up by Napoleon in 1797 with its capital at Milan, first adopted a modified form of the metric system based on the braccio cisalpino (Cisalpine cubit), which was defined to be half a metre.[58] In 1802 the Cisalpine Republic was renamed the Italian Republic, with Napoleon as its head of state. The following year the Cisalpine system of measure was replaced by the metric system.[58]
In 1806, the Italian Republic was replaced by the Kingdom of Italy, with Napoleon as its emperor. By 1812, all of Italy from Rome northwards was under the control of Napoleon, either as French departments or as part of the Kingdom of Italy, ensuring that the metric system was in use throughout this region.
After the Congress of Vienna, the various Italian states reverted to their original systems of measurement, but in 1845 the Kingdom of Piedmont and Sardinia passed legislation to introduce the metric system within five years. By 1860, most of Italy had been unified under the King of Sardinia, Victor Emmanuel II, and under Law 132 of 28 July 1861 the metric system became the official system of measurement throughout the kingdom. Numerous Tavole di ragguaglio (conversion tables) were displayed in shops until 31 December 1870.[58]
The Netherlands (as the revolutionary Batavian Republic) began to use the metric system from 1799 but, as with its co-revolutionaries in France, encountered numerous practical difficulties. Subsequently, as part of the First French Empire from 1809, the Netherlands used Napoleon's mesures usuelles from their introduction in 1812 until the fall of his empire in 1815. Under the (Dutch) Weights and Measures Act of 21 August 1816 and the Royal decree of 27 March 1817 (Koningklijk besluit van den 27 Maart 1817), the newly formed Kingdom of the Netherlands abandoned the mesures usuelles in favour of the "Dutch" metric system (Nederlands metrisch stelsel), in which metric units were simply given the names of units of measure that were then in use: for instance, the ons (ounce) was defined as 100 g.[59]
In 1875, Norway was the first country to ratify the Metre Convention, and it was seen as an important step towards Norwegian independence. The decision to adopt the metric system is said to have been the Norwegian Parliament's fastest decision in peacetime.
In August 1814, Portugal officially adopted the metric system, but with the names of the units substituted by Portuguese traditional ones. In this system, the basic units were the mão-travessa (hand) = 1 decimetre (10 mão-travessas = 1 vara (yard) = 1 metre), the canada = 1 litre and the libra (pound) = 1 kilogram.[60]
Until the ascent of the Bourbon monarchy in Spain in 1700, each region of Spain had its own system of measurement. The new Bourbon monarchy tried to centralise control, and with it the system of measurement. There were debates regarding the desirability of retaining the Castilian units of measure or, in the interests of harmonisation, adopting the French system.[61] Although Spain assisted Méchain in his meridian survey, the Government feared the French revolutionary movement and reinforced the Castilian units of measure to counter such movements. By 1849, however, it proved difficult to maintain the old system, and in that year the metric system became the legal system of measure in Spain.[61]
The Spanish Royal Academy of Science urged the Government to approve the creation of a large-scale map of Spain in 1852. The following year, Carlos Ibáñez e Ibáñez de Ibero was appointed to undertake this task. All the scientific and technical material had to be created. Ibáñez e Ibáñez de Ibero and Saavedra went to Paris to supervise the production by Brunner of a measuring instrument which they had devised and which they later compared with Borda's double-toise No. 1, which was the main reference for measuring all geodetic bases in France and whose length was by definition 3.8980732 metres at a specified temperature.[38][62]
In 1865, the triangulation of Spain was connected with those of Portugal and France. In 1866, at the conference of the Association of Geodesy in Neuchâtel, Ibáñez announced that Spain would collaborate in remeasuring the French meridian arc. In 1879, Ibáñez and François Perrier (representing France) completed the junction between the geodetic networks of Spain and Algeria and thus completed the measurement of the French meridian arc, which extended from Shetland to the Sahara.
In 1866, Spain and Portugal joined the Central European Arc Measurement, which would become the European Arc Measurement the next year. In 1867, at the second general conference of the geodetic association held in Berlin, the question of an international standard unit of length was discussed, in order to combine the measurements made in different countries to determine the size and shape of the Earth. Following recommendations drawn up by a committee chaired by Otto Wilhelm von Struve, director of the Pulkovo Observatory in St. Petersburg, the conference proposed the adoption of the metre and the creation of an international metre commission, after a preliminary discussion held in Neuchâtel between Johann Jacob Baeyer, director of the Royal Prussian Geodetic Institute, Adolphe Hirsch, founder of the Neuchâtel Observatory, and Carlos Ibáñez e Ibáñez de Ibero, the Spanish representative, founder and first director of the Instituto Geográfico Nacional.[63][64][65][66][67]
In November 1869, the French government issued invitations to join this commission. Spain accepted, and Carlos Ibáñez e Ibáñez de Ibero took part in the committee of preparatory research from the first meeting of the International Metre Commission in 1870. He became president of the Permanent Committee of the International Metre Commission in 1872. In 1874, he was elected president of the Permanent Commission of the European Arc Measurement. He also presided over the general conference of the European Arc Measurement held in Paris in 1875, when the association decided on the creation of an international geodetic standard for the measurement of baselines.[68] He represented Spain at the 1875 conference of the Metre Convention, which was ratified the same year in Paris. The Spanish geodesist was elected as the first president of the International Committee for Weights and Measures. His activities resulted in the distribution of a platinum-iridium prototype of the metre to all states parties to the Metre Convention during the first meeting of the General Conference on Weights and Measures in 1889. These prototypes defined the metre right up until 1960.
In 1801, the Helvetic Republic, at the instigation of Johann Georg Tralles, promulgated a law introducing the metric system. However, this was never applied, because in 1803 the competence for weights and measures returned to the cantons. On the territory of the current canton of Jura, then annexed to France (Mont-Terrible), the metre was adopted in 1800. The canton of Geneva adopted the metric system in 1813, the canton of Vaud in 1822, the canton of Valais in 1824 and the canton of Neuchâtel in 1857. In 1835, twelve cantons of the Swiss Plateau and the north-east adopted a concordat based on the federal foot (exactly 0.3 m), which entered into force in 1836. The cantons of central and eastern Switzerland, as well as the Alpine cantons, continued to use the old measures.[19][69]
Guillaume-Henri Dufour founded a topographic office in Geneva in 1838 (the future Federal Office of Topography), which published under his direction, from 1845 to 1864, the first official map of Switzerland, on the basis of new cantonal measurements. This map, at 1:100,000 and engraved on copper, suggested the relief by hatching and shadows. The map projection adopted by the commission was the Bonne projection, centred on the Bern Observatory (5° 6' 10.8'' east of the Paris meridian), although this point was much closer to the western end of Switzerland than to its eastern end; but its position was well known, and there was no more central observatory. The scale was set at 1:100,000 because it was considered more suitable for a country as rugged as Switzerland than the 1:80,000 adopted for the large map of France; the two maps were in any case inconsistent, as the meridians of the map of Switzerland tilted in the opposite direction to those of the map of France. The map commission wanted to adopt decimal measures, and Switzerland did not have an already existing map which, like the Cassini map, used a scale close to 1:86,400, i.e. 1 ligne (1⁄12 of a French inch) to 100 toises (i.e. 600 French feet). The metre was adopted as the linear measure, and the entire map was divided into twenty-five sheets: five east–west and five north–south. Each sheet of the map showed two scales, one purely metric, the other in Swiss leagues of 4,800 metres. The frame was divided into sexagesimal minutes and centesimal minutes; the latter, each subdivided into ten parts, had the advantage of showing kilometres in the direction of the meridians, so that there were additional scales on the sides of the sheet for evaluating distances.[20][21][70]
According to the 1848 Constitution, the federal foot was to come into force throughout the country. In Geneva, a committee chaired by Guillaume Henri Dufour militated in favour of maintaining the decimal metric system in the French-speaking cantons and against the standardisation of weights and measures in Switzerland on the basis of the metric foot. In 1868, the metric system was legalised alongside the federal foot, a first step towards its definitive introduction. Cantonal calibrators were supervised by a Federal Bureau of Verification created in 1862, whose management was entrusted to Heinrich von Wild from 1864. In 1875, the responsibility for weights and measures was transferred back from the cantons to the Confederation, and Switzerland (represented by Adolphe Hirsch) joined the Metre Convention. The same year, a federal law imposed the metric system from 1 January 1877. In 1977, Switzerland adopted the International System of Units.[19][54][71][72][73]
The Weights and Measures Act 1824 (5 Geo. 4. c. 74) imposed one standard 'imperial' system of weights and measures on the British Empire.[74] The effect of this act was to standardise existing British units of measure rather than to align them with the metric system.
During the next eighty years a number of parliamentary select committees recommended the adoption of the metric system, each with a greater degree of urgency, but Parliament prevaricated. A select committee report of 1862 recommended compulsory metrication, but with an "intermediate permissive phase"; Parliament responded in 1864 by legalising metric units only for 'contracts and dealings'.[75] The United Kingdom initially declined to sign the Treaty of the Metre, but did so in 1883. Meanwhile, British scientists and technologists were at the forefront of the metrication movement – it was the British Association for the Advancement of Science that promoted the CGS system of units as a coherent system,[76]: 109 and it was the British firm Johnson Matthey that was accepted by the CGPM in 1889 to cast the international prototype metre and kilogram.[77]
In 1895, another parliamentary select committee recommended the compulsory adoption of the metric system after a two-year permissive period. The Weights and Measures (Metric System) Act 1897 (60 & 61 Vict. c. 46) legalised metric units for trade, but did not make them mandatory.[75] A bill to make the metric system compulsory, intended to help the British industrial base fight off the challenge of its nascent German counterpart, passed through the House of Lords in 1904, but did not pass the House of Commons before the next general election was called. Following opposition by the Lancashire cotton industry, a similar bill was defeated in the House of Commons in 1907 by 150 votes to 118.[75]
In 1965, the UK began an official programme of metrication and, as of 2025, metric is the official measurement system in the United Kingdom for all regulated trading by weight or measure; however, the imperial pint remains the sole legal unit for milk in returnable bottles and for draught beer and cider in British pubs. Imperial units are also legally permitted to be used alongside metric units on food packaging and in price indications for goods sold loose.[78] The UK government undertook a "Choice on units of measurement" consultation, which found that just over 1% of respondents wished to revert to or increase the use of imperial units, and as such kept the current regulations on the sale of goods.[79]
In addition, imperial units may be used exclusively where a product is sold by description rather than by weight/mass/volume: e.g. television screen and clothing sizes tend to be given in inches only, but a piece of material priced per inch would be unlawful unless the metric price was also shown.
The general public still uses imperial units in common language for height and weight, and imperial units are the norm when discussing longer distances such as journeys by car, but otherwise metric measurements are often used.[citation needed]
In 1805, the Swiss geodesist Ferdinand Rudolph Hassler brought copies of the French metre and kilogram to the United States.[80][81] In 1830, Congress decided to create uniform standards for length and weight in the United States.[82] Hassler was mandated to work out the new standards and proposed adopting the metric system.[82] Congress instead opted for the British Parliamentary Standard of 1758 and the Troy Pound of Great Britain of 1824 as length and weight standards.[82] Nevertheless, the primary baseline of the Survey of the Coast (renamed the United States Coast Survey in 1836 and the United States Coast and Geodetic Survey in 1878) was measured in 1834 at Fire Island using four 2-metre (6 ft 7 in) iron bars constructed to Hassler's specification in the United Kingdom and brought back to the United States in 1815.[83][84][81] All distances measured by the Survey of the Coast, Coast Survey, and Coast and Geodetic Survey were referred to the metre.[38][39] In 1866, the United States Congress passed a bill making it lawful to use the metric system in the United States. The bill, which was permissive rather than mandatory in nature, defined the metric system in terms of customary units rather than with reference to the international prototype metre and kilogram.[85][86]: 10–13 Ferdinand Rudolph Hassler's use of the metre in coastal surveying, which had been an argument for the Metric Act of 1866 allowing the use of the metre in the United States, probably also played a role in the choice of the metre as the international scientific unit of length and in the proposal by the European Arc Measurement (German: Europäische Gradmessung) to "establish a European international bureau for weights and measures".[39][38][87][88]
By 1893, the reference standards for customary units had become unreliable. Moreover, the United States, being a signatory of the Metre Convention, was in possession of national prototype metres and kilograms that were calibrated against those in use elsewhere in the world. This led to the Mendenhall Order, which redefined the customary units by referring to the national metric prototypes, but used the conversion factors of the 1866 act.[86]: 16–20 In 1896, a bill that would make the metric system mandatory in the United States was presented to Congress. Twenty-three of the 29 people who gave evidence before the congressional committee considering the bill were in favour of it, but six were against. Four of the six dissenters represented manufacturing interests and the other two were from the United States Revenue service. The grounds cited were the cost and inconvenience of the change-over. The bill was not enacted. Subsequent bills suffered a similar fate.[56]
The United States mandated the acceptance of the metric system in 1866 for commercial and legal proceedings, without displacing its customary units.[89] The non-mandatory nature of the adoption of the SI has resulted in a much slower pace of adoption in the US than in other countries.[90]
In 1971, the US National Bureau of Standards completed a three-year study of the impact of increasing worldwide metric use on the US. The study concluded with a report to Congress entitled A Metric America – A Decision Whose Time Has Come. Since then, metric use has increased in the US, principally in the manufacturing and educational sectors. Public Law 93-380, enacted 21 August 1974, states that it is the policy of the US to encourage educational agencies and institutions to prepare students to use the metric system of measurement with ease and facility as a part of the regular education program. On 23 December 1975, President Gerald Ford signed Public Law 94–168, the Metric Conversion Act of 1975. This act declares a national policy of coordinating the increasing use of the metric system in the US. It established a US Metric Board, whose functions were transferred as of 1 October 1982 to the Department of Commerce, Office of Metric Programs, to coordinate the voluntary conversion to the metric system.[91]
In January 2007, NASA decided to use metric units for all future Moon missions, in line with the practice of other space agencies.[92]
The British metrication programme signalled the start of metrication programmes elsewhere in the Commonwealth, though India had started its programme in 1959, six years before the United Kingdom. South Africa (then not a member of the Commonwealth) set up a Metrication Advisory Board in 1967, New Zealand set up its Metric Advisory Board in 1969, Australia passed the Metric Conversion Act in 1970, and Canada appointed a Metrication Commission in 1971.
Metrication in Australia, New Zealand and South Africa was essentially complete within a decade, while in Canada metrication has been stalled since the 1970s. In Canada, the square foot is still widespread for commercial and residential advertisements, and partially in construction, because of the close trade relations with the United States. Metric measurements on food products such as canned food are often merely the equivalent of the still widely used imperial units such as the ounce and the pound; butter in Canada is sold in 454 g packages, the equivalent of one pound. The railways of Canada, such as the Canadian National and Canadian Pacific, as well as commuter rail services, continue to measure their trackage in miles and their speed limits in miles per hour because they also operate in the United States (although urban railways, including subways and light rail, have adopted kilometres and kilometres per hour).[93] Canadian railcars show weight figures in both imperial and metric. Most other Commonwealth countries adopted the metric system during the 1970s.[94]
Apart from the United Kingdom and Canada, which have effectively halted their metrication programmes, the great majority of countries using the imperial system completed official metrication during the second half of the 20th century or the first decade of the 21st century. The most recent to complete this process was the Republic of Ireland, which began metric conversion in the 1970s and completed it in early 2005.[95] Hong Kong uses three systems (Chinese, imperial, and metric), and all three are permitted for use in trade.[96]
There are three common ways that nations convert from traditional measurement systems to the metric system. The first is the quick or "big-bang" route. The second is to phase in units over time and progressively outlaw traditional units; this method, favoured by some industrial nations, is slower and generally less complete. The third is to redefine traditional units in metric terms; this has been used successfully where traditional units were ill-defined and had regional variations.
The "Big-Bang" way is to simultaneously outlaw the use ofpre-metricmeasurement, metricate, reissue all government publications and laws, and change education systems to metric.Indiawas the first Commonwealth country to use this method of conversion. Its changeover lasted from 1 April 1960, when metric measurements became legal, to 1 April 1962, when all other systems were banned. The Indian model was extremely successful and was copied over much of the developing world. Two industrialized Commonwealth countries, Australia andNew Zealand, also did a quick conversion to metric.
The phase-in way is to pass a law permitting the use of metric units in parallel with traditional ones, followed by education in metric units, then a progressive ban on the use of the older measures. This has generally been a slow route to metric. The British Empire permitted the use of metric measures in 1873, but the changeover was not completed in most Commonwealth countries other than India and Australia until the 1970s and 1980s, when governments took an active role in metric conversion. In the United Kingdom and Canada, the process is still incomplete. Japan also followed this route and did not complete the changeover for 70 years. By law, loose goods sold with reference to units of quantity have to be weighed and sold using the metric system. In 2001, EU directive 80/181/EEC stated that supplementary units (imperial units alongside metric, including labelling on packages) would become illegal from the beginning of 2010. In September 2007,[15] a consultation process was started which resulted in the directive being modified to permit supplementary units to be used indefinitely.
The third method is to redefine traditional units in terms of metric values. These redefined "quasi-metric" units often stay in use long after metrication is said to have been completed. Resistance to metrication in post-revolutionary France convinced Napoleon to revert to mesures usuelles (usual measures), and, to some extent, the names remain throughout Europe. In 1814, Portugal adopted the metric system, but with the names of the units substituted by Portuguese traditional ones. In this system, the basic units were the mão-travessa (hand) = 1 decimetre (10 mão-travessas = 1 vara (yard) = 1 metre), the canada = 1 litre and the libra (pound) = 1 kilogram.[60] In the Netherlands, 500 g is informally referred to as a pond (pound) and 100 g as an ons (ounce), and in Germany and France, 500 g is informally referred to respectively as ein Pfund and une livre ("one pound").[137]
In Denmark, the redefined pund (500 g) is occasionally used, particularly among older people and (older) fruit growers, since these were originally paid according to the number of pounds of fruit produced. In Sweden and Norway, a mil (Scandinavian mile) is informally equal to 10 km, and this has continued to be the predominantly used unit in conversation when referring to geographical distances. In the 19th century, Switzerland had a non-metric system completely based on metric terms (e.g. 1 Fuss (foot) = 30 cm, 1 Zoll (inch) = 3 cm, 1 Linie (line) = 3 mm). In China, the jin now has a value of 500 g and the liang is 50 g.
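Taken together, these survivals amount to a small lookup table of redefined values; a minimal sketch (the dictionary layout is mine, the values come from the text):

```python
# Informal "quasi-metric" units mentioned above, keyed by (place, unit).
QUASI_METRIC = {
    ("Netherlands", "pond"): "500 g",
    ("Netherlands", "ons"): "100 g",
    ("Germany", "Pfund"): "500 g",
    ("France", "livre"): "500 g",
    ("Denmark", "pund"): "500 g",
    ("Sweden/Norway", "mil"): "10 km",
    ("China", "jin"): "500 g",
    ("China", "liang"): "50 g",
}

for (place, unit), value in QUASI_METRIC.items():
    print(f"{place}: one {unit} = {value}")
```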
Surveys are performed by various interest groups or the government to determine the degree to which ordinary people change to using metric in their daily lives. In countries that have recently changed, older segments of the population tend still to use the older units.[138]
As of 2024, the metric system predominates in most of the world; however, specific industries are more resistant to metrication. For example:
Air and sea transportation commonly use the nautical mile. This is about one minute of arc of latitude along any meridian arc and is precisely defined as 1852 metres (about 1.151 miles). It is not an SI unit. The prime unit of speed or velocity for maritime and air navigation remains the knot (nautical mile per hour).
The prime unit of measure for aviation (altitude, or flight level) is usually estimated from air pressure values; in many countries it is still described in nominal feet, although many others employ nominal metres. The International Civil Aviation Organization (ICAO) sets the measurement policies applicable to aviation.
Consistent with ICAO policy, aviation has undergone a significant amount of metrication over the years. For example, runway lengths are usually given in metres. The United States metricated the data interchange format (METAR) for temperature reports in 1996, which has since indicated temperature in Celsius.[146] Metrication is also gradually taking place in cargo mass and dimensions and in fuel volume and mass.
In former Soviet countries and China, the metric system is used in aviation (although in Russia altitudes above the transition level are given in feet).[147][148] Sailplanes use the metric system in many European countries.
In 1975, the assembly of the International Maritime Organization (IMO) decided that future conventions of the International Convention for the Safety of Life at Sea (SOLAS) and other future IMO instruments should use SI units only.[149]
In the United Kingdom, some of the population continues to resist metrication to varying degrees. The traditional imperial measures are preferred by a majority and continue to have widespread use in some applications.[150][151] The metric system is used by most businesses,[152] and is used for most trade transactions. Metric units must be used for certain trading activities (selling by weight or measure, for example), although imperial units may continue to be displayed in parallel.[153]
British law has enacted the provisions of European Union directive 80/181/EEC, which catalogues the units of measure that may be used for "economic, public health, public safety and administrative purposes".[154] These units consist of the recommendations of the General Conference on Weights and Measures,[76] supplemented by some additional units of measure that may be used for specified purposes.[155] Metric units could be legally used for trading purposes for nearly a century before metrication efforts began in earnest. The government had been making preparations for the conversion from imperial units since the 1862 Select Committee on Weights and Measures recommended the conversion,[156] and the Weights and Measures Act of 1864 and the Weights and Measures (Metric System) Act of 1896 legalised the metric system.[157]
In 1965, with lobbying from British industries and the prospect of joining the Common Market, the government set a 10-year target for full conversion and created the Metrication Board in 1969. Metrication occurred in some areas during this time period, including the re-surveying of Ordnance Survey maps in 1970, decimalisation of the currency in 1971, and teaching the metric system in schools. No plans were made to make the use of the metric system compulsory, and the Metrication Board was abolished in 1980 following a change in government.[158]
The United Kingdom avoided having to comply with the 1989 European Units of Measurement Directive (89/617/EEC), which required all member states to make the metric system compulsory, by negotiating derogations (delayed switchovers), including for miles on road signs and for pints for draught beer, cider, and milk sales.[159]
Immediately following the United Kingdom's vote to withdraw from the European Union, it was reported that some retailers requested to revert to imperial units, with some reverting without permission. A poll following the 2016 vote also found that 45% of Britons sought to revert to selling produce in imperial units.[160]
The UK government started a consultation on 3 June 2022 on the choice of units of measurement markings.[161]
Imperial units remain in common everyday use for human body measurements, in particular stones and pounds for weight, and feet and inches for height.
Fuel economy is often advertised in miles per imperial gallon, which may cause confusion for users of US gallons, for example with American-manufactured cars.[162]
Heating, air conditioning, and gas cooking appliances occasionally display power in British thermal units per hour (BTU/h).[163]
Over time, the metric system has influenced the United States through international trade and standardisation. The use of the metric system was made legal as a system of measurement in 1866,[164] and the United States was a founding member of the International Bureau of Weights and Measures in 1875.[165] The system was officially adopted by the federal government in 1975 for use in the military and government agencies, and as the preferred system for trade and commerce.[166] Attempts in the 1990s to make it mandatory for federal and state road signage to use metric units failed, and it remains voluntary.[167]
A 1992 amendment to the Fair Packaging and Labeling Act (FPLA), which took effect in 1994, required labels on federally regulated "consumer commodities"[168] to include both metric and US customary units. As of 2013, all but one US state (New York) have passed laws permitting metric-only labels for the products they regulate.[169]
After many years of informal or optional metrication, the American public and much of private business and industry still use US customary units today.[170] At least two states, Kentucky and California, have even moved towards demetrication of highway construction projects.[171][172][173]
Canada legally allows for dual labelling of goods provided that the metric unit is listed first and that there is a distinction of whether a liquid measure is a US or a Canadian (imperial) unit.[174]
Belize, which is a former British colony, uses both the metric and British imperial systems.[175][176] Miles are the most commonly used unit for measuring distance,[177] and gasoline is sold in US gallons (similar to neighboring countries in Central America).
Confusion over units during the process of metrication can sometimes lead to accidents. In 1983, an Air Canada Boeing 767, nicknamed the "Gimli Glider" following the incident, ran out of fuel in midflight. The incident was caused, in large part, by confusion over the conversion between litres, kilograms, and pounds, resulting in the aircraft receiving 22,300 pounds (10,100 kg) of fuel instead of the required 22,300 kilograms (49,200 lb).[178]
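The arithmetic behind the shortfall is easy to reproduce. The sketch below (Python) uses an approximate pounds-per-kilogram factor chosen for illustration, not a figure from the accident report:

    LB_PER_KG = 2.20462          # approximate pounds per kilogram (assumed)

    required_kg = 22_300         # fuel the flight needed, in kilograms
    loaded_lb = 22_300           # the same number was loaded, but in pounds
    loaded_kg = loaded_lb / LB_PER_KG

    print(f"loaded {loaded_kg:,.0f} kg of the required {required_kg:,} kg")
    # loaded 10,116 kg of the required 22,300 kg: less than half the fuel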
While not strictly an example of national metrication, the use of two different measurement systems was a contributing factor in the loss of the Mars Climate Orbiter in 1999. The National Aeronautics and Space Administration (NASA) specified metric units in the contract. NASA and other organisations worked in metric units, but one subcontractor, Lockheed Martin, provided thruster performance data to the team in pound-force seconds instead of newton-seconds. The spacecraft was intended to orbit Mars at about 150 kilometres (93 mi) in altitude, but the incorrect data meant that it descended to about 57 kilometres (35 mi). As a result, it burned up in the Martian atmosphere.[179]
A non-decimal currency is a currency that has sub-units that are a non-decimal fraction of the main unit, i.e. the number of sub-units in a main unit is not a power of 10. Historically, most currencies were non-decimal, though today virtually all are decimal.
Today, only two countries have non-decimal currencies: Mauritania, where 1 ouguiya = 5 khoums, and Madagascar, where 1 ariary = 5 iraimbilanja.[1] However, these are only theoretically non-decimal, as in both cases the value of each sub-unit is too small to be of any practical use and coins of sub-unit denominations are no longer used.
The official currency of the Sovereign Military Order of Malta, which retains its claims of sovereignty under international law and has been granted permanent observer status at the United Nations, is the Maltese scudo, which is subdivided into 12 tarì, each of 20 grani, with 6 piccoli to the grano.
All other contemporary currencies are either decimal or have no sub-units at all, either because they have been abolished or because they have lost all practical value and are no longer used.
Historically, a variety of non-decimal systems have been used. For example, a vigesimal system (base 20) was in use within ancient Mesoamerica. A sexagesimal system (base 60) was in wide use in ancient Mesopotamia, as this system was used in measurements of time, geometry, currency, and other fields.
Decimal currencies also have disadvantages. The principal advantage of most non-decimal currencies is that they are more easily divided, particularly by numbers such as 3 and 8, than decimal currencies, because they are based on conversion values that have a large number of factors. A currency with a 100:1 ratio is divisible by neither 3 nor 8 without a remainder. For example, one-third of an Austrian Gulden (of 60 Kreuzer) was 20 Kreuzer, while a third of a dollar is 33.3 cents. This divisibility is useful when trading and when sharing out sums of money. For these reasons, many states chose in the past to adopt non-decimal currencies based on divisions into sub-units such as 12 or 20, sometimes with more than one tier of sub-units. A quick check of this divisibility argument is sketched below.
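A minimal sketch in Python, using exact fractions:

    from fractions import Fraction

    # One third of the main unit, expressed in subunits:
    for subunits in (60, 100):
        third = Fraction(subunits, 3)
        exact = "exact" if third.denominator == 1 else "not exact"
        print(f"1/3 of a {subunits}-subunit unit = {third} subunits ({exact})")
    # 1/3 of a 60-subunit unit = 20 subunits (exact)
    # 1/3 of a 100-subunit unit = 100/3 subunits (not exact)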
There is a second, more fortuitous, way in which non-decimal currencies emerged. Often multiple currencies would circulate concurrently in an economy, with non-decimal exchange rates between them. For example, a group of related currencies called Reichsthaler, rixdollar, riksdaler, rijksdaalder, and rigsdaler was widely accepted as a common accounting unit which represented a variety of local coins in Stockholm, Copenhagen, Antwerp, and Cologne. Inflation developed locally, with changing subdivisions. For instance, the riksdaler was equivalent to 2 silver dalers in Sweden in 1700, but after the 1715-19 devaluation of the silver daler coin, one riksdaler equated to 3 daler silvermint until 1776. Most currencies made no distinction between units of accounting and units represented by coins and thus created such shifts. (A similar example in the UK was the guinea, which was worth slightly more than one pound sterling.)
In general, when the major unit was, say, a gold coin and the minor units were silver or copper coins, a change in the relative values of the metals, perhaps because of an increase or decrease in the supply of one of them, would also change the number of minor units equivalent to one major unit.
Thus the following list does not give a complete picture: it is a list of examples picked from different periods. Many of the subdivisions given below underwent historical changes.
The Russian ruble is often said to have become the first decimalized currency when Peter the Great established the ratio 1 ruble = 100 kopecks in 1701. The Japanese were in some sense earlier, calculating with the silver momme and its decimal subunits, but the momme was not a coin but a unit of weight equivalent to 3.75 g: accounting was by weight of silver. The British pound sterling was the last major currency to be decimalized, on 15 February 1971. The Maltese waited just one year (1972) before following suit, and Nigeria followed in 1973. An early proposal for decimalizing the pound in the 19th century envisaged a system of 1 pound = 10 florins = 100 dimes = 1000 cents. However, the only step taken at that time was the introduction in 1849 of a florin (two shillings) coin (the earliest examples bore the inscription "One Tenth of a Pound").
A partial listing of former non-decimal currencies (giving only units of account):
In the Eurozone, in the interval between fixing the conversion factors between national currencies and the euro and the introduction of euro coins, the national currencies were non-decimal subdivisions of the euro.
In computing, decimal32 is a decimal floating-point computer numbering format that occupies 4 bytes (32 bits) in computer memory.
Like the binary16 and binary32 formats, decimal32 uses less space than binary64, currently the most common format.
decimal32 supports 'normal' values, which can have 7-digit precision from ±1.000000×10^−95 up to ±9.999999×10^+96, plus 'subnormal' values with ramp-down relative precision down to ±1.×10^−101 (one digit), signed zeros, signed infinities and NaN (Not a Number). The encoding is somewhat complex; see below.
The binary format with the same bit size, binary32, has an approximate range from the subnormal minimum ±1×10^−45, over the normal minimum with full 24-bit precision ±1.1754944×10^−38, to the maximum ±3.4028235×10^38.
decimal32 values are encoded in a 'not normalized', near-'scientific' format, combining some bits of the exponent with the leading bits of the significand in a 'combination field'.
Besides the special cases of infinities and NaNs, there are four points relevant to understanding the encoding of decimal32. The significand can be read either as an integer or as a fraction with the radix dot after the first digit; both views produce the same result [2019 version[1] of IEEE 754, clause 3.3, page 18], and both apply to the BID as well as the DPD encoding. For decimalxxx datatypes the second view is more common, while for binaryxxx datatypes the first is; the biases are different for each datatype.
In all cases for decimal32, the value represented is (−1)^sign × 10^(exponent−101) × significand, with the significand read as a 7-digit integer.
Alternatively, it can be understood as (−1)^sign × 10^(exponent−95) × significand, with the significand digits understood as d0.d−1d−2d−3d−4d−5d−6; note the radix dot making it a fraction.
For ±Infinity, besides the sign bit, all the remaining bits are ignored (i.e., both the exponent and significand fields have no effect).
For NaNs the sign bit has no meaning in the standard, and is ignored. Therefore, signed and unsigned NaNs are equivalent, even though some programs will show NaNs as signed. The bit m5 determines whether the NaN is quiet (0) or signaling (1). The bits of the significand are the NaN's payload and can hold user-defined data (e.g., to distinguish how NaNs were generated). Like for normal significands, the payload of NaNs can be either in BID or DPD encoding.
Be aware that the bit numbering used in the tables, e.g. m10 … m0, runs in the opposite direction to that used in the document for the IEEE 754 standard (G0 … G10).
The resulting 'raw' exponent is an 8-bit binary integer in which the leading bits are not '11', giving values 0 … 10111111b = 0 … 191d, from which the appropriate bias is to be subtracted. The resulting significand could be a positive binary integer of 24 bits, up to 1001 1111111111 1111111111b = 10485759d, but values above 10^7 − 1 = 9999999 = 0x98967F = 1001 1000 1001 0110 0111 1111b are 'illegal' and have to be treated as zeroes. To obtain the individual decimal digits, the significand has to be divided by 10 repeatedly.
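A rough sketch of these finite-number cases in Python follows. It assumes the bias of 101 implied by the ranges quoted earlier and does not handle infinities and NaNs; it illustrates the layout just described rather than being a complete implementation:

    def decode_decimal32_bid(word: int):
        """Decode a 32-bit BID-encoded decimal32 word into
        (sign, unbiased exponent, integer significand)."""
        if ((word >> 27) & 0b1111) == 0b1111:
            raise ValueError("infinity or NaN: not handled in this sketch")
        sign = (word >> 31) & 1
        if ((word >> 29) & 0b11) != 0b11:
            # Combination bits '00', '01' or '10': the 8-bit raw exponent
            # follows the sign bit, 23 stored significand bits remain.
            raw_exp = (word >> 23) & 0xFF
            significand = word & 0x7FFFFF
        else:
            # Leading '11': the exponent field shifts right by 2, and the
            # 21 stored bits get an implicit '100' prefix (24 bits total).
            raw_exp = (word >> 21) & 0xFF
            significand = (0b100 << 21) | (word & 0x1FFFFF)
        if significand > 9_999_999:   # 'illegal' encodings read as zero
            significand = 0
        return sign, raw_exp - 101, significand   # assumed bias: 101

The represented value is then (−1)^sign × significand × 10^exponent.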
The resulting 'raw' exponent is an 8-bit binary integer in which the leading bits are not '11', giving values 0 … 10111111b = 0 … 191d, from which the appropriate bias is to be subtracted. The significand's leading decimal digit is formed from the (0)cde or 100e bits as a binary integer. The subsequent digits are encoded in the 10-bit 'declet' fields 'tttttttttt' according to the DPD rules (see below). The full decimal significand is then obtained by concatenating the leading and trailing decimal digits.
The 10-bit DPD to 3-digit BCD transcoding for the declets is given by the following table. b9 … b0 are the bits of the DPD, and d2 … d0 are the three BCD digits. Be aware that the bit numbering used here, e.g. b9 … b0, runs in the opposite direction to that used in the document for the IEEE 754 standard (b0 … b9); additionally, the decimal digits are numbered 0-based here, while they run in the opposite direction and are 1-based in the IEEE 754 paper. The bits on white background do not count towards the value, but signal how to interpret / shift the other bits. The concept is to mark which digits are small (0 … 7) and thus encoded in three bits, and which are not and are instead formed from an implied prefix '100' plus one bit specifying 8 or 9.
The 8 decimal values whose digits are all 8s or 9s have four codings each.
The bits marked x in the table above are ignored on input, but will always be 0 in computed results.
(The 8 × 3 = 24 non-standard encodings fill in the gap between 10^3 = 1000 and 2^10 − 1 = 1023.)
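The table can be turned directly into code. The following Python function is a sketch of the declet decode rules, with the ten bits named p q r s t u v w x y from most to least significant (the letter names are ours, chosen for readability):

    def dpd_decode(declet: int):
        """Decode one 10-bit DPD declet into three decimal digits (d2, d1, d0)."""
        p, q, r, s, t, u, v, w, x, y = ((declet >> i) & 1 for i in range(9, -1, -1))
        if v == 0:                                   # three small digits
            return 4*p + 2*q + r, 4*s + 2*t + u, 4*w + 2*x + y
        if (w, x) == (0, 0):
            return 4*p + 2*q + r, 4*s + 2*t + u, 8 + y
        if (w, x) == (0, 1):
            return 4*p + 2*q + r, 8 + u, 4*s + 2*t + y
        if (w, x) == (1, 0):
            return 8 + r, 4*s + 2*t + u, 4*p + 2*q + y
        # (w, x) == (1, 1): two or three large digits, selected by (s, t)
        if (s, t) == (0, 0):
            return 8 + r, 8 + u, 4*p + 2*q + y
        if (s, t) == (0, 1):
            return 8 + r, 4*p + 2*q + u, 8 + y
        if (s, t) == (1, 0):
            return 4*p + 2*q + r, 8 + u, 8 + y
        return 8 + r, 8 + u, 8 + y                   # (1, 1): all large

    assert dpd_decode(0b0010100011) == (1, 2, 3)
    assert dpd_decode(0b1111111111) == (9, 9, 9)   # one of the four codings of 999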
A benefit of this encoding is access to individual digits by de-/encoding only 10 bits; a disadvantage is that some simple functions like sort and compare, very frequently used in coding, do not work on the bit pattern but require decoding to decimal digits (and possibly re-encoding to binary integers) first.
An alternative encoding in short BID sections (10-bit declets encoding 0d … 1023d but simply using only the range 0 to 999) would provide the same functionality, direct access to digits by de-/encoding 10 bits, with a near-zero performance penalty on modern systems, and would preserve the option of bit-pattern-oriented sort and compare; but the 'Sudoku-like' encoding shown above was chosen historically, may provide better performance in hardware implementations, and now 'is as it is'.
decimal32 was introduced in the 2008 version[3] of IEEE 754, adopted by ISO as ISO/IEC/IEEE 60559:2011.[4]
DPD encoding is relatively efficient, wasting no more than about 2.4 percent of space vs. BID, because the 2^10 = 1024 possible values in 10 bits are only a little more than the 1000 needed to encode all numbers from 0 to 999.
Zero has 192 possible representations (384 when both signed zeros are included).
The gain in range and precision from the 'combination encoding' arises because the 2 bits taken from the exponent use only three states, and the 4 MSBs of the significand stay within 0000 … 1001 (10 states). In total that is 3 × 10 = 30 possible states when combined in one encoding, which is representable in 5 bits (2^5 = 32).
The decimal formats include denormal values, for a graceful degradation of precision near zero, but in contrast to the binary formats they are not marked and do not need a special exponent; in decimal32 they are just values too small to have full 7-digit precision even with the smallest exponent.
In the cases of infinity and NaN, all other bits of the encoding are ignored. Thus, it is possible to initialize an array to infinities or NaNs by filling it with a single byte value.
In computing, decimal64 is a decimal floating-point computer number format that occupies 8 bytes (64 bits) in computer memory.
Decimal64 is a decimal floating-point format, formally introduced in the 2008 revision[1] of the IEEE 754 standard, also known as ISO/IEC/IEEE 60559:2011.[2]
Decimal64 supports 'normal' values that can have 16-digit precision from ±1.000000000000000×10^−383 to ±9.999999999999999×10^384, plus 'denormal' values with ramp-down relative precision down to ±1.×10^−398, signed zeros, signed infinities and NaN (Not a Number). This format supports two different encodings.
The binary format of the same size supports a range from the denormal minimum ±5×10^−324, over the normal minimum with full 53-bit precision ±2.2250738585072014×10^−308, to the maximum ±1.7976931348623157×10^+308.
Because the significand for the IEEE 754 decimal formats is not normalized, most values with less than 16 significant digits have multiple possible representations; 1000000 × 10^−2 = 100000 × 10^−1 = 10000 × 10^0 = 1000 × 10^1 all have the value 10000. These sets of representations for a same value are called cohorts; the different members can be used to denote how many digits of the value are known precisely. Each signed zero has 768 possible representations (1536 for all zeros, in two different cohorts).
IEEE 754 allows two alternative encodings for decimal64 values: a binary-integer encoding (BID) and a densely packed decimal encoding (DPD). The standard does not specify how to signify which representation is used, for instance in a situation where decimal64 values are communicated between systems.
Both alternatives provide exactly the same set of representable numbers: 16 digits of significand and 3 × 2^8 = 768 possible decimal exponent values. (All the possible decimal exponent values storable in a binary64 number are representable in decimal64, and most bits of the significand of a binary64 are stored, keeping roughly the same number of decimal digits in the significand.)
In both cases, the most significant 4 bits of the significand (which actually only have 10 possible values) are combined with two bits of the exponent (3 possible values) to use 30 of the 32 possible values of a 5-bit field. The remaining combinations encode infinities and NaNs. BID and DPD use different bits of the combination field for that.
In the cases of Infinity and NaN, all other bits of the encoding are ignored. Thus, it is possible to initialize an array to Infinities or NaNs by filling it with a single byte value.
This format uses a binary significand from 0 to 10^16 − 1 = 9999999999999999 = 0x2386F26FC0FFFF = 100011100001101111001001101111110000001111111111111111b. The encoding, completely stored on 64 bits, can represent binary significands up to 10 × 2^50 − 1 = 11258999068426239 = 0x27FFFFFFFFFFFF, but values larger than 10^16 − 1 are illegal (and the standard requires implementations to treat them as 0, if encountered on input).
As described above, the encoding varies depending on whether the most significant 4 bits of the significand are in the range 0 to 7 (0000b to 0111b), or higher (1000b or 1001b).
If the 2 bits after the sign bit are "00", "01", or "10", then the exponent field consists of the 10 bits following the sign bit, and the significand is the remaining 53 bits, with an implicit leading 0 bit. This includes subnormal numbers where the leading significand digit is 0.
If the 2 bits after the sign bit are "11", then the 10-bit exponent field is shifted 2 bits to the right (after both the sign bit and the "11" bits thereafter), and the represented significand is in the remaining 51 bits. In this case there is an implicit (that is, not stored) leading 3-bit sequence "100" for the MSB bits of the true significand (in the remaining lower bits ttt...ttt of the significand, not all possible values are used).
Finite number with small first digit of significand (0 .. 7).
Finite number with big first digit of significand (8 or 9).
The leading bits of the significand field do not encode the most significant decimal digit; they are simply part of a larger pure-binary number. For example, a significand of 8000000000000000 is encoded as binary 011100011010111111010100100110001101000000000000000000b, with the leading 4 bits encoding 7; the first significand which requires a 54th bit is 2^53 = 9007199254740992. The highest valid significand is 9999999999999999, whose binary encoding is (100)011100001101111001001101111110000001111111111111111b (with the 3 most significant bits (100) not stored but implicit as shown above; the next bit is always zero in valid encodings).
In the above cases, the value represented is (−1)^sign × 10^(exponent−398) × significand, with the significand read as an integer.
If the four bits after the sign bit are "1111", then the value is an infinity or a NaN, as described above.
In this version, the significand is stored as a series of decimal digits. The leading digit is between 0 and 9 (3 or 4 binary bits), and the rest of the significand uses the densely packed decimal (DPD) encoding.
The leading 2 bits of the exponent and the leading digit (3 or 4 bits) of the significand are combined into the five bits that follow the sign bit.
The eight bits after that are the exponent continuation field, providing the less significant bits of the exponent.
The last 50 bits are the significand continuation field, consisting of five 10-bit declets.[3] Each declet encodes three decimal digits[3] using the DPD encoding, as sketched below.
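A small sketch of this layout step in Python: splitting the 50-bit continuation field (passed as an integer) into its five declets; each declet would then still need the DPD decoding shown for decimal32 above:

    def split_declets(field: int, ndeclets: int = 5):
        """Split a significand continuation field into 10-bit declets,
        most significant declet first (five declets for decimal64)."""
        return [(field >> (10 * i)) & 0x3FF for i in reversed(range(ndeclets))]

    # Example: a 50-bit field holding the declets 1, 2, 3, 4, 5:
    field = (1 << 40) | (2 << 30) | (3 << 20) | (4 << 10) | 5
    assert split_declets(field) == [1, 2, 3, 4, 5]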
If the first two bits after the sign bit are "00", "01", or "10", then those are the leading bits of the exponent, and the three bits "cde" after that are interpreted as the leading decimal digit (0 to 7). If the first two bits after the sign bit are "11", then the second 2 bits are the leading bits of the exponent, and the next bit "e" is prefixed with the implicit bits "100" to form the leading decimal digit (8 or 9).
The remaining two combinations (11 110 and 11 111) of the 5-bit field after the sign bit are used to represent ±infinity and NaNs, respectively.
Finite number with small first digit of significand (0 … 7).
Finite number with big first digit of significand (8 or 9).
The DPD/3BCD transcoding for the declets is given by the following table. b9...b0 are the bits of the DPD, and d2...d0 are the three BCD digits.
The 8 decimal values whose digits are all 8s or 9s have four codings each.
The bits marked x in the table above are ignored on input, but will always be 0 in computed results.
(The 8 × 3 = 24 non-standard encodings fill in the gap between 10^3 = 1000 and 2^10 = 1024.)
In the above cases, with the true significand as the sequence of decimal digits decoded, the value represented is (−1)^sign × 10^(exponent−398) × significand.
In computing, decimal128 is a decimal floating-point number format that occupies 128 bits in memory. Formally introduced in IEEE 754-2008,[1] it is intended for applications where it is necessary to emulate decimal rounding exactly, such as financial and tax computations.[2]
The decimal128 format supports 34 decimal digits of significand and an exponent range of −6143 to +6144, i.e. ±0.000000000000000000000000000000000×10^−6143 to ±9.999999999999999999999999999999999×10^6144. Because the significand is not normalized, most values with less than 34 significant digits have multiple possible representations; 1 × 10^2 = 0.1 × 10^3 = 0.01 × 10^4, etc. This set of representations for a same value is called a cohort. Zero has 12288 possible representations (24576 if both signed zeros are included, in two different cohorts).
The IEEE 754 standard allows two alternative encodings for decimal128 values: a binary-integer encoding (BID) and a densely packed decimal encoding (DPD).
This standard does not specify how to signify which encoding is used, for instance in a situation where decimal128 values are communicated between systems.
Both alternatives provide exactly the same set of representable numbers: 34 digits of significand and 3 × 2^12 = 12288 possible exponent values.
In both cases, the most significant 4 bits of the significand (which actually only have 10 possible values) are combined with the most significant 2 bits of the exponent (3 possible values) to use 30 of the 32 possible values of 5 bits in the combination field. The remaining combinations encode infinities and NaNs.
In the case of Infinity and NaN, all other bits of the encoding are ignored. Thus, it is possible to initialize an array to Infinities or NaNs by filling it with a single byte value.
This format uses a binary significand from 0 to 10^34 − 1 = 9999999999999999999999999999999999 = 0x1ED09BEAD87C0378D8E63FFFFFFFF = 011110110100001001101111101010110110000111110000000011011110001101100011100110001111111111111111111111111111111111b.
The encoding can represent binary significands up to 10 × 2^110 − 1 = 12980742146337069071326240823050239, but values larger than 10^34 − 1 are illegal (and the standard requires implementations to treat them as 0, if encountered on input).
As described above, the encoding varies depending on whether the most significant 4 bits of the significand are in the range 0 to 7 (0000b to 0111b), or higher (1000b or 1001b).
If the 2 bits after the sign bit are "00", "01", or "10", then the exponent field consists of the 14 bits following the sign bit, and the significand is the remaining 113 bits, with an implicit leading 0 bit:

s 00eeeeeeeeeeee (0)ttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt
s 01eeeeeeeeeeee (0)ttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt
s 10eeeeeeeeeeee (0)ttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt

This includes subnormal numbers where the leading significand digit is 0.
If the 2 bits after the sign bit are "11", then the 14-bit exponent field is shifted 2 bits to the right (after both the sign bit and the "11" bits thereafter), and the represented significand is in the remaining 111 bits. In this case there is an implicit (that is, not stored) leading 3-bit sequence "100" in the true significand:

s 1100eeeeeeeeeeee (100)t tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt
s 1101eeeeeeeeeeee (100)t tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt
s 1110eeeeeeeeeeee (100)t tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt

The "11" 2-bit sequence after the sign bit indicates that there is an implicit "100" 3-bit prefix to the significand; compare the implicit 1 in the significand of normal values in the binary formats. The "00", "01", or "10" bits are part of the exponent field.
For the decimal128 format, all of these significands are out of the valid range (they begin with 2^113 > 1.038 × 10^34), and are thus decoded as zero, but the pattern is the same as in decimal32 and decimal64.
In the above cases, the value represented is (−1)^sign × 10^(exponent−6176) × significand, with the significand read as an integer.
If the four bits after the sign bit are "1111", then the value is an infinity or a NaN, as described above.
In this version, the significand is stored as a series of decimal digits. The leading digit is between 0 and 9 (3 or 4 binary bits), and the rest of the significand uses the densely packed decimal (DPD) encoding.
The leading 2 bits of the exponent and the leading digit (3 or 4 bits) of the significand are combined into the five bits that follow the sign bit.
The twelve bits after that are the exponent continuation field, providing the less significant bits of the exponent.
The last 110 bits are the significand continuation field, consisting of eleven 10-bit declets.[3] Each declet encodes three decimal digits[3] using the DPD encoding.
If the first two bits after the sign bit are "00", "01", or "10", then those are the leading bits of the exponent, and the three bits after that are interpreted as the leading decimal digit (0 to 7):

s 00 TTT (00)eeeeeeeeeeee (0TTT)[tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt]
s 01 TTT (01)eeeeeeeeeeee (0TTT)[tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt]
s 10 TTT (10)eeeeeeeeeeee (0TTT)[tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt]

If the first two bits after the sign bit are "11", then the second two bits are the leading bits of the exponent, and the last bit is prefixed with "100" to form the leading decimal digit (8 or 9):

s 1100 T (00)eeeeeeeeeeee (100T)[tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt]
s 1101 T (01)eeeeeeeeeeee (100T)[tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt]
s 1110 T (10)eeeeeeeeeeee (100T)[tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt]

The remaining two combinations (11110 and 11111) of the 5-bit field are used to represent ±infinity and NaNs, respectively.
The DPD/3BCD transcoding for the declets is given by the following table.
b9...b0 are the bits of the DPD, and d2...d0 are the three BCD digits.
The 8 decimal values whose digits are all 8s or 9s have four codings each.
The bits marked x in the table above are ignored on input, but will always be 0 in computed results.
(The 8 × 3 = 24 non-standard encodings fill in the gap between 10^3 = 1000 and 2^10 = 1024.)
In the above cases, with the true significand as the sequence of decimal digits decoded, the value represented is (−1)^sign × 10^(exponent−6176) × significand.
RADIX 50[1][2][3] or RAD50[3] (also referred to as RADIX50,[4] RADIX-50[5] or RAD-50) is an uppercase-only character encoding created by Digital Equipment Corporation (DEC) for use on their DECsystem, PDP, and VAX computers.
RADIX 50's 40-character repertoire (050 in octal) can encode six characters plus four additional bits into one 36-bit machine word (PDP-6, PDP-10/DECsystem-10, DECSYSTEM-20), three characters plus two additional bits into one 18-bit word (PDP-9,[2] PDP-15),[6] or three characters into one 16-bit word (PDP-11, VAX).[3]
The actual encoding differs between the 36-bit and 16-bit systems.
In 36-bit DEC systems, RADIX 50 was commonly used in symbol tables for assemblers or compilers which supported six-character symbol names from a 40-character alphabet. This left four bits to encode properties of the symbol.
For its similarities to the SQUOZE character encoding scheme used in IBM's SHARE Operating System for representing object code symbols, DEC's variant was also sometimes called DEC Squoze;[7] however, IBM SQUOZE packed six characters of a 50-character alphabet plus two additional flag bits into one 36-bit word.[6]
RADIX 50 was not normally used in 36-bit systems for encoding ordinary character strings; file names were normally encoded as six six-bit characters, and full ASCII strings as five seven-bit characters and one unused bit per 36-bit word.
RADIX 50 (also called Radix 50₈ format[2]) was used in Digital's 18-bit PDP-9 and PDP-15 computers to store symbols in symbol tables, leaving two extra bits per 18-bit word ("symbol classification bits").[2]
Some strings in DEC's 16-bit systems were encoded as 8-bit bytes, while others used RADIX 50 (then also called MOD40).[3][8]
In RADIX 50, strings were encoded in successive words as needed, with the first character within each word located in the most significant position.
For example, using the PDP-11 encoding, the string "ABCDEF", with character values 1, 2, 3, 4, 5, and 6, would be encoded as a word containing the value 1×40^2 + 2×40^1 + 3×40^0 = 1683, followed by a second word containing the value 4×40^2 + 5×40^1 + 6×40^0 = 6606. Thus, 16-bit words encoded values ranging from 0 (three spaces) to 63999 ("999"). When there were fewer than three characters in a word, the last word for the string was padded with trailing spaces.[3]
There were several minor variations of this encoding, with differing interpretations of the code points 27, 28, and 29. Where RADIX 50 was used for filenames stored on media, the code points represent the $, %, * characters, and will be shown as such when listing the directory with utilities such as DIR.[9] When encoding strings in the PDP-11 assembler and other PDP-11 programming languages, the code points represent the $, ., % characters, and are encoded as such with the default RAD50 macro in the global macros file; this encoding was used in the symbol tables. Some early documentation for the RT-11 operating system considered the code point 29 to be undefined.[3]
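A sketch of the PDP-11 packing in Python, using the assembler variant of code points 27–29 ($, ., %) described above; the full alphabet string below is an assumption assembled from the values quoted in this article:

    RAD50 = " ABCDEFGHIJKLMNOPQRSTUVWXYZ$.%0123456789"   # assembler variant

    def rad50_encode(text: str):
        """Pack an uppercase string into 16-bit RADIX 50 words, three
        characters per word, padding the last word with trailing spaces."""
        padded = text.upper().ljust(-(-len(text) // 3) * 3)   # round up to 3s
        return [RAD50.index(a) * 1600 + RAD50.index(b) * 40 + RAD50.index(c)
                for a, b, c in zip(padded[0::3], padded[1::3], padded[2::3])]

    assert rad50_encode("ABCDEF") == [1683, 6606]   # the example above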
The use of RADIX 50 was the source of the filename size conventions used by Digital Equipment Corporation PDP-11 operating systems. Using RADIX 50 encoding, six characters of a filename could be stored in two 16-bit words, while three more extension (file type) characters could be stored in a third 16-bit word. Similarly, a three-character device name such as "DL1" could also be stored in a 16-bit word. The period that separated the filename and its extension, and the colon separating a device name from a filename, was implied (i.e., was not stored and always assumed to be present).
SQUOZE (abbreviated as SQZ) is a memory-efficient representation of a combined source and relocatable object program file with a symbol table on punched cards, which was introduced in 1958 with the SCAT assembler[1][2] on the SHARE Operating System (SOS) for the IBM 709.[3][4] A program in this format was called a SQUOZE deck.[5][6][7] It was also used on later machines including the IBM 7090 and 7094.
A SQUOZE deck contains an encoded binary form of the original assembly language code; SQUOZE decks are converted to absolute machine code and stored in memory by a loader program.[8][9][10]
In the SQUOZE encoding, identifiers in the symbol table were represented in a 50-character alphabet, allowing a 36-bit machine word to represent six alphanumeric characters plus two flag bits, thus saving two bits per six characters,[6][1] because the six bits normally allocated for each character could store up to 64 states rather than only the 50 states needed to represent the 50 letters of the alphabet, and 50^6 < 2^34.
Using base 50 already saves a single bit every three characters, so it was used in two three-character chunks. The manual[1] has a formula for encoding six characters ABCDEF: (A × 50^2 + B × 50 + C) × 2^17 + (D × 50^2 + E × 50 + F).
For example "SQUOZE", normally 36 bits:35 33 37 31 44 17(base 8)would be encoded in two 17-bit pieces to fit in the 34 bits as( 0o220231 << 17 ) | 0o175473 == 0o110114575473.
A simpler example of the same logic is how a three-digit BCD number takes up 12 bits, such as 987: 9 8 7 (one digit per nibble) = 1001 1000 0111 (base 2), whereas any such value could be stored in 10 bits directly, saving two bits: 987 = 3DB (base 16) = 11 1101 1011 (base 2).
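The quoted formula is easy to check mechanically. The Python snippet below packs six character codes (given directly in octal, since the full 50-character alphabet ordering is not reproduced in this article) and reproduces the "SQUOZE" example:

    def squoze_pack(codes):
        """Pack six character codes (each 0..49) into 34 bits:
        (A*50^2 + B*50 + C) * 2^17 + (D*50^2 + E*50 + F)."""
        a, b, c, d, e, f = codes
        return ((a * 2500 + b * 50 + c) << 17) | (d * 2500 + e * 50 + f)

    # "SQUOZE" has the character codes 35 33 37 31 44 17 (base 8):
    assert squoze_pack([0o35, 0o33, 0o37, 0o31, 0o44, 0o17]) == 0o110114575473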
"Squoze" is a facetiouspast participleof the verb 'to squeeze'.[5][6]
The name SQUOZE was later borrowed for similar character encoding schemes used on DEC machines;[4] they had a 40-character alphabet (50 in octal) and were called DEC RADIX 50 and MOD40,[11] but were sometimes nicknamed DEC Squoze.
A vigesimal (/vɪˈdʒɛsɪməl/ vij-ESS-im-əl) or base-20 (base-score) numeral system is based on twenty (in the same way in which the decimal numeral system is based on ten). Vigesimal is derived from the Latin adjective vicesimus, meaning 'twentieth'.
In a vigesimal place system, twenty individual numerals (or digit symbols) are used, ten more than in the decimal system. One modern method of finding the extra needed symbols is to write ten as the letter A, or A₂₀ (where the 20 means base 20), to write nineteen as J₂₀, and the numbers between with the corresponding letters of the alphabet. This is similar to the common computer-science practice of writing hexadecimal numerals over 9 with the letters "A–F". Another, less common method skips over the letter "I", in order to avoid confusion between I₂₀ as eighteen and one, so that the number eighteen is written as J₂₀ and nineteen is written as K₂₀. The number twenty is written as 10₂₀.
In the rest of this article below, numbers are expressed in decimal notation, unless specified otherwise. For example, 10 means ten and 20 means twenty. Numbers in vigesimal notation use the convention that I means eighteen and J means nineteen.
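A minimal Python sketch of this notation, using the convention just stated (I = eighteen, J = nineteen):

    DIGITS = "0123456789ABCDEFGHIJ"   # A=10 ... I=18, J=19, as used here

    def to_vigesimal(n: int) -> str:
        """Convert a non-negative integer to base-20 notation."""
        if n == 0:
            return "0"
        out = []
        while n:
            n, r = divmod(n, 20)
            out.append(DIGITS[r])
        return "".join(reversed(out))

    assert to_vigesimal(20) == "10"    # twenty is written 10 in base 20
    assert to_vigesimal(399) == "JJ"   # 19 * 20 + 19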
As 20 is divisible by two and five, and is adjacent to 21, the product of three and seven, thus covering the first four prime numbers, many vigesimal fractions have simple representations, whether terminating or recurring (although thirds are more complicated than in decimal, repeating two digits instead of one). In decimal, dividing by three twice (ninths) only gives one-digit periods (1/9 = 0.1111..., for instance) because 9 is the number below ten. 21, however, the number adjacent to 20 that is divisible by 3, is not divisible by 9. Ninths in vigesimal have six-digit periods. As 20 has the same prime factors as 10 (two and five), a fraction will terminate in decimal if and only if it terminates in vigesimal.
The prime factorization of twenty is 2^2 × 5, so it is not a perfect power. However, its squarefree part, 5, is congruent to 1 (mod 4). Thus, according to Artin's conjecture on primitive roots, vigesimal has infinitely many cyclic primes, but the fraction of primes that are cyclic is not necessarily ~37.395%. An UnrealScript program that computes the lengths of recurring periods of various fractions in a given set of bases found that, of the first 15,456 primes, ~39.344% are cyclic in vigesimal.
Many cultures that use a vigesimal system count in fives to twenty, then count twenties similarly. Such a system is referred to as quinary-vigesimal by linguists. Examples include Greenlandic, Iñupiaq, Kaktovik, Maya, Nunivak Cupʼig, and Yupʼik numerals.[1][2][3]
Vigesimal systems are common in Africa, for example in Yoruba.[4] While the Yoruba number system may be regarded as a vigesimal system, it is complex.
There is some evidence of base-20 usage in the Māori language of New Zealand with the suffix hoko- (i.e. hokowhitu, hokotahi).
In several European languages like French and Danish, 20 is used as a base, at least with respect to the linguistic structure of the names of certain numbers (though a thoroughgoing, consistent vigesimal system, based on the powers 20, 400, 8000 etc., is not generally used).
Open Location Code uses a word-safe version of base 20 for its geocodes. The characters in this alphabet were chosen to avoid accidentally forming words. The developers scored all possible sets of 20 letters in 30 different languages for the likelihood of forming words, and chose a set that formed as few recognizable words as possible.[16] The alphabet is also intended to reduce typographical errors by avoiding visually similar digits, and is case-insensitive.
This table shows the Maya numerals and the number names in Yucatec Maya, Nahuatl in modern orthography, and in Classical Nahuatl.
Sexagesimal, also known as base 60,[1] is a numeral system with sixty as its base. It originated with the ancient Sumerians in the 3rd millennium BC, was passed down to the ancient Babylonians, and is still used—in a modified form—for measuring time, angles, and geographic coordinates.
The number 60, a superior highly composite number, has twelve divisors, namely 1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, and 60, of which 2, 3, and 5 are prime numbers. With so many factors, many fractions involving sexagesimal numbers are simplified. For example, one hour can be divided evenly into sections of 30 minutes, 20 minutes, 15 minutes, 12 minutes, 10 minutes, 6 minutes, 5 minutes, 4 minutes, 3 minutes, 2 minutes, and 1 minute. 60 is the smallest number that is divisible by every number from 1 to 6; that is, it is the lowest common multiple of 1, 2, 3, 4, 5, and 6.
In this article, all sexagesimal digits are represented as decimal numbers, except where otherwise noted. For example, the largest sexagesimal digit is "59".
According to Otto Neugebauer, the origins of sexagesimal are not as simple, consistent, or singular in time as they are often portrayed. Throughout their many centuries of use, which continues today for specialized topics such as time, angles, and astronomical coordinate systems, sexagesimal notations have always contained a strong undercurrent of decimal notation, such as in how sexagesimal digits are written. Their use has also always included (and continues to include) inconsistencies in where and how various bases are used to represent numbers, even within a single text.[2]
The most powerful driver for rigorous, fully self-consistent use of sexagesimal has always been its mathematical advantages for writing and calculating fractions. In ancient texts this shows up in the fact that sexagesimal is used most uniformly and consistently in mathematical tables of data.[2] Another practical factor that helped expand the use of sexagesimal in the past, even if less consistently than in mathematical tables, was its decided advantages to merchants and buyers for making everyday financial transactions easier when they involved bargaining for and dividing up larger quantities of goods. In the late 3rd millennium BC, Sumerian/Akkadian units of weight included the kakkaru (talent, approximately 30 kg) divided into 60 manû (mina), which was further subdivided into 60 šiqlu (shekel); the descendants of these units persisted for millennia, though the Greeks later coerced this relationship into the more base-10-compatible ratio of a shekel being one 50th of a mina.
Apart from mathematical tables, the inconsistencies in how numbers were represented within most texts extended all the way down to the most basic cuneiform symbols used to represent numeric quantities.[2] For example, the cuneiform symbol for 1 was an ellipse made by applying the rounded end of the stylus at an angle to the clay, while the sexagesimal symbol for 60 was a larger oval or "big 1". But within the same texts in which these symbols were used, the number 10 was represented as a circle made by applying the round end of the stylus perpendicular to the clay, and a larger circle or "big 10" was used to represent 100. Such multi-base numeric quantity symbols could be mixed with each other and with abbreviations, even within a single number. The details and even the magnitudes implied (since zero was not used consistently) were idiomatic to the particular time periods, cultures, and quantities or concepts being represented. In modern times there is the recent innovation of adding decimal fractions to sexagesimal astronomical coordinates.[2]
The sexagesimal system as used in ancient Mesopotamia was not a pure base-60 system, in the sense that it did not use 60 distinct symbols for its digits. Instead, the cuneiform digits used ten as a sub-base in the fashion of a sign-value notation: a sexagesimal digit was composed of a group of narrow, wedge-shaped marks representing units up to nine and a group of wide, wedge-shaped marks representing up to five tens. The value of the digit was the sum of the values of its component parts.
Numbers larger than 59 were indicated by multiple symbol blocks of this form in place value notation. Because there was no symbol for zero, it is not always immediately obvious how a number should be interpreted, and its true value must sometimes have been determined by its context. For example, the symbols for 1 and 60 are identical.[3][4] Later Babylonian texts used a placeholder symbol to represent zero, but only in the medial positions, and not on the right-hand side of the number, as in numbers like 13200.[4]
In the Chinese calendar, a system is commonly used in which days or years are named by positions in a sequence of ten stems and in another sequence of 12 branches. The same stem and branch repeat every 60 steps through this cycle.
Book VIII of Plato's Republic involves an allegory of marriage centered on the number 60^4 = 12960000 and its divisors. This number has the particularly simple sexagesimal representation 1,0,0,0,0. Later scholars have invoked both Babylonian mathematics and music theory in an attempt to explain this passage.[5]
Ptolemy's Almagest, a treatise on mathematical astronomy written in the second century AD, uses base 60 to express the fractional parts of numbers. In particular, his table of chords, which was essentially the only extensive trigonometric table for more than a millennium, has fractional parts of a degree in base 60, and was practically equivalent to a modern-day table of values of the sine function.
Medieval astronomers also used sexagesimal numbers to note time. Al-Biruni first subdivided the hour sexagesimally into minutes, seconds, thirds and fourths in 1000, while discussing Jewish months.[6] Around 1235, John of Sacrobosco continued this tradition, although Nothaft thought Sacrobosco was the first to do so.[7] The Parisian version of the Alfonsine tables (ca. 1320) used the day as the basic unit of time, recording multiples and fractions of a day in base-60 notation.[8]
The sexagesimal number system continued to be frequently used by European astronomers for performing calculations as late as 1671.[9] For instance, Jost Bürgi in Fundamentum Astronomiae (presented to Emperor Rudolf II in 1592), his colleague Ursus in Fundamentum Astronomicum, and possibly also Henry Briggs used multiplication tables based on the sexagesimal system in the late 16th century to calculate sines.[10]
In the late 18th and early 19th centuries, Tamil astronomers were found to make astronomical calculations, reckoning with shells using a mixture of decimal and sexagesimal notations developed by Hellenistic astronomers.[11]
Base-60 number systems have also been used in some other cultures that are unrelated to the Sumerians, for example by the Ekari people of Western New Guinea.[12][13]
Modern uses for the sexagesimal system include measuring angles, geographic coordinates, electronic navigation, and time.[14]
One hour of time is divided into 60 minutes, and one minute is divided into 60 seconds. Thus, a measurement of time such as 3:23:17 (3 hours, 23 minutes, and 17 seconds) can be interpreted as a whole sexagesimal number (no sexagesimal point), meaning 3 × 60^2 + 23 × 60^1 + 17 × 60^0 seconds. However, each of the three sexagesimal digits in this number (3, 23, and 17) is written using the decimal system.
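Reading h:m:s as a whole sexagesimal number is a simple left-to-right fold; a small Python sketch:

    def sexagesimal_to_int(digits):
        """Read a list of sexagesimal digits (most significant first) as a
        whole number, e.g. hours, minutes, seconds as total seconds."""
        value = 0
        for d in digits:
            value = value * 60 + d
        return value

    assert sexagesimal_to_int([3, 23, 17]) == 3 * 60**2 + 23 * 60 + 17  # 12197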
Similarly, the practical unit of angular measure is the degree, of which there are 360 (six sixties) in a circle. There are 60 minutes of arc in a degree, and 60 arcseconds in a minute.
In version 1.1[15] of the YAML data storage format, sexagesimals are supported for plain scalars, and formally specified both for integers[16] and floating-point numbers.[17] This has led to confusion, as e.g. some MAC addresses would be recognised as sexagesimals and loaded as integers, where others were not and loaded as strings. In YAML 1.2, support for sexagesimals was dropped.[18]
In Hellenistic Greek astronomical texts, such as the writings of Ptolemy, sexagesimal numbers were written using Greek alphabetic numerals, with each sexagesimal digit being treated as a distinct number. Hellenistic astronomers adopted a new symbol for zero, which morphed over the centuries into other forms, including the Greek letter omicron, ο, normally meaning 70, but permissible in a sexagesimal system where the maximum value in any position is 59.[19][20] The Greeks limited their use of sexagesimal numbers to the fractional part of a number.[21]
In medieval Latin texts, sexagesimal numbers were written using Arabic numerals; the different levels of fractions were denoted minuta (i.e., fraction), minuta secunda, minuta tertia, etc. By the 17th century it became common to denote the integer part of sexagesimal numbers by a superscripted zero, and the various fractional parts by one or more accent marks. John Wallis, in his Mathesis universalis, generalized this notation to include higher multiples of 60, giving as an example the number 49‵‵‵‵36‵‵‵25‵‵15‵1°15′2″36‴49⁗, where the numbers to the left are multiplied by higher powers of 60, the numbers to the right are divided by powers of 60, and the number marked with the superscripted zero is multiplied by 1.[22] This notation leads to the modern signs for degrees, minutes, and seconds. The same minute and second nomenclature is also used for units of time, and the modern notation for time with hours, minutes, and seconds written in decimal and separated from each other by colons may be interpreted as a form of sexagesimal notation.
In some usage systems, each position past the sexagesimal point was numbered, using Latin or French roots: prime or primus, seconde or secundus, tierce, quatre, quinte, etc. To this day we call the second-order part of an hour or of a degree a "second". Until at least the 18th century, 1/60 of a second was called a "tierce" or "third".[23][24]
In the 1930s, Otto Neugebauer introduced a modern notational system for Babylonian and Hellenistic numbers that substitutes modern decimal notation from 0 to 59 in each position, while using a semicolon (;) to separate the integer and fractional portions of the number and using a comma (,) to separate the positions within each portion.[25] For example, the mean synodic month used by both Babylonian and Hellenistic astronomers and still used in the Hebrew calendar is 29;31,50,8,20 days. This notation is used in this article.
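The notation is also convenient to parse mechanically. A sketch in Python returning an exact rational value (the function name and interface are ours):

    from fractions import Fraction

    def parse_sexagesimal(s: str) -> Fraction:
        """Parse Neugebauer-style notation such as '29;31,50,8,20':
        comma-separated sexagesimal places, ';' marking the radix point."""
        int_part, _, frac_part = s.partition(";")
        value = Fraction(0)
        for digit in int_part.split(","):
            value = value * 60 + int(digit)
        scale = Fraction(1)
        for digit in filter(None, frac_part.split(",")):
            scale /= 60
            value += int(digit) * scale
        return value

    month = parse_sexagesimal("29;31,50,8,20")
    print(float(month))   # about 29.530594 days, the mean synodic month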
In the sexagesimal system, any fraction in which the denominator is a regular number (having only 2, 3, and 5 in its prime factorization) may be expressed exactly.[26] Shown here are all fractions of this type in which the denominator is less than or equal to 60:
However, numbers that are not regular form more complicated repeating fractions. For example, 1/7 has the infinitely repeating expansion 0;8,34,17,8,34,17,…
The fact that the two numbers that are adjacent to sixty, 59 and 61, are both prime numbers implies that fractions that repeat with a period of one or two sexagesimal digits can only have regular number multiples of 59 or 61 as their denominators, and that other non-regular numbers have fractions that repeat with a longer period.
The representations of irrational numbers in any positional number system (including decimal and sexagesimal) neither terminate nor repeat.
The square root of 2, the length of the diagonal of a unit square, was approximated by the Babylonians of the Old Babylonian Period (1900 BC – 1650 BC) as 1;24,51,10.
Because √2 ≈ 1.41421356... is an irrational number, it cannot be expressed exactly in sexagesimal (or indeed in any integer-base system), but its sexagesimal expansion does begin 1;24,51,10,7,46,6,4,44... (OEIS: A070197)
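The quoted digits can be reproduced with ordinary arbitrary-precision arithmetic; a small Python sketch using the decimal module:

    from decimal import Decimal, getcontext

    def sexagesimal_fraction_digits(x, n):
        """Return the first n sexagesimal digits of x's fractional part."""
        digits = []
        frac = x - int(x)
        for _ in range(n):
            frac *= 60
            digits.append(int(frac))
            frac -= int(frac)
        return digits

    getcontext().prec = 50   # ample working precision for a few digits
    root2 = Decimal(2).sqrt()
    assert sexagesimal_fraction_digits(root2, 8) == [24, 51, 10, 7, 46, 6, 4, 44]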
The value ofπas used by theGreekmathematician and scientistPtolemywas 3;8,30 = 3 + 8/60 + 30/60² = 377/120 ≈ 3.141666....[28]Jamshīd al-Kāshī, a 15th-centuryPersianmathematician, calculated 2πas a sexagesimal expression to its correct value when rounded to nine subdigits (thus to 1/60⁹); his value for 2πwas 6;16,59,28,1,34,51,46,14,50.[29][30]Like√2above, 2πis an irrational number and cannot be expressed exactly in sexagesimal. Its sexagesimal expansion begins 6;16,59,28,1,34,51,46,14,49,55,12,35... (OEIS:A091649) | https://en.wikipedia.org/wiki/Sexagesimal
Abinary prefixis aunit prefixthat indicates amultipleof aunit of measurementby an integerpower of two. The most commonly used binary prefixes arekibi(symbol Ki, meaning 2¹⁰ = 1024),mebi(Mi, 2²⁰ = 1048576), andgibi(Gi, 2³⁰ = 1073741824). They are most often used ininformation technologyas multipliers ofbitandbyte, when expressing the capacity ofstorage devicesor the size of computerfiles.
The binary prefixes "kibi", "mebi", etc. were defined in 1999 by theInternational Electrotechnical Commission(IEC), in theIEC 60027-2standard(Amendment 2). They were meant to replace themetric (SI)decimal powerprefixes, such as "kilo" (k, 103= 1000), "mega" (M, 106=1000000) and "giga" (G, 109=1000000000),[1]that were commonly used in the computer industry to indicate the nearest powers of two. For example, a memory module whose capacity was specified by the manufacturer as "2 megabytes" or "2 MB" would hold2 × 220=2097152bytes, instead of2 × 106=2000000.
On the other hand, a hard disk whose capacity is specified by the manufacturer as "10 gigabytes" or "10 GB", holds 10 × 10⁹ = 10000000000 bytes, or a little more than that, but less than 10 × 2³⁰ = 10737418240; and a file whose size is listed as "2.3 GB" may have a size closer to 2.3 × 2³⁰ ≈ 2470000000 or to 2.3 × 10⁹ = 2300000000, depending on theprogramoroperating systemproviding that measurement. This kind of ambiguity is often confusing to computer system users and has resulted inlawsuits.[2][3]The IEC 60027-2 binary prefixes have been incorporated in theISO/IEC 80000standard and are supported by other standards bodies, including theBIPM, which defines the SI system,[1]: p.121theUSNIST,[4][5]and theEuropean Union.
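A few lines of Python (illustrative values only) make the size of the two readings concrete:

```python
# "2 MB" under each reading of the prefix
print(2 * 2**20)   # 2097152 bytes (binary sense, as for memory modules)
print(2 * 10**6)   # 2000000 bytes (SI decimal sense)

# A "2.3 GB" file, depending on the reporting software
print(round(2.3 * 2**30))  # 2469606195 bytes (binary sense)
print(round(2.3 * 10**9))  # 2300000000 bytes (decimal sense)
```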
Prior to the 1999 IEC standard, some industry organizations, such as theJoint Electron Device Engineering Council(JEDEC), noted the common use of the termskilobyte,megabyte, andgigabyte, and the corresponding symbolsKB,MB, andGBin the binary sense, for use in storage capacity measurements. However, other computer industry sectors (such asmagnetic storage) continued using those same terms and symbols with the decimal meaning. Since then, the major standards organizations have expressly disapproved the use of SI prefixes to denote binary multiples, and recommended or mandated the use of the IEC prefixes for that purpose, but the use of SI prefixes in this sense has persisted in some fields.
In 2022, theInternational Bureau of Weights and Measures(BIPM) adopted the decimal prefixesronnafor 1000⁹ andquettafor 1000¹⁰.[6][7]In 2025, the prefixesrobi(Ri, 1024⁹) andquebi(Qi, 1024¹⁰) were adopted by the IEC.[8]
The relative difference between the values in the binary and decimal interpretations increases, when using the SI prefixes as the base, from 2.4% for kibi vs. kilo to nearly 27% for the quebi vs. quetta.
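This growth is easy to check: the relative difference between 1024ⁿ and 1000ⁿ compounds with n. A short Python loop over the prefixes named in this article reproduces the quoted endpoints:

```python
prefixes = ["kibi", "mebi", "gibi", "tebi", "pebi",
            "exbi", "zebi", "yobi", "robi", "quebi"]
for n, name in enumerate(prefixes, start=1):
    # difference of the binary value relative to the decimal (SI) value
    diff = (1024**n - 1000**n) / 1000**n
    print(f"{name:>5}: {diff:5.1%}")
# kibi: 2.4%, mebi: 4.9%, ..., quebi: 26.8% (nearly 27%)
```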
The originalmetric systemadopted by France in 1795 included two binary prefixes nameddouble-(2×) anddemi-(1/2×).[9]However, these were not retained when theSI prefixeswere internationally adopted by the 11thCGPM conferencein 1960.
Early computers used one of two addressing methods to access the system memory: binary (base 2) or decimal (base 10).[10]For example, theIBM 701(1952) used a binary method and could address 2048wordsof 36bitseach, while theIBM 702(1953) used a decimal system, and could address ten thousand 7-bit words.
By the mid-1960s, binary addressing had become the standard architecture in most computer designs, and main memory sizes were most commonly powers of two. This is the most natural configuration for memory, as all combinations of states of theiraddress linesmap to a valid address, allowing easy aggregation into a larger block of memory with contiguous addresses.
While early documentation specified those memory sizes as exact numbers such as 4096, 8192, or16384units (usuallywords, bytes, or bits), computer professionals also started using the long-established metric system prefixes "kilo", "mega", "giga", etc., defined to be powers of 10,[1]to mean instead the nearest powers of two; namely, 2¹⁰ = 1024, 2²⁰ = 1024², 2³⁰ = 1024³, etc.[11][12]The corresponding metric prefix symbols ("k", "M", "G", etc.) were used with the same binary meanings.[13][14]The symbol for 2¹⁰ = 1024 could be written either in lower case ("k")[15][16][17]or in uppercase ("K"). The latter was often used intentionally to indicate the binary rather than decimal meaning.[18]This convention, which could not be extended to higher powers, was widely used in the documentation of theIBM 360(1964)[18]and of theIBM System/370(1972),[19]of theCDC 7600,[20]of the DECPDP-11/70 (1975)[21]and of the DECVAX-11/780(1977).[citation needed]
In other documents, however, the metric prefixes and their symbols were used to denote powers of 10, but usually with the understanding that the values given were approximate, often truncated down. Thus, for example, a 1967 document byControl Data Corporation(CDC) abbreviated "2¹⁶ = 64 × 1024 = 65536 words" as "65K words" (rather than "64K" or "66K"),[22]while the documentation of theHP 21MXreal-time computer (1974) denoted 3 × 2¹⁶ = 192 × 1024 = 196608 as "196K" and 2²⁰ = 1048576 as "1M".[23]
These three possible meanings of "k" and "K" ("1024", "1000", or "approximately 1000") were used loosely around the same time, sometimes by the same company. TheHP 3000business computer (1973) could have "64K", "96K", or "128K" bytes of memory.[24]The use of SI prefixes, and the use of "K" instead of "k", remained popular in computer-related publications well into the 21st century, although the ambiguity persisted. The correct meaning was often clear from the context; for instance, in a binary-addressed computer, the true memory size had to be either a power of 2 or a small integer multiple thereof. Thus a "512 megabyte" RAM module was generally understood to have 512 × 1024² = 536870912 bytes, rather than 512000000.
In specifying disk drive capacities, manufacturers have always used conventional decimal SI prefixes representing powers of 10. Storage in a rotatingdisk driveis organized in platters and tracks whose sizes and counts are determined by mechanical engineering constraints so that the capacity of a disk drive has hardly ever been a simple multiple of a power of 2. For example, the first commercially sold disk drive, theIBM 350(1956), had 50 physical disk platters containing a total of50000sectors of 100 characters each, for a total quoted capacity of 5 million characters.[25]
Moreover, since the 1960s, many disk drives used IBM'sdisk format, where each track was divided into blocks of user-specified size; and the block sizes were recorded on the disk, subtracting from the usable capacity. For example, the IBM 3336 disk pack was quoted to have a 200-megabyte capacity, achieved only with a single13030-byte block in each of its 808 × 19 tracks.
Decimal megabytes were used for disk capacity by the CDC in 1974.[26]The SeagateST-412,[27]one of several types installed in theIBM PC/XT,[28]had a capacity of10027008byteswhen formatted as 306 × 4 tracks with 32 sectors of 256 bytes per track, which was quoted as "10 MB".[29]Similarly, a "300 GB" hard drive can be expected to offer only slightly more than 300 × 10⁹ = 300000000000 bytes, not 300 × 2³⁰ (which would be about 322 × 10⁹ bytes or "322 GB"). The first terabyte (SI prefix,1000000000000bytes) hard disk drive was introduced in 2007.[30]Decimal prefixes were generally used by information processing publications when comparing hard disk capacities.[31]
Some programs and operating systems, such asMicrosoft Windows, still use "MB" and "GB" to denote binary prefixes even when displaying disk drive capacities and file sizes, as didClassic Mac OS. Thus, for example, the capacity of a "10 MB" (decimal "M") disk drive could be reported as "9.56 MB", and that of a "300 GB" drive as "279.4 GB". Some operating systems, such asMac OS X,[32]Ubuntu,[33]andDebian,[34]have been updated to use "MB" and "GB" to denote decimal prefixes when displaying disk drive capacities and file sizes. Some manufacturers, such asSeagate Technology, have released recommendations stating that properly-written software and documentation should specify clearly whether prefixes such as "K", "M", or "G" mean binary or decimal multipliers.[35][36]
Floppy disksuseda variety of formats, and their capacities were usually specified with SI-like prefixes "K" and "M" with either decimal or binary meaning. The capacity of the disks was often specified without accounting for the internalformattingoverhead, leading to more irregularities.
The early 8-inch diskette formats could contain less than a megabyte with the capacities of those devices specified in kilobytes, kilobits or megabits.[37][38]
The 5.25-inch diskette sold with theIBM PC ATcould hold1200 × 1024=1228800bytes, and thus was marketed as "1200 KB" with the binary sense of "KB".[39]However, the capacity was also quoted "1.2 MB",[40]which was a hybrid decimal and binary notation, since the "M" meant 1000 × 1024. The precise value was1.2288 MB(decimal) or1.171875MiB(binary).
The 5.25-inchApple Disk IIhad 256 bytes per sector, 13 sectors per track, 35 tracks per side, or a total capacity of116480bytes. It was later upgraded to 16 sectors per track, giving a total of 140 × 2¹⁰ = 143360 bytes, which was described as "140KB" using the binary sense of "K".
The most recent version of the physical hardware, the "3.5-inch diskette" cartridge, had 720 512-byte blocks (single-sided). Since two blocks comprised 1024 bytes, the capacity was quoted "360 KB", with the binary sense of "K". On the other hand, the quoted capacity of "1.44 MB" of the High Density ("HD") version was again a hybrid decimal and binary notation, since it meant 1440 pairs of 512-byte sectors, or 1440 × 2¹⁰ = 1474560 bytes. Some operating systems displayed the capacity of those disks using the binary sense of "MB", as "1.4 MB" (which would be 1.4 × 2²⁰ ≈ 1468000 bytes). User complaints forced both Apple[citation needed]and Microsoft[41]to issue support bulletins explaining the discrepancy.
When specifying the capacities of opticalcompact discs, "megabyte" and "MB" usually meant 1024² bytes. Thus a "700-MB" (or "80-minute") CD has a nominal capacity of about700 MiB, which is approximately730 MB(decimal).[42]
On the other hand, capacities of otheroptical discstorage media likeDVD,Blu-ray Disc,HD DVDandmagneto-optical (MO)have been generally specified in decimal gigabytes ("GB"), that is, 1000³ bytes. In particular, a typical "4.7 GB" DVD has a nominal capacity of about 4.7 × 10⁹ bytes, which is about4.38 GiB.[43]
Tape drive and media manufacturers have generally used SI decimal prefixes to specify the maximum capacity,[44][45]although the actual capacity would depend on theblock sizeused when recording.
Computerclockfrequencies are always quoted using SI prefixes in their decimal sense. For example, the internal clock frequency of the originalIBM PCwas4.77 MHz, that is4770000Hz.
Similarly, digital information transfer rates are quoted using decimal prefixes. TheParallel ATA"100MB/s" disk interface can transfer100000000bytes per second, and a "56 Kb/s" modem transmits56000bits per second. Seagate specified the sustained transfer rate of some hard disk drive models with both decimal and IEC binary prefixes.[35]The standard sampling rate of musiccompact disks, quoted as44.1 kHz, is indeed44100samples per second.[citation needed]A "1 Gb/s"Ethernetinterface can receive or transmit up to 10⁹ bits per second, or125000000bytes per second within each packet. A "56k" modem can encode or decode up to56000bits per second.
Decimal SI prefixes are also generally used forprocessor-memory data transferspeeds. APCI-Xbus with66 MHzclock and 64 bits wide can transfer 66000000 64-bit words per second, or 4224000000 bit/s = 528000000 B/s, which is usually quoted as528MB/s. APC3200memory on adouble data ratebus, transferring 8 bytes per cycle with a clock speed of200 MHz, has a bandwidth of 200000000 × 8 × 2 = 3200000000 B/s, which would be quoted as3.2GB/s.
The ambiguous usage of the prefixes "kilo" ("K" or "k"), "mega" ("M"), and "giga" ("G"), as meaning either powers of 1000 or (in computer contexts) powers of 1024, has been recorded in popular dictionaries,[46][47][48]and even in some obsolete standards, such asANSI/IEEE 1084-1986[49]andANSI/IEEE 1212-1991,[50]IEEE 610.10-1994,[51]andIEEE 100-2000.[52]Some of these standards specifically limited the binary meaning to multiples of "byte" ("B") or "bit" ("b").
Before the IEC standard, several alternative proposals existed for unique binary prefixes, starting in the late 1960s. In 1996,Markus Kuhnproposed the extra prefix "di" and the symbolsuffixorsubscript"2" to mean "binary"; so that, for example, "one dikilobyte" would mean "1024 bytes", denoted "K2B" or "K₂B".[53]
In 1968, Donald Morrison proposed to use the Greek letter kappa (κ) to denote 1024, κ² to denote 1024², and so on.[54](At the time, memory size was small, and only K was in widespread use.) In the same year,Wallace Givensresponded with a suggestion to use bK as an abbreviation for 1024 and bK2 or bK² for 1024², though he noted that neither the Greek letter nor the lowercase letter b would be easy to reproduce on computer printers of the day.[55]Bruce Alan MartinofBrookhaven National Laboratoryproposed that, instead of prefixes, binary powers of two be indicated by the letterBfollowed by the exponent, similar toEindecimal scientific notation. Thus one would write 3B20 for 3 × 2²⁰.[56]This convention is still used on some calculators to present binary floating-point numbers today.[57]
In 1969,Donald Knuth, who uses decimal notation like 1 MB = 1000 kB,[58]proposed that the powers of 1024 be designated as "large kilobytes" and "large megabytes", with abbreviations KKB and MMB.[59]
The ambiguous meanings of "kilo", "mega", "giga", etc., have caused significantconsumer confusion, especially in thepersonal computerera. A common source of confusion was the discrepancy between the capacities of hard drives specified by manufacturers, using those prefixes in the decimal sense, and the numbers reported by operating systems and other software that used them in the binary sense, as theApple Macintoshdid beginning in 1984. For example, a hard drive marketed as "1 TB" could be reported as having only "931 GB". The confusion was compounded by the fact that RAM manufacturers used the binary sense too.
The different interpretations of disk size prefixes led to class action lawsuits against digital storage manufacturers. These cases involved both flash memory and hard disk drives.
Early cases (2004–2007) were settled prior to any court ruling with the manufacturers admitting no wrongdoing but agreeing to clarify the storage capacity of their products on the consumer packaging. Accordingly, many flash memory and hard disk manufacturers have disclosures on their packaging and web sites clarifying the formatted capacity of the devices or defining MB as 1 million bytes and 1 GB as 1 billion bytes.[60][61][62][63]
On 20 February 2004,Willem Vroegh filed a lawsuitagainst Lexar Media, Dane–Elec Memory,Fuji Photo Film USA,Eastman KodakCompany, Kingston Technology Company, Inc.,MemorexProducts, Inc.;PNY TechnologiesInc.,SanDisk Corporation,Verbatim Corporation, andViking Interworksalleging that their descriptions of the capacity of theirflash memorycards were false and misleading.
Vroegh claimed that a 256 MB Flash Memory Device had only 244 MB of accessible memory. "Plaintiffs allege that Defendants marketed the memory capacity of their products by assuming that one megabyte equals one million bytes and one gigabyte equals one billion bytes." The plaintiffs wanted the defendants to use the customary values of 1024² for megabyte and 1024³ for gigabyte. The plaintiffs acknowledged that the IEC and IEEE standards define a MB as one million bytes but stated that the industry has largely ignored the IEC standards.[64]
The parties agreed that manufacturers could continue to use the decimal definition so long as the definition was added to the packaging and web sites.[65]The consumers could apply for "a discount of ten percent off a future online purchase from Defendants' Online Stores Flash Memory Device".[66]
On 7 July 2005, an action entitledOrin Safier v.Western DigitalCorporation, et al.was filed in the Superior Court for the City and County of San Francisco, Case No. CGC-05-442812. The case was subsequently moved to the Northern District of California, Case No. 05-03353 BZ.[67]
Although Western Digital maintained that their usage of units is consistent with "the indisputably correct industry standard for measuring and describing storage capacity", and that they "cannot be expected to reform the software industry", they agreed to settle in March 2006 with 14 June 2006 as the Final Approval hearing date.[68]
Western Digital offered to compensate customers with agratisdownload of backup and recovery software that they valued at US$30. They also paid$500000in fees and expenses to San Francisco lawyers Adam Gutride and Seth Safier, who filed the suit. The settlement called for Western Digital to add a disclaimer to their later packaging and advertising.[69][70][71]Western Digital had this footnote in their settlement. "Apparently, Plaintiff believes that he could sue an egg company for fraud for labeling a carton of 12 eggs a 'dozen', because some bakers would view a 'dozen' as including 13 items."[72]
A lawsuit (Cho v. Seagate Technology (US) Holdings, Inc., San Francisco Superior Court, Case No. CGC-06-453195) was filed againstSeagate Technology, alleging that Seagate overrepresented the amount of usable storage by 7% on hard drives sold between 22 March 2001 and 26 September 2007. The case was settled without Seagate admitting wrongdoing, but agreeing to supply those purchasers with gratis backup software or a 5% refund on the cost of the drives.[73]
On 22 January 2020, the district court of the Northern District of California ruled in favor of the defendant,SanDisk, upholding its use of "GB" to mean1000000000bytes.[74]
In 1995, theInternational Union of Pure and Applied Chemistry's (IUPAC) Interdivisional Committee on Nomenclature and Symbols (IDCNS) proposed the prefixes "kibi" (short for "kilobinary"), "mebi" ("megabinary"), "gibi" ("gigabinary") and "tebi" ("terabinary"), with respective symbols "kb", "Mb", "Gb" and "Tb",[75]for binary multipliers. The proposal suggested that the SI prefixes should be used only for powers of 10; so that a disk drive capacity of "500 gigabytes", "0.5 terabytes", "500 GB", or "0.5 TB" should all mean 500 × 10⁹ bytes, exactly or approximately, rather than 500 × 2³⁰ (=536870912000) or 0.5 × 2⁴⁰ (=549755813888).
The proposal was not accepted by IUPAC at the time, but was taken up in 1996 by theInstitute of Electrical and Electronics Engineers(IEEE) in collaboration with theInternational Organization for Standardization(ISO) andInternational Electrotechnical Commission(IEC). The prefixes "kibi", "mebi", "gibi" and "tebi" were retained, but with the symbols "Ki" (with capital "K"), "Mi", "Gi" and "Ti" respectively.[76]
In January 1999, the IEC published this proposal, with additional prefixes "pebi" ("Pi") and "exbi" ("Ei"), as an international standard (IEC 60027-2Amendment 2).[77][78][79]The standard reaffirmed the BIPM's position that the SI prefixes should always denote powers of 10. The third edition of the standard, published in 2005, added prefixes "zebi" and "yobi", thus matching all then-defined SI prefixes with binary counterparts.[80]
The harmonizedISO/IEC 80000-13:2025standard cancels and replaces subclauses 3.8 and 3.9 of IEC 60027-2:2005 (those defining prefixes for binary multiples). The only significant change is the addition of explicit definitions for some quantities.[81]In 2009, the prefixes kibi-, mebi-, etc. were defined byISO 80000-1in their own right, independently of the kibibyte, mebibyte, and so on.
The BIPM standard JCGM 200:2012 "International vocabulary of metrology – Basic and general concepts and associated terms (VIM), 3rd edition" lists the IEC binary prefixes and states "SI prefixes refer strictly to powers of 10, and should not be used for powers of 2. For example, 1 kilobit should not be used to represent1024bits (210bits), which is 1 kibibit."[82]
The IEC 60027-2 standard recommended that operating systems and other software be updated to use binary or decimal prefixes consistently, but incorrect usage of SI prefixes for binary multiples is still common. At the time, the IEEE decided that their standards would use the prefixes "kilo", etc. with their metric definitions, but allowed the binary definitions to be used in an interim period as long as such usage was explicitly pointed out on a case-by-case basis.[83]
The IEC standard binary prefixes are supported by other standardization bodies and technical organizations.
The United StatesNational Institute of Standards and Technology(NIST) supports the ISO/IEC standards for "Prefixes for binary multiples" and has a web page[84]documenting them, describing and justifying their use. NIST suggests that in English, the first syllable of the name of the binary-multiple prefix should be pronounced in the same way as the first syllable of the name of the corresponding SI prefix, and that the second syllable should be pronounced asbee.[5]NIST has stated the SI prefixes "refer strictly to powers of 10" and that the binary definitions "should not be used" for them.[85]
As of 2014, the microelectronics industry standards bodyJEDECdescribes the IEC prefixes in its online dictionary, but acknowledges that the SI prefixes and the symbols "K", "M" and "G" are still commonly used with the binary sense for memory sizes.[86][87]
On 19 March 2005, the IEEE standardIEEE 1541-2002("Prefixes for Binary Multiples") was elevated to a full-use standard by the IEEE Standards Association after a two-year trial period.[88][89]As of April 2008, the IEEE Publications division does not require the use of IEC prefixes in its major magazines such asSpectrum[90]orComputer.[91]
TheInternational Bureau of Weights and Measures(BIPM), which maintains theInternational System of Units(SI), expressly prohibits the use of SI prefixes to denote binary multiples, and recommends the use of the IEC prefixes as an alternative since units of information are not included in the SI.[92][1]
TheSociety of Automotive Engineers(SAE) prohibits the use of SI prefixes with anything but a power-of-1000 meaning, but does not cite the IEC binary prefixes.[93]
The European Committee for Electrotechnical Standardization (CENELEC) adopted the IEC-recommended binary prefixes via the harmonization document HD 60027-2:2003-03.[94]The European Union (EU) has required the use of the IEC binary prefixes since 2007.[95]
Some computer industry participants, such as Hewlett-Packard (HP),[96]and IBM[97][98]have adopted or recommended IEC binary prefixes as part of their general documentation policies.
As of 2023, the use of SI prefixes with the binary meanings is still prevalent for specifying the capacity of themain memoryof computers, ofRAM,ROM,EPROM, andEEPROMchipsandmemory modules, and of thecacheofcomputer processors. For example, a "512-megabyte" or "512 MB" memory module holds 512 MiB; that is, 512 × 2²⁰ bytes, not 512 × 10⁶ bytes.[99][100][101][102]
JEDEC continues to include the customary binary definitions of "kilo", "mega", and "giga" in the documentTerms, Definitions, and Letter Symbols,[103]and, as of 2010, still used those definitions in theirmemory standards.[104][105][106][107][108]
On the other hand, the SI prefixes with powers of ten meanings are generally used for the capacity of external storage units, such asdisk drives,[109][110][111][112][113]solid state drives, andUSB flash drives,[63]except for someflash memorychips intended to be used asEEPROMs. However, some disk manufacturers have used the IEC prefixes to avoid confusion.[114]The decimal meaning of SI prefixes is usually also intended in measurements of data transfer rates, and clock speeds.[citation needed]
Some operating systems and other software use either the IEC binary multiplier symbols ("Ki", "Mi", etc.)[115][116][117][118][119][120]or the SI multiplier symbols ("k", "M", "G", etc.) with decimal meaning. Some programs, such as theGNUlscommand, let the user choose between binary or decimal multipliers. However, some continue to use the SI symbols with the binary meanings, even when reporting disk or file sizes. Some programs may also use "K" instead of "k", with either meaning.[121]
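The choice such programs offer amounts to picking the divisor base and the prefix set. The following Python sketch (function name and output format ours, loosely modeled on such utilities rather than on any particular program) renders a byte count both ways:

```python
def format_size(n_bytes: int, binary: bool = True) -> str:
    """Render a byte count with IEC binary prefixes (KiB, MiB, ...)
    or SI decimal prefixes (kB, MB, ...)."""
    base = 1024 if binary else 1000
    units = ["B"] + ([u + "iB" for u in "KMGTPE"] if binary
                     else [u + "B" for u in "kMGTPE"])
    value = float(n_bytes)
    for unit in units:
        if value < base or unit == units[-1]:
            return f"{value:.1f} {unit}"
        value /= base

print(format_size(300_000_000_000))                # '279.4 GiB'
print(format_size(300_000_000_000, binary=False))  # '300.0 GB'
```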
While the binary prefixes are predominantly used with units of data, bits and bytes, they may be used with other units of measure. For example, insignal processingit may be convenient to use a binary prefix with the unit of frequency,hertz(Hz), to produce a unit such as thekibihertz(KiHz), which is equal to1024 Hz.[122][123] | https://en.wikipedia.org/wiki/Binary_prefix
CJK Compatibilityis aUnicode blockcontaining square symbols (both CJK and Latin alphanumeric) encoded for compatibility with East Asian character sets. In Unicode 1.0, it was divided into two blocks, namedCJK Squared Words(U+3300–U+337F) andCJK Squared Abbreviations(U+3380–U+33FF).[3]The square forms can have different presentations when they are used in horizontal orvertical text.
For example, the charactersU+333E㌾SQUARE BORUTO(fromボルト) andU+3327㌧SQUARE TON(fromトン) should look different in horizontal and in vertical right-to-left:[4]㌧㌾
Characters U+337B through U+337E are theJapanese era calendar schemesymbolsHeisei(㍻),Shōwa(㍼),Taishō(㍽) andMeiji(㍾) (also available in certain legacy sets, such as the "NEC special characters" extension forJIS X 0208, as included inMicrosoft's versionand laterJIS X 0213).[5]TheReiwaera symbol (U+32FF㋿SQUARE ERA NAME REIWA) is inEnclosed CJK Letters and Months(the CJK Compatibility block having been fully allocated by the time of its commencement).
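These code points can be inspected directly with Python's unicodedata module; a small sketch:

```python
import unicodedata

# Japanese era symbols in the CJK Compatibility block (U+337B..U+337E)
for cp in range(0x337B, 0x337F):
    print(f"U+{cp:04X} {chr(cp)} {unicodedata.name(chr(cp))}")
# U+337B ㍻ SQUARE ERA NAME HEISEI
# U+337C ㍼ SQUARE ERA NAME SYOUWA
# U+337D ㍽ SQUARE ERA NAME TAISYOU
# U+337E ㍾ SQUARE ERA NAME MEIZI
```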
A number of Unicode-related documents record the purpose and process of defining specific characters in the CJK Compatibility block.
| https://en.wikipedia.org/wiki/CJK_Compatibility
TheE seriesis a system ofpreferred numbers(also called preferred values) derived for use inelectronic components. It consists of theE3,E6,E12,E24,E48,E96andE192series,[1]where the number after the 'E' designates the quantity oflogarithmicvalue "steps" perdecade. Although it is theoretically possible to produce components of any value, in practice the need for inventory simplification has led the industry to settle on the E series forresistors,capacitors,inductors, andzener diodes. Other types of electrical components are either specified by theRenard series(for examplefuses) or are defined in relevant product standards (for exampleIEC 60228for wires).
During theGolden Age of Radio(1920s to 1950s), numerous companies manufacturedvacuum-tube–basedAM radioreceiversfor consumer use. In the early years, many components were not standardized between AM radio manufacturers. The capacitance values of capacitors (previously called condensers)[2][3]and resistance values of resistors[4][5][6][7]were not standardized as they are today.[8]
In 1924, theRadio Manufacturers Association(RMA) was formed inChicago, Illinoisby 50 radio manufacturers to license and share patents. Over time, this group created some of the earliest standards for electronics components. In 1936, the RMA adopted a preferred-number system for the resistance values of fixed-composition resistors.[9]Over time, resistor manufacturers migrated from older values to the 1936 resistance value standard.[6][7]
DuringWorld War II(1940s), American and Britishmilitary productionwas a major influence for establishing common standards across many industries, especially in electronics, where it was essential to produce high quantities of standardized electronic parts to build military devices, such aswireless communications,radar,radar jammers,LORANradio navigation receivers for aircraft, test equipment, andmore.
Later, themid-20th century baby boomand the invention of thetransistorkicked off demand forconsumer electronicsgoods during the 1950s. As portabletransistor radiomanufacturing migrated from United States towards Japan during the late 1950s, it was critical for the electronic industry to have international standards.
After being worked on by the RMA,[10]theInternational Electrotechnical Commission(IEC) began work on an international standard in 1948.[11]The first version of thisIEC Publication 63(IEC 63) was released in 1952.[12]Later, IEC 63 was revised, amended, and renamed into the current version known asIEC 60063:2015.[13]
IEC 60063 release history:
The E series of preferred numbers was chosen such that when a component is manufactured it will end up in a range of roughly equally spaced values (geometric progression) on alogarithmic scale. Each E series subdivides eachdecademagnitude into steps of 3, 6, 12, 24, 48, 96, and 192 values, termedE3,E6, and so forth toE192, with maximum errors of 40%, 20%, 10%, 5%, 2%, 1%, 0.5%, respectively.[nb 1]Also, the E192 series is used for 0.25% and 0.1% tolerance resistors.
Historically, the E series is split into two major groupings:
The formula for each value is determined by them-th root of a power of ten, but the calculated values do not match the official values in every E series.[14]
$V_{n} = \operatorname{round}\left(\sqrt[m]{10^{n}}\right)$
For E3, E6, E12, and E24, the values from the formula are rounded to 2 significant figures, but eight official values differ from the calculated values. During the early half of the 20th century, electronic components had different sets of component values than today. In the late 1940s, standards organizations started working towards codifying a standard set of official component values, and they decided that it was not practical to change some of the formerly established historical values. The first standard was accepted in Paris in 1950, then published as IEC 63 in 1952.[12]The official values of the E3, E6, and E12 series aresubsetsof the official E24 values.
The E3 series is rarely used,[nb 1]except for some components with high variations likeelectrolytic capacitors, where the giventoleranceis often unbalanced between negative and positive such as+50%−30%or+80%−20%, or for components with uncritical values such aspull-up resistors. The calculated constant tangential tolerance for this series is (∛10 − 1) ÷ (∛10 + 1) ≈ 36.60%. While the standard only specifies a tolerance greater than 20%, other sources indicate 40% or 50%. Currently, most electrolytic capacitors are manufactured with values in the E6 or E12 series, thus the E3 series is mostly obsolete.
For E48, E96, and E192, the values from the formula are rounded to 3 significant figures, but one official value differs from the calculated value: the E192 entry 9.19, where the formula yields 9.20.
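The rounding rule described above is easy to reproduce. A Python sketch (function name ours) generates one decade of values from the formula and makes the historical exceptions visible:

```python
def e_series(m: int) -> list[float]:
    """One decade of nominal E-series values from the m-th-root formula,
    rounded to 2 significant figures for E3..E24, 3 for E48..E192."""
    decimals = 1 if m <= 24 else 2   # digits after the leading digit
    return [round(10 ** (n / m), decimals) for n in range(m)]

print(e_series(12))
# formula: [1.0, 1.2, 1.5, 1.8, 2.2, 2.6, 3.2, 3.8, 4.6, 5.6, 6.8, 8.3]
# official E12 instead uses 2.7, 3.3, 3.9, 4.7, and 8.2 at those positions
```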
Since some values of the E24 series do not exist in the E48, E96, or E192 series, some resistor manufacturers have added missing E24 values intosomeof their 1%, 0.5%, 0.25%, 0.1% tolerance resistor families. This allows easier purchasing migration between various tolerances. This E series merging is noted on resistor datasheets and webpages as "E96 + E24" or "E192 + E24".[15][16][17]In the following table, the red cells denote E24 values that don't exist in the E48, E96, or E192 series, and indicate the closest value or values that do instead.
If a manufacturer sold resistors with all values in a range of 1ohmto 10 megaohms, the available resistance values for E3 through E12 would be:
If a manufacturer sold capacitors with all values in a range of 1pFto 10,000 μF, the available capacitance values for E3 and E6 would be:
List of official values for each E series:[nb 1]
| https://en.wikipedia.org/wiki/E1_series_(preferred_numbers)
Engineering notationorengineering form(alsotechnical notation) is a version ofscientific notationin which the exponent of ten is always selected to be divisible by three to match the common metric prefixes, i.e. scientific notation that aligns with powers of a thousand, for example, 531×10³ instead of 5.31×10⁵ (but on calculator displays written without the ×10 to save space). As an alternative to writing powers of 10,SI prefixescan be used,[1]which also usually provide steps of a factor of a thousand.[nb 1]On most calculators, engineering notation is called "ENG" mode, while scientific notation is denoted "SCI".
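A minimal Python sketch (function name ours) shows the normalization rule: pick the largest exponent divisible by three that keeps the mantissa in [1, 1000):

```python
import math

def to_engineering(x: float) -> str:
    """Format x in engineering notation: exponent a multiple of 3,
    mantissa in [1, 1000)."""
    if x == 0:
        return "0e+0"
    exponent = 3 * math.floor(math.log10(abs(x)) / 3)
    mantissa = x / 10 ** exponent
    return f"{mantissa:g}e{exponent:+d}"

print(to_engineering(531_000))   # 531e+3
print(to_engineering(1.25e-05))  # 12.5e-6
```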
An early implementation of engineering notation in the form of range selection and number display with SI prefixes was introduced in the computerized HP 5360Afrequency counterbyHewlett-Packardin 1969.[1]
Based on an idea by Peter D. Dickinson[2][1]the firstcalculatorto support engineering notation displaying the power-of-ten exponent values was theHP-25in 1975.[3]It was implemented as a dedicated display mode in addition to scientific notation.
In 1975,Commodoreintroduced a number of scientific calculators (like theSR4148/SR4148R[4]andSR4190R[5]) providing avariable scientific notation, where pressing theEE↓andEE↑keys shifted the exponent and decimal point by ±1[nb 2]inscientificnotation. Between 1976 and 1980 the sameexponent shiftfacility was also available on someTexas Instrumentscalculators of the pre-LCDera such as earlySR-40,[6][7]TI-30[8][9][10][11][12][13][14][15]andTI-45[16][17]model variants utilizing (INV)EE↓instead. This can be seen as a precursor to a feature implemented on manyCasiocalculators since 1978/1979 (e.g. in theFX-501P/FX-502P), where number display inengineeringnotation is available on demand by the single press of a (INV)ENGbutton (instead of having to activate a dedicated display mode as on most other calculators), and subsequent button presses would shift the exponent and decimal point of the number displayed by ±3[nb 2]in order to easily let results match a desired prefix. Some graphical calculators (for example thefx-9860G) in the 2000s also support the display of some SI prefixes (f, p, n, μ, m, k, M, G, T, P, E) as suffixes in engineering mode.
Compared to normalized scientific notation, one disadvantage of using SI prefixes and engineering notation is thatsignificant figuresare not always readily apparent when the smallest significant digit or digits are 0. For example, 500 μm and 500×10⁻⁶ m cannot express theuncertaintydistinctions between 5×10⁻⁴ m, 5.0×10⁻⁴ m, and 5.00×10⁻⁴ m. This can be solved by changing the range of the coefficient in front of the power from the common 1–1000 to 0.001–1.0. In some cases this may be suitable; in others it may be impractical. In the previous example, 0.5 mm, 0.50 mm, or 0.500 mm would have been used to show uncertainty and significant figures. It is also common to state the precision explicitly, such as "47 kΩ ± 5%".
Another example: when thespeed of light(exactly 299792458 m/s[18]by the definition of the meter) is expressed as 3.00×10⁸ m/s or 3.00×10⁵ km/s then it is clear that it is between 299500 km/s and 300500 km/s, but when using 300×10⁶ m/s, or 300×10³ km/s, 300000 km/s, or the unusual but short 300 Mm/s, this is not clear. A possibility is using 0.300×10⁹ m/s or 0.300 Gm/s.
On the other hand, engineering notation allows the numbers to explicitly match their corresponding SI prefixes, which facilitates reading and oral communication. For example, 12.5×10⁻⁹ m can be read as "twelve-point-five nanometers" (10⁻⁹ beingnano) and written as 12.5 nm, while its scientific notation equivalent 1.25×10⁻⁸ m would likely be read out as "one-point-two-five times ten-to-the-negative-eight meters".
Engineering notation, like scientific notation generally, can use theE notation, such that 3.0×10⁻⁹ can be written as 3.0E−9 or 3.0e−9. The E (or e) should not be confused withEuler's number eor the symbol for theexa-prefix.
Just like decimal engineering notation can be viewed as a base-1000 scientific notation (10³ = 1000),binaryengineering notation relates to a base-1024 scientific notation (2¹⁰ = 1024), where the exponent of two must be divisible by ten. This is closely related to the base-2floating-pointrepresentation (B notation) commonly used in computer arithmetic, and the usage of IECbinary prefixes, e.g. 1B10 for 1 × 2¹⁰, 1B20 for 1 × 2²⁰, 1B30 for 1 × 2³⁰, 1B40 for 1 × 2⁴⁰, etc.[19] | https://en.wikipedia.org/wiki/Engineering_notation
TheIndian numbering systemis used inIndia,Pakistan,Nepal,Sri Lanka, andBangladeshto express large numbers, which differs from theInternational System of Units. Commonly used quantities includelakh(one hundred thousand) andcrore(ten million) – written as 100,000 and 10,000,000 respectively in somelocales.[1]For example: 150,000rupeesis "1.5lakhrupees" which can be written as "1,50,000 rupees", and 30,000,000 (thirty million) rupees is referred to as "3crorerupees" which can be written as "3,00,00,000 rupees".
There are names for numbers larger thancrore, but they are less commonly used. These includearab(100crore, 10⁹),kharab(100arab, 10¹¹),nilor sometimestransliteratedasneel(100 kharab, 10¹³),padma(100 nil, 10¹⁵),shankh(100 padma, 10¹⁷), andmahashankh(100 shankh, 10¹⁹). In common parlance (though inconsistent), thelakhandcroreterminology repeats for larger numbers. Thuslakh croreis 10¹².
In the ancient Indian system, still in use in regional languages of India, there are distinct words for successive powers of ten. These names, starting at 1000, are respectivelysahasra,ayuta,laksha,niyuta,koti,arbhudha,abhja,karva,nikarva,mahapadma,shanmkhu,jaladhi,amtya,madhya, andparaardha. In the Indian system now prevalent in the northern parts, the next powers of ten are onelakh, tenlakh, onecrore, tencrore, onearab(or one hundredcrore), and so on.
The Indian system isdecimal(base-10), same as in theInternational System of Units, and the first fiveorders of magnitudeare named in a similar way: one (10⁰), ten (10¹), one hundred (10²), one thousand (10³), and ten thousand (10⁴). For higher powers of ten, naming diverges. The Indian system uses names for everysecondpower of ten:lakh(10⁵),crore(10⁷),arab(10⁹),kharab(10¹¹), etc. In the rest of the world, under thelong and short scales, there are names for everythirdpower of ten. The short scale uses million (10⁶), billion (10⁹), trillion (10¹²), etc.
The Indian system groups digits of a large decimal representation differently than theInternational System of Units. Like the international convention, the Indian system groups the first three digits to the left of the decimal point, but thereafter it groups digits in pairs, to align with the naming of quantities at multiples of 100.[2]
As in English-language locales, the Indian system uses aperiodas thedecimal separatorand thecommafor grouping, while other locales use a comma for the decimal separator and athin spaceor point to group digits.[3]
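The 3-then-2 grouping is straightforward to implement. A Python sketch (function name ours) reproduces the groupings quoted at the start of this article:

```python
def indian_grouping(n: int) -> str:
    """Group digits Indian-style: last three digits, then pairs.
    E.g. 30000000 -> '3,00,00,000'."""
    s = str(abs(n))
    if len(s) <= 3:
        grouped = s
    else:
        head, tail = s[:-3], s[-3:]
        pairs = []
        while head:
            pairs.append(head[-2:])
            head = head[:-2]
        grouped = ",".join(reversed(pairs)) + "," + tail
    return ("-" if n < 0 else "") + grouped

print(indian_grouping(150_000))     # 1,50,000
print(indian_grouping(30_000_000))  # 3,00,00,000
```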
When speakers of indigenous Indian languages are speaking English, the pronunciations may be closer to their mother tongue; e.g. "lakh" and "crore" might be pronounced /lɑkʰ/, /kɑrɔːr/, respectively.
The table below includes the spelling and pronunciation of numbers in various Indian languages along with corresponding short scale names.
There are various systems of numeration found in the ancient epic literature of India (itihasas). The following table gives one such system used in the ValmikiRamayana.[4]
The denominations by which land was measured in theKumaon Kingdomwere based on arable lands and thus followed an approximate system with local variations. The most common of these was avigesimal(base-20) numbering system with the main denomination called abisi(from the Hindustani numberbīs, "twenty"), which corresponded to the land required to sow 20nalisof seed. Consequently, its actual land measure varied based on the quality of the soil.[5]This system became the established norm in Kumaon by 1891.[6]
Below is a list of translations for the words lakh and crore in other languages spoken in the Indian subcontinent:
Formal written publications in English in India tend to use lakh/crore for Indian currency and International numbering for foreign currencies.[7]
The official usage of this system is limited to the nations ofIndia,PakistanandBangladesh. It is universally employed within these countries, and is preferred to the International numbering system.[8]
Sri LankaandNepalused this system in the past but have switched to the International numbering system in recent years. In theMaldives, the term lakh is widely used in official documents and local speech. However, theInternational System of Unitsis preferred for higher denominations (such as millions).
Most institutions and citizens in India use the Indian number system. TheReserve Bank of Indiawas noted as a rare exception in 2015,[9]whereas by 2024 the Indian system was used for amounts in rupees and the International system for foreign currencies throughout the Reserve Bank's website.[10] | https://en.wikipedia.org/wiki/Indian_numbering_system |
TheJoint Committee for Guides in Metrology(JCGM) is an organization inSèvresthat prepared theGuide to the Expression of Uncertainty in Measurement(GUM) and theInternational Vocabulary of Metrology(VIM). The JCGM assumed responsibility for these two documents from theISOTechnical Advisory Group 4 (TAG4).
Partner organizations below send representatives into the JCGM:
JCGM has two Working Groups. Working Group 1, "Expression of uncertainty in measurement", has the task to promote the use of the GUM and to prepare Supplements and other documents for its broad application. Working Group 2, "Working Group on International vocabulary of basic and general terms in metrology (VIM)", has the task to revise and promote the use of the VIM. For further information on the activity of the JCGM, seewww.bipm.org.
The Guide to the Expression of Uncertainty in Measurement (GUM)[1]is a document published by theJCGMthat establishes general rules for evaluating and expressing uncertainty in measurement.[2]
The GUM provides a way to express the perceived quality of the result of a measurement. Rather than express the result by providing an estimate of the measurand along with information about systematic and random error values (in the form of an "error analysis"), the GUM approach is to express the result of a measurement as an estimate of the measurand along with an associated measurement uncertainty.
One of the basic premises of the GUM approach is that it is possible to characterize the quality of a measurement by accounting for both systematic and random errors on a comparable footing, and a method is provided for doing that. This method refines the information previously provided in an "error analysis", and puts it on a probabilistic basis through the concept of measurement uncertainty.
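As a rough illustration of that comparable footing, the sketch below (in Python, with hypothetical readings and a hypothetical Type B value) combines a statistically evaluated component with one taken from prior knowledge in quadrature, in the spirit of the GUM's approach; it is a sketch of the idea, not a substitute for the GUM's full procedure:

```python
import math
import statistics

# Type A evaluation: statistics of repeated readings (hypothetical data)
readings = [10.03, 10.01, 9.98, 10.02, 10.00]
mean = statistics.mean(readings)
u_a = statistics.stdev(readings) / math.sqrt(len(readings))

# Type B evaluation: e.g. from a calibration certificate (hypothetical value)
u_b = 0.010

# Combined standard uncertainty: uncorrelated components add in quadrature
u_c = math.hypot(u_a, u_b)
print(f"result: {mean:.3f} +/- {u_c:.3f} (coverage factor k = 1)")
```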
Another basic premise of the GUM approach is that it is not possible to state how well the true value of the measurand is known, but only how well it is believed to be known. Measurement uncertainty can therefore be described as a measure of how well one believes one knows the true value of the measurand. This uncertainty reflects the incomplete knowledge of the measurand.
The notion of "belief" is an important one, since it moves metrology into a realm where results of measurement need to be considered and quantified in terms ofprobabilitiesthat express degrees of belief.
For a review on other applicable measurement uncertainty guidance documents see.[3]
TheInternational Vocabulary of Metrology(VIM)[4]is an attempt to find a common language and terminology inmetrology, i.e. the science of measurements, across different fields of science, legislature and commerce. The 3rd edition was developed using the principles of terminology work[5](ISO 704:2000 Terminology Work—Principles and Methods; ISO 1087-1:2000 Terminology Work—Vocabulary—Part 1:Theory and Application; ISO 10241:1992 International Terminology Standards—Preparation and Layout).
The VIM is the most global attempt to standardize terminology across different fields of science, legislature, commerce and trade.
Acceptance of VIM standards is rather good in legislature, commerce and trade where it is often legally required. Acceptance is also good in textbooks and many fields of sciences. There are, however, some fields of science that stick to their traditional jargon, most notablytheoretical physicsandmass spectrometry.[citation needed]
Revision by Working Group 1 of the GUM itself is under way, in parallel with work on preparing documents in a series of JCGM documents under the generic heading Evaluation of measurement data. The parts in the series are:[6] | https://en.wikipedia.org/wiki/International_vocabulary_of_metrology |
ISO/IEC 80000,Quantities and units, is aninternational standarddescribing theInternational System of Quantities(ISQ). It was developed and promulgated jointly by theInternational Organization for Standardization(ISO) and theInternational Electrotechnical Commission(IEC). It serves as a style guide for usingphysical quantitiesandunits of measurement, formulas involving them, and their corresponding units, in scientific and educational documents for worldwide use. The ISO/IEC 80000 family of standards was completed with the publication of the first edition of Part 1 in November 2009.[1][2]
By 2021, ISO/IEC 80000 comprised 13 parts, two of which (parts 6 and 13) were developed by IEC and the remaining 11 were developed by ISO, with a further three parts (15, 16, and 17) under development. Part 14 was withdrawn.
By 2021 the 80000 standard had 13 published parts. A description of each part is available online, with the complete parts for sale.[20][21]
ISO 80000-1:2022 revised ISO 80000-1:2009, which replaced ISO 31-0:1992 and ISO 1000:1992.[22]This document gives general information and definitions concerning quantities, systems of quantities, units, quantity and unit symbols, and coherent unit systems, especially the International System of Quantities (ISQ).[3]The descriptive text of this part is available online.[23][24]
According to the standard, symbols for quantities are "generally single letters from the Latin or Greek alphabet" and are "written in italic (sloping) type". Examples include
ISO 80000-2:2019 revised ISO 80000-2:2009,[4]which supersededISO 31-11.[25]It specifies mathematical symbols, explains their meanings, and gives verbal equivalents and applications. The descriptive text of this part is available online.[26]
ISO 80000-3:2019 revised ISO 80000-3:2006,[5]which supersedesISO 31-1andISO 31-2.[27]It gives names, symbols, definitions and units for quantities of space and time. The descriptive text of this part is available online.[28]
A definition of thedecibel, included in the original 2006 publication, was omitted in the 2019 revision, leaving ISO/IEC 80000 without a definition of this unit; a new part of the standard, IEC 80000-15 (Logarithmic and related quantities), is under development.
ISO 80000-4:2019 revised ISO 80000-4:2006,[6]which supersededISO 31-3.[29]It gives names, symbols, definitions and units for quantities of mechanics. The descriptive text of this part is available online.[30]
ISO 80000-5:2019 revised ISO 80000-5:2007,[7]which supersededISO 31-4.[31]It gives names, symbols, definitions and units for quantities ofthermodynamics. The descriptive text of this part is available online.[32]
IEC 80000-6:2022 revised IEC 80000-6:2008,[8]which supersededISO 31-5[33]as well as IEC 60027-1. It gives names, symbols, and definitions for quantities and units ofelectromagnetism. The descriptive text of this part is available online.[34]
ISO 80000-7:2019 revised ISO 80000-7:2008,[9]which supersededISO 31-6.[35]It gives names, symbols, definitions and units for quantities used forlightandoptical radiationin thewavelengthrange of approximately 1 nm to 1 mm. The descriptive text of this part is available online.[36]
ISO 80000-8:2020 revised ISO 80000-8:2007,[37]which revised ISO 31-7:1992.[38]It gives names, symbols, definitions, and units for quantities ofacoustics. The descriptive text of this part is available online.[39]
It has a foreword, introduction, scope, normative references (of which there are none), and terms and definitions. It includes definitions ofsound pressure,sound power, andsound exposure, and their correspondinglevels:sound pressure level,sound power level, andsound exposure level. It includes definitions of the following quantities:
IEC 80000-13:2025 revised IEC 80000-13:2008, which replaced subclauses 3.8 and 3.9 of IEC 60027-2:2005 andIEC 60027-3.[15]It defines quantities and units used ininformation scienceandinformation technology, and specifies names and symbols for these quantities and units. It has a scope; normative references; names, definitions, and symbols; and prefixes forbinarymultiples.
Quantities defined in this standard are:
The standard also includes definitions for units relating to information technology, such as theerlang(E),bit(bit),octet(o),byte(B),baud(Bd),shannon(Sh),hartley(Hart), and thenatural unit of information(nat).
Clause 4 of the standard defines standardbinary prefixesused to denote powers of 1024 as 1024¹ (kibi-), 1024² (mebi-), 1024³ (gibi-), 1024⁴ (tebi-), 1024⁵ (pebi-), 1024⁶ (exbi-), 1024⁷ (zebi-), 1024⁸ (yobi-), 1024⁹ (robi-), and 1024¹⁰ (quebi-).
Part 1 of ISO 80000 introduces the International System of Quantities and describes its relationship with theInternational System of Units(SI). Specifically, its introduction states "The system of quantities, including the relations among the quantities used as the basis of the units of the SI, is named theInternational System of Quantities, denoted 'ISQ', in all languages." It further clarifies that "ISQ is simply a convenient notation to assign to the essentially infinite and continually evolving and expanding system of quantities and equations on which all of modern science and technology rests. ISQ is a shorthand notation for the 'system of quantities on which the SI is based'."
The standard includes all SI units but is not limited to only SI units. Units that form part of the standard but not the SI include the units of information storage (bitandbyte), units ofentropy(shannon,natural unit of informationandhartley), and theerlang(a unit of traffic intensity). | https://en.wikipedia.org/wiki/ISO/IEC_80000 |
Numeralornumber prefixesareprefixesderived fromnumeralsor occasionally othernumbers. In English and many other languages, they are used to coin numerous series of words. For example:
In many European languages there are two principal systems, taken fromLatinandGreek, each with several subsystems; in addition,Sanskritoccupies a marginal position.[B]There is also an international set ofmetric prefixes, which are used in the world'sstandard measurement system.
In the following prefixes, a final vowel is normally dropped before a root that begins with a vowel, with the following exceptions:bi-is extended tobis-before a vowel, and the othermonosyllables,du-,di-,dvi-, andtri-, never vary.
Words in thecardinalcategory arecardinal numbers, such as the Englishone,two,three, which name the count of items in a sequence. Themultiplecategory consists ofadverbialnumbers, like the Englishonce,twice,thrice, that specify the number of events or instances of otherwise identical or similar items. Enumeration with thedistributivecategory originally was meant to specifyone each,two eachorone by one,two by two, etc., giving how many items of each type are desired or had been found, although distinct word forms for that meaning are now mostly lost. Theordinalcategory is based onordinal numberssuch as the Englishfirst,second,third, which specify position of items in a sequence. In Latin and Greek, the ordinal forms are also used for fractions for amounts higher than 2; only the fraction1/2has special forms.
The same suffix may be used with more than one category of number, as for example the ordinal numberssecondaryandtertiaryand the distributive numbersbinaryandternary.
For the hundreds, there are competing forms: those in-gent-, from the original Latin, and those in-cent-, derived fromcenti-, etc., plus the prefixes for 1 through 9.
Many of the items in the following tables are not in general use, but may rather be regarded as coinages by individuals. In scientific contexts, eitherscientific notationorSI prefixesare used to express very large or very small numbers, and not unwieldy prefixes.
Because of the common inheritance of Greek and Latin roots across theRomance languages, the import of much of that derived vocabulary into non-Romance languages (such as intoEnglishviaNorman French), and theborrowingof 19th and 20th century coinages into many languages, the same numerical prefixes occur in many languages.
Numerical prefixes are not restricted to denoting integers. Some of the SI prefixes denote negative powers of 10, i.e. division by a multiple of 10 rather than multiplication by it. Several common-use numerical prefixes denotevulgar fractions.
Words containing non-technical numerical prefixes are usually not hyphenated. This is not an absolute rule, however, and there are exceptions (for example:quarter-deckoccurs in addition toquarterdeck). There are no exceptions for words comprising technical numerical prefixes, though.Systematic namesand words comprisingSI prefixesand binary prefixes are not hyphenated, by definition.
Nonetheless, for clarity, dictionaries list numerical prefixes in hyphenated form, to distinguish the prefixes from words with the same spellings (such asduo-andduo).
Several technical numerical prefixes are not derived from words for numbers. (mega-is not derived from a number word, for example.) Similarly, some are only derived from words for numbers inasmuch as they areword play. (Peta-is word play onpenta-, for example. See its etymology for details.) Themetric prefixespeta, exa, zetta, yotta, ronna, and quetta are based on the Ancient Greek or Ancient Latin numbers from 5 to 10, referring to the fifth through tenth powers of1000. The initial letter h has been removed from some of these stems and the initial letters z, y, r, and q have been added, ascending in reverse alphabetical order, to avoid confusion with other metric prefixes.
The root language of a numerical prefix need not be related to the root language of the word that it prefixes. Some words comprising numerical prefixes arehybrid words.
In certain classes of systematic names, there are a few other exceptions to the rule of using Greek-derived numerical prefixes. TheIUPAC nomenclature of organic chemistry, for example, uses the numerical prefixes derived from Greek, except for the prefix for 9 (as mentioned) and the prefixes from 1 to 4 (meth-, eth-, prop-, and but-), which are not derived from words for numbers. These prefixes were invented by the IUPAC, deriving them from the pre-existing names for several compounds that it was intended to preserve in the new system:methane(viamethyl, which is in turn from the Greek word for wine),ethane(fromethylcoined byJustus von Liebigin 1834),propane(frompropionic, which is in turn frompro-and the Greek word for fat), andbutane(frombutyl, which is in turn frombutyric, which is in turn from the Latin word for butter). | https://en.wikipedia.org/wiki/Numeral_prefix |
In a ratio scale based on powers of ten, the order of magnitude is a measure of the nearness of two figures. Two numbers are "within an order of magnitude" of each other if their ratio is between 1/10 and 10. In other words, the two numbers are within about a factor of 10 of each other.[1]
For example, 1 and 1.02 are within an order of magnitude. So are 1 and 2, 1 and 9, or 1 and 0.2. However, 1 and 15 are not within an order of magnitude, since their ratio is 15/1 = 15 > 10. The reciprocal ratio, 1/15, is less than 0.1, so the same result is obtained.
Differences in order of magnitude can be measured on a base-10 logarithmic scale in "decades" (i.e., factors of ten).[2] For example, there is one order of magnitude between 2 and 20, and two orders of magnitude between 2 and 200. Each division or multiplication by 10 is called an order of magnitude.[3] This phrasing helps quickly express the difference in scale between 2 and 2,000,000: they differ by 6 orders of magnitude.
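Both checks can be written in a few lines of Python (a minimal sketch; the function names are illustrative):

```python
import math

def within_order_of_magnitude(x: float, y: float) -> bool:
    """True if the ratio of the two numbers lies between 1/10 and 10."""
    ratio = x / y
    return 0.1 < ratio < 10

def magnitude_difference(x: float, y: float) -> float:
    """Difference in orders of magnitude, in decades (base-10 log units)."""
    return abs(math.log10(x) - math.log10(y))

assert within_order_of_magnitude(1, 9)
assert not within_order_of_magnitude(1, 15)                 # 15/1 > 10
assert math.isclose(magnitude_difference(2, 2_000_000), 6)  # six decades apart
```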
Examples of numbers of different magnitudes can be found at Orders of magnitude (numbers).
Below are examples of different methods of partitioning the real numbers into specific "orders of magnitude" for various purposes. There is not one single accepted way of doing this, and different partitions may be easier to compute but less useful for approximation, or better for approximation but more difficult to compute.
Generally, the order of magnitude of a number is the smallest power of 10 used to represent that number.[4] To work out the order of magnitude of a number n, the number is first expressed in the form n = a × 10^b, where 1/√10 ≤ a < √10, or approximately 0.316 ≲ a ≲ 3.16. Then, b represents the order of magnitude of the number. The order of magnitude can be any integer. The table below enumerates the order of magnitude of some numbers using this definition:
The geometric mean of 10^(b−1/2) and 10^(b+1/2) is 10^b, meaning that a value of exactly 10^b (i.e., a = 1) represents a geometric halfway point within the range of possible values of a.
Some use a simpler definition where 0.5 ≤ a < 5.[5] This definition has the effect of lowering the values of b slightly:
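Both decompositions can be sketched in Python (helper names are illustrative, and positive inputs are assumed):

```python
import math

def decompose(n: float) -> tuple[float, int]:
    """Split n into (a, b) with n = a * 10**b and 1/sqrt(10) <= a < sqrt(10),
    i.e. b is log10(n) rounded half-up to the nearest integer."""
    b = math.floor(math.log10(n) + 0.5)
    return n / 10**b, b

def decompose_simple(n: float) -> tuple[float, int]:
    """The same idea under the simpler definition 0.5 <= a < 5."""
    b = math.floor(math.log10(2 * n))
    return n / 10**b, b

print(decompose(4_000_000))         # (0.4, 7)
print(decompose_simple(4_000_000))  # (4.0, 6) -- b is lowered slightly
```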
Orders of magnitude are used to make approximate comparisons. If numbers differ by one order of magnitude, x is about ten times different in quantity than y. If values differ by two orders of magnitude, they differ by a factor of about 100. Two numbers of the same order of magnitude have roughly the same scale: the larger value is less than ten times the smaller value.
The growing amounts of Internet data have led to the addition of new SI prefixes over time, most recently in 2022.[6]
The order of magnitude of a number is, intuitively speaking, the number of powers of 10 contained in the number. More precisely, the order of magnitude of a number can be defined in terms of the common logarithm, usually as the integer part of the logarithm, obtained by truncation. For example, the number 4000000 has a logarithm (in base 10) of 6.602; its order of magnitude is 6. When truncating, a number of this order of magnitude is between 10^6 and 10^7. In a similar example, with the phrase "seven-figure income", the order of magnitude is the number of figures minus one, so it is very easily determined without a calculator to be 6. An order of magnitude is an approximate position on a logarithmic scale.
An order-of-magnitude estimate of a variable, whose precise value is unknown, is an estimate rounded to the nearest power of ten. For example, an order-of-magnitude estimate for a variable between about 3 billion and 30 billion (such as the human population of the Earth) is 10 billion. To round a number to its nearest order of magnitude, one rounds its logarithm to the nearest integer. Thus 4000000, which has a logarithm (in base 10) of 6.602, has 7 as its nearest order of magnitude, because "nearest" implies rounding rather than truncation. For a number written in scientific notation, this logarithmic rounding scale requires rounding up to the next power of ten when the multiplier is greater than the square root of ten (about 3.162). For example, the nearest order of magnitude for 1.7×10^8 is 8, whereas the nearest order of magnitude for 3.7×10^8 is 9. An order-of-magnitude estimate is sometimes also called a zeroth order approximation.
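The difference between truncating and rounding the logarithm can be made concrete with a short Python sketch (helper names are illustrative):

```python
import math

def oom_truncated(n: float) -> int:
    """Order of magnitude as the integer part of log10(n) (truncation)."""
    return math.trunc(math.log10(n))

def oom_nearest(n: float) -> int:
    """Nearest order of magnitude: log10(n) rounded half-up."""
    return math.floor(math.log10(n) + 0.5)

print(oom_truncated(4_000_000))  # 6   (log10 is 6.602, truncated)
print(oom_nearest(4_000_000))    # 7   (6.602 rounds up)
print(oom_nearest(1.7e8))        # 8   (multiplier 1.7 < sqrt(10))
print(oom_nearest(3.7e8))        # 9   (multiplier 3.7 > sqrt(10))
```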
An order of magnitude is an approximation of the logarithm of a value relative to some contextually understood reference value, usually 10, interpreted as the base of the logarithm and the representative of values of magnitude one. Logarithmic distributions are common in nature and considering the order of magnitude of values sampled from such a distribution can be more intuitive. When the reference value is 10, the order of magnitude can be understood as the number of digits minus one in the base-10 representation of the value. Similarly, if the reference value is one of some powers of 2, since computers store data in a binary format, the magnitude can be understood in terms of the amount of computer memory needed to store that value.
Other orders of magnitude may be calculated using bases other than integers. In the field of astronomy, the nighttime brightnesses of celestial bodies are ranked by "magnitudes" in which each increasing level is brighter by a factor of 100^(1/5) ≈ 2.512 than the previous level. Thus, a level being 5 magnitudes brighter than another indicates that it is a factor of (100^(1/5))^5 = 100 times brighter: that is, two base 10 orders of magnitude.
This series of magnitudes forms a logarithmic scale with a base of 100^(1/5).
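A one-line computation captures this scale (a sketch; the function name is illustrative):

```python
def brightness_ratio(delta_magnitudes: float) -> float:
    """Brightness ratio for a difference in astronomical magnitudes:
    each magnitude step is a factor of 100**(1/5), about 2.512."""
    return 100 ** (delta_magnitudes / 5)

print(brightness_ratio(1))  # ~2.512
print(brightness_ratio(5))  # 100.0 -- two base-10 orders of magnitude
```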
The different decimal numeral systems of the world use a larger base to better envision the size of the number, and have created names for the powers of this larger base. The table shows what number the order of magnitude aims at for base 10 and for base 1,000,000. It can be seen that the order of magnitude is included in the number name in this example, because bi- means 2, tri- means 3, etc. (these make sense in the long scale only), and the suffix -illion tells that the base is 1,000,000. But the number names billion, trillion themselves (here with another meaning than in the first chapter) are not names of the orders of magnitude; they are names of "magnitudes", that is, the numbers 1,000,000,000,000 etc.
SI units in the table at right are used together with SI prefixes, which were devised with mainly base 1000 magnitudes in mind. The IEC standard prefixes with base 1024 were invented for use in electronic technology. | https://en.wikipedia.org/wiki/Order_of_magnitude
The order of magnitude of data may be specified in strictly standards-conformant units of information and multiples of the bit and byte with decimal scaling, or using historically common usages of a few multiplier prefixes in a binary interpretation which was common in computing until new binary prefixes were defined in the 1990s.
The byte has been a commonly used unit of measure for much of the information age to refer to a number of bits. In the early days of computing, it was used for differing numbers of bits based on convention and computer hardware design, but today means 8 bits. A more accurate, but less commonly used name for 8 bits is octet.
Commonly, a decimal SI metric prefix (such as kilo-) is used with bit and byte to express larger sizes (kilobit, kilobyte). But this is usually inaccurate since these prefixes are decimal, whereas binary hardware sizes are usually binary. Customarily, each metric prefix, 1000^n, is used to mean a close approximation of a binary multiple, 1024^n. Often, this distinction is implicit, and therefore, use of metric prefixes can lead to confusion. The IEC binary prefixes (such as kibi-) allow for accurate description of hardware sizes, but are not commonly used.[1][2]
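The size of the discrepancy can be quantified with a short Python sketch (illustrative only):

```python
# Metric prefixes step by 1000, the customary binary interpretation by 1024;
# the gap widens with each power.
for n, name in enumerate(["kilo/kibi", "mega/mebi", "giga/gibi", "tera/tebi"], start=1):
    drift = 1024**n / 1000**n
    print(f"{name}: 1024^{n} / 1000^{n} = {drift:.3f}")

# A drive marketed as "500 GB" (500 * 10**9 bytes) therefore shows up as about:
print(500 * 10**9 / 2**30, "GiB")  # ~465.66 GiB
```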
This page references two kinds of entropy which are not entirely equivalent. For comparison, the Avogadro constant is 6.02214076×10^23 entities per mole, based upon the number of atoms in 12 grams of carbon-12 isotope. See Entropy in thermodynamics and information theory.
– 7 bits: minimum length to store 2 decimal digits
– 8 bits (1 byte): equivalent to 1 "word" on 8-bit computers (Apple II, Atari 8-bit computers, Commodore 64, etc.); the "word size" for 8-bit console systems including the Atari 2600 and Nintendo Entertainment System
– minimum bit length to store a single byte with error-correcting computer memory; minimum frame length to transmit a single byte with asynchronous serial protocols
– 16 bits: the Basic Multilingual Plane of Unicode, containing character codings for almost all modern languages and a large number of symbols; the basic unit in UTF-16 (the full Universal Character Set (Unicode) can be encoded in one or two of these); commonly used in many programming languages as the size of an integer capable of holding 65,536 different values; equivalent to 1 "word" on 16-bit computers (IBM PC, Commodore Amiga); the "word size" for 16-bit console systems including the Sega Genesis, Super Nintendo, and Mattel Intellivision
– 32 bits: size of an integer capable of holding 4,294,967,296 different values; size of an IEEE 754 single-precision floating point number; size of addresses in IPv4, the current Internet Protocol; equivalent to 1 "word" on 32-bit processors, including those for the Apple Macintosh, Pentium-based PC, PlayStation, GameCube, Xbox, and Wii
– 64 bits: size of an integer capable of holding 18,446,744,073,709,551,616 different values; size of an IEEE 754 double-precision floating point number; equivalent to 1 "word" on 64-bit computers (Power, PA-RISC, Alpha, Itanium, SPARC, x86-64 PCs and Macintoshes); the "word size" for 64-bit console systems including the Nintendo 64, PlayStation 2, PlayStation 3, and Xbox 360
– 128 bits: size of addresses in IPv6, the successor protocol of IPv4; minimum cipher strength of the Rijndael and AES encryption standards, and of the widely used MD5 cryptographic message digest algorithm; size of an SSE vector register, included as part of the x86-64 standard
– 256 bits: minimum key length for the recommended strong cryptographic message digests as of 2004; size of an AVX2 vector register, present on newer x86-64 CPUs
– 512 bits: maximum key length for the standard strong cryptographic message digests in 2004; size of an AVX-512 vector register, present on some x86-64 CPUs
– typical sector size and minimum space allocation unit on computer storage volumes with most file systems; approximate amount of information on a sheet of single-spaced typewritten paper (without formatting) | https://en.wikipedia.org/wiki/Orders_of_magnitude_(data)
The Unified Code for Units of Measure (UCUM) is a system of codes for unambiguously representing measurement units. Its primary purpose is machine-to-machine communication rather than communication between humans.[1] UCUM is used by different organizations like IEEE, and standards like DICOM, LOINC, HL7, and ISO 11240:2012.[2]
The code set includes all units defined in ISO 1000, ISO 2955-1983,[3][a] ANSI X3.50-1986,[4][b] HL7 and ENV 12435, and explicitly and verifiably addresses the naming conflicts and ambiguities in those standards to resolve them. It provides for representations of units in 7-bit ASCII for machine-to-machine communication, with unambiguous mapping between case-sensitive and case-insensitive representations.
A reference open-source implementation is available as a Java applet. There is also an OSGi-based implementation at the Eclipse Foundation.
Units are represented in UCUM with reference to a set of seven base units.[5] The UCUM base units are the metre for measurement of length, the second for time, the gram for mass, the coulomb for charge, the kelvin for temperature, the candela for luminous intensity, and the radian for plane angle. The UCUM base units form a set of mutually independent dimensions as required by dimensional analysis.
Some of the UCUM base units are different from the SI base units. UCUM is compatible with, but not isomorphic with, SI. There are four differences between the two sets of base units:
Each unit represented in UCUM is identified as either "metric" or "non-metric".[5] Metric units can accept metric prefixes as in SI. Non-metric units are not permitted to be used with prefixes. All of the base units are metric.
UCUM refers to units that are defined on non-ratio scales as "special units". Common examples include the bel and degree Celsius. While these are not considered metric units by UCUM, UCUM nevertheless allows metric prefixes to be used with them where this is common practice.[5]
Binary prefixes are also supported.
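As an illustration, the sketch below hard-codes a handful of UCUM case-sensitive codes and naively splits off a prefix; the small code tables and the parse_prefix helper are assumptions for demonstration, not part of any UCUM implementation:

```python
UNITS = {"m": "metre", "s": "second", "g": "gram", "C": "coulomb",
         "K": "kelvin", "cd": "candela", "rad": "radian", "By": "byte"}
METRIC_PREFIXES = {"k": 10**3, "m": 10**-3, "u": 10**-6}  # kilo, milli, micro
BINARY_PREFIXES = {"Ki": 2**10, "Mi": 2**20}              # kibi, mebi

def parse_prefix(code):
    """Split a code into (scale factor, unit), trying binary then metric prefixes."""
    for table in (BINARY_PREFIXES, METRIC_PREFIXES):
        for prefix, factor in table.items():
            rest = code[len(prefix):]
            if code.startswith(prefix) and rest in UNITS:
                return factor, rest
    return 1, code

print(parse_prefix("kg"))    # (1000, 'g')  -- the kilogram is kilo + gram
print(parse_prefix("KiBy"))  # (1024, 'By') -- a kibibyte
print(parse_prefix("m"))     # (1, 'm')     -- bare metre, no prefix
```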
UCUM recognizes units that are defined by a particular measurement procedure, and which cannot be related to the base units.[5] These units are identified as "arbitrary units". Arbitrary units are not commensurable with any other unit; measurements in arbitrary units cannot be compared with or converted into measurements in any other units. Many of the recognized arbitrary units are used in biochemistry and medicine.
Any metric unit in any common system of units can be expressed in terms of the UCUM base units. | https://en.wikipedia.org/wiki/Unified_Code_for_Units_of_Measure
The Suzhou numerals, also known as Sūzhōu mǎzi (蘇州碼子), are a numeral system used in China before the introduction of Hindu numerals. The Suzhou numerals are also known as Soochow numerals,[1] ma‑tzu,[2] huāmǎ (花碼),[3] cǎomǎ (草碼),[3] jīngzǐmǎ (菁仔碼),[3] fānzǐmǎ (番仔碼)[3] and shāngmǎ (商碼).[3]
The Suzhou numeral system is the only surviving variation of the rod numeral system. The rod numeral system is a positional numeral system used by the Chinese in mathematics. Suzhou numerals are a variation of the Southern Song rod numerals.
Suzhou numerals were used as shorthand in number-intensive areas of commerce such as accounting and bookkeeping. At the same time, standard Chinese numerals were used in formal writing, akin to spelling out the numbers in English. Suzhou numerals were once popular in Chinese marketplaces, such as those in Hong Kong, and in Chinese restaurants in Malaysia before the 1990s, but they have gradually been supplanted by Hindu numerals. This is similar to what happened in Europe with Roman numerals, used in ancient and medieval Europe for mathematics and commerce. Nowadays, the Suzhou numeral system is only used for displaying prices in Chinese markets[4] or on traditional handwritten invoices.
In the Suzhou numeral system, special symbols are used for digits instead of the Chinese characters. The digits of the Suzhou numerals are defined between U+3021 and U+3029 in Unicode. An additional three code points starting from U+3038 were added later.
The symbols for 5 to 9 are derived from those for 0 to 4 by adding a vertical bar on top, which is similar to adding an upper bead which represents a value of 5 in an abacus. The resemblance makes the Suzhou numerals intuitive to use together with the abacus as the traditional calculation tool.
The numbers one, two, and three are all represented by vertical bars. This can cause confusion when they appear next to each other. Standard Chinese ideographs are often used in this situation to avoid ambiguity. For example, "21" is written as "〢一" instead of "〢〡" which can be confused with "3" (〣). The first character of such sequences is usually represented by the Suzhou numeral, while the second character is represented by the Chinese ideograph.
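A short Python sketch of these two conventions (the to_suzhou helper is an illustrative assumption, implementing only the digit mapping and the alternation rule described above):

```python
SUZHOU = {0: "〇"}  # U+3007, ideographic zero
SUZHOU.update({d: chr(0x3021 + d - 1) for d in range(1, 10)})  # U+3021..U+3029
IDEOGRAPH = {1: "一", 2: "二", 3: "三"}  # standard Chinese forms for 1, 2, 3

def to_suzhou(digits: str) -> str:
    """Render a digit string, switching to the Chinese ideograph right after
    a vertical-bar numeral so consecutive 1-3 digits stay unambiguous."""
    out, prev_vertical = [], False
    for ch in digits:
        d = int(ch)
        if d in IDEOGRAPH and prev_vertical:
            out.append(IDEOGRAPH[d])
            prev_vertical = False
        else:
            out.append(SUZHOU[d])
            prev_vertical = d in IDEOGRAPH
    return "".join(out)

print(to_suzhou("21"))    # 〢一 rather than 〢〡, which could be misread as 3
print(to_suzhou("4022"))  # 〤〇〢二, as in the example below
```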
The digits are positional. The full numerical notations are written in two lines to indicate numerical value, order of magnitude, and unit of measurement. Following the rod numeral system, the digits of the Suzhou numerals are always written horizontally from left to right, just like how numbers are represented in an abacus, even when used within vertically written documents.[5]
For example:
The first line contains the numerical values; in this example, "〤〇〢二" stands for "4022". The second line consists of Chinese characters that represent the order of magnitude and unit of measurement of the first digit in the numerical representation. In this case "十元", which stands for "ten yuan". When put together, it is then read as "40.22 yuan".
Possible characters denoting order of magnitude include:
Other possible characters denoting unit of measurement include:
Notice that the decimal point is implicit when the first digit is set at the ten position. Zero is represented by the character for zero (〇). Leading and trailing zeros are unnecessary in this system.
This is very similar to the modern scientific notation for floating point numbers, where the significant digits are represented in the mantissa and the order of magnitude is specified in the exponent. Also, the unit of measurement, with the first digit indicator, is usually aligned to the middle of the "numbers" row.
In the Unicode standard version 3.0, these characters are incorrectly named Hangzhou style numerals. In the Unicode standard 4.0, an erratum was added which stated:[4]
The Suzhou numerals (Chinese su1zhou1 ma3zi) are special numeric forms used by traders to display the prices of goods. The use of "HANGZHOU" in the names is a misnomer.
All references to "Hangzhou" in the Unicode standard have been corrected to "Suzhou" except for the character names themselves, which cannot be changed once assigned, in accordance with the Unicode Stability Policy.[8](This policy allows software to use the names as unique identifiers.) | https://en.wikipedia.org/wiki/Suzhou_numerals |
In computer architecture, bit-serial architectures send data one bit at a time, along a single wire, in contrast to bit-parallel word architectures, in which data values are sent all bits of a word at once along a group of wires.
All digital computers built before 1951, and most of the early massive parallel processing machines, used a bit-serial architecture—they were serial computers.
Bit-serial architectures were developed for digital signal processing in the 1960s through 1980s, including efficient structures for bit-serial multiplication and accumulation.[1]
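The principle can be simulated in a few lines of Python: a single full adder reused once per clock cycle, with one carry bit as the only state between cycles (a sketch of the general idea, not any particular machine's design):

```python
def serial_add(a_bits, b_bits):
    """Bit-serial addition: one full-adder step per clock cycle, operands
    consumed least-significant bit first over a single wire each."""
    carry, out = 0, []
    for a, b in zip(a_bits, b_bits):
        total = a + b + carry
        out.append(total & 1)  # sum bit appears on the single output wire
        carry = total >> 1     # carry is held in a one-bit register
    out.append(carry)
    return out

# 6 + 7 = 13, with operands as LSB-first bit streams
print(serial_add([0, 1, 1, 0], [1, 1, 1, 0]))  # [1, 0, 1, 1, 0] = 13
```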
The HP Nut processor used in many Hewlett-Packard calculators operated bit-serially.[2]
Assuming N is an arbitrary integer, N serial processors will often take less FPGA area and have a higher total performance than a single N-bit parallel processor.[3]
| https://en.wikipedia.org/wiki/Bit-serial_architecture
A serial computer is a computer typified by bit-serial architecture – i.e., internally operating on one bit or digit for each clock cycle. Machines with serial main storage devices such as acoustic or magnetostrictive delay lines and rotating magnetic devices were usually serial computers.
Serial computers require much less hardware than their bit-parallel counterparts,[1] which exploit bit-level parallelism to do more computation per clock cycle. There are modern variants of the serial computer available as a soft microprocessor,[2] which can serve niche purposes where the size of the CPU is the main constraint.
The first computer that was not serial and used a parallel bus was the Whirlwind in 1951.
A serial computer is not necessarily the same as a computer with a 1-bit architecture, which is a subset of the serial computer class. 1-bit computer instructions operate on data consisting of single bits, whereas a serial computer can operate on N-bit data widths, but does so a single bit at a time.
Most of the early massive parallel processing machines were built out of individual serial processors, including: | https://en.wikipedia.org/wiki/Digit-serial_architecture
In digital circuits and machine learning, a one-hot is a group of bits among which the legal combinations of values are only those with a single high (1) bit and all the others low (0).[1] A similar implementation in which all bits are '1' except one '0' is sometimes called one-cold.[2] In statistics, dummy variables represent a similar technique for representing categorical data.
One-hot encoding is often used for indicating the state of a state machine. When using binary, a decoder is needed to determine the state. A one-hot state machine, however, does not need a decoder, as the state machine is in the nth state if, and only if, the nth bit is high.
A ring counter with 15 sequentially ordered states is an example of a state machine. A 'one-hot' implementation would have 15 flip-flops chained in series, with the Q output of each flip-flop connected to the D input of the next and the D input of the first flip-flop connected to the Q output of the 15th flip-flop. The first flip-flop in the chain represents the first state, the second represents the second state, and so on to the 15th flip-flop, which represents the last state. Upon reset of the state machine all of the flip-flops are reset to '0' except the first in the chain, which is set to '1'. The next clock edge arriving at the flip-flops advances the one 'hot' bit to the second flip-flop. The 'hot' bit advances in this way until the 15th state, after which the state machine returns to the first state.
An address decoder converts from binary to one-hot representation.
A priority encoder converts from one-hot representation to binary.
In natural language processing, a one-hot vector is a 1 × N matrix (vector) used to distinguish each word in a vocabulary from every other word in the vocabulary.[5] The vector consists of 0s in all cells with the exception of a single 1 in a cell used uniquely to identify the word. One-hot encoding ensures that machine learning does not assume that higher numbers are more important. For example, the value '8' is bigger than the value '1', but that does not make '8' more important than '1'. The same is true for words: the value 'laughter' is not more important than 'laugh'.
In machine learning, one-hot encoding is a frequently used method to deal with categorical data. Because many machine learning models need their input variables to be numeric, categorical variables need to be transformed in the pre-processing part.[6]
Categorical data can be either nominal or ordinal.[7] Ordinal data has a ranked order for its values and can therefore be converted to numerical data through ordinal encoding.[8] An example of ordinal data would be the ratings on a test ranging from A to F, which could be ranked using numbers from 6 to 1. Since there is no quantitative relationship between nominal variables' individual values, using ordinal encoding can potentially create a fictional ordinal relationship in the data.[9] Therefore, one-hot encoding is often applied to nominal variables, in order to improve the performance of the algorithm.
For each unique value in the original categorical column, a new column is created in this method. These dummy variables are then filled up with zeros and ones (1 meaning TRUE, 0 meaning FALSE).
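A minimal Python sketch of the same transformation, using the pandas get_dummies function (the example data is illustrative; the R function mentioned below plays the same role):

```python
import pandas as pd

df = pd.DataFrame({"color": ["red", "green", "blue", "green"]})

# One new 0/1 column per unique value of the original column
encoded = pd.get_dummies(df, columns=["color"], dtype=int)
print(encoded)
#    color_blue  color_green  color_red
# 0           0            0          1
# 1           0            1          0
# 2           1            0          0
# 3           0            1          0
```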
Because this process creates multiple new variables, it is prone to creating a 'big p' problem (too many predictors) if there are many unique values in the original column. Another downside of one-hot encoding is that it causes multicollinearity between the individual variables, which potentially reduces the model's accuracy.
Also, if the categorical variable is an output variable, you may want to convert the values back into a categorical form in order to present them in your application.[10]
In practical usage, this transformation is often directly performed by a function that takes categorical data as an input and outputs the corresponding dummy variables. An example would be the dummyVars function of the Caret library in R.[11] | https://en.wikipedia.org/wiki/1-of-10_code |
A numeral system is a writing system for expressing numbers; that is, a mathematical notation for representing numbers of a given set, using digits or other symbols in a consistent manner.
The same sequence of symbols may represent different numbers in different numeral systems. For example, "11" represents the number eleven in the decimal or base-10 numeral system (today, the most common system globally), the number three in the binary or base-2 numeral system (used in modern computers), and the number two in the unary numeral system (used in tallying scores).
The number the numeral represents is called its value. Additionally, not all number systems can represent the same set of numbers; for example, Roman, Greek, and Egyptian numerals don't have a representation of the number zero.
Ideally, a numeral system will represent a useful set of numbers (e.g. all integers, or rational numbers), give every number represented a unique representation (or at least a standard representation), and reflect the algebraic and arithmetic structure of the numbers. For example, the usual decimal representation gives every nonzero natural number a unique representation as a finite sequence of digits, beginning with a non-zero digit.
Numeral systems are sometimes called number systems, but that name is ambiguous, as it could refer to different systems of numbers, such as the system of real numbers, the system of complex numbers, various hypercomplex number systems, the system of p-adic numbers, etc. Such systems are, however, not the topic of this article.
Early numeral systems varied across civilizations, with the Babylonians using a base-60 system, the Egyptians developing hieroglyphic numerals, and the Chinese employing rod numerals. The Mayans independently created a vigesimal (base-20) system that included a symbol for zero. Indian mathematicians, such as Brahmagupta in the 7th century, played a crucial role in formalizing arithmetic rules and the concept of zero, which was later refined by scholars like Al-Khwarizmi in the Islamic world. As these numeral systems evolved, the efficiency of positional notation and the inclusion of zero helped shape modern numerical representation, influencing global commerce, science, and technology. The first true written positional numeral system is considered to be the Hindu–Arabic numeral system. This system was established by the 7th century in India,[1] but was not yet in its modern form because the use of the digit zero had not yet been widely accepted. Instead of a zero, sometimes the digits were marked with dots to indicate their significance, or a space was used as a placeholder. The first widely acknowledged use of zero was in 876.[2] The original numerals were very similar to the modern ones, even down to the glyphs used to represent digits.[1]
By the 13th century, Western Arabic numerals were accepted in European mathematical circles (Fibonacci used them in his Liber Abaci). Initially met with resistance, Hindu–Arabic numerals gained wider acceptance in Europe due to their efficiency in arithmetic operations, particularly in banking and trade. The invention of the printing press in the 15th century helped standardize their use, as printed mathematical texts favored this system over Roman numerals. They began to enter common use in the 15th century.[3] By the end of the 20th century virtually all non-computerized calculations in the world were done with Arabic numerals, which have replaced native numeral systems in most cultures. By the 17th century, the system had become dominant in scientific works, influencing mathematical advancements by figures like Isaac Newton and René Descartes. In the 19th and 20th centuries, the widespread adoption of Arabic numerals facilitated global finance, engineering, and technological developments, forming the foundation for modern computing and digital data representation.
The exact age of the Maya numerals is unclear, but it is possible that it is older than the Hindu–Arabic system. The system was vigesimal (base 20), so it has twenty digits. The Mayas used a shell symbol to represent zero. Numerals were written vertically, with the ones place at the bottom. The Mayas had no equivalent of the modern decimal separator, so their system could not represent fractions.
The Thai numeral system is identical to the Hindu–Arabic numeral system except for the symbols used to represent digits. The use of these digits is less common in Thailand than it once was, but they are still used alongside Arabic numerals.[4]
The rod numerals, the written forms of counting rods once used by Chinese and Japanese mathematicians, are a decimal positional system used for performing decimal calculations. Rods were placed on a counting board and slid forwards or backwards to change the decimal place. The Sūnzĭ Suànjīng, a mathematical treatise dated to between the 3rd and 5th centuries AD, provides detailed instructions for the system, which is thought to have been in use since at least the 4th century BC.[5] Zero was not initially treated as a number, but as a vacant position.[6] Later sources introduced conventions for the expression of zero and negative numbers. The use of a round symbol 〇 for zero is first attested in the Mathematical Treatise in Nine Sections of 1247 AD.[7] The origin of this symbol is unknown; it may have been produced by modifying a square symbol.[8] The Suzhou numerals, a descendant of rod numerals, are still used today for some commercial purposes.
The most commonly used system of numerals is decimal. Indian mathematicians are credited with developing the integer version, the Hindu–Arabic numeral system.[9] Aryabhata of Kusumapura developed the place-value notation in the 5th century and a century later Brahmagupta introduced the symbol for zero. The system slowly spread to other surrounding regions like Arabia due to their commercial and military activities with India. Middle-Eastern mathematicians extended the system to include negative powers of 10 (fractions), as recorded in a treatise by Syrian mathematician Abu'l-Hasan al-Uqlidisi in 952–953, and the decimal point notation was introduced by Sind ibn Ali, who also wrote the earliest treatise on Arabic numerals. The Hindu–Arabic numeral system then spread to Europe due to merchants trading, and the digits used in Europe are called Arabic numerals, as they learned them from the Arabs.
The simplest numeral system is the unary numeral system, in which every natural number is represented by a corresponding number of symbols. If the symbol / is chosen, for example, then the number seven would be represented by ///////. Tally marks represent one such system still in common use. The unary system is only useful for small numbers, although it plays an important role in theoretical computer science. Elias gamma coding, which is commonly used in data compression, expresses arbitrary-sized numbers by using unary to indicate the length of a binary numeral.
The unary notation can be abbreviated by introducing different symbols for certain new values. Very commonly, these values are powers of 10; so for instance, if / stands for one, − for ten and + for 100, then the number 304 can be compactly represented as +++ //// and the number 123 as + − − /// without any need for zero. This is called sign-value notation. The ancient Egyptian numeral system was of this type, and the Roman numeral system was a modification of this idea.
More useful still are systems which employ special abbreviations for repetitions of symbols; for example, using the first nine letters of the alphabet for these abbreviations, with A standing for "one occurrence", B "two occurrences", and so on, one could then write C+ D/ for the number 304 (the number of these abbreviations is sometimes called the base of the system). This system is used when writing Chinese numerals and other East Asian numerals based on Chinese. The number system of the English language is of this type ("three hundred [and] four"), as are those of other spoken languages, regardless of what written systems they have adopted. However, many languages use mixtures of bases, and other features, for instance 79 in French is soixante dix-neuf (60 + 10 + 9) and in Welsh is pedwar ar bymtheg a thrigain (4 + (5 + 10) + (3 × 20)) or (somewhat archaic) pedwar ugain namyn un (4 × 20 − 1). In English, one could say "four score less one", as in the famous Gettysburg Address representing "87 years ago" as "four score and seven years ago".
More elegant is a positional system, also known as place-value notation. The positional systems are classified by their base or radix, which is the number of symbols called digits used by the system. In base 10, ten different digits 0, ..., 9 are used and the position of a digit is used to signify the power of ten that the digit is to be multiplied with, as in 304 = 3×100 + 0×10 + 4×1, or more precisely 3×10^2 + 0×10^1 + 4×10^0. Zero, which is not needed in the other systems, is of crucial importance here, in order to be able to "skip" a power. The Hindu–Arabic numeral system, which originated in India and is now used throughout the world, is a positional base 10 system.
Arithmetic is much easier in positional systems than in the earlier additive ones; furthermore, additive systems need a large number of different symbols for the different powers of 10; a positional system needs only ten different symbols (assuming that it uses base 10).[10]
The positional decimal system is presently universally used in human writing. The base 1000 is also used (albeit not universally), by grouping the digits and considering a sequence of three decimal digits as a single digit. This is the meaning of the common notation 1,000,234,567 used for very large numbers.
In computers, the main numeral systems are based on the positional system in base 2 (binary numeral system), with two binary digits, 0 and 1. Positional systems obtained by grouping binary digits by three (octal numeral system) or four (hexadecimal numeral system) are commonly used. For very large integers, bases 2^32 or 2^64 (grouping binary digits by 32 or 64, the length of the machine word) are used, as, for example, in GMP.
In certain biological systems, the unary coding system is employed. Unary numerals are used in the neural circuits responsible for birdsong production.[11] The nucleus in the brain of the songbirds that plays a part in both the learning and the production of bird song is the HVC (high vocal center). The command signals for different notes in the birdsong emanate from different points in the HVC. This coding works as space coding, which is an efficient strategy for biological circuits due to its inherent simplicity and robustness.
The numerals used when writing numbers with digits or symbols can be divided into two types that might be called the arithmetic numerals (0, 1, 2, 3, 4, 5, 6, 7, 8, 9) and the geometric numerals (1, 10, 100, 1000, 10000, ...), respectively. The sign-value systems use only the geometric numerals and the positional systems use only the arithmetic numerals. A sign-value system does not need arithmetic numerals because they are made by repetition (except for the Ionic system), and a positional system does not need geometric numerals because they are made by position. However, the spoken language uses both arithmetic and geometric numerals.
In some areas of computer science, a modified base-k positional system is used, called bijective numeration, with digits 1, 2, ..., k (k ≥ 1), and zero being represented by an empty string. This establishes a bijection between the set of all such digit-strings and the set of non-negative integers, avoiding the non-uniqueness caused by leading zeros. Bijective base-k numeration is also called k-adic notation, not to be confused with p-adic numbers. Bijective base 1 is the same as unary.
In a positional base-b numeral system (with b a natural number greater than 1 known as the radix or base of the system), b basic symbols (or digits) corresponding to the first b natural numbers including zero are used. To generate the rest of the numerals, the position of the symbol in the figure is used. The symbol in the last position has its own value, and as it moves to the left its value is multiplied by b.
For example, in the decimal system (base 10), the numeral 4327 means (4×10^3) + (3×10^2) + (2×10^1) + (7×10^0), noting that 10^0 = 1.
In general, if b is the base, one writes a number in the numeral system of base b by expressing it in the form a_n b^n + a_(n−1) b^(n−1) + a_(n−2) b^(n−2) + ... + a_0 b^0 and writing the enumerated digits a_n a_(n−1) a_(n−2) ... a_0 in descending order. The digits are natural numbers between 0 and b − 1, inclusive.
If a text (such as this one) discusses multiple bases, and if ambiguity exists, the base (itself represented in base 10) is added in subscript to the right of the number, like this: number_base. Unless specified by context, numbers without subscript are considered to be decimal.
By using a dot to divide the digits into two groups, one can also write fractions in the positional system. For example, the base-2 numeral 10.11 denotes 1×2^1 + 0×2^0 + 1×2^(−1) + 1×2^(−2) = 2.75.
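Converting an integer to and from its base-b digit representation can be sketched in a few lines of Python (helper names are illustrative):

```python
def to_base(n: int, b: int) -> list[int]:
    """Digits of n in base b, most significant first."""
    digits = []
    while n:
        n, r = divmod(n, b)
        digits.append(r)
    return digits[::-1] or [0]

def from_base(digits: list[int], b: int) -> int:
    """Evaluate a_n*b^n + ... + a_0*b^0 by Horner's rule."""
    value = 0
    for d in digits:
        value = value * b + d
    return value

assert to_base(4327, 10) == [4, 3, 2, 7]
assert to_base(11, 2) == [1, 0, 1, 1]
assert from_base([3, 0, 4], 10) == 304
```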
In general, numbers in the base-b system are of the form a_n b^n + a_(n−1) b^(n−1) + ... + a_0 b^0 + c_1 b^(−1) + c_2 b^(−2) + ..., where the a_k and c_k are digits between 0 and b − 1.
The numbers b^k and b^(−k) are the weights of the corresponding digits. The position k is the logarithm of the corresponding weight w, that is k = log_b w = log_b b^k. The highest used position is close to the order of magnitude of the number.
The number of tally marks required in the unary numeral system for describing the weight would have been w. In the positional system, the number of digits required to describe it is only k + 1 = log_b w + 1, for k ≥ 0. For example, to describe the weight 1000, four digits are needed because log_10 1000 + 1 = 3 + 1. The number of digits required to describe the position is log_b k + 1 = log_b log_b w + 1 (in positions 1, 10, 100, ... only for simplicity in the decimal example).
A number has a terminating or repeating expansion if and only if it is rational; this does not depend on the base. A number that terminates in one base may repeat in another (thus 0.3₁₀ = 0.0100110011001...₂). An irrational number stays aperiodic (with an infinite number of non-repeating digits) in all integral bases. Thus, for example in base 2, π = 3.1415926...₁₀ can be written as the aperiodic 11.001001000011111...₂.
Putting overscores, n̅, or dots, ṅ, above the common digits is a convention used to represent repeating rational expansions. Thus:
If b = p is a prime number, one can define base-p numerals whose expansion to the left never stops; these are called the p-adic numbers.
It is also possible to define a variation of base b in which digits may be positive or negative; this is called a signed-digit representation.
More general is using a mixed radix notation (here written little-endian) like a_0 a_1 a_2 for a_0 + a_1 b_1 + a_2 b_1 b_2, etc.
This is used in Punycode, one aspect of which is the representation of a sequence of non-negative integers of arbitrary size in the form of a sequence without delimiters, of "digits" from a collection of 36: a–z and 0–9, representing 0–25 and 26–35 respectively. There are also so-called threshold values (t_0, t_1, ...) which are fixed for every position in the number. A digit a_i (in a given position in the number) that is lower than its corresponding threshold value t_i means that it is the most-significant digit, hence in the string this is the end of the number, and the next symbol (if present) is the least-significant digit of the next number.
For example, if the threshold value for the first digit is b (i.e. 1) then a (i.e. 0) marks the end of the number (it has just one digit), so in numbers of more than one digit, the first-digit range is only b–9 (i.e. 1–35), therefore the weight b_1 is 35 instead of 36. More generally, if t_n is the threshold for the n-th digit, it is easy to show that b_(n+1) = 36 − t_n.
Suppose the threshold values for the second and third digits are c (i.e. 2); then the second-digit range is a–b (i.e. 0–1), with the second digit being most significant, while the range is c–9 (i.e. 2–35) in the presence of a third digit. Generally, for any n, the weight of the (n+1)-th digit is the weight of the previous one times (36 − threshold of the n-th digit). So the weight of the second symbol is 36 − t_0 = 35. And the weight of the third symbol is 35(36 − t_1) = 35·34 = 1190.
So we have the following sequence of the numbers with at most 3 digits:
a (0), ba (1), ca (2), ..., 9a (35), bb (36), cb (37), ..., 9b (70), bca (71), ..., 99a (1260), bcb (1261), ..., 99b (2450).
Unlike a regular n-based numeral system, there are numbers like 9b where 9 and b each represent 35; yet the representation is unique because ac and aca are not allowed – the first a would terminate each of these numbers.
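The scheme can be checked against the sequence above with a short Python sketch (the decode helper is an illustrative assumption using the example thresholds t_0 = 1 and t_1 = t_2 = 2; it is not Punycode itself):

```python
def decode(s: str, thresholds=(1, 2, 2)) -> int:
    """Decode one little-endian digit string under the threshold scheme."""
    def value(c):  # a-z -> 0..25, 0-9 -> 26..35
        return ord(c) - ord("a") if c.isalpha() else ord(c) - ord("0") + 26

    n, weight = 0, 1
    for i, c in enumerate(s):
        d = value(c)
        n += d * weight
        if d < thresholds[i]:          # below threshold: most-significant digit
            break
        weight *= 36 - thresholds[i]   # weight of the next position
    return n

assert [decode(s) for s in ("a", "ba", "9a", "bb", "9b", "bca", "99a", "99b")] \
       == [0, 1, 35, 36, 70, 71, 1260, 2450]
```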
The flexibility in choosing threshold values allows optimization for number of digits depending on the frequency of occurrence of numbers of various sizes.
The case with all threshold values equal to 1 corresponds to bijective numeration, where the zeros correspond to separators of numbers with digits which are non-zero. | https://en.wikipedia.org/wiki/Numeral_system
Backward induction is the process of determining a sequence of optimal choices by reasoning from the endpoint of a problem or situation back to its beginning, using individual events or actions.[1] Backward induction involves examining the final point in a series of decisions and identifying the optimal process or action required to arrive at that point. This process continues backward until the best action for every possible point along the sequence is determined. Backward induction was first utilized in 1875 by Arthur Cayley, who discovered the method while attempting to solve the secretary problem.[2]
In dynamic programming, a method of mathematical optimization, backward induction is used for solving the Bellman equation.[3][4] In the related fields of automated planning and scheduling and automated theorem proving, the method is called backward search or backward chaining. In chess, it is called retrograde analysis.
In game theory, a variant of backward induction is used to compute subgame perfect equilibria in sequential games.[5] The difference is that optimization problems involve one decision maker who chooses what to do at each point of time. In contrast, game theory problems involve the interacting decisions of several players. In this situation, it may still be possible to apply a generalization of backward induction, since it may be possible to determine what the second-to-last player will do by predicting what the last player will do in each situation, and so on. This variant of backward induction has been used to solve formal games from the beginning of game theory. John von Neumann and Oskar Morgenstern suggested solving zero-sum, two-person formal games through this method in their Theory of Games and Economic Behaviour (1944), the book which established game theory as a field of study.[6][7]
Consider a person evaluating potential employment opportunities for the next ten years, denoted as times t = 1, 2, 3, ..., 10. At each t, they may encounter a choice between two job options: a 'good' job offering a salary of $100 or a 'bad' job offering a salary of $44. Each job type has an equal probability of being offered. Upon accepting a job, the individual will maintain that particular job for the entire remainder of the ten-year duration.
This scenario is simplified by assuming that the individual's entire concern is their total expected monetary earnings, without any variable preferences for earnings across different periods. In economic terms, this is a scenario with an implicit interest rate of zero and a constant marginal utility of money.
Whether the person in question should accept a 'bad' job can be decided by reasoning backwards from time t = 10.
By continuing to work backwards, it can be verified that a 'bad' offer should only be accepted if the person is still unemployed at t = 9 or t = 10; a bad offer should be rejected at any time up to and including t = 8. Generalizing this example intuitively, it corresponds to the principle that if one expects to work in a job for a long time, it is worth picking carefully.
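This reasoning can be reproduced with a short dynamic-programming sketch in Python (variable names are illustrative):

```python
# Backward induction for the ten-period job-search example above: work back
# from t = 10, comparing a bad offer's total salary with the expected value
# of staying unemployed one more period. A good offer is always worth taking.
GOOD, BAD, T = 100, 44, 10

V = [0.0] * (T + 2)            # V[t]: expected earnings if still unemployed at t
accept_bad = {}
for t in range(T, 0, -1):
    years = T - t + 1          # years worked if a job is accepted now
    accept_bad[t] = BAD * years >= V[t + 1]
    V[t] = 0.5 * GOOD * years + 0.5 * max(BAD * years, V[t + 1])

print(sorted(t for t in accept_bad if accept_bad[t]))  # [9, 10]
```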
A dynamic optimization problem of this kind is called an optimal stopping problem, because the issue at hand is when to stop waiting for a better offer. Search theory is a field of microeconomics that applies models of this type to matters such as shopping, job searches, and marriage.
In game theory, backward induction is a solution methodology that follows from applying sequential rationality to identify an optimal action for each information set in a given game tree. It develops the implications of rationality via individual information sets in the extensive-form representation of a game.[8]
In order to solve for a subgame perfect equilibrium with backwards induction, the game should be written out in extensive form and then divided into subgames. Starting with the subgame furthest from the initial node, or starting point, the expected payoffs listed for this subgame are weighed, and a rational player will select the option with the higher payoff for themselves. The highest payoff vector is selected and marked. To solve for the subgame perfect equilibrium, one should continually work backwards from subgame to subgame until the starting point is reached. As this process progresses, the initial extensive form game will become shorter and shorter. The marked path of vectors is the subgame perfect equilibrium.[9]
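A minimal Python sketch of this procedure for finite perfect-information game trees; the tree and its payoffs are illustrative assumptions (they encode an entry game of the kind discussed later in this article, with payoffs listed as (entrant, incumbent)):

```python
def solve(node):
    """Return the payoffs reached under sequentially rational play,
    marking each decision node with the chosen move."""
    if "payoffs" in node:                              # terminal node
        return node["payoffs"]
    i = node["player"]                                 # index of the mover
    results = {move: solve(child) for move, child in node["moves"].items()}
    best = max(results, key=lambda move: results[move][i])
    node["choice"] = best                              # mark the equilibrium path
    return results[best]

entry_game = {
    "player": 0,                                       # the entrant moves first
    "moves": {
        "stay out": {"payoffs": (0, 2)},
        "enter": {
            "player": 1,                               # then the incumbent
            "moves": {
                "fight": {"payoffs": (-1, -1)},
                "accommodate": {"payoffs": (1, 1)},
            },
        },
    },
}

print(solve(entry_game), entry_game["choice"])  # (1, 1) enter
```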
The application of backward induction in game theory can be demonstrated with a simple example. Consider a multi-stage game involving two players planning to go to a movie.
Once they both observe the choices, the second stage begins. In the second stage, players choose whether to go to the movie or stay home.
For this example, payoffs are added across different stages. The game is a perfect information game. The normal-form matrices for these games are:
The extensive form of this multi-stage game can be seen to the right. The steps for solving this game with backward induction are as follows:
Backward induction can be applied to only limited classes of games. The procedure is well-defined for any game of perfect information with no ties of utility. It is also well-defined and meaningful for games of perfect information with ties. However, in such cases it leads to more than one perfect strategy. The procedure can be applied to some games with nontrivial information sets, but it is not applicable in general. It is best suited to solve games with perfect information. If all players are not aware of the other players' actions and payoffs at each decision node, then backward induction is not so easily applied.[10]
A second example demonstrates that even in games that formally allow for backward induction in theory, it may not accurately predict empirical game play in practice. This example of an asymmetric game consists of two players: Player 1 proposes to split a dollar with Player 2, which Player 2 then accepts or rejects. This is called the ultimatum game. Player 1 acts first by splitting the dollar however they see fit. Next, Player 2 either accepts the portion they have been offered by Player 1 or rejects the split. If Player 2 accepts the split, then both Player 1 and Player 2 get the payoffs matching that split. If Player 2 decides to reject Player 1's offer, then both players get nothing. In other words, Player 2 has veto power over Player 1's proposed allocation, but applying the veto eliminates any reward for both players.[11]
Considering the choice and response of Player 2 given any arbitrary proposal by Player 1, formal rationality prescribes that Player 2 would accept any payoff that is greater than or equal to $0. Accordingly, by backward induction Player 1 ought to propose giving Player 2 as little as possible in order to gain the largest portion of the split. Player 1 giving Player 2 the smallest unit of money and keeping the rest for themselves is the unique subgame perfect equilibrium. The ultimatum game does have several other Nash equilibria which are not subgame perfect and therefore do not arise via backward induction.
The ultimatum game is a theoretical illustration of the usefulness of backward induction when considering infinite games, but the ultimatum game's theoretically predicted results do not match empirical observation. Experimental evidence has shown that a proposer, Player 1, very rarely offers $0 and the responder, Player 2, sometimes rejects offers greater than $0. What is deemed acceptable by Player 2 varies with context. The pressure or presence of other players and external implications can mean that the game's formal model cannot necessarily predict what a real person will choose. According to Colin Camerer, an American behavioral economist, Player 2 "rejects offers of less than 20 percent of X about half the time, even though they end up with nothing."[12]
While backward induction assuming formal rationality would predict that a responder would accept any offer greater than zero, responders in reality are not formally rational players and therefore often seem to care more about offer 'fairness' or perhaps other anticipations of indirect or external effects rather than immediate potential monetary gains.
A dynamic game in which the players are an incumbent firm in an industry and a potential entrant to that industry is to be considered. As it stands, the incumbent has a monopoly over the industry and does not want to lose some of its market share to the entrant. If the entrant chooses not to enter, the payoff to the incumbent is high (it maintains its monopoly) and the entrant neither loses nor gains (its payoff is zero). If the entrant enters, the incumbent can "fight" or "accommodate" the entrant. It will fight by lowering its price, running the entrant out of business (and incurring exit costs—a negative payoff) and damaging its own profits. If it accommodates the entrant it will lose some of its sales, but a high price will be maintained and it will receive greater profits than by lowering its price (but lower than monopoly profits).
If the incumbent accommodates given the case that the entrant enters, the best response for the entrant is to enter (and gain profit). Hence the strategy profile in which the entrant enters and the incumbent accommodates if the entrant enters is a Nash equilibrium consistent with backward induction. However, if the incumbent is going to fight, the best response for the entrant is to not enter, and if the entrant does not enter, it does not matter what the incumbent chooses to do in the hypothetical case that the entrant does enter. Hence the strategy profile in which the incumbent fights if the entrant enters, but the entrant does not enter, is also a Nash equilibrium. However, were the entrant to deviate and enter, the incumbent's best response is to accommodate—the threat of fighting is not credible. This second Nash equilibrium can therefore be eliminated by backward induction.
Finding a Nash equilibrium in each decision-making process (subgame) constitutes a subgame perfect equilibrium. Thus, these strategy profiles that depict subgame perfect equilibria exclude the possibility of actions like incredible threats that are used to "scare off" an entrant. If the incumbent threatens to start a price war with an entrant, they are threatening to lower their prices from a monopoly price to slightly lower than the entrant's, which would be impractical, and incredible, if the entrant knew a price war would not actually happen since it would result in losses for both parties. Unlike a single-agent optimization which might include suboptimal or infeasible equilibria, a subgame perfect equilibrium accounts for the actions of another player, ensuring that no player reaches a subgame mistakenly. In this case, backwards induction yielding subgame perfect equilibria ensures that the entrant will not be convinced of the incumbent's threat, knowing that it was not a best response in the strategy profile.[13]
The unexpected hanging paradox is a paradox related to backward induction. The prisoner described in the paradox uses backwards induction to reach a false conclusion. The description of the problem assumes it is possible to surprise someone who is performing backward induction. The mathematical theory of backward induction does not make this assumption, so the paradox does not call into question the results of this theory.
Backward induction works only if both players are rational, i.e., always select an action that maximizes their payoff. However, rationality is not enough: each player should also believe that all other players are rational. Even this is not enough: each player should believe that all other players know that all other players are rational, and so on, ad infinitum. In other words, rationality should be common knowledge.[14]
Limited backward induction is a deviation from fully rational backward induction. It involves enacting the regular process of backward induction without perfect foresight. Theoretically, this occurs when one or more players have limited foresight and cannot perform backward induction through all terminal nodes.[15] Limited backward induction plays a much larger role in longer games, as the effects of limited backward induction are more potent in later periods of games.
Experiments have shown that in sequential bargaining games, such as the Centipede game, subjects deviate from theoretical predictions and instead engage in limited backward induction. This deviation occurs as a result of bounded rationality, where players can only perfectly see a few stages ahead.[16] This allows for unpredictability in decisions and inefficiency in finding and achieving subgame perfect Nash equilibria.
There are three broad hypotheses for this phenomenon:
Violations of backward induction are predominantly attributed to the presence of social factors. However, data-driven model predictions for sequential bargaining games (using the cognitive hierarchy model) have highlighted that in some games the presence of limited backward induction can play a dominant role.[17]
Within repeated public goods games, team behavior is affected by limited backward induction: team members' initial contributions are higher than their contributions towards the end. Limited backward induction also influences how regularly free-riding occurs within a team's public goods game. Early on, when the effects of limited backward induction are weak, free-riding is less frequent, whilst towards the end, when the effects are strong, free-riding becomes more frequent.[18]
Limited backward induction has also been tested within a variant of the race game. In the game, players sequentially choose integers from a fixed range and sum their choices until a target number is reached; the player who hits the target earns a prize, and the other loses. Partway through a series of games, a small prize was introduced. The majority of players then performed limited backward induction, solving for the small prize rather than for the original prize; only a small fraction of players considered both prizes at the start.[19] A minimal sketch of how such a race game can be solved exactly by backward induction appears below.
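The sketch below solves a race game of this kind by backward induction over the remaining total; the range bound and target are hypothetical parameters chosen for illustration:

# A sketch of solving a race game ("add 1..K, reach the target first")
# by backward induction; K and TARGET are illustrative assumptions.
from functools import lru_cache

K, TARGET = 3, 21

@lru_cache(maxsize=None)
def wins(remaining: int) -> bool:
    """True if the player to move can force reaching the target."""
    if remaining == 0:
        return False  # the previous player just hit the target and won
    return any(not wins(remaining - m) for m in range(1, K + 1) if m <= remaining)

def best_move(remaining: int):
    for m in range(1, K + 1):
        if m <= remaining and not wins(remaining - m):
            return m
    return None  # every move leaves the opponent a winning position

print(wins(TARGET), best_move(TARGET))  # with K=3: losing remainders are multiples of 4

A fully rational player backward inducts from remainder 0; a player with limited foresight, by contrast, only evaluates a few levels of this recursion ahead.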
Most tests of backward induction are based on experiments in which participants are only weakly incentivized, if at all, to perform the task well. However, violations of backward induction also appear to be common in high-stakes environments. A large-scale analysis of the American television game show The Price Is Right, for example, provides evidence of limited foresight. In every episode, contestants play the Showcase Showdown, a sequential game of perfect information for which the optimal strategy can be found through backward induction. The frequent and systematic deviations from optimal behavior suggest that a sizable proportion of the contestants fail to backward induct properly and myopically consider only the next stage of the game.[20] | https://en.wikipedia.org/wiki/Backward_induction
In combinatorial game theory, cooling, heating, and overheating are operations on hot games to make them more amenable to the traditional methods of the theory, which was originally devised for cold games in which the winner is the last player to have a legal move.[1] Overheating was generalised by Elwyn Berlekamp for the analysis of Blockbusting.[2] Chilling (or unheating) and warming are variants used in the analysis of the endgame of Go.[3][4]
Cooling and chilling may be thought of as a tax on the player who moves, making them pay for the privilege of doing so, while heating, warming and overheating are operations that more or less reverse cooling and chilling.
The cooled game $G_t$ ("$G$ cooled by $t$") for a game $G$ and a (surreal) number $t$ is defined by[5]
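The display formula is not reproduced above; a standard statement of the definition, reconstructed from the usual presentation in the literature rather than quoted from this article, is

$$G_t = \begin{cases} \{\, G^{L}_{t} - t \mid G^{R}_{t} + t \,\} & \text{for } t \le \tau, \\ m & \text{for } t > \tau, \end{cases}$$

where $\tau$ and $m$ are the temperature and mean value of $G$ described next.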
The amount $t$ by which $G$ is cooled is known as the temperature; the minimum $\tau$ for which $G_\tau$ is infinitesimally close to $m$ is known as the temperature $t(G)$ of $G$; $G$ is said to freeze to $G_\tau$; $m$ is the mean value (or simply mean) of $G$.
Heating is the inverse of cooling and is defined as the "integral"[6]
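In the standard presentation (again a reconstruction, not a quotation of the lost display), the heating integral is

$$\int^{t} G = \begin{cases} G & \text{if } G \text{ is a number,} \\ \{\, t + \int^{t} G^{L} \mid -t + \int^{t} G^{R} \,\} & \text{otherwise.} \end{cases}$$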
Norton multiplication is an extension of multiplication to a game $G$ and a positive game $U$ (the "unit"), defined by[7]
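The usual definition (a reconstruction; it relies on the incentives $\Delta(U)$ defined next) is

$$G.U = \begin{cases} \underbrace{U + \cdots + U}_{G\ \text{copies}} & \text{if } G \text{ is a non-negative integer,} \\ -((-G).U) & \text{if } G \text{ is a negative integer,} \\ \{\, G^{L}.U + (U + \Delta) \mid G^{R}.U - (U + \Delta) \,\} & \text{otherwise, with } \Delta \in \Delta(U). \end{cases}$$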
The incentives $\Delta(U)$ of a game $U$ are defined as $\{\, u - U : u \in U^{L} \,\} \cup \{\, U - u : u \in U^{R} \,\}$.
Overheating is an extension of heating used in Berlekamp's solution of Blockbusting, where $G$ overheated from $s$ to $t$ is defined for arbitrary games $G, s, t$ with $s > 0$ as[8]
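A reconstruction of the usual form of this definition is

$$\int_{s}^{t} G = \begin{cases} G \cdot s \ (\text{the sum of } G \text{ copies of } s) & \text{if } G \text{ is an integer,} \\ \{\, t + \int_{s}^{t} G^{L} \mid -t + \int_{s}^{t} G^{R} \,\} & \text{otherwise.} \end{cases}$$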
Winning Ways also defines overheating of a game $G$ by a positive game $X$, as[9]
Chilling is a variant of cooling by $1$ used to analyse the Go endgame and is defined by[10]
This is equivalent to cooling by $1$ when $G$ is an "even elementary Go position in canonical form".[11]
Warming is a special case of overheating, namely $\int_{1*}^{1}$, normally written simply as $\int$, which inverts chilling when $G$ is an "even elementary Go position in canonical form".
In this case the previous definition simplifies to the form[12]
| https://en.wikipedia.org/wiki/Cooling_and_heating_(combinatorial_game_theory)
A connection game is a type of abstract strategy game in which players attempt to complete a specific type of connection with their pieces. This could involve forming a path between two or more endpoints, completing a closed loop, or connecting all of one's pieces so they are adjacent to each other.[1] Connection games typically have simple rules but complex strategies. They have minimal components and may be played as board games, computer games, or even paper-and-pencil games.
In many connection games, the goal is to connect two opposite sides of the board. In these games, players take turns placing or moving pieces until one player has a continuous line of pieces connecting their two sides of the playing area. Hex, TwixT, and PÜNCT are typical examples of this type of game.
According to Browne, Hex (developed independently by the mathematicians Piet Hein and John Nash in the 1940s) is considered to be the first connection game, although earlier games involving connectivity have been noted to predate Hex, including Lightning (1890s) and Zig-Zag (1932).[1]: 4[2][3] Martin Gardner is credited with popularizing the genre in his writeup of Hex in Scientific American (1957),[1]: 4[4] expanded and republished in Mathematical Puzzles & Diversions (1959).[5] It was shown, starting with smaller boards, that the player making the first move has a decided advantage, depending on where the initial move is made.[5]: 76 In his 1959 book, Gardner also mentions that Claude Shannon proposed a modified version of Hex that would be played on a board with three equal-length sides; the winning condition would be changed to being the first to connect all three sides.[5]: 79 This was a variant of the game Y, a generalization of Hex that had been invented independently by John Milnor, Charles Titus, and Craige Schensted in the early 1950s.[6]
Hex and Y were examples of games where the players competed to build a path connecting sides of the board. In the June 2000 issue of Games,[7] R. Wayne Schmittberger identified an additional sub-class of connection game in which points were bridged to form connections, although the overall goal (forging a path connecting opposite sides of the board) was the same. These games included Gale/Bridg-it (1958/1960)[8][9] and TwixT (1962). Schmittberger also identified a third sub-class where serpentiles with preprinted paths, such as Psyche-paths/Kaliko (1970) and Trax (1981), were used. In 1984, Larry Back began developing what would become Onyx, a connection game with a capturing mechanic.[10]
Havannah is a two-player abstract strategy board game invented by Christian Freeling. Unlike Hex and other connection games, Havannah has three conditions that enable a player to win: creating a fork, creating a bridge, or creating a ring. A ring is a loop around one or more cells, regardless of whether the encircled cells are occupied by either player or empty. A bridge connects any two of the six corner cells of the board. A fork connects any three edges of the board (a corner point is not considered part of an edge). Havannah has "a sophisticated and varied strategy" and is best played on a base-10 hexagonal board, 10 hex cells to a side.[11]
The game was published for a period in Germany by Ravensburger, with a smaller, base-8 board suitable for beginners. It is currently only produced by Hexboards, a Dutch company that produces laser-carved gaming boards.[12]
Hex is a two-player abstract strategy board game in which players attempt to connect opposite sides of a hexagonal board. Hex was invented by mathematician and poet Piet Hein in 1942 and independently by John Nash in 1948.
It is traditionally played on an 11×11 rhombus board, although 13×13 and 19×19 boards are also popular. Each player is assigned a pair of opposite sides of the board, which they must try to connect by taking turns placing a stone of their color onto any empty space. Once placed, the stones cannot be moved or removed. A player wins when they successfully connect their sides together through a chain of adjacent stones. Draws are impossible in Hex due to the topology of the game board.
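Checking the win condition (a chain of same-colored stones linking a player's two sides) is a simple graph-connectivity problem. Below is a minimal Python sketch; the board encoding and the six neighbour offsets are assumptions chosen for illustration.

# A hedged sketch: detecting a winning Hex connection by flood fill.
def hex_neighbors(r, c, n):
    # Six neighbours of a cell on a rhombus board in axial coordinates.
    for dr, dc in ((-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0)):
        if 0 <= r + dr < n and 0 <= c + dc < n:
            yield r + dr, c + dc

def wins(board, player):
    """board[r][c] in {'.', 'X', 'O'}; 'X' connects top to bottom,
    'O' connects left to right."""
    n = len(board)
    if player == 'X':
        stack = [(0, c) for c in range(n) if board[0][c] == 'X']
        reached_goal = lambda r, c: r == n - 1
    else:
        stack = [(r, 0) for r in range(n) if board[r][0] == 'O']
        reached_goal = lambda r, c: c == n - 1
    seen = set(stack)
    while stack:
        r, c = stack.pop()
        if reached_goal(r, c):
            return True
        for nr, nc in hex_neighbors(r, c, n):
            if (nr, nc) not in seen and board[nr][nc] == player:
                seen.add((nr, nc))
                stack.append((nr, nc))
    return False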
The game has deep strategy, sharp tactics and a profound mathematical underpinning related to the Brouwer fixed-point theorem. The game was first marketed as a board game in Denmark under the name Con-tac-tix, and Parker Brothers marketed a version of it in 1952 called Hex; they are no longer in production. Hex can also be played with paper and pencil on hexagonally ruled graph paper.
Tak is a two-player abstract strategy game designed by James Ernest and Patrick Rothfuss and published by Cheapass Games in 2016. Its design was based around the fictional game of Tak described in Patrick Rothfuss' 2011 fantasy novel The Wise Man's Fear.[13]
The goal of Tak is to be the first to connect two opposite edges of the board with one's pieces, called "stones", and create a road. To accomplish this, players take turns placing their own stones and building their road while blocking and capturing their opponent's pieces to hinder their efforts at the same time. A player "captures" a stone by stacking one of their pieces on top of the opponent's. This creates a three-dimensional element to the gameplay absent in other well-known connection games, such as Hex. In addition, a player may place and move a piece called the capstone, or play normal stones "standing" up on their edge. The capstone and standing stones have different powers and rules regarding their use in the game.
Y is an abstract strategy board game, first described by John Milnor in the early 1950s.[14][15]: 87[16] The goal of Y is similar to Hex, except that each player has the identical goal of making a connection between all three sides, forming a "Y", rather than "owning" specific sides that must be connected. The game was independently invented in 1953 by Craige Schensted and Charles Titus. It is an early member in a long line of games Schensted has developed, each game more complex but also more generalized.
| https://en.wikipedia.org/wiki/Connection_game
In chess, the endgame tablebase, or simply the tablebase, is a computerised database containing precalculated evaluations of endgame positions. Tablebases are used to analyse finished games, as well as by chess engines to evaluate positions during play. Tablebases are typically exhaustive, covering every legal arrangement of a specific selection of pieces on the board, with both White and Black to move. For each position, the tablebase records the ultimate result of the game (i.e. a win for White, a win for Black, or a draw) and the number of moves required to achieve that result, both assuming perfect play. Because every legal move in a covered position results in another covered position, the tablebase acts as an oracle that always provides the optimal move.
Tablebases are generated by retrograde analysis, working backwards from checkmated positions. By 2005, tablebases for all positions having up to six pieces, including the two kings, had been created.[1] By August 2012, tablebases had solved chess for almost every position with up to seven pieces, with certain subclasses omitted due to their assumed triviality;[2][3] these omitted positions were included by August 2018.[4] As of 2025, work is still underway to solve all eight-piece positions.
Tablebases have profoundly advanced the chess community's understanding of endgame theory. Some positions which humans had analysed as draws were proven to be winnable; in some cases, tablebase analysis found a mate in more than five hundred moves, far beyond the ability of humans, and beyond the capability of a computer during play. This caused the fifty-move rule to be called into question, since many positions were discovered that were winning for one side but drawn during play because of this rule. Initially, some exceptions to the fifty-move rule were introduced, but when more extreme cases were later discovered, these exceptions were removed. Tablebases also facilitate the composition of endgame studies.
While endgame tablebases exist for other board games, such as checkers,[5] nine men's morris,[6] and some chess variants,[7] the term endgame tablebase is usually assumed to refer to chess tablebases.
Physical limitations of computer hardware aside, in principle it is possible to solve any game under the condition that the complete state is known and there is no random chance. Strong solutions, i.e. algorithms that can produce perfect play from any position,[8] are known for some simple games such as Tic Tac Toe/Noughts and crosses (a draw with perfect play) and Connect Four (first player wins). Weak solutions exist for somewhat more complex games, such as checkers (with perfect play on both sides the game is known to be a draw, but it is not known for every position created by less-than-perfect play what the perfect next move would be). Other games, such as chess and Go, have not been solved because their game complexity is far too vast for computers to evaluate all possible positions. To reduce the game complexity, researchers have modified these complex games by reducing the size of the board, or the number of pieces, or both.
Computer chess is one of the oldest domains of artificial intelligence, having begun in the early 1930s. Claude Shannon proposed formal criteria for evaluating chess moves in 1949. In 1951, Alan Turing designed a primitive chess-playing program, which assigned values for material and mobility; the program "played" chess based on Turing's manual calculations.[9] However, even as competent chess programs began to develop, they exhibited a glaring weakness in playing the endgame. Programmers added specific heuristics for the endgame (for example, the king should move to the center of the board).[10] However, a more comprehensive solution was needed.
In 1965, Richard Bellman proposed the creation of a database to solve chess and checkers endgames using retrograde analysis.[11][12] Instead of analyzing forward from the position currently on the board, the database would analyze backward from positions where one player was checkmated or stalemated. Thus, a chess computer would no longer need to analyze endgame positions during the game because they were solved beforehand. It would no longer make mistakes because the tablebase always played the best possible move.
In 1970, Thomas Ströhlein published a doctoral thesis[13][14] with analysis of the following classes of endgame: KQK, KRK, KPK, KQKR, KRKB, and KRKN.[15] In 1977, Ken Thompson's KQKR tablebase was used in a match against Grandmaster Walter Browne.[16][17]
Thompson and others helped extend tablebases to cover all four- and five-piece endgames, including KBBKN, KQPKQ, and KRPKR.[18][19] Lewis Stiller published a thesis with research on some six-piece tablebase endgames in 1991.[20][21]
More recent contributors include:
The tablebases of all endgames with up to seven pieces are available for free download, and may also be queried using web interfaces.[28] Research on creating an eight-piece tablebase started in 2021.[29] During an interview with Google in 2010, Garry Kasparov said that "maybe" the limit will be 8 pieces. Because the starting position of chess is the ultimate endgame, with 32 pieces, he claimed that chess cannot be solved by computers.[30]
Before creating a tablebase, a programmer must choose a metric of optimality, i.e. define at what point a player has "won" the game. Every position solved by the tablebase will either have a distance (i.e. a number of moves or plies) from this specific point or will be classified as a draw. To date, three different metrics have been used:[34] DTM (depth to mate), DTC (depth to conversion, i.e. to mate or to a change in material), and DTZ (depth to a zeroing move, i.e. a capture or pawn move that resets the fifty-move counter).
DTZ is the only metric which supports the fifty-move rule, as it determines the distance to a "zeroing move" (i.e. a move which resets the move count to zero under the fifty-move rule).[35] By definition, all "won" positions will always have DTZ ≤ DTC ≤ DTM. In pawnless positions or positions with only blocked pawns, DTZ is identical to DTC.
The difference between DTC and DTM can be understood by analyzing the diagram at the right. The optimal play depends on which metric is used.
According to the DTC metric, White should capture the rook because that leads immediately to a position which will certainly win (DTC = 1), but it will take two more moves actually to checkmate (DTM = 3). In contrast, according to the DTM metric, White mates in two moves, so DTM = DTC = 2.
This difference is typical of many endgames. DTC is always smaller than or equal to DTM, but the DTM metric always leads to the quickest checkmate. Incidentally, DTC = DTM in the unusual endgame of two knights versus one pawn because capturing the pawn (the only material Black has) results in a draw, unless the capture is also checkmate.
Once a metric is chosen, the first step is to generate all the positions with a given material. For example, to generate a DTM tablebase for the endgame of king and queen versus king (KQK), the computer must describe approximately 40,000 unique legal positions.
Levy and Newborn explain that the number 40,000 derives from a symmetry argument. The Black king can be placed on any of ten squares: a1, b1, c1, d1, b2, c2, d2, c3, d3, and d4 (see diagram). On any other square, its position can be considered equivalent by symmetry of rotation or reflection. Thus, there is no difference whether a Black king in a corner resides on a1, a8, h8, or h1. Multiply this number of 10 by at most 60 (legal remaining) squares for placing the White king and then by at most 62 squares for the White queen. The product 10×60×62 = 37,200. Several hundred of these positions are illegal, impossible, or symmetrical reflections of each other, so the actual number is somewhat smaller.[36][37]
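The order of magnitude is easy to check by brute force. The Python sketch below counts KQK placements under the same canonicalization of the Black king; it is illustrative rather than a real generator, since it enforces only the kings-not-touching rule and ignores other legality conditions.

# A rough order-of-magnitude check of the symmetry count for KQK.
CANONICAL_BK = ['a1', 'b1', 'c1', 'd1', 'b2', 'c2', 'd2', 'c3', 'd3', 'd4']

def to_index(name):
    return (ord(name[0]) - ord('a')) + 8 * (int(name[1]) - 1)

def kings_touch(a, b):
    fa, ra, fb, rb = a % 8, a // 8, b % 8, b // 8
    return max(abs(fa - fb), abs(ra - rb)) <= 1  # includes a == b

count = 0
for bk_name in CANONICAL_BK:
    bk = to_index(bk_name)
    for wk in range(64):
        if kings_touch(wk, bk):
            continue  # the kings may not touch or coincide
        count += sum(1 for wq in range(64) if wq not in (wk, bk))
print(count)  # somewhat below the 37,200 upper estimate, as the text predicts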
For each position, the tablebase evaluates the situation separately for White-to-move and Black-to-move. Assuming that White has the queen, almost all the positions are White wins, with checkmate forced in no more than ten moves. Some positions are draws because of stalemate or the unavoidable loss of the queen.
Each additional piece added to a pawnless endgame multiplies the number of unique positions by about a factor of sixty, which is the approximate number of squares not already occupied by other pieces.
Endgames with one or more pawns increase the complexity because the symmetry argument is reduced. Since pawns can move forward but not sideways, rotation and vertical reflection of the board produces a fundamental change in the nature of the position.[38]The best calculation of symmetry is achieved by limiting one pawn to 24 squares in the rectangle a2-a7-d7-d2. All other pieces and pawns may be located in any of the 64 squares with respect to the pawn. Thus, an endgame with pawns has a complexity of 24/10 = 2.4 times a pawnless endgame with the same number of pieces.
Tim Krabbé explains the process of generating a tablebase as follows:
"The idea is that a database is made with all possible positions with a given material [note: as in the preceding section]. Then a subdatabase is made of all positions where Black is mated. Then one where White can give mate. Then one where Black cannot stop White giving mate next move. Then one where White can always reach a position where Black cannot stop [them] from giving mate next move. And so on, always a ply further away from mate until all positions that are thus connected to mate have been found. Then all of these positions are linked back to mate by the shortest path through the database. That means that, apart from 'equi-optimal' moves, all the moves in such a path are perfect: White's move always leads to the quickest mate, Black's move always leads to the slowest mate."[39]
The retrograde analysis is only necessary from the checkmated positions, because every position that cannot be reached by moving backward from a checkmated position must be a draw.[40]
Figure 1 illustrates the idea of retrograde analysis. White can force mate in two moves by playing 1. Kc6, leading to the position in Figure 2. There are only two legal moves for black from this position, both of which lead to checkmate: if 1...Kb8 2. Qb7#, and if 1...Kd8 2. Qd7# (Figure 3).
Figure 3, before White's second move, is defined as "mate in one ply." Figure 2, after White's first move, is "mate in two ply," regardless of how Black plays. Finally, the initial position in Figure 1 is "mate in three ply" (i.e., two moves) because it leads directly to Figure 2, which is already defined as "mate in two ply." This process, which links a current position to another position that could have existed one ply earlier, can continue indefinitely.
Each position is evaluated as a win or loss in a certain number of moves. At the end of the retrograde analysis, positions which are not designated as wins or losses are necessarily draws.
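In outline, the generation loop can be sketched as follows. This is a hedged Python skeleton, not any particular generator's code: the position-level callbacks (all_positions, predecessors, successors, is_checkmate) are assumed inputs, and details such as stalemate handling are omitted.

from collections import deque

def build_tablebase(all_positions, predecessors, successors, is_checkmate):
    # dtm[p] = plies until mate with the side to move at p: even values
    # mean the side to move is being mated, odd values mean it can mate.
    dtm = {}
    frontier = deque()
    for p in all_positions:
        if is_checkmate(p):
            dtm[p] = 0
            frontier.append(p)
    while frontier:
        pos = frontier.popleft()
        for pred in predecessors(pos):  # one ply earlier, other side to move
            if pred in dtm:
                continue
            child_values = [dtm.get(s) for s in successors(pred)]
            # The mover wins if some move reaches a position lost for the
            # opponent (even dtm); it loses only when every move reaches a
            # position won for the opponent (odd dtm).
            if any(v is not None and v % 2 == 0 for v in child_values):
                dtm[pred] = 1 + min(v for v in child_values
                                    if v is not None and v % 2 == 0)
                frontier.append(pred)
            elif child_values and all(v is not None and v % 2 == 1
                                      for v in child_values):
                dtm[pred] = 1 + max(child_values)
                frontier.append(pred)
    return dtm  # positions absent from dtm are draws

Because the frontier expands outward from the mates one ply at a time, wins are labelled with their shortest path back to mate, exactly as Krabbé describes.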
After the tablebase has been generated, and every position has been evaluated, the result must be verified independently. The purpose is to check the self-consistency of the tablebase results.[41]
For example, in Figure 1 above, the verification program sees the evaluation "mate in three ply (Kc6)." It then looks at the position in Figure 2, after Kc6, and sees the evaluation "mate in two ply." These two evaluations are consistent with each other. If the evaluation of Figure 2 were anything else, it would be inconsistent with Figure 1, so the tablebase would need to be corrected.
A four-piece tablebase must rely on three-piece tablebases that could result if one piece is captured. Similarly, a tablebase containing a pawn must be able to rely on other tablebases that deal with the new set of material after pawn promotion to a queen or other piece. The retrograde analysis program must account for the possibility of a capture or pawn promotion on the previous move.[42]
Tablebases assume that castling is not possible for two reasons. First, in practical endgames, this assumption is almost always correct. (However, castling is allowed by convention in composed problems and studies.) Second, if the king and rook are on their original squares, castling may or may not be allowed. Because of this ambiguity, it would be necessary to make separate evaluations for states in which castling is or is not possible.
The same ambiguity exists for the en passant capture, since the possibility of en passant depends on the opponent's previous move. However, practical applications of en passant occur frequently in pawn endgames, so tablebases account for the possibility of en passant for positions where both sides have at least one pawn.
According to the method described above, the tablebase must allow the possibility that a given piece might occupy any of the 64 squares. In some positions, it is possible to restrict the search space without affecting the result. This saves computational resources and enables searches which would otherwise be impossible.
An early analysis of this type was published in 1987, in the endgame KRP(a2)KBP(a3), where the Black bishop moves on the dark squares (see example position at right).[43] In this position, we can make the following a priori assumptions:
The result of this simplification is that, instead of searching for 48 * 47 = 2,256 permutations for the pawns' locations, there is only one permutation. Reducing the search space by a factor of 2,256 facilitates a much quicker calculation.
Bleicher has designed a commercial program called "Freezer," which allows users to build new tablebases from existing Nalimov tablebases with a priori information. The program could produce a tablebase for positions with seven or more pieces with blocked pawns, even before tablebases for seven pieces became available.[45]
In correspondence chess, a player may consult a chess computer for assistance, provided that the etiquette of the competition allows this. Some correspondence organizations draw a distinction in their rules between utilizing chess engines, which calculate a position in real time, and the use of a precomputed database stored on a computer. Use of an endgame tablebase might be permitted in a live game even if engine use is forbidden. Players have also used tablebases to analyze endgames from over-the-board play after the game is over. A six-piece tablebase (KQQKQQ) was used to analyze the endgame that occurred in the correspondence game Kasparov versus The World.[46]
Competitive players must know that some tablebases ignore the fifty-move rule. According to that rule, if fifty moves have passed without a capture or a pawn move, either player may claim a draw. FIDE changed the rules several times, starting in 1974, to allow one hundred moves for endgames where fifty moves were insufficient to win. In 1988, FIDE allowed seventy-five moves for KBBKN, KNNKP, KQKBB, KQKNN, KRBKR, and KQPKQ with the pawn on the seventh rank, because tablebases had uncovered positions in these endgames requiring more than fifty moves to win. In 1992, FIDE canceled these exceptions and restored the fifty-move rule to its original standing.[35] Thus a tablebase may identify a position as won or lost when it is in fact drawn by the fifty-move rule. Such a position is sometimes termed a "cursed win" (where mate can be forced, but it runs afoul of the 50-move rule), or a "blessed loss" from the perspective of the other player.[47]
In 2013, ICCF changed the rules for correspondence chess tournaments starting from 2014; a player may claim a win or draw based on six-man tablebases.[48] In this case the fifty-move rule is not applied, and the number of moves to mate is not taken into consideration. In 2020, this was increased to seven-man tablebases.[49]
The knowledge contained in tablebases gives the computer a tremendous advantage in the endgame. Not only can computers play perfectly within an endgame, but they can also simplify to a winning tablebase position from a more complicated endgame.[50] For the latter purpose, some programs use "bitbases," which give the game-theoretical value of positions without the number of moves until conversion or mate; that is, they only reveal whether the position is won, lost or drawn. Sometimes even this data is compressed, and the bitbase reveals only whether a position is won or not, making no distinction between a lost and a drawn game.[40] Shredderbases, for example, used by the Shredder program, are a type of bitbase[51] which fits all 3-, 4- and 5-piece bitbases in 157 MB. This is a mere fraction of the 7.05 GB that the Nalimov tablebases require.[52]
Some computer chess experts have observed practical drawbacks to the use of tablebases.[53] In addition to ignoring the fifty-move rule, a computer in a difficult position might avoid the losing side of a tablebase ending even when the opponent could not, in practice, win without knowing the tablebase themselves. The adverse effect could be a premature resignation, or an inferior line of play that loses with less resistance than play without a tablebase might offer. Another drawback is that tablebases require a lot of memory to store trillions of positions. The Nalimov tablebases, which use advanced compression techniques, require 7.05 GB of hard disk space for all 5-piece endings and 1.2 TB for 6-piece endings.[32][54] The 7-piece Lomonosov tablebase requires 140 TB of storage space. Some computers play better overall if their memory is devoted instead to the ordinary search and evaluation function. Modern engines play endgames significantly better, and using tablebases only results in a very minor improvement to their performance.[55]
Syzygy tablebases were developed by Ronald de Man and released in April 2013 in a form optimized for use by a chess program during search. This variety consists of two tables per endgame: a smaller WDL (win/draw/loss) table which contains knowledge of the 50-move rule, and a larger DTZ table (distance to zero ply, i.e., pawn move or capture). The WDL tables were designed to be small enough to fit on a solid-state drive for quick access during search, whereas the DTZ form is for use at the root position to choose the game-theoretically quickest distance to resetting the 50-move rule while retaining a winning position, instead of performing a search. Syzygy tablebases are available for all 6-piece endings, and are now supported by many top engines, including Stockfish, Leela, Dragon, and Torch.[56] Since August 2018, all 7-piece Syzygy tables are also available.[4]
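For illustration, the python-chess library can probe local Syzygy tables; the sketch below assumes the tables have been downloaded, and the directory path is a placeholder, not a real location.

import chess
import chess.syzygy

board = chess.Board("8/8/8/8/8/4k3/4P3/4K3 w - - 0 1")  # a KPK position
with chess.syzygy.open_tablebase("/path/to/syzygy") as tb:  # hypothetical path
    wdl = tb.probe_wdl(board)  # 2 win, 0 draw, -2 loss; 1/-1 cursed/blessed
    dtz = tb.probe_dtz(board)  # signed distance to a zeroing move
    print(wdl, dtz)

The WDL probe is what an engine consults inside its search; the DTZ probe is the root-level query described above.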
In 2020, Ronald de Man estimated that 8-man tablebases would be economically feasible within 5 to 10 years, as just 2 PB of disk space would store them in Syzygy format,[33]and they could be generated using existing code on a conventional server with 64 TB of RAM.[57]
In contexts where the fifty-move rule may be ignored, tablebases have answered longstanding questions about whether certain combinations of material are wins or draws. The following interesting results have emerged:
For some years, a "mate-in-200" position (first diagram below) held the record for the longest computer-generated forced mate. (Otto Blathy had composed a "mate in 292 moves" problem in 1889, albeit from an illegal starting position.[66]) In May 2006, Bourzutschky and Konoval discovered a KQNKRBN position with a DTC of 517 moves,[67][68] whose DTM was later found to be 545 moves.[69] In 2012, when the Lomonosov 7-piece tablebase was being completed, a position was found with a record DTM of 549 moves (third diagram below).[69] It was initially assumed that a 1000-move mate would be found in one of the 8-man endgames.[69] However, cursory targeted research has so far only found a position with a DTC of 584, discovered in 2021 by Bourzutschky.[34] If this projection holds, Haworth's Law (which states that the number of moves roughly doubles for each piece added) breaks down at this point.
Many positions are winnable despite seeming to be non-winnable by force at first glance. For example, the position in the middle diagram is a win for Black in 154 moves (the white pawn is captured after around 80 moves).[23]
Since many composed endgame studies deal with positions that exist in tablebases, their soundness can be checked using the tablebases. Some studies have been proved unsound by the tablebases. That can be either because the composer's solution does not work, or else because there is an equally effective alternative that the composer did not consider. Another way tablebases cook studies is a change in the evaluation of an endgame. For instance, the endgame with a queen and bishop versus two rooks was thought to be a draw, but tablebases proved it to be a win for the queen and bishop, so almost all studies based on this endgame are unsound.[70]
For example, Erik Pogosyants composed the study at right, with White to play and win. The intended main line was 1. Ne3! Rxh2 2. 0-0-0#. A tablebase discovered that 1. h4 also wins for White, in 33 moves, even though Black can capture the pawn (which is not the best move; after capturing the pawn Black loses in 21 moves, while Kh1-g2 loses in 32 moves). Incidentally, the tablebase does not recognize the composer's solution because it includes castling.[71]
While tablebases have cooked some studies, they have assisted in the creation of other studies. Composers can search tablebases for interesting positions, such as zugzwang. For all three- to five-piece endgames and pawnless six-piece endgames, a complete list of mutual zugzwangs has been tabulated and published.[72][73][74]
There has been some controversy over whether to allow endgame studies composed with tablebase assistance into composing tournaments. In 2003, the endgame composer and expert John Roycroft summarized the debate:
[N]ot only do opinions diverge widely, but they are frequently adhered to strongly, even vehemently: at one extreme is the view that since we can never be certain that a computer has been used it is pointless to attempt a distinction, so we should simply evaluate a 'study' on its content, without reference to its origins; at the other extreme is the view that using a 'mouse' to lift an interesting position from a ready-made computer-generated list is in no sense composing, so we should outlaw every such position.[75]
Roycroft himself agrees with the latter approach. He continues, "One thing alone is clear to us: the distinction between classical composing and computer composing should be preserved for as long as possible: if there is a name associated with a study diagram that name is a claim of authorship."[75]
Mark Dvoretsky, an International Master, chess trainer, and author, took a more permissive stance. He was commenting in 2006 on a study by Harold van der Heijden, published in 2001, which reached the position at right after three introductory moves. The drawing move for White is 4. Kb4!! (and not 4. Kb5), based on a mutual zugzwang that may occur three moves later.
Dvoretsky comments:
Here, we should touch on one delicate question. I am sure that this unique endgame position was discovered with the help of Thompson’s famous computer database. Is this a 'flaw,' diminishing the composer's achievement?
Yes, the computer database is an instrument, available to anyone nowadays. Out of it, no doubt, we could probably extract yet more unique positions – there are some chess composers who do so regularly. The standard for evaluation here should be the result achieved. Thus: miracles, based upon complex computer analysis rather than on their content of sharp ideas, are probably of interest only to certain aesthetes.[76]
On the Bell Labs website, Ken Thompson once maintained a link to some of his tablebase data. The headline read, "Play chess with God."[77]
Regarding Stiller's long wins, Tim Krabbé struck a similar note:
Playing over these moves is an eerie experience. They are not human; a grandmaster does not understand them any better than someone who has learned chess yesterday. The knights jump, the kings orbit, the sun goes down, and every move is the truth. It's like being revealed the Meaning of Life, but it's in Estonian.[78]
Originally, an endgame tablebase was called an "endgame data base" or "endgame database". This name appeared in both EG and the ICCA Journal starting in the 1970s, and is sometimes used today. According to Haworth, the ICCA Journal first used the word "tablebase" in connection with chess endgames in 1995.[79] According to that source, a tablebase contains a complete set of information, but a database might lack some information.
Haworth prefers the term "Endgame Table", and has used it in the articles he has authored.[80] Roycroft has used the term "oracle database" throughout his magazine, EG.[81] Nonetheless, the mainstream chess community has adopted "endgame tablebase" as the most common name.
John Nunn has written three books based on detailed analysis of endgame tablebases: | https://en.wikipedia.org/wiki/Endgame_tablebase
The expectiminimax algorithm is a variation of the minimax algorithm, for use in artificial intelligence systems that play two-player zero-sum games, such as backgammon, in which the outcome depends on a combination of the player's skill and chance elements such as dice rolls. In addition to "min" and "max" nodes of the traditional minimax tree, this variant has "chance" ("move by nature") nodes, which take the expected value of a random event occurring.[1] In game theory terms, an expectiminimax tree is the game tree of an extensive-form game of perfect, but incomplete, information.
In the traditional minimax method, the levels of the tree alternate from max to min until the depth limit of the tree has been reached. In an expectiminimax tree, the "chance" nodes are interleaved with the max and min nodes. Instead of taking the max or min of the utility values of their children, chance nodes take a weighted average, with the weight being the probability that child is reached.[1]
The interleaving depends on the game. Each "turn" of the game is evaluated as a "max" node (representing the AI player's turn), a "min" node (representing a potentially-optimal opponent's turn), or a "chance" node (representing a random effect or player).[1]
For example, consider a game in which each round consists of a single die throw, followed by decisions made first by the AI player and then by another intelligent opponent. The order of nodes in this game would alternate between "chance", "max" and then "min".[1]
The expectiminimax algorithm is a variant of the minimax algorithm and was first proposed by Donald Michie in 1966.[2] Its pseudocode is given below.
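The original pseudocode is not reproduced here; the following Python rendering of the standard recursion is a sketch, with the node interface (is_terminal, kind, children, probability, value) assumed for illustration.

def expectiminimax(node, depth):
    if node.is_terminal or depth == 0:
        return node.value  # heuristic estimate or exact payoff
    if node.kind == "max":
        return max(expectiminimax(c, depth - 1) for c in node.children)
    if node.kind == "min":
        return min(expectiminimax(c, depth - 1) for c in node.children)
    # chance node: probability-weighted average of the child values
    return sum(c.probability * expectiminimax(c, depth - 1)
               for c in node.children)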
Note that for random nodes, there must be a known probability of reaching each child. (For most games of chance, child nodes will be equally-weighted, which means the return value can simply be the average of all child values.)
Expectimax search is a variant described in Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability (2005) by Tom Everitt and Marcus Hutter.
Bruce Ballard was the first to develop a technique, called *-minimax, that enables alpha-beta pruning in expectiminimax trees.[3][4] The problem with integrating alpha-beta pruning into the expectiminimax algorithm is that the scores of a chance node's children may exceed the alpha or beta bound of its parent, even if the weighted value of each child does not. However, it is possible to bound the scores of a chance node's children, and therefore bound the score of the chance node itself.
If a standard iterative search is about to score the $i$th child of a chance node with $N$ equally likely children, that search has computed scores $v_1, v_2, \ldots, v_{i-1}$ for child nodes 1 through $i-1$. Assuming a lowest possible score $L$ and a highest possible score $U$ for each unsearched child, the bounds on the chance node's score are as follows:
$${\text{score}} \leq {\frac {1}{N}}\left((v_{1}+\ldots +v_{i-1})+v_{i}+U\times (N-i)\right)$$
$${\text{score}} \geq {\frac {1}{N}}\left((v_{1}+\ldots +v_{i-1})+v_{i}+L\times (N-i)\right)$$
If an alpha and/or beta bound is given in scoring the chance node, these bounds can be used to cut off the search of the $i$th child. The above equations can be rearranged to find new alpha and beta values that will cut off the search if the $i$th child's score would force the chance node to exceed its own alpha and beta bounds:
$$\alpha _{i}=N\times \alpha -\left(v_{1}+\ldots +v_{i-1}\right)-U\times (N-i)$$
$$\beta _{i}=N\times \beta -\left(v_{1}+\ldots +v_{i-1}\right)-L\times (N-i)$$

(Note the minus signs: the unsearched children are credited with their most favorable possible values $U$ or $L$, which shrinks the window available to the $i$th child accordingly.)
The pseudocode for extending expectiminimax with fail-hard alpha-beta pruning in this manner is as follows:
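The article's original pseudocode is likewise not reproduced here; the Python sketch below is a hedged reconstruction of a Star1-style chance-node search under the stated assumptions (N equally likely children, child scores bounded by [L, U]), where the search callback stands for the ordinary alpha-beta recursion.

def star1(children, depth, alpha, beta, L, U, search):
    """Chance-node value with Star1-style cutoffs; children are assumed
    equally likely and search(child, depth, alpha, beta) is alpha-beta."""
    n = len(children)
    total = 0.0
    for i, child in enumerate(children):
        # Window for this child, from rearranging the parent's bounds.
        a = max(L, n * alpha - total - U * (n - i - 1))
        b = min(U, n * beta - total - L * (n - i - 1))
        total += search(child, depth - 1, a, b)
        upper = (total + U * (n - i - 1)) / n  # best the node can still be
        lower = (total + L * (n - i - 1)) / n  # worst the node can still be
        if upper <= alpha:
            return upper  # fail low: the chance node cannot exceed alpha
        if lower >= beta:
            return lower  # fail high: the chance node cannot fall below beta
    return total / n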
This technique is one of a family of variants of algorithms which can bound the search of a chance node and its children based on collecting lower and upper bounds of the children during search. Other techniques which can offer performance benefits include probing each child with a heuristic to establish a min or max before performing a full search on each child. | https://en.wikipedia.org/wiki/Expectiminimax_tree