[SOURCE: https://en.wikipedia.org/wiki/Desktop_computer] | [TOKENS: 3513] |
Contents Desktop computer A desktop computer, often abbreviated as desktop, is a personal computer designed for regular use at a stationary location on or near a desk (as opposed to a portable computer) due to its size and power requirements. The most common configuration has a case that houses the power supply, motherboard (a printed circuit board with a microprocessor as the central processing unit, memory, bus, certain peripherals and other electronic components), disk storage (usually one or more hard disk drives, solid-state drives, optical disc drives, and in early models floppy disk drives); a keyboard and mouse for input; and a monitor, speakers, and, often, a printer for output. The case may be oriented horizontally or vertically and placed either underneath, beside, or on top of a desk. Desktop computers with their cases oriented vertically are referred to as towers. As the majority of cases offered since the mid-1990s are in this form factor, the term desktop has been retronymically used to refer to modern cases offered in the traditional horizontal orientation. History Prior to the widespread use of microprocessors, a computer that could fit on a desk was considered remarkably small; the type of computers most commonly used were minicomputers, which, despite the name, were rather large and were "mini" only compared to the so-called "big iron". Early computers, and later the general purpose high throughput "mainframes", took up the space of a whole room. Minicomputers, by contrast, generally fit into one or a few refrigerator-sized racks, or, for the few smaller ones, were built into a fairly large desk, not put on top of it. It was not until the 1970s that fully programmable computers appeared which could fit entirely on top of a desk. 1970 saw the introduction of the Datapoint 2200, a "smart" computer terminal complete with keyboard and monitor that was designed to connect with a mainframe computer, though that did not stop owners from using its built-in computational abilities as a stand-alone desktop computer. The HP 9800 series, which started out as programmable calculators in 1971 but was programmable in BASIC by 1972, used a smaller version of a minicomputer design based on ROM memory, had small one-line LED alphanumeric displays, and displayed graphics with a plotter. The Wang 2200 of 1973 had a full-size cathode-ray tube (CRT) and cassette tape storage. The IBM 5100 in 1975 had a small CRT display and could be programmed in BASIC and APL. These were generally expensive specialized computers sold for business or scientific uses. The Apple II, TRS-80 and Commodore PET were first-generation personal home computers launched in 1977, aimed at the consumer market rather than at businessmen or computer hobbyists. Byte magazine referred to these three as the "1977 Trinity" of personal computing. Throughout the 1980s and 1990s, desktop computers became the predominant type, the most popular being the IBM PC and its clones, followed by the Apple Macintosh, with the third-placed Commodore Amiga having some success in the mid-1980s but declining by the early 1990s. Early personal computers, like the original IBM Personal Computer, were enclosed in a "desktop case", horizontally oriented to have the display screen placed on top, thus saving space on the user's actual desk, although these cases had to be sturdy enough to support the weight of the CRT displays that were widespread at the time. 
Over the course of the 1990s, desktop cases gradually became less common than the more-accessible tower cases that may be located on the floor under or beside a desk rather than on a desk. Not only did these tower cases have more room for expansion, they also freed up desk space for monitors, which were becoming larger every year. Desktop cases, particularly the compact form factors, remain popular for corporate computing environments and kiosks. Some computer cases can be interchangeably positioned either horizontally (desktop) or upright (mini-tower). Influential games such as Doom and Quake during the 1990s pushed gamers and enthusiasts to frequently upgrade to the latest CPUs and graphics cards (3dfx, ATI, and Nvidia) for their desktops (usually a tower case) in order to run these applications, though this has slowed since the late 2000s as the growing popularity of Intel integrated graphics forced game developers to scale back. Creative Technology's Sound Blaster series were a de facto standard for sound cards in desktop PCs during the 1990s until the early 2000s, when they were reduced to a niche product, as OEM desktop PCs came with sound boards integrated directly onto the motherboard. While desktops have long been the most common configuration for PCs, by the mid-2000s the growth shifted from desktops to laptops. Laptops had long been produced by contract manufacturers based in Asia, such as Foxconn, and this shift led to the closure of many desktop assembly plants in the United States by 2010. Another trend around this time was the increasing proportion of inexpensive base-configuration desktops being sold, hurting PC manufacturers such as Dell whose build-to-order customization of desktops relied on upselling added features to buyers. Battery-powered portable computers had just a 2% worldwide market share in 1986. However, laptops have become increasingly popular, both for business and personal use. Around 109 million notebook PCs shipped worldwide in 2007, a growth of 33% compared to 2006. In 2008, it was estimated that 145.9 million notebooks were sold and that the number would grow in 2009 to 177.7 million. The third quarter of 2008 was the first time worldwide notebook PC shipments exceeded those of desktops, with 38.6 million units versus 38.5 million units. Within the Apple Macintosh line, sales of desktop Macs have stayed mostly constant while being surpassed by those of Mac notebooks, whose sales have grown considerably; seven out of ten Macs sold were laptops in 2009, a ratio projected to rise to three out of four by 2010. The change in sales of form factors is due to the desktop iMac moving from the affordable G3 to the upscale G4 model, with subsequent releases considered premium all-in-ones. By contrast, the MSRP of the MacBook laptop lines has dropped through successive generations such that the MacBook Air and MacBook Pro constitute the lowest price of entry to a Mac, with the exception of the even more inexpensive Mac Mini (albeit without a monitor and keyboard), and the MacBooks are the top-selling form factors of the Macintosh platform today. The decades of development mean that most people already own desktop computers that meet their needs and have no need to buy a new one merely to keep pace with advancing technology. Notably, the successive releases of new versions of Windows (Windows 95, 98, XP, Vista, 7, 8, 10 and so on) had been drivers for the replacement of PCs in the 1990s, but this slowed in the 2000s. 
IDC analyst Jay Chou suggested that Windows 8 actually hurt sales of PCs in 2012, as businesses decided to stick with Windows 7 rather than upgrade. Some suggested that Microsoft had acknowledged "implicitly ringing the desktop PC death knell" as Windows 8 offered little upgrade in desktop PC functionality over Windows 7; instead, Windows 8's innovations were mostly on the mobile side. The post-PC trend saw a decline in the sales of desktop and laptop PCs. The decline was attributed to increased power and applications of alternative computing devices, namely smartphones and tablet computers. Although most people exclusively use their smartphones and tablets for more basic tasks such as social media and casual gaming, these devices have in many instances replaced a second or third PC in the household that would have performed these tasks, though most families still retain a powerful PC for serious work. Among PC form factors, desktops remain a staple in the enterprise market but lost popularity among home buyers. PC makers and electronics retailers responded by investing their engineering and marketing resources towards laptops (initially netbooks in the late 2000s, and then the higher-performance Ultrabooks from 2011 onwards), which manufacturers believed had more potential to revive the PC market than desktops. In April 2017, StatCounter declared a "Milestone in technology history and end of an era" with the mobile Android operating system becoming more popular than Windows (the operating system that made desktops dominant over mainframe computers). Windows is still most popular on desktops (and laptops), while smartphones (and tablets) use Android or iOS. Towards the middle of the 2010s, media sources began to question the existence of the post-PC trend, at least as conventionally defined, stating that the so-called post-PC devices are just other portable forms of PCs joining traditional desktop PCs, which still have their own areas of use and continue to evolve. Although for casual use traditional desktops and laptops have seen a decline in sales, in 2018, global PC sales experienced a resurgence, driven by the business market. Desktops remain a solid fixture in the commercial and educational sectors. In 2019, the global PC market recorded its first full year of growth in eight years. Inclusive of desktops, notebooks and workstations, 268.1 million units were shipped, up 2.7% on 2018. According to the International Data Corporation (IDC), PC sales shot up 14.8% between 2020 and 2021, and the desktop market grew faster than the laptop market in the second quarter of 2021. Total PC shipments during 2021 reached 348.8 million units, up 14.8% from 2020. This represents the highest level of shipments the PC market has seen since 2012. In addition, gaming desktops have seen a global revenue increase of 54% annually. The global market for gaming desktops, laptops, and monitors was expected to grow to 61.1 million shipments by the end of 2023, up from 42.1 million, with desktops growing from 15.1 million shipments to 19 million. PC gaming as a whole accounts for 28% of the total gaming market as of 2017. This is partially due to the increasing affordability of desktop PCs. In 2024, 255.5 million PCs (including desktops and laptops) were shipped, up from 246 million in 2023 – a 3.8% year-over-year growth, with Lenovo maintaining the largest market share. Types Full-sized desktops are characterized by separate display and processing components. 
These components are connected to each other by cables or wireless connections. They often come in a tower form factor. These computers are easy to customize and upgrade per user requirements, e.g. by expansion card. Early extended-size tower computers (significantly larger than a mainstream ATX case) were sometimes labeled "deskside computers", but this naming is now quite rare. Compact desktops are reduced in physical proportions compared to full-sized desktops. They are typically small-sized, inexpensive, low-power computers designed for basic tasks such as web browsing, accessing web-based applications, document processing, and audio/video playback. Hardware specifications and processing power are usually reduced, which makes them less appropriate for running complex or resource-intensive applications. A nettop is a notable example of a compact desktop. A laptop without a screen can functionally be used as a compact desktop, sometimes called a "slabtop". An all-in-one (AIO) desktop computer integrates the system's internal components into the same case as the display, thus occupying a smaller footprint (with fewer cables) than desktops that incorporate a tower. All-in-one systems are rarely labeled as desktop computers. In personal computing, a tower is a form factor of desktop computer case whose height is much greater than its width, thus having the appearance of an upstanding tower block. In computing, a pizza box enclosure is a design for desktop computers. Pizza box cases tend to be wide and flat, resembling pizza delivery boxes, hence the name. Cube workstations have a cube-shaped case that houses the motherboard, PCI-E expansion cards, GPU, CPU, DRAM DIMM slots, computer cooling equipment, chipsets, I/O ports, hard disk drives, and solid-state drives. Open-frame cases offer easy service access, have no airflow problems, are well suited to building liquid-cooled systems, and have an industrial design look, but they draw a lot of dust onto components and need more frequent cleaning, although the open design also makes it easy to blow the dust away. Gaming computers are desktop computers with high-performance CPU, GPU, and RAM optimized for playing video games at high resolutions and frame rates. Gaming computer peripherals usually include mechanical keyboards for faster response time, and a gaming mouse that can track movement at a higher dots-per-inch resolution. Home theater PCs are connected to home entertainment systems and typically used for amusement purposes. They come with high-definition displays, video graphics, surround sound and TV tuner systems to complement typical PC features. Over time, some traditional desktop computers have been replaced with thin clients utilizing off-site computing solutions like the cloud. As more services and applications are served over the internet from off-site servers, local computing needs decrease, which drives desktop computers to become smaller, cheaper, and less powerful. More applications, and in some cases entire virtual desktops, are moved off-site, and the desktop computer runs only an operating system or a shell application while the actual content is served from a server. Thin client computers may do almost all of their computing on a virtual machine in another site. Internal, hosted virtual desktops can offer users a completely consistent experience from anywhere. Workstations are an advanced class of personal computer designed for a single user, more powerful than a regular PC but less powerful than a server for general computing. 
They are capable of high-resolution and three-dimensional interfaces and are typically used to perform scientific and engineering work. Like server computers, they are often connected with other workstations. The main form factor for this class is the tower case, but most vendors also produce compact or all-in-one low-end workstations. Most tower workstations can be converted to a rack-mount version. Desktop servers are oriented toward the small-business class of servers; they are typically entry-level server machines with computing power similar to that of workstations or gaming PCs and with some mainstream server features, but with only basic graphics abilities. Some desktop servers can be converted to workstations. Comparison with laptops Desktops have an advantage over laptops in that the spare parts and extensions tend to be standardized, resulting in lower prices and greater availability. For example, the size and mounting of the motherboard are standardized into ATX, microATX, BTX or other form factors. Desktops have several standardized expansion slots, like conventional PCI or PCI Express, while laptops tend to have only one mini-PCI slot and one PC Card slot (or ExpressCard slot). Procedures for assembly and disassembly of desktops tend to be simple and standardized as well. This tends not to be the case for laptops, though adding or replacing some parts, like the optical drive or hard disk, or adding an extra memory module, is often quite simple. This means that a desktop computer configuration, usually a tower case, can be customized and upgraded to a greater extent than laptops. This customization has kept tower cases popular among gamers and enthusiasts. Another advantage of the desktop is that (apart from environmental concerns) power consumption is not as critical as in laptop computers because the desktop is exclusively powered from the wall socket. Desktop computers also provide more space for cooling fans and vents to dissipate heat, allowing enthusiasts to overclock with less risk. The two large microprocessor manufacturers, Intel and AMD, have developed special CPUs for mobile computers (e.g. laptops) that consume less power and produce less heat, but with lower performance levels. Laptop computers, conversely, offer portability that desktop systems (including small form factor and all-in-one desktops) cannot due to their compact size and clamshell design. The laptop's all-in-one design provides a built-in keyboard and a pointing device (such as a touchpad) for its user and can draw on power supplied by a rechargeable battery. Laptops also commonly integrate wireless technologies like Wi-Fi, Bluetooth, and 3G, giving them a broader range of options for connecting to the internet, though this trend is changing as newer desktop computers come integrated with one or more of these technologies. A desktop computer needs a UPS to handle electrical disturbances like short interruptions, blackouts, and spikes; achieving an on-battery time of more than 20–30 minutes for a desktop PC requires a large and expensive UPS. A laptop with a sufficiently charged battery can continue to be used for hours in case of a power outage and is not affected by short power interruptions and blackouts. A desktop computer often has the advantage over a comparable laptop in computational capacity. Overclocking is often more feasible on a desktop than on a laptop; similarly, hardware add-ons such as discrete graphics co-processors may only be possible to install in a desktop. See also References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Peer_group] | [TOKENS: 3419] |
Contents Peer group In sociology, a peer group is both a social group and a primary group of people who have similar interests (homophily), age, background, or social status. Members of peer groups are likely to influence each other's beliefs and behaviour. During adolescence, peer groups tend to face dramatic changes. Adolescents tend to spend more time with their peers and have less adult supervision. Peer groups give a sense of security and identity. A study found that during adolescence, adolescents spend twice as much time with their peers as they spend with their parents. Adolescents' communication shifts during this time as well. They prefer to talk about school and their careers with their parents, and they enjoy talking about sex and other interpersonal relationships with their peers. Children look to join peer groups that accept them, even if the group is involved in negative activities. Children are less likely to accept those who are different from them. Friendship and support are important for people to have an active social life. They are equally important to people with disabilities, as they can help them feel included, valued and happier. Social interaction among peers may influence development and quality-of-life outcomes. Such interaction and positive relationships benefit subjective wellbeing and have a positive effect on mental and physical health. Cliques are small groups typically defined by common interests or by friendship. Cliques typically have 2–12 members and tend to be formed by age, gender, race, and social class. Clique members are usually similar in terms of academics and risk behaviors. Cliques can serve as an agent of socialization and social control. Being part of a clique can be advantageous since it may provide a sense of autonomy, a secure social environment, and overall well-being. Crowds are larger, more vaguely defined groups that may not have a friendship base. Crowds serve as peer groups, increasing in importance during early adolescence and decreasing by late adolescence. Crowds are characterized by their level of involvement in adult institutions and in peer culture. Socialization At an early age, the peer group becomes an important part of socialization. Unlike other agents of socialization, such as family and school, peer groups allow children to escape the direct supervision of adults. Among peers, children learn to form relationships on their own, and have the chance to discuss interests that adults may not share with children, such as clothing and popular music, or may not permit, such as drugs and sex. Peer groups can have great influence or peer pressure on each other's behavior, depending on the amount of pressure. However, more than 23 percent of children globally currently lack enough connections with their age group, and their cognitive, emotional and social development is delayed compared with that of other children. Developmental psychology Developmental psychologists Lev Vygotsky, Jean Piaget, Erik Erikson, and Harry Stack Sullivan, as well as social learning theorists, have all argued that peer relationships provide a unique context for cognitive, social, and emotional development. Modern research echoes these sentiments, showing that social and emotional gains are indeed provided by peer interaction. Vygotsky's sociocultural theory focuses on the importance of a child's culture and notes that a child is continually acting in social interactions with others. He also focuses on language development and identifies the zone of proximal development. 
The zone of proximal development is defined as the gap between what a student can do alone and what the student can achieve with teacher assistance. The values and attitudes of the peer group are essential elements in learning. Those who surround themselves with academically focused peers will be more likely to internalize this type of behavior. Piaget's theory of cognitive development identifies four stages of cognitive development. He believed that children actively construct their understanding of the world based on their own experiences. In addition, Piaget identified aspects of development, occurring from middle childhood onwards, for which peer groups are essential. He suggested that children's speech to peers is less egocentric than their speech to adults. Egocentric speech refers to speech that is not adapted to what the listener has just said. Erikson's stages of psychosocial development include eight stages ranging from birth to old age. He emphasized the idea that society, not just the family, influences one's ego and identity through developmental stages. Erikson went on to describe how peer pressure is a key event during the adolescence stage of psychosocial development. His latency stage, which includes children from 6 to 12 years old, is when children begin to develop relationships among their peers. Harry Stack Sullivan developed the Theory of Interpersonal Relations. Sullivan described friendships as providing the following functions: (a) offering consensual validation, (b) bolstering feelings of self-worth, (c) providing affection and a context for intimate disclosure, (d) promoting interpersonal sensitivity, and (e) setting the foundation for romantic and parental relationships. Sullivan believed these functions developed during childhood and that true friendships were formed around the age of 9 or 10. Social learning theorists such as John B. Watson, B.F. Skinner, and Albert Bandura all argue for the influence of the social group on learning and development. Behaviourism, operant learning theory, and cognitive social learning theory all consider the role the social world plays in development. In The Nurture Assumption and No Two Alike, psychologist Judith Rich Harris suggests that an individual's peer group significantly influences their intellectual and personal development. Several longitudinal studies support the conjecture that peer groups significantly affect scholastic achievement, particularly when adult involvement is low. Relatively few studies have examined the effect peer groups have on tests of cognitive ability. However, there is some evidence that peer groups influence tests of cognitive ability. Positive attributes (advantages) Peer groups provide perspective outside of the individual's viewpoint. Members of peer groups also learn to develop relationships with others in the social system. Peers, particularly group members, become important social referents for teaching other members customs, social norms, and different ideologies. Positive peer relationships improve social interaction and enhance positive engagement levels in adolescents with and without disabilities. Peers foster overall well-being by offering practical, emotional, and social support. Peer groups can also serve as a venue for teaching members gender roles. Through gender-role socialization, group members learn about sex differences, and social and cultural expectations. 
While boys and girls differ greatly, there is not a one-to-one link between sex and gender roles, with males always being masculine and females always being feminine. Both genders can contain different levels of masculinity and femininity. Peer groups can consist of all males, all females, or both males and females. Studies show that the majority of peer groups are unisex. Adolescent peer groups provide support as teens assimilate into adulthood. Major changes include: decreasing dependence on parents, increasing feelings of self-sufficiency, and connecting with a much larger social network. Adolescents are expanding their perspective beyond the family and learning how to negotiate relationships with others in different parts of the social system. Peers, particularly group members, become important social referents. Peer groups also influence individual members' attitudes and behaviours on many cultural and social issues, such as drug use, violence, and academic achievement, and even the development and expression of prejudice. Peer groups provide an influential social setting in which group norms are developed and enforced through socialization processes that promote in-group similarity. Peer groups' cohesion is determined and maintained by such factors as group communication, group consensus, and group conformity concerning attitude and behavior. As members of peer groups interconnect and agree on what defines them as a group, a normative code arises. This normative code can become very rigid, such as when deciding on group behavior and attire. Member deviation from the strict normative code can lead to rejection from the group. Peer groups (friend groups) can help individuals form their own identity. Identity formation is a developmental process by which a person acquires a sense of self. One of the major factors that influence the formation of a person's identity is his or her peers. Studies have shown that peers provide normative regulation, and that they provide a staging ground for the practice of social behaviors. This allows individuals to experiment with roles and discover their identities. The identity formation process plays an important role in an individual's development. Erik Erikson emphasized the importance of identity formation, and he illustrated the steps one takes in developing his or her sense of self. He believed this process occurs throughout one's entire life. Peer interactions have a significant impact on adolescents, developing empathy, conflict resolution, and interpersonal skills; these relationships also play a crucial role in shaping body image and satisfaction. Negative attributes (disadvantages) The term peer pressure is often used to describe instances where an individual feels indirectly pressured into changing their behavior to match that of their peers. Taking up smoking and underage drinking are two of the best known examples. In spite of the often negative connotations of the term, peer pressure can be used positively, for example, to encourage other peers to study, or not to engage in activities such as the ones discussed above. Although peer pressure is not isolated to one age group, it is usually most common during the adolescent stage. Adolescence is a period characterized by experimentation, and adolescents typically spend a lot of time with their peers in social contexts. Teenagers compel each other to go along with certain beliefs or behaviors, and studies have shown that boys are more likely to give in to it than girls. 
Much research has been done to gain a better understanding of the effects of peer pressure, and this research can help parents handle and understand their children's behaviors and the obstacles they will face due to their peer groups. Learning how peer pressure impacts individuals is a step toward minimizing its negative effects. Success of peer relationships is linked to later psychological development and to academic achievement. Therefore, a lack of successful peer relationships may lead to developmental delays and poor academic achievement, perhaps even failure to complete high school. Children with poor peer relationships may also experience job-related and marital problems later in life. Several studies have shown that peer groups are powerful agents of risk behaviors in adolescence. Adolescents typically replace family with peers regarding social and leisure activities, and many problematic behaviors occur in the context of these groups. A study done in 2012 focused on adolescents' engagement in risk behaviors. Participants completed a self-report measure of identity commitment, which explores values, beliefs, and aspirations, as well as a self-report that measures perceived peer group pressure and control. Both peer group pressure and control were positively related to risky behaviors. However, adolescents who were more committed to a personal identity had lower rates of risk behaviors. Overall, this study shows that adolescent identity development may help prevent the negative effects of peer pressure in high-risk adolescents. Social behaviors can be promoted or discouraged by social groups, and several studies have shown that aggression and prosociality are susceptible to peer influence. A longitudinal study done in 2011 focused on these two behaviors. A sample of adolescents was followed over a one-year period, and results showed that adolescents who joined an aggressive group were more likely to increase their aggression levels. Also, adolescents were likely to display prosocial behaviors that were similar to the consistent behaviors of the group they were in. An adolescent's peer group plays a role in shaping him or her into an adult, and the lack of positive behavior can lead to consequences in the future. Adolescence is also characterized by physical changes, new emotions, and sexual urges, and teenagers are likely to participate in sexual activity. A longitudinal study done in 2012 followed a group of adolescents for thirteen years. Self-reports, peer nominations, teacher ratings, counselor ratings, and parent reports were collected, and results showed a strong correlation between deviant peer groups and sexual promiscuity. Many teens claimed that the reasons for having sex at a young age include peer pressure or pressure from their partner. The effects of sexual activity at a young age are of great concern. Pregnancy and sexually transmitted diseases are only a few of the consequences that can occur. In peer-dominated contexts, functional diversity may lead to marginalization and exclusion. Socially excluded children may have unsatisfying peer relationships, low self-esteem, and lack of achievement motivation, which affect their social and academic aspects of life, mental health, and general well-being. Individuals with disabilities encounter challenges in peer relationships, including deficits in social skills such as emotion detection, conflict resolution, and conceptual understanding. 
Adolescents and their peer groups In one cross-sectional, correlational study, four different developmental stages were examined: preadolescence (Grades 5 and 6), early adolescence (Grades 7 and 8), middle adolescence (Grades 9 and 10) and late adolescence (Grades 11 and 12). Self-report measures were used in which adolescents completed questionnaires. First, the students rated the importance of being in a popular group. Next, positive and negative behaviour were assessed. The extent to which students were bothered by negative behaviour targeted at them by others in their groups was also assessed. Structural group properties were also examined, including: group leadership or status hierarchy, group permeability, and group conformity. Researchers found that middle adolescents reported placing more importance on being in a popular group and perceived more group conformity and leadership within their groups than pre- and late adolescents. Early and middle adolescents also reported more negative interactions and fewer positive interactions with group members and more negative interactions with those not part of their peer groups. Girls reported having more positive group interactions, being more bothered by negative interactions, and having more permeable group boundaries. Boys reported more negative interactions with those outside their groups and are more likely to have leaders in their peer groups. Researchers believe that the decrease in conformity throughout adolescence relates to the decrease in importance of leadership in late adolescence because having a group leader provides a person to model oneself after. They also note the relationship between the importance of being in a popular peer group and conformity. Both become less important in late adolescence, showing that it is less important to conform when the value of group membership decreases. It is believed that positive interactions outside of peer groups increase and negative interactions outside of peer groups decrease by late adolescence because older adolescents feel more comfortable and have less need to control the behaviours of others. Findings that boys have more leaders are consistent with research showing that boys partake in more dominance struggles. A questionnaire was handed out to 58 males and 57 females, aged 14–15 in the Midlands region of the UK. The first section dealt with group structure and activities of participants' peer groups. Participants were asked how many people were in their group, the gender composition of the group, frequency of group meetings, and the group's usual meeting places. The second section addressed the participants' levels of identification with their peer groups. The next section of the questionnaire was an intergroup comparison task in which participants compared their peer group to an outgroup. The comparison referred to how sixteen different adjectives "fit" or "described" both their ingroup and outgroup. The final part of the questionnaire was designed to check the manipulation of the adjective valence. In this section, participants rated the desirability of the above sixteen adjectives in their own opinions. Findings supported social identity theory as participants consistently favoured the ingroup in two ways: the ingroup was always associated with a greater number of positive characteristics compared to the outgroup, and the more a participant identified with the ingroup, the higher their evaluations were for it. 
Consistent with the dictionary definition of peer groups, youth tend to form groups based on similarities. It has been found that one of these similarities is race. Preference for peers of the same race grows stronger as youth develop. When Latino and Caucasian youth were given surveys asking them to indicate whom in their school they most preferred to spend time with, both groups nominated peers of their own race over peers of different races. This is especially prevalent in classrooms and schools that have clear-cut majority and minority racial groups. Though homophily has its benefits, preference for one's own racial group can lead to rejection of the racial out-group, which can cause stress for both groups, particularly among females. For classrooms and schools that have a more equal distribution of racial groups, there can be more socialization across peer groups. Cross-racial peer groups can be very beneficial, lowering prejudice and increasing prosocial behaviors. Having a cross-racial friend has also been shown to give youth higher status and make them feel more socially satisfied. Diverse peer groups also lower the feelings of victimization felt by youth. An effective approach to promoting peer relationships among adolescents with disabilities may require a comprehensive strategy that addresses the individual and social aspects of support, fostering understanding. This might involve imparting information and resources on disabilities to both peers and schools, organizing meaningful social activities with friends, and providing emotional support. See also References Further reading |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Action!_(programming_language)] | [TOKENS: 1621] |
Contents Action! (programming language) Action! is a procedural programming language and integrated development environment written by Clinton Parker for the Atari 8-bit computers. The language, similar to ALGOL, maps cleanly to the MOS Technology 6502 of the Atari computer without complex compiler optimizations. Fast execution speed of the resulting programs was a key selling point. Parker, working with Henry Baker, had previously developed Micro-SPL, a systems programming language for the Xerox Alto. Action! is largely a port of Micro-SPL concepts to the Atari with changes to support the 6502 processor and the addition of an integrated fullscreen editor and debugger. Action! was distributed on ROM cartridge by Optimized Systems Software starting in 1983. It was one of the company's first bank-switched 16 kB "Super Cartridges". The runtime library is stored in the cartridge; to make a standalone application requires the Action! Toolkit which was sold separately by OSS. Action! was used to develop at least two commercial products—the HomePak productivity suite and Games Computers Play client program—and numerous programs in ANALOG Computing and Antic magazines. The editor inspired the PaperClip word processor. The language was not ported to other platforms. The assembly language source code for Action! was made available under the GNU General Public License by the author in 2015. Development environment Action! is one of the earlier examples of the OSS SuperCartridge format. Although ROM cartridges for the Atari could support 16 kB, OSS opted for bank-switching 16 kB, organized as four 4 kB blocks, mapped onto 8 kB of address space. The lower 4 kB did not change, and the system could bank-switch between the other three blocks by changing the value in address $AFFF. This left more RAM available for user programs. Action! used this design by breaking the system into four sections: the editor, the compiler, a monitor for testing code and switching between the editor and compiler, and the run-time library. The run-time library is stored in the cartridge itself. To distribute standalone applications requires a separate run-time package which was sold by OSS as the Action! Toolkit. Action! constructs were designed to map cleanly to 6502 opcodes, to provide high performance without needing complex optimizations in the one-pass compiler. For example, local variables are assigned fixed addresses in memory, instead of being allocated on a stack of activation records. This eliminates the significant overhead associated with stack management, which is especially difficult in the case of the 6502's 256-byte stack. However, this precludes the use of recursion. Unlike the integrated Atari BASIC and Atari Assembler Editor environments, the Action! editor does not use line numbers. It has a fullscreen, scrolling display capable of displaying two windows, and includes block operations and global search and replace. The monitor serves as a debugger, allowing an entire program or individual functions to be run, memory to be displayed and modified, and program execution to be traced. Language Action! has three fundamental data types, all of which are numeric. BYTE is internally represented as an unsigned 8-bit integer. Values range from 0 to 255. The CHAR keyword can also be used to declare BYTE variables. CARDinal is internally represented as an unsigned 16-bit integer. Values range from 0 to 65,535. INTeger is internally represented as a signed 16-bit integer. Values range from -32,768 to 32,767. 
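As an illustration of these fundamental types, the following is a minimal sketch of Action!-style declarations. The variable names are illustrative rather than taken from the article, and the address 710 in the last line (the OS background-colour shadow register) is used only as an example of binding a variable to a fixed location.

BYTE flag      ; unsigned 8-bit value, 0 to 255 (CHAR may be used instead of BYTE)
CARD count     ; unsigned 16-bit value, 0 to 65,535
INT delta      ; signed 16-bit value, -32,768 to 32,767
BYTE color=710 ; a declaration may also bind a variable to a fixed memory address
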
Action! also has ARRAYs, POINTERs and user-defined TYPEs. No floating point support is provided. An example of a user-defined TYPE declaration appears in the sketch at the end of this entry. A reserved word is any identifier or symbol that the Action! compiler recognizes as something special. It can be an operator, a data type name, a statement, or a compiler directive. Example code for the Sieve of Eratosthenes written in Action! is also sketched at the end of this entry; in order to increase performance, it disables the ANTIC graphics coprocessor, preventing its DMA engine from "stealing" CPU cycles during computation. History While taking his postgraduate studies, Parker started working part-time at Xerox PARC on printer drivers. He later moved to the Xerox Alto project, where he wrote several games for the system. His PhD was in natural language parsing, and he had worked on compiler theory during his graduate work. Henry Baker and Parker released Micro-SPL in September 1979. Micro-SPL was intended to be used as a systems programming language on the Xerox Alto workstation computer, which was normally programmed in BCPL. The Alto used a microcode system, and the BCPL compiler output microcode; Micro-SPL output the same format, allowing BCPL programs to call Micro-SPL programs. Aside from differences in syntax, the main difference between Micro-SPL and BCPL, and the reason for its existence, was that Micro-SPL produced code that was many times faster than that produced by the native BCPL compiler. In general, Micro-SPL programs were expected to run about ten times as fast as BCPL, and about half as fast as good hand-written microcode. In comparison to microcode, they claimed it would take half as long to write and 10% of the time to debug. It was during this period that Parker purchased an Atari computer for use at home. He was disappointed with the lack of development systems for it, which was the impetus for creating Action! Parker considered releasing the system himself, but decided to partner with Optimized Systems Software (OSS) for sales and distribution. OSS focused on utilities and programming languages like BASIC XL, so this was a natural fit for Action! Sales were strong enough for Parker to make a living off the royalties for several years. The IBM PC had C compilers available, and Parker decided there was no point in porting Action! to that platform. As the sales of the Atari 8-bit computers wound down in North America, OSS wound down as well. Late in its history, Action! distribution moved from OSS to Electronic Arts, but they did little with the language, and sales ended shortly after. In a 2015 interview, Parker expressed his surprise at the level of interest the language continued to receive, suggesting it was greater than it had been in the late 1980s. Reception Brian Moriarty, in a February 1984 review for ANALOG Computing, concluded that Action! was "one of the most valuable development tools ever published for the Atari." He cited the manual as the only weak point of the package, claiming it "suffers from lack of confidence, uncertain organization and a shortage of good, hard technical data." Leo Laporte reviewed Action! in the May/June 1984 edition of Hi-Res. He began the review, "This is the best thing to happen to Atari since Nolan Bushnell figured out people would play ping-pong on a TV screen." Laporte praised the editor, noting its split-screen and cut and paste capabilities and describing it as a "complete word processing system that's very responsive." He said that Action! 
ran about 200 times as fast as Atari BASIC, concluding that "This language is like a finely tuned racing car." BYTE in 1985 praised the compilation and execution speed of software written in Action!. Using the Byte Sieve benchmark as a test, ten iterations of the sieve completed in 18 seconds in Action!, compared to 10 seconds for assembly and 38 minutes in BASIC. The magazine also lauded the language's editor. BYTE reported that the language resembled C closely enough to "routinely convert programs between the two", and approved of its pointer support. The magazine concluded that "Action! is easy to use, quick, and efficient. It can exploit the Atari's full power. Action! puts programming for the Atari in a whole new dimension". Ian Chadwick wrote in Mapping the Atari that "Action! is probably the best language yet for the Atari; it's a bit like C and Pascal, with a dash of Forth. I recommend it." References External links |
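The user-defined TYPE declaration and the Sieve of Eratosthenes program referred to in the Language section above can be sketched as follows. This is a minimal, illustrative reconstruction in Action!-style syntax rather than the article's original listing; the names CORD, FLAGS and SIEVE are assumptions, while RTCLOK (address 20) and SDMCTL (address 559) are the Atari OS jiffy-clock and ANTIC DMA-control shadow locations that a benchmark of this kind would typically poke.

TYPE CORD=[CARD X,Y]       ; user-defined record type with two CARD fields

BYTE RTCLOK=20             ; OS jiffy clock, low byte, bound to a fixed address
BYTE SDMCTL=559            ; shadow of the ANTIC DMA control register
BYTE ARRAY FLAGS(8190)     ; one flag per candidate number
CARD COUNT,I,K,PRIME,TIME

PROC SIEVE()
  SDMCTL=0                 ; turn off ANTIC DMA so it stops "stealing" CPU cycles
  RTCLOK=0                 ; reset the timer

  COUNT=0
  FOR I=0 TO 8190 DO
    FLAGS(I)='T            ; mark every candidate as potentially prime
  OD

  FOR I=0 TO 8190 DO
    IF FLAGS(I)='T THEN
      PRIME=I+I+3
      K=I+PRIME
      WHILE K<=8190 DO
        FLAGS(K)='F        ; cross out multiples of this prime
        K==+PRIME
      OD
      COUNT==+1
    FI
  OD

  TIME=RTCLOK              ; elapsed time in jiffies
  SDMCTL=34                ; restore normal playfield DMA
  PrintF("%U PRIMES IN %U JIFFIES%E",COUNT,TIME)
RETURN

Writing 0 to SDMCTL halts screen DMA for the duration of the computation, which is the performance trick the article describes; writing 34 afterwards restores the standard playfield display.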
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/ThinkCentre#S50] | [TOKENS: 1469] |
Contents ThinkCentre ThinkCentre is a brand of business-oriented desktop computers and all-in-one computers, the early models of which were designed, developed and marketed by International Business Machines (IBM), starting in 2003. In 2005, IBM sold its PC business, including the ThinkCentre brand, to Lenovo. ThinkCentre computers typically include mid-range to high-end processors, options for discrete graphics cards, and multi-monitor support. History The ThinkCentre line of desktop computers was introduced by IBM in 2003. The first three models in this line were the S50, the M50, and the A50p. All three desktops were equipped with Intel Pentium 4 processors. The chassis was made of steel and designed for easy component access without the use of tools. The hard disk was fixed in place by a 'caddy' without the use of screws. The caddy had rubber bumpers to reduce vibration and operational noise. Additional updates to the desktops included greater use of ThinkVantage technologies. All desktop models were made available with ImageUltra. The three desktop models also included an 'Access IBM' button, allowing access to onboard resources, diagnostic tools, automated software, and links to online updates and services. Select models featured IBM's Embedded Security Subsystem, with an integrated security chip and IBM Client Security Software. In 2005, after Lenovo completed its acquisition of IBM's personal computing business, leading to the IBM/Lenovo partnership, IBM/Lenovo announced the ThinkCentre E Series desktops, designed specifically for small businesses. The ThinkCentre E50 was made available in tower and small form factor, with a silver and black design. In 2005, Technology Business Research (TBR) observed an increase in the customer satisfaction rate for ThinkCentre desktops. According to TBR's "Corporate IT Buying Behavior and Customer Satisfaction Study" published in the second quarter of 2005, Lenovo was the only one of four surveyed companies that displayed a substantial increase in ratings. In May 2005, the ThinkCentre M52 and A52 desktops were announced by Lenovo. These desktops marked the first time the ThinkCentre line incorporated dual-core processors and 64-bit technology. At the time of release, Lenovo also announced plans to incorporate Intel Active Management Technology (iAMT) in future products. Product series The ThinkCentre desktops available from IBM/Lenovo are: Notable models The ThinkCentre X1 is a mid-range all-in-one desktop computer announced by Lenovo at the 2016 International CES. The X1 is powered by a 6th generation Intel Core i7 processor paired with 16 gigabytes of 2,333 megahertz DDR4 RAM and a variety of storage media such as hard drives, hybrid drives, and solid state drives. The display uses a 23.8-inch 1920-by-1080-pixel panel with an anti-glare coating. A 1080p webcam is mounted just above the screen. Five USB 3.0 ports, DisplayPort video output, and an Ethernet port come standard. A memory card reader is optional. One variant of the X1 is a display-only device. The ThinkCentre Tiny-In-One II is Lenovo's second-generation all-in-one desktop computer. Its modular design allows its display and internals to be upgraded as needed. The ThinkCentre Tiny-In-One II comes in versions with 22-inch and 24-inch anti-glare displays with thin bezels and optional multitouch input. Both versions use 1920x1080 display panels. Two USB 3.0 ports, two USB 2.0 ports, one mini USB 2.0 port, and a Kensington security slot are included. 
Options for Microsoft Windows and Google's ChromeOS are both available. The Chromebox Tiny is a small desktop computer with a Core i3-5005U processor, 4 gigabytes of memory, a 16 GB solid-state drive, and integrated graphics, and it runs Google's ChromeOS. It was designed for education and business. Its largest side measures about 7 inches square. It is 1.4 inches thick and weighs 2.2 pounds. Computers with this form factor are called "one-liter" machines in some countries that use the metric system. The Tiny can be mounted on the back of monitors or placed on walls with a VESA mount. The Chromebox Tiny has two USB 3.0 ports on its front and two more on its rear. Dual-band 802.11ac Wi-Fi and Bluetooth 4.0 are both supported. An external antenna is included to improve reception. A mouse and keyboard come standard. The ThinkCentre M83 Tiny is an ultra-small-form-factor desktop computer released in 2014. The M83 Tiny uses an Intel Core i5 processor. It comes standard with one DisplayPort jack, an Ethernet port, five USB 3.0 ports, and a VGA port. There is a customizable port that can be configured with another DisplayPort jack, a serial port, another USB port, or an HDMI port. Wi-Fi is 802.11ac. Wireless accessories are supported via Bluetooth 4.0. In 2004, an ultra-small version of the S50 was announced, the smallest desktop PC introduced until that time by IBM. The ultra-small ThinkCentre S50 desktop weighed approximately the same as IBM's first notebook (the IBM 5140 PC Convertible). The ultra-small desktop was roughly the size of a New York City phonebook, or a box of cereal. The ultra-small desktop also featured a tool-less steel chassis and IBM ThinkVantage Technologies. In August 2006, the ThinkCentre A60 desktop was announced. It was the first ThinkCentre with AMD processors. In September 2006, Lenovo announced that its ThinkPad, ThinkCentre, and ThinkVision products received high ratings from EPEAT. A total of 42 products were rated by EPEAT. The ThinkCentre desktops received an overall rating of EPEAT silver. This indicated that all criteria for environmentally safe computing had been met – including the minimum requirements and additional optional implementations. Some of the criteria met included reduced levels of cadmium, mercury, and lead, energy efficiency, and reduced greenhouse gas emissions. In September 2006, Lenovo announced several desktops in the ThinkCentre line, including the M55p, M55, M55e, A55 and A53. In January 2007, the ThinkCentre A55 small-form-factor desktop was announced by Lenovo. The A55 was approximately 64% smaller than Lenovo's traditional tower desktops and 25% smaller than Lenovo's traditional small desktops. In September 2007, Lenovo announced the ultra-small-form-factor A61e. Also in September 2007, two new M Series desktops were announced: the M57 and M57p. In March 2009, two small, low-cost desktops were announced by Lenovo: the ThinkCentre A58 and the ThinkCentre M58e. The A58 desktop was designed for small and medium businesses, while the M58e was designed for medium-sized and large enterprises. The desktops were made available in both tower and small form-factor versions. Timeline See also References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Rule_of_three_(writing)#Comedy] | [TOKENS: 1227] |
Contents Rule of three (writing) The rule of three is a writing principle which suggests that a trio of entities such as events or characters is more satisfying, effective, or humorous than other numbers, hence also more memorable, because it combines both brevity and rhythm with the smallest amount of information needed to create a pattern. Slogans, film titles, and a variety of other things have been structured in threes, a tradition that grew out of oral storytelling and continues in narrative fiction. Examples include the Three Little Pigs, the Three Billy Goats Gruff, Goldilocks and the Three Bears, and the Three Musketeers. Similarly, adjectives are often grouped in threes to emphasize an idea. Meaning The rule of three can refer to a collection of three words, phrases, sentences, lines, paragraphs/stanzas, chapters/sections of writing and even whole books. The three elements together are known as a triad. The technique is used not just in prose, but also in poetry, oral storytelling, films, and advertising. A tricolon is a more specific use of the rule of three where three words or phrases are equal in length and grammatical form. A hendiatris is a figure of speech where three successive words are used to express a single central idea. As a slogan or motto, this is known as a tripartite motto. Slogans and catchphrases Many advertising campaigns and public information slogans use the technique to create a catchy, memorable way of displaying information. In marketing theory, American advertising and sales pioneer E. St. Elmo Lewis laid out his three chief copywriting principles, which he felt were crucial for effective advertising: The mission of an advertisement is to attract a reader so that he will look at the advertisement and start to read it; then to interest him, so that he will continue to read it; then to convince him, so that when he has read it, he will believe it. If an advertisement contains these three qualities of success, it is a successful advertisement. Some examples include: Comedy In comedy, the rule of three is also called a comic triple and is one of the many comedic devices regularly used by humorists, writers, and comedians. The third element of the triple is often used to create an effect of surprise with the audience, and is frequently the punch line of the joke itself. For instance, jokes might feature three stereotyped individuals—such as an Englishman, an Irishman and a Scotsman; or a blonde, a brunette, and a redhead—where the surprise or punch line of the joke comes from the third character. The comedic rule of three is often paired with quick timing, ensuring that viewers have less time to catch on to the pattern before the punch line hits. As a whole, the comedic rule of threes relies on setting up a pattern of two items and then subverting viewer expectations by breaking that pattern with the third item. One example comes from The Dick Van Dyke Show – "Can I get you anything? Cup of coffee? Doughnut? Toupee?" Just like most comedic writing, the rule of threes in comedy relies on building tension to a comedic release. In the case of the rule of threes, tension is built with the first two items in the pattern and then released with the final item, which should be the funniest of the three. Most triples are short in length, often only two or three sentences, but the rule can also be implemented effectively at longer length as long as base formula is still followed. The effectiveness of a pattern of three items has also been noted in the visual arts. 
Cartoonist Art Spiegelman described the rule of three as being key to the work of Nancy creator Ernie Bushmiller, giving the example that "a drawing of three rocks in a background scene was Ernie's way of showing us there were some rocks in the background. It was always three. Why? Because two rocks wouldn't be 'some rocks.' Two rocks would be a pair of rocks. And four rocks were unacceptable because four rocks would indicate 'some rocks' but it would be one rock more than was necessary to convey the idea of 'some rocks.'" Storytelling and folklore In storytelling, authors often create triplets or structures in three parts. In the rule's simplest form, this is merely beginning, middle, and end, as expressed in Aristotle's Poetics. Vladimir Propp, in his Morphology of the Folk Tale, concluded that any of the elements in a folktale could be negated twice so that it would repeat thrice. This is common not only in the Russian tales he studied but throughout folk tales and fairy tales: most commonly, perhaps, in that the youngest son is usually the third, although fairy tales often display the rule of three in the most blatant form. A small sample of the latter includes: Literature Rhetoric and public speaking The use of a series of three elements is also a well-known feature of public oratory. Max Atkinson, in his book on oratory entitled Our Masters' Voices, gives examples of how public speakers use three-part phrases to generate what he calls 'claptraps', evoking audience applause. Martin Luther King Jr., the civil rights activist and preacher, was known for his uses of tripling and the rule of three throughout his many influential speeches. For example, the speech "Non-Violence and Racial Justice" contained a binary opposition made up of the rule of three: "insult, injustice and exploitation", followed a few lines later by "justice, good will, and brotherhood". Conversely, the segregationist Alabama governor George Wallace inveighed "segregation now, segregation tomorrow, segregation forever" during his 1963 inaugural address. The appeal of the three-fold pattern is also illustrated by the transformation of Winston Churchill's reference to "blood, toil, tears and sweat" (echoing Giuseppe Garibaldi and Theodore Roosevelt) in its popular recollection to "blood, sweat and tears". See also References |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Power_(social_and_political)] | [TOKENS: 4226] |
Contents Power (social and political) 1800s: Martineau · Tocqueville · Marx · Spencer · Le Bon · Ward · Pareto · Tönnies · Veblen · Simmel · Durkheim · Addams · Mead · Weber · Du Bois · Mannheim · Elias In political science, power is the ability to influence or direct the actions, beliefs, or conduct of actors. Power does not exclusively refer to the threat or use of force (coercion) by one actor against another, but may also be exerted through diffuse means (such as institutions). Power may also take structural forms, as it orders actors in relation to one another (such as distinguishing between a master and an enslaved person, a householder and their relatives, an employer and their employees, a parent and a child, a political representative and their voters, etc.), and discursive forms, as categories and language may lend legitimacy to some behaviors and groups over others. The term authority is often used for power that is perceived as legitimate or socially approved by the social structure. Scholars have distinguished between soft power and hard power. Types One can classify such power types along three different dimensions: People tend to vary in their use of power tactics, with different types of people opting for different tactics. For instance, interpersonally oriented people tend to use soft and rational tactics. Moreover, extroverts use a greater variety of power tactics than do introverts. People will also choose different tactics based on the group situation, and based on whom they wish to influence. People also tend to shift from soft to hard tactics when they face resistance. Because power operates both relationally and reciprocally, sociologists speak of the "balance of power" between parties to a relationship: all parties to all relationships have some power: the sociological examination of power concerns itself with discovering and describing the relative strengths: equal or unequal, stable or subject to periodic change. Sociologists usually analyse relationships in which the parties have relatively equal or nearly equal power in terms of constraint rather than of power.[citation needed] In this context, "power" has a connotation of unilateralism. If this were not so, then all relationships could be described in terms of "power", and its meaning would be lost. Given that power is not innate and can be granted to others, to acquire power one must possess or control a form of power currency.[need quotation to verify] In authoritarian regimes, political power is concentrated in the hands of a single leader or a small group of leaders who exercise almost complete control over the government and its institutions. Because some authoritarian leaders are not elected by a majority, their main threat is that posed by the masses. They often maintain their power through political control tactics like: Although several regimes follow these general forms of control, different authoritarian sub-regime types rely on different political control tactics. Power politics Power politics is a term which denotes an approach to political matters which aims to enhance the power of government actors. The term has much usage in the realm of international relations, and it is often used pejoratively. The German term for it, Machtpolitik, emphasizes conflict between nations as a way to assert national will and strengthen the state. This idea is related to Realpolitik but specifically acknowledges the use of force in establishing the German Empire. 
It often involves a romanticized view of military virtues and the belief that international conflicts serve a moral purpose. In the context of social and political power more broadly, historians argue that people in power tend to use more coercive tactics, increase social distance from those with less power, distrust those with less power, and undervalue their work and abilities. Effects Power changes those in the position of power and those who are targets of that power. Developed by D. Keltner and colleagues, approach/inhibition theory assumes that having power and using power alters psychological states of individuals. The theory is based on the notion that most organisms react to environmental events in two common ways. The reaction of approach is associated with action, self-promotion, seeking rewards, increased energy and movement. Inhibition, on the contrary, is associated with self-protection, avoiding threats or danger, vigilance, loss of motivation and an overall reduction in activity. Overall, approach/inhibition theory holds that power promotes approach tendencies, while a reduction in power promotes inhibition tendencies. Theories In a now-classic study (1959), social psychologists John R. P. French and Bertram Raven developed a schema of sources of power by which to analyse how power plays work (or fail to work) in a specific relationship. According to French and Raven, power must be distinguished from influence in the following way: power is that state of affairs that holds in a given relationship, A-B, such that a given influence attempt by A over B makes A's desired change in B more likely. Conceived this way, power is fundamentally relative; it depends on the specific understandings A and B each apply to their relationship and requires B's recognition of a quality in A that would motivate B to change in the way A intends. A must draw on the 'base' or combination of bases of power appropriate to the relationship to effect the desired outcome. Drawing on the wrong power base can have unintended effects, including a reduction in A's own power.[citation needed] French and Raven argue that there are five significant categories of such qualities, while not excluding other minor categories. Further bases have since been proposed, in particular by Gareth Morgan in his 1986 book, Images of Organization. Expert power is an individual's power deriving from the skills or expertise of the person and the organization's needs for those skills and expertise. Unlike the others, this type of power is usually highly specific and limited to the particular area in which the expert is trained and qualified. When they have knowledge and skills that enable them to understand a situation, suggest solutions, use solid judgment, and generally outperform others, then people tend to listen to them. When individuals demonstrate expertise, people tend to trust them and respect what they say. As subject-matter experts, their ideas will have more value, and others will look to them for leadership in that area. In terms of cancel culture, the mass ostracization used to reconcile unchecked injustice and abuse of power is an "upward power". Policies for policing the internet against these processes as a pathway for creating due process for handling conflicts, abuses, and harm that is done through established processes are known as "downward power". Coercive power is the application of negative influences. It includes the ability to defer or withhold other rewards. 
This is a type of power commonly seen in the fashion industry by coupling with legitimate power; it is referred to in the industry-specific literature as "glamorization of structural domination and exploitation". According to Laura K. Guerrero and Peter A. Andersen in Close Encounters: Communication in Relationships, power in relationships is multifaceted. It can be perceived, relational, resource-based, and dependent on interest and commitment levels. While power often stems from controlling valued, scarce resources or having less dependence in a relationship, it is also shaped by behavior, social skills, and how others interpret one’s actions. Power can be enabling when used with confidence and skill, but disabling when it leads to manipulation, communication breakdowns, or relational dissatisfaction. In the Marxist tradition, the Italian writer Antonio Gramsci elaborated on the role of ideology in creating a cultural hegemony, which becomes a means of bolstering the power of capitalism and of the nation-state. Drawing on Niccolò Machiavelli in The Prince and trying to understand why there had been no Communist revolution in Western Europe while it was claimed there had been one in Russia, Gramsci conceptualised this hegemony as a centaur, consisting of two halves. The back end, the beast, represented the more classic material image of power: power through coercion, through brute force, be it physical or economic. But the capitalist hegemony, he argued, depended even more strongly on the front end, the human face, which projected power through 'consent'. In Russia, this power was lacking, allowing for a revolution. However, in Western Europe, specifically in Italy, capitalism had succeeded in exercising consensual power, convincing the working classes that their interests were the same as those of capitalists. In this way, a revolution had been avoided.[citation needed] While Gramsci stresses the significance of ideology in power structures, Marxist-feminist writers such as Michele Barrett stress the role of ideologies in extolling the virtues of family life. The classic argument to illustrate this point of view is the use of women as a 'reserve army of labour'. In wartime, it is accepted that women perform masculine tasks, while after the war, the roles are easily reversed. Therefore, according to Barrett, the destruction of capitalist economic relations is necessary but not sufficient for the liberation of women. Eugen Tarnow considers what power hijackers have over air plane passengers and draws similarities with power in the military. He shows that power over an individual can be amplified by the presence of a group. If the group conforms to the leader's commands, the leader's power over an individual is greatly enhanced, while if the group does not conform, the leader's power over an individual is non-existent. For Michel Foucault, the real power will always rely on the ignorance of its agents. No single human, group, or actor runs the dispositif (machine or apparatus), but power is dispersed through the apparatus as efficiently and silently as possible, ensuring its agents do whatever is necessary. It is because of this action that power is unlikely to be detected and remains elusive to 'rational' investigation. 
Foucault discusses Recherches et considérations sur la population de la France (1778), a text attributed to the political economist Jean Baptiste Antoine Auget de Montyon but actually written by his secretary Jean-Baptiste Moheau (1745–1794), and draws on the biologist Jean-Baptiste Lamarck, who referred to milieus only in the plural and understood the milieu as nothing more than the water, air, and light that sustain a genus, in this case the human species. For Foucault, the population and its social and political interactions together form a milieu that is at once artificial and natural. This milieu (both artificial and natural) appears as a target of intervention for power, a conception that, according to Foucault, is radically different from earlier notions of sovereignty, territory, and disciplinary space, because it treats social and political relations as acting on a population that functions as a (biological) species. Foucault originated and developed the concept of "docile bodies" in his book Discipline and Punish. He writes, "A body is docile that may be subjected, used, transformed and improved." Stewart Clegg proposes another three-dimensional model with his "circuits of power" theory. This model likens the production and organization of power to an electric circuit board consisting of three distinct interacting circuits: episodic, dispositional, and facilitative. These circuits operate at three levels: two are macro and one is micro. The episodic circuit is at the micro level and is constituted of irregular exercise of power as agents address feelings, communication, conflict, and resistance in day-to-day interrelations. The outcomes of the episodic circuit are both positive and negative. The dispositional circuit is constituted of macro level rules of practice and socially constructed meanings that inform member relations and legitimate authority. The facilitative circuit is constituted of macro level technology, environmental contingencies, job design, and networks, which empower or disempower and thus punish or reward agency in the episodic circuit. All three independent circuits interact at "obligatory passage points", which are channels for empowerment or disempowerment. John Kenneth Galbraith (1908–2006) in The Anatomy of Power (1983) summarizes the types of power as "condign" (based on force), "compensatory" (through the use of various resources) or "conditioned" (the result of persuasion),[citation needed] and the sources of power as "personality" (individuals), "property" (power-wielders' material resources), and/or "organizational" (from sitting higher in an organisational power structure). Gene Sharp, an American professor of political science, believes that power ultimately depends on its bases. Thus, a political regime maintains power because people accept and obey its dictates, laws, and policies. Sharp cites the insight of Étienne de La Boétie. Sharp's key theme is that power is not monolithic; that is, it does not derive from some intrinsic quality of those who are in power. For Sharp, political power, the power of any state – regardless of its particular structural organization – ultimately derives from the subjects of the state. His fundamental belief is that any power structure relies upon the subjects' obedience to the orders of the ruler(s). If subjects do not obey, leaders have no power. 
His work is thought to have been influential in the overthrow of Slobodan Milošević, in the 2011 Arab Spring, and other nonviolent revolutions. Björn Kraus deals with the epistemological perspective on power regarding the question of the possibilities of interpersonal influence by developing a special form of constructivism (named relational constructivism). Instead of focusing on the valuation and distribution of power, he asks first and foremost what the term can describe at all. Coming from Max Weber's definition of power, he realizes that the term power has to be split into "instructive power" and "destructive power".: 105 : 126 More precisely, instructive power means the chance to determine the actions and thoughts of another person, whereas destructive power means the chance to diminish the opportunities of another person. How significant this distinction really is, becomes evident by looking at the possibilities of rejecting power attempts: Rejecting instructive power is possible; rejecting destructive power is not. By using this distinction, proportions of power can be analyzed in a more sophisticated way, helping to sufficiently reflect on matters of responsibility.: 139 f. This perspective permits people to get over an "either-or-position" (either there is power or there is not), which is common, especially in epistemological discourses about power theories, and to introduce the possibility of an "as well as-position".: 120 The idea of unmarked categories originated in feminism. As opposed to looking at social difference by focusing on what or whom is perceived to be different, theorists who use the idea of unmarked categories insist that one must also look at how whatever is "normal" comes to be perceived as unremarkable and what effects this has on social relations. Attending the unmarked category is thought to be a way to analyze linguistic and cultural practices to provide insight into how social differences, including power, are produced and articulated in everyday occurrences. Feminist linguist Deborah Cameron describes an "unmarked" identity as the default, which requires no explicit acknowledgment. Heterosexuality, for instance, is unmarked, assumed as the norm, unlike homosexuality, which is "marked" and requires clearer signaling as it differs from the majority. Similarly, masculinity is often unmarked, while femininity is marked, leading to studies that examine distinctive features in women's speech, whereas men's speech is treated as the neutral standard. Although the unmarked category is typically not explicitly noticed and often goes overlooked, it is still necessarily visible. The term 'counter-power' (sometimes written 'counterpower') is used in a range of situations to describe the countervailing force that can be utilised by the oppressed to counterbalance or erode the power of elites. A general definition has been provided by the anthropologist David Graeber as 'a collection of social institutions set in opposition to the state and capital: from self-governing communities to radical labor unions to popular militias'. Graeber also notes that counter-power can also be referred to as 'anti-power' and 'when institutions [of counter-power] maintain themselves in the face of the state, this is usually referred to as a 'dual power' situation'. Tim Gee, in his 2011 book Counterpower: Making Change Happen, put forward the theory that those disempowered by governments' and elite groups' power can use counterpower to counter this. 
In Gee's model, counterpower is split into three categories: idea counterpower, economic counterpower, and physical counterpower. Although the term has come to prominence through its use by participants in the global justice/anti-globalization movement of the 1990s onwards, the word has been used for at least 60 years; for instance, Martin Buber's 1949 book 'Paths in Utopia' includes the line 'Power abdicates only under the stress of counter-power'.: 13 Reactions A number of studies demonstrate that harsh power tactics (e.g. punishment (both personal and impersonal), rule-based sanctions, and non-personal rewards) are less effective than soft tactics (expert power, referent power, and personal rewards). It is probably because harsh tactics generate hostility, depression, fear, and anger, while soft tactics are often reciprocated with cooperation. Coercive and reward power can also lead group members to lose interest in their work, while instilling a feeling of autonomy in one's subordinates can sustain their interest in work and maintain high productivity even in the absence of monitoring. Coercive influence creates conflict that can disrupt entire group functioning. When disobedient group members are severely reprimanded, the rest of the group may become more disruptive and uninterested in their work, leading to negative and inappropriate activities spreading from one troubled member to the rest of the group. This effect is called Disruptive contagion or ripple effect and it is strongly manifested when reprimanded member has a high status within a group, and authority's requests are vague and ambiguous. Coercive influence can be tolerated when the group is successful, the leader is trusted, and the use of coercive tactics is justified by group norms. Furthermore, coercive methods are more effective when applied frequently and consistently to punish prohibited actions. However, in some cases, group members chose to resist the authority's influence. When low-power group members have a feeling of shared identity, they are more likely to form a Revolutionary Coalition, a subgroup formed within a larger group that seeks to disrupt and oppose the group's authority structure. Group members are more likely to form a revolutionary coalition and resist an authority when authority lacks referent power, uses coercive methods, and asks group members to carry out unpleasant assignments. It is because these conditions create reactance, individuals strive to reassert their sense of freedom by affirming their agency for their own choices and consequences.[citation needed] Herbert Kelman identified three basic, step-like reactions that people display in response to coercive influence: compliance, identification, and internalization. This theory explains how groups convert hesitant recruits into zealous followers over time. At the stage of compliance, group members comply with authority's demands, but personally do not agree with them. If authority does not monitor the members, they will probably not obey. Identification occurs when the target of the influence admires and therefore imitates the authority, mimics authority's actions, values, characteristics, and takes on behaviours of the person with power. If prolonged and continuous, identification can lead to the final stage – internalization. When internalization occurs, individual adopts the induced behaviour because it is congruent with his/her value system. 
At this stage, group members no longer carry out authority orders but perform actions that are congruent with their personal beliefs and opinions. Extreme obedience often requires internalization. Power literacy Power literacy refers to how one perceives power, how it is formed and accumulates, and the structures that support it and who is in control of it. Education can be helpful for heightening power literacy. In a 2014 TED talk Eric Liu notes that "we don't like to talk about power" as "we find it scary" and "somehow evil" with it having a "negative moral valence" and states that the pervasiveness of power illiteracy causes a concentration of knowledge, understanding and clout. Joe L. Kincheloe describes a "cyber-literacy of power" that is concerned with the forces that shape knowledge production and the construction and transmission of meaning, being more about engaging knowledge than "mastering" information, and a "cyber-power literacy" that is focused on transformative knowledge production and new modes of accountability. See also References Further reading External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Internet_access] | [TOKENS: 11080] |
Contents Internet access Internet access is a facility or service that provides connectivity for a computer, a computer network, or other network device to the Internet, and for individuals or organizations to access or use applications such as email and the World Wide Web. Internet access is offered for sale by an international hierarchy of Internet service providers (ISPs) using various networking technologies. At the retail level, many organizations, including municipal entities, also provide cost-free access to the general public. Types of connections range from fixed-line cable (such as DSL and fiber optic) to mobile (via cellular) and satellite. The availability of Internet access to the general public began with the commercialization of the early Internet in the early 1990s, and has grown with the availability of useful applications, such as the World Wide Web. In 1995, only 0.04 percent of the world's population had access, with well over half of those living in the United States and consumer use was through dial-up. By the first decade of the 21st century, many consumers in developed nations used faster broadband technology. By 2014, 41 percent of the world's population had access, broadband was almost ubiquitous worldwide, and global average connection speeds exceeded one megabit per second. History The Internet developed from the ARPANET, which was funded by the US government to support projects within the government, at universities and research laboratories in the US, but grew over time to include most of the world's large universities and the research arms of many technology companies. Use by a wider audience only came in 1995 when restrictions on the use of the Internet to carry commercial traffic were lifted. In the early to mid-1980s, most Internet access was from personal computers and workstations directly connected to local area networks (LANs) or from dial-up connections using modems and analog telephone lines. LANs typically operated at 10 Mbit/s while modem data-rates grew from 1200 bit/s in the early 1980s to 56 kbit/s by the late 1990s. Initially, dial-up connections were made from terminals or computers running terminal-emulation software to terminal servers on LANs. These dial-up connections did not support end-to-end use of the Internet protocols and only provided terminal-to-host connections. The introduction of network access servers supporting the Serial Line Internet Protocol (SLIP) and later the point-to-point protocol (PPP) extended the Internet protocols and made the full range of Internet services available to dial-up users; although slower, due to the lower data rates available using dial-up. An important factor in the rapid rise of Internet access speed has been advances in MOSFET (MOS transistor) technology. The MOSFET invented at Bell Labs between 1955 and 1960 following Frosch and Derick discoveries, is the building block of the Internet telecommunications networks. The laser, originally demonstrated by Charles H. Townes and Arthur Leonard Schawlow in 1960, was adopted for MOS light-wave systems around 1980, which led to exponential growth of Internet bandwidth. Continuous MOSFET scaling has since led to online bandwidth doubling every 18 months (Edholm's law, which is related to Moore's law), with the bandwidths of telecommunications networks rising from bits per second to terabits per second. 
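As a rough illustration of what doubling every 18 months implies, the sketch below counts the doubling periods separating two bandwidth figures; the specific start and end values are illustrative assumptions, not figures from the article.

```python
import math

def doublings_needed(start_bps: float, end_bps: float) -> float:
    """Number of doublings required to grow from start_bps to end_bps."""
    return math.log2(end_bps / start_bps)

def years_at_edholm_rate(start_bps: float, end_bps: float, doubling_years: float = 1.5) -> float:
    """Elapsed time if bandwidth doubles once every `doubling_years` years."""
    return doublings_needed(start_bps, end_bps) * doubling_years

# Illustrative endpoints only: growth from 1 kbit/s to 1 Tbit/s.
start = 1e3   # 1 kbit/s
end = 1e12    # 1 Tbit/s
print(f"{doublings_needed(start, end):.1f} doublings")                            # ~29.9
print(f"~{years_at_edholm_rate(start, end):.0f} years at one doubling per 18 months")  # ~45
```

Under these assumed endpoints, sustained doubling every 18 months covers nine orders of magnitude in roughly forty-five years, which is the scale of growth the trend describes.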
Broadband Internet access, often shortened to just broadband, is simply defined as "Internet access that is always on, and faster than the traditional dial-up access" and so covers a wide range of technologies. The core of these broadband Internet technologies are complementary MOS (CMOS) digital circuits, the speed capabilities of which were extended with innovative design techniques. Broadband connections are typically made using a computer's built in Ethernet networking capabilities, or by using a NIC expansion card. Most broadband services provide a continuous "always on" connection; there is no dial-in process required, and it does not interfere with voice use of phone lines. Broadband provides improved access to Internet services such as: In the 1990s, the National Information Infrastructure initiative in the U.S. made broadband Internet access a public policy issue. In 2000, most Internet access to homes was provided using dial-up, while many businesses and schools were using broadband connections. In 2000 there were just under 150 million dial-up subscriptions in the 34 OECD countries and fewer than 20 million broadband subscriptions. By 2004, broadband had grown and dial-up had declined so that the number of subscriptions were roughly equal at 130 million each. In 2010, in the OECD countries, over 90% of the Internet access subscriptions used broadband, broadband had grown to more than 300 million subscriptions, and dial-up subscriptions had declined to fewer than 30 million. The broadband technologies in widest use are of digital subscriber line (DSL), ADSL, and cable Internet access. Newer technologies include VDSL and optical fiber extended closer to the subscriber in both telephone and cable plants. Fiber-optic communication, while only recently being used in premises and to the curb schemes, has played a crucial role in enabling broadband Internet access by making transmission of information at very high data rates over longer distances much more cost-effective than copper wire technology. In areas not served by ADSL or cable, some community organizations and local governments are installing Wi-Fi networks. Wireless, satellite, and microwave Internet are often used in rural, undeveloped, or other hard to serve areas where wired Internet is not readily available. Newer technologies being deployed for fixed (stationary) and mobile broadband access include WiMAX, LTE, and fixed wireless. Starting in roughly 2006, mobile broadband access is increasingly available at the consumer level using "3G" and "4G" technologies such as HSPA, EV-DO, HSPA+, and LTE. Availability In addition to access from home, school, and the workplace Internet access may be available from public places such as libraries and Internet cafés, where computers with Internet connections are available. Some libraries provide stations for physically connecting users' laptops to LANs. Wireless Internet access points are available in public places such as airport halls, in some cases just for brief use while standing. Some access points may also provide coin-operated computers. Various terms are used, such as "public Internet kiosk", "public access terminal", and "Web payphone". Many hotels also have public terminals, usually fee based. Coffee shops, shopping malls, and other venues increasingly offer wireless access to computer networks, referred to as hotspots, for users who bring their own wireless-enabled devices such as a laptop or PDA. These services may be free to all, free to customers only, or fee-based. 
A Wi-Fi hotspot need not be limited to a confined location since multiple ones combined can cover a whole campus or park, or even an entire city can be enabled. Additionally, mobile broadband access allows smartphones and other digital devices to connect to the Internet from any location from which a mobile phone call can be made, subject to the capabilities of that mobile network. The bit rates for dial-up modems range from as little as 110 bit/s in the late 1950s, to a maximum of from 33 to 64 kbit/s (V.90 and V.92) in the late 1990s. Dial-up connections generally require the dedicated use of a telephone line. Data compression can boost the effective bit rate for a dial-up modem connection from 220 (V.42bis) to 320 (V.44) kbit/s. However, the effectiveness of data compression is quite variable, depending on the type of data being sent, the condition of the telephone line, and a number of other factors. In reality, the overall data rate rarely exceeds 150 kbit/s. Broadband technologies supply considerably higher bit rates than dial-up, generally without disrupting regular telephone use. Various minimum data rates and maximum latencies have been used in definitions of broadband, ranging from 64 kbit/s up to 4.0 Mbit/s. In 1988 the CCITT standards body defined "broadband service" as requiring transmission channels capable of supporting bit rates greater than the primary rate which ranged from about 1.5 to 2 Mbit/s. A 2006 Organisation for Economic Co-operation and Development (OECD) report defined broadband as having download data transfer rates equal to or faster than 256 kbit/s. And in 2015 the U.S. Federal Communications Commission (FCC) defined "Basic Broadband" as data transmission speeds of at least 25 Mbit/s downstream (from the Internet to the user's computer) and 3 Mbit/s upstream (from the user's computer to the Internet). The trend is to raise the threshold of the broadband definition as higher data rate services become available. The higher data rate dial-up modems and many broadband services are "asymmetric"—supporting much higher data rates for download (toward the user) than for upload (toward the Internet). Data rates, including those given in this article, are usually defined and advertised in terms of the maximum or peak download rate. In practice, these maximum data rates are not always reliably available to the customer. Actual end-to-end data rates can be lower due to a number of factors. In late June 2016, internet connection speeds averaged about 6 Mbit/s globally. Physical link quality can vary with distance and for wireless access with terrain, weather, building construction, antenna placement, and interference from other radio sources. Network bottlenecks may exist at points anywhere on the path from the end-user to the remote server or service being used and not just on the first or last link providing Internet access to the end-user. Users may share access over a common network infrastructure. Since most users do not use their full connection capacity all of the time, this aggregation strategy (known as contended service) usually works well, and users can burst to their full data rate at least for brief periods. However, peer-to-peer (P2P) file sharing and high-quality streaming video can require high data-rates for extended periods, which violates these assumptions and can cause a service to become oversubscribed, resulting in congestion and poor performance. 
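The effect of contended service described above can be sketched numerically. The subscriber counts, plan rates, and utilisation figures below are hypothetical assumptions chosen only to illustrate the arithmetic of oversubscription.

```python
def contention_ratio(subscribers: int, peak_rate_mbps: float, link_capacity_mbps: float) -> float:
    """Ratio of total subscribed peak bandwidth to the shared link's capacity."""
    return subscribers * peak_rate_mbps / link_capacity_mbps

def average_load_fraction(subscribers: int, peak_rate_mbps: float,
                          utilisation: float, link_capacity_mbps: float) -> float:
    """Fraction of the shared link consumed if each user averages `utilisation`
    of their peak rate (bursty browsing versus sustained streaming)."""
    return subscribers * peak_rate_mbps * utilisation / link_capacity_mbps

# Hypothetical example: 200 subscribers sold 50 Mbit/s plans behind a 1 Gbit/s uplink.
subs, peak, capacity = 200, 50.0, 1000.0
print(f"contention ratio: {contention_ratio(subs, peak, capacity):.0f}:1")                    # 10:1
print(f"load at 5% average use:  {average_load_fraction(subs, peak, 0.05, capacity):.0%}")    # 50%
print(f"load at 20% average use: {average_load_fraction(subs, peak, 0.20, capacity):.0%}")    # 200%
```

With these assumed numbers the link copes while usage stays bursty, but sustained high-rate use such as streaming pushes the offered load past capacity, which is the oversubscription and congestion scenario described above.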
The TCP protocol includes flow-control mechanisms that automatically throttle back on the bandwidth being used during periods of network congestion. This is fair in the sense that all users who experience congestion receive less bandwidth, but it can be frustrating for customers and a major problem for ISPs. In some cases, the amount of bandwidth actually available may fall below the threshold required to support a particular service such as video conferencing or streaming live video–effectively making the service unavailable. When traffic is particularly heavy, an ISP can deliberately throttle back the bandwidth available to classes of users or for particular services. This is known as traffic shaping and careful use can ensure a better quality of service for time critical services even on extremely busy networks. However, overuse can lead to concerns about fairness and network neutrality or even charges of censorship, when some types of traffic are severely or completely blocked. An Internet blackout or outage can be caused by local signaling interruptions. Disruptions of submarine communications cables may cause blackouts or slowdowns to large areas, such as in the 2008 submarine cable disruption. Less-developed countries are more vulnerable due to a small number of high-capacity links. Land cables are also vulnerable, as in 2011 when a woman digging for scrap metal severed most connectivity for the nation of Armenia. Internet blackouts affecting almost entire countries can be achieved by governments as a form of Internet censorship, as in the blockage of the Internet in Egypt, whereby approximately 93% of networks were without access in 2011 in an attempt to stop mobilization for anti-government protests. On April 25, 1997, due to a combination of human error and a software bug, an incorrect routing table at MAI Network Service (a Virginia Internet service provider) propagated across backbone routers and caused major disruption to Internet traffic for a few hours. Technologies When the Internet is accessed using a modem, digital data is converted to analog for transmission over analog networks such as the telephone and cable networks. A computer or other device accessing the Internet would either be connected directly to a modem that communicates with an Internet service provider (ISP) or the modem's Internet connection would be shared via a LAN which provides access in a limited area such as a home, school, computer laboratory, or office building. Although a connection to a LAN may provide very high data-rates within the LAN, actual Internet access speed is limited by the upstream link to the ISP. LANs may be wired or wireless. Ethernet over twisted pair cabling and Wi-Fi are the two most common technologies used to build LANs today, but ARCNET, Token Ring, LocalTalk, FDDI, and other technologies were used in the past. Ethernet is the name of the IEEE 802.3 standard for physical LAN communication and Wi-Fi is a trade name for a wireless local area network (WLAN) that uses one of the IEEE 802.11 standards. Ethernet cables are interconnected via switches & routers. Wi-Fi networks are built using one or more wireless antenna called access points. Many "modems" (cable modems, DSL gateways or Optical Network Terminals (ONTs)) provide the additional functionality to host a LAN so most Internet access today is through a LAN such as that created by a WiFi router connected to a modem or a combo modem router,[citation needed] often a very small LAN with just one or two devices attached. 
And while LANs are an important form of Internet access, this raises the question of how and at what data rate the LAN itself is connected to the rest of the global Internet. The technologies described below are used to make these connections, or in other words, how customers' modems (Customer-premises equipment) are most often connected to internet service providers (ISPs). Dial-up Internet access uses a modem and a phone call placed over the public switched telephone network (PSTN) to connect to a pool of modems operated by an ISP. The modem converts a computer's digital signal into an analog signal that travels over a phone line's local loop until it reaches a telephone company's switching facilities or central office (CO) where it is switched to another phone line that connects to another modem at the remote end of the connection. Operating on a single channel, a dial-up connection monopolizes the phone line and is one of the slowest methods of accessing the Internet. Dial-up is often the only form of Internet access available in rural areas as it requires no new infrastructure beyond the already existing telephone network, to connect to the Internet. Typically, dial-up connections do not exceed a speed of 56 kbit/s, as they are primarily made using modems that operate at a maximum data rate of 56 kbit/s downstream (towards the end user) and 34 or 48 kbit/s upstream (toward the global Internet). Multilink dial-up provides increased bandwidth by channel bonding multiple dial-up connections and accessing them as a single data channel. It requires two or more modems, phone lines, and dial-up accounts, as well as an ISP that supports multilinking – and of course any line and data charges are also doubled. This inverse multiplexing option was briefly popular with some high-end users before ISDN, DSL and other technologies became available. Diamond and other vendors created special modems to support multilinking. The term broadband includes a broad range of technologies, all of which provide higher data rate access to the Internet. The following technologies use wires or cables in contrast to wireless broadband described later. Integrated Services Digital Network (ISDN) is a switched telephone service capable of transporting voice and digital data, and is one of the oldest Internet access methods. ISDN has been used for voice, video conferencing, and broadband data applications. ISDN was very popular in Europe, but less common in North America. Its use peaked in the late 1990s before the availability of DSL and cable modem technologies. Basic rate ISDN, known as ISDN-BRI, has two 64 kbit/s "bearer" or "B" channels. These channels can be used separately for voice or data calls or bonded together to provide a 128 kbit/s service. Multiple ISDN-BRI lines can be bonded together to provide data rates above 128 kbit/s. Primary rate ISDN, known as ISDN-PRI, has 23 bearer channels (64 kbit/s each) for a combined data rate of 1.5 Mbit/s (US standard). An ISDN E1 (European standard) line has 30 bearer channels and a combined data rate of 1.9 Mbit/s. ISDN has been replaced by DSL technology, and it required special telephone switches at the service provider. Leased lines are dedicated lines used primarily by ISPs, business, and other large enterprises to connect LANs and campus networks to the Internet using the existing infrastructure of the public telephone network or other providers. 
Delivered using wire, optical fiber, and radio, leased lines are used to provide Internet access directly as well as the building blocks from which several other forms of Internet access are created. T-carrier technology dates to 1957 and provides data rates that range from 56 and 64 kbit/s (DS0) to 1.5 Mbit/s (DS1 or T1), to 45 Mbit/s (DS3 or T3). A T1 line carries 24 voice or data channels (24 DS0s), so customers may use some channels for data and others for voice traffic or use all 24 channels for clear channel data. A DS3 (T3) line carries 28 DS1 (T1) channels. Fractional T1 lines are also available in multiples of a DS0 to provide data rates between 56 and 1500 kbit/s. T-carrier lines require special termination equipment such as Data service units that may be separate from or integrated into a router or switch and which may be purchased or leased from an ISP. In Japan the equivalent standard is J1/J3. In Europe, a slightly different standard, E-carrier, provides 32 user channels (64 kbit/s) on an E1 (2.0 Mbit/s) and 512 user channels or 16 E1s on an E3 (34.4 Mbit/s). Synchronous Optical Networking (SONET, in the U.S. and Canada) and Synchronous Digital Hierarchy (SDH, in the rest of the world) are the standard multiplexing protocols used to carry high-data-rate digital bit-streams over optical fiber using lasers or highly coherent light from light-emitting diodes (LEDs). At lower transmission rates data can also be transferred via an electrical interface. The basic unit of framing is an OC-3c (optical) or STS-3c (electrical) which carries 155.520 Mbit/s. Thus an OC-3c will carry three OC-1 (51.84 Mbit/s) payloads each of which has enough capacity to include a full DS3. Higher data rates are delivered in OC-3c multiples of four providing OC-12c (622.080 Mbit/s), OC-48c (2.488 Gbit/s), OC-192c (9.953 Gbit/s), and OC-768c (39.813 Gbit/s). The "c" at the end of the OC labels stands for "concatenated" and indicates a single data stream rather than several multiplexed data streams. Optical transport network (OTN) may be used instead of SONET for higher data transmission speeds of up to 400 Gbit/s per OTN channel. The 1, 10, 40, and 100 Gigabit Ethernet IEEE standards (802.3) allow digital data to be delivered over copper wiring at distances to 100 m and over optical fiber at distances to 40 km. Cable Internet provides access using a cable modem on hybrid fiber coaxial (HFC) wiring originally developed to carry television signals. Either fiber-optic or coaxial copper cable may connect a node to a customer's location at a connection known as a cable drop. Using a cable modem termination system, all nodes for cable subscribers in a neighborhood connect to a cable company's central office, known as the "head end." The cable company then connects to the Internet using a variety of means – usually fiber optic cable or digital satellite and microwave transmissions. Like DSL, broadband cable provides a continuous connection with an ISP. Downstream, the direction toward the user, bit rates can be as much as 1000 Mbit/s in some countries, with the use of DOCSIS 3.1. Upstream traffic, originating at the user, ranges from 384 kbit/s to more than 50 Mbit/s. DOCSIS 4.0 promises up to 10 Gbit/s downstream and 6 Gbit/s upstream, however this technology is yet to have been implemented in real-world usage. 
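To put the nominal rates quoted in this section side by side, the short sketch below computes how long a transfer of a given size would take at each technology's peak rate. Real-world throughput is lower, as noted earlier, and the 1 GB file size is an arbitrary illustration.

```python
# Peak rates as quoted in this section (bit/s); actual throughput is lower in practice.
PEAK_RATES = {
    "dial-up (V.90)":     56e3,
    "ISDN BRI (2 x B)":   128e3,
    "T1 / DS1":           1.5e6,
    "SONET OC-3c":        155.52e6,
    "cable (DOCSIS 3.1)": 1e9,
}

def transfer_time_seconds(size_bytes: float, rate_bps: float) -> float:
    """Idealised transfer time: payload bits divided by the link's peak bit rate."""
    return size_bytes * 8 / rate_bps

size = 1e9  # a hypothetical 1 GB download
for name, rate in PEAK_RATES.items():
    seconds = transfer_time_seconds(size, rate)
    if seconds >= 3600:
        print(f"{name:20s} {seconds / 3600:6.1f} hours")
    else:
        print(f"{name:20s} {seconds:6.1f} seconds")
```

Under these idealised assumptions the same download ranges from roughly forty hours over dial-up to a few seconds over a DOCSIS 3.1 cable link, which is the practical difference the broadband definitions above are trying to capture.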
Broadband cable access tends to service fewer business customers because existing television cable networks tend to service residential buildings; commercial buildings do not always include wiring for coaxial cable networks. In addition, because broadband cable subscribers share the same local line, communications may be intercepted by neighboring subscribers. Cable networks regularly provide encryption schemes for data traveling to and from customers, but these schemes may be thwarted. Digital subscriber line (DSL) service provides a connection to the Internet through the telephone network. Unlike dial-up, DSL can operate using a single phone line without preventing normal use of the telephone line for voice phone calls. DSL uses the high frequencies, while the low (audible) frequencies of the line are left free for regular telephone communication. These frequency bands are subsequently separated by filters installed at the customer's premises. DSL originally stood for "digital subscriber loop". In telecommunications marketing, the term digital subscriber line is widely understood to mean asymmetric digital subscriber line (ADSL), the most commonly installed variety of DSL. The data throughput of consumer DSL services typically ranges from 256 kbit/s to 20 Mbit/s in the direction to the customer (downstream), depending on DSL technology, line conditions, and service-level implementation. In ADSL, the data throughput in the upstream direction, (i.e., in the direction to the service provider) is lower than that in the downstream direction (i.e. to the customer), hence the designation of asymmetric. With a symmetric digital subscriber line (SDSL), the downstream and upstream data rates are equal. Very-high-bit-rate digital subscriber line (VDSL or VHDSL, ITU G.993.1) is a digital subscriber line (DSL) standard approved in 2001 that provides data rates up to 52 Mbit/s downstream and 16 Mbit/s upstream over copper wires and up to 85 Mbit/s down- and upstream on coaxial cable. VDSL is capable of supporting applications such as high-definition television, as well as telephone services (voice over IP) and general Internet access, over a single physical connection. VDSL2 (ITU-T G.993.2) is a second-generation version and an enhancement of VDSL. Approved in February 2006, it is able to provide data rates exceeding 100 Mbit/s simultaneously in both the upstream and downstream directions. However, the maximum data rate is achieved at a range of about 300 meters and performance degrades as distance and loop attenuation increases. DSL Rings (DSLR) or Bonded DSL Rings is a ring topology that uses DSL technology over existing copper telephone wires to provide data rates of up to 400 Mbit/s. Fiber-to-the-home (FTTH) is one member of the Fiber-to-the-x (FTTx) family that includes Fiber-to-the-building or basement (FTTB), Fiber-to-the-premises (FTTP), Fiber-to-the-desk (FTTD), Fiber-to-the-curb (FTTC), and Fiber-to-the-node (FTTN). These methods all bring data closer to the end user on optical fibers. The differences between the methods have mostly to do with just how close to the end user the delivery on fiber comes. All of these delivery methods are similar in function and architecture to hybrid fiber-coaxial (HFC) systems used to provide cable Internet access. Fiber internet connections to customers are either AON (Active optical network) or more commonly PON (Passive optical network). Examples of fiber optic internet access standards are G.984 (GPON, G-PON) and 10G-PON (XG-PON). 
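Because a PON shares one feeder fiber among many subscribers through a passive splitter, the per-subscriber share of the line rate depends on the split ratio. The sketch below uses commonly cited GPON line rates and typical split ratios as assumptions; it illustrates the arithmetic rather than describing any particular deployment.

```python
# Commonly cited GPON (G.984) line rates; 10G-PON (XG-PON) raises the downstream rate further.
GPON_DOWNSTREAM_MBPS = 2488.0
GPON_UPSTREAM_MBPS = 1244.0

def even_share_mbps(line_rate_mbps: float, split_ratio: int) -> float:
    """Bandwidth per subscriber if the shared line rate were divided evenly.
    Statistical multiplexing normally lets an individual user burst well above this."""
    return line_rate_mbps / split_ratio

for split in (32, 64):  # typical passive splitter configurations (assumed)
    down = even_share_mbps(GPON_DOWNSTREAM_MBPS, split)
    up = even_share_mbps(GPON_UPSTREAM_MBPS, split)
    print(f"1:{split} split  ~{down:5.1f} Mbit/s down, ~{up:5.1f} Mbit/s up per subscriber")
```

Even at an assumed 1:64 split, the even share of a GPON feeder comfortably exceeds typical ADSL rates, which is part of why fiber access scales so much better than copper.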
ISPs may instead use Metro Ethernet as a replacement for T1 and Frame Relay lines for corporate and institutional customers, or offer carrier-grade Ethernet. Dedicated internet access (DIA) in which the bandwidth is not shared among customers, can be offered over PON fiber optic networks. The use of optical fiber offers much higher data rates over relatively longer distances. Most high-capacity Internet and cable television backbones already use fiber optic technology, with data switched to other technologies (DSL, cable, LTE) for final delivery to customers. Fiber optic is immune to electromagnetic interference. In 2010, Australia began rolling out its National Broadband Network across the country using fiber-optic cables to 93 percent of Australian homes, schools, and businesses. The project was abandoned by the subsequent LNP government, in favor of a hybrid FTTN design, which turned out to be more expensive and introduced delays. Similar efforts are underway in Italy, Canada, India, and many other countries (see Fiber to the premises by country). Power-line Internet, also known as Broadband over power lines (BPL), carries Internet data on a conductor that is also used for electric power transmission. Because of the extensive power line infrastructure already in place, this technology can provide people in rural and low population areas access to the Internet with little cost in terms of new transmission equipment, cables, or wires. Data rates are asymmetric and generally range from 256 kbit/s to 2.7 Mbit/s. Because these systems use parts of the radio spectrum allocated to other over-the-air communication services, interference between the services is a limiting factor in the introduction of power-line Internet systems. The IEEE P1901 standard specifies that all power-line protocols must detect existing usage and avoid interfering with it. Power-line Internet has developed faster in Europe than in the U.S. due to a historical difference in power system design philosophies. Data signals cannot pass through the step-down transformers used and so a repeater must be installed on each transformer. In the U.S. a transformer serves a small cluster of from one to a few houses. In Europe, it is more common for a somewhat larger transformer to service larger clusters of from 10 to 100 houses. Thus a typical U.S. city requires an order of magnitude more repeaters than a comparable European city. Asynchronous Transfer Mode (ATM) and Frame Relay are wide-area networking standards that can be used to provide Internet access directly or as building blocks of other access technologies. For example, many DSL implementations use an ATM layer over the low-level bitstream layer to enable a number of different technologies over the same link. Customer LANs are typically connected to an ATM switch or a Frame Relay node using leased lines at a wide range of data rates. While still widely used, with the advent of Ethernet over optical fiber, MPLS, VPNs and broadband services such as cable modem and DSL, ATM and Frame Relay no longer play the prominent role they once did. Wireless broadband is used to provide both fixed and mobile Internet access with the following technologies. Satellite Internet access provides fixed, portable, and mobile Internet access. Data rates range from 2 kbit/s to 1 Gbit/s downstream and from 2 kbit/s to 10 Mbit/s upstream. 
In the northern hemisphere, satellite antenna dishes require a clear line of sight to the southern sky, due to the equatorial position of all geostationary satellites. In the southern hemisphere, this situation is reversed, and dishes are pointed north. Service can be adversely affected by moisture, rain, and snow (known as rain fade). The system requires a carefully aimed directional antenna. Satellites in geostationary Earth orbit (GEO) operate in a fixed position 35,786 km (22,236 mi) above the Earth's equator. At the speed of light (about 300,000 km/s or 186,000 miles per second), it takes a quarter of a second for a radio signal to travel from the Earth to the satellite and back. When other switching and routing delays are added and the delays are doubled to allow for a full round-trip transmission, the total delay can be 0.75 to 1.25 seconds. This latency is large when compared to other forms of Internet access with typical latencies that range from 0.015 to 0.2 seconds. Long latencies negatively affect some applications that require real-time response, particularly online games, voice over IP, and remote control devices. TCP tuning and TCP acceleration techniques can mitigate some of these problems. GEO satellites do not cover the Earth's polar regions. HughesNet, Exede, AT&T and Dish Network have GEO systems. Satellite internet constellations in low Earth orbit (LEO, below 2,000 km or 1,243 miles) and medium Earth orbit (MEO, between 2,000 and 35,786 km or 1,243 and 22,236 miles) operate at lower altitudes, and their satellites are not fixed in their position above the Earth. Because they operate at a lower altitude, more satellites and launch vehicles are needed for worldwide coverage. This makes the initial required investment very large which initially caused OneWeb and Iridium to declare bankruptcy. However, their lower altitudes allow lower latencies and higher speeds which make real-time interactive Internet applications more feasible. LEO systems include Globalstar, Starlink, OneWeb and Iridium. The O3b constellation is a medium Earth-orbit system with a latency of 125 ms. COMMStellation™ is a LEO system, scheduled for launch in 2015,[needs update] that is expected to have a latency of just 7 ms. Mobile broadband is the marketing term for wireless Internet access delivered through mobile phone towers (cellular networks) to computers, mobile phones (called "cell phones" in North America and South Africa, and "hand phones" in Asia), and other digital devices using portable modems. Some mobile services allow more than one device to be connected to the Internet using a single cellular connection using a process called tethering. The modem may be built into laptop computers, tablets, mobile phones, and other devices, added to some devices using PC cards, USB modems, and USB sticks or dongles, or separate wireless modems can be used. New mobile phone technology and infrastructure is introduced periodically and generally involves a change in the fundamental nature of the service, non-backwards-compatible transmission technology, higher peak data rates, new frequency bands, wider channel frequency bandwidth in Hertz becomes available. These transitions are referred to as generations. The first mobile data services became available during the second generation (2G). The download (to the user) and upload (to the Internet) data rates given above are peak or maximum rates and end users will typically experience lower data rates. 
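The orbital latency figures quoted above follow directly from altitude and the speed of light, and the snippet below reproduces that arithmetic. The LEO altitude used for comparison is an assumed example value, not one taken from the article.

```python
C_KM_PER_S = 299_792.458    # speed of light in vacuum
GEO_ALTITUDE_KM = 35_786    # geostationary altitude above the equator
LEO_ALTITUDE_KM = 1_000     # assumed example LEO altitude (below the 2,000 km boundary)

def propagation_round_trip_s(altitude_km: float) -> float:
    """Propagation delay for a request and its reply: four traversals of the
    ground-to-satellite path. Ignores switching, routing, and off-nadir path length."""
    return 4 * altitude_km / C_KM_PER_S

print(f"GEO ground-satellite-ground hop: {2 * GEO_ALTITUDE_KM / C_KM_PER_S:.2f} s")   # ~0.24 s
print(f"GEO full round trip:             {propagation_round_trip_s(GEO_ALTITUDE_KM):.2f} s")  # ~0.48 s
print(f"LEO full round trip (assumed {LEO_ALTITUDE_KM} km): "
      f"{propagation_round_trip_s(LEO_ALTITUDE_KM) * 1000:.0f} ms")                   # ~13 ms
```

Switching, routing, and queuing add to these propagation floors, which is why the total GEO delay quoted above reaches 0.75 to 1.25 seconds in practice, while low orbits keep the propagation component in the tens of milliseconds.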
WiMAX was originally developed to deliver fixed wireless service, with wireless mobility added in 2005. CDPD, CDMA2000 EV-DO, and MBWA are no longer being actively developed. In 2011, 90% of the world's population lived in areas with 2G coverage, while 45% lived in areas with 2G and 3G coverage. 5G was designed to be faster and have lower latency than its predecessor, 4G. It can be used for mobile broadband in smartphones or separate modems that emit WiFi or can be connected through USB to a computer, or for fixed wireless. Fixed wireless Internet connections do not use a satellite and, unlike smartphones, are not designed to support equipment that moves: they rely on customer premises equipment, such as antennas, that cannot be moved over a significant geographical area without losing the signal from the ISP. Microwave wireless broadband or 5G may be used for fixed wireless. Worldwide Interoperability for Microwave Access (WiMAX) is a set of interoperable implementations of the IEEE 802.16 family of wireless-network standards certified by the WiMAX Forum. It enables "the delivery of last mile wireless broadband access as an alternative to cable and DSL". The original IEEE 802.16 standard, now called "Fixed WiMAX", was published in 2001 and provided 30 to 40 megabit-per-second data rates. Mobility support was added in 2005. A 2011 update provides data rates up to 1 Gbit/s for fixed stations. WiMAX offers a metropolitan area network with a signal radius of about 50 km (30 miles), far surpassing the 30-metre (100-foot) wireless range of a conventional Wi-Fi LAN. WiMAX signals also penetrate building walls much more effectively than Wi-Fi. WiMAX is most often used as a fixed wireless standard. Wireless Internet service providers (WISPs) operate independently of mobile phone operators. WISPs typically employ low-cost IEEE 802.11 Wi-Fi radio systems to link up remote locations over great distances (Long-range Wi-Fi), but may use other higher-power radio communications systems as well, such as microwave and WiMAX. Traditional 802.11a/b/g/n/ac is an unlicensed omnidirectional service designed to span between 100 and 150 m (300 to 500 ft). By focusing the radio signal using a directional antenna (where allowed by regulations), 802.11 can operate reliably over a distance of many kilometres (miles), although the technology's line-of-sight requirements hamper connectivity in areas with hilly or heavily foliated terrain. In addition, compared to hard-wired connectivity, there are security risks (unless robust security protocols are enabled); data rates are usually slower (2 to 50 times slower); and the network can be less stable, due to interference from other wireless devices and networks, weather and line-of-sight problems. With the increasing popularity of unrelated consumer devices operating on the same 2.4 GHz band, many providers have migrated to the 5 GHz ISM band. If the service provider holds the necessary spectrum license, it could also reconfigure various brands of off-the-shelf Wi-Fi hardware to operate on its own band instead of the crowded unlicensed ones. Using higher frequencies carries various advantages. Proprietary technologies like Motorola Canopy & Expedience can be used by a WISP to offer wireless access to rural and other markets that are hard to reach using Wi-Fi or WiMAX. There are a number of companies that provide this service. 
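A rough free-space link budget shows why a directional antenna extends an 802.11 link from tens of metres to many kilometres. The path-loss formula is the standard free-space expression and ignores terrain, foliage, and interference, which the text above notes are real constraints; the transmit power, antenna gains, and receiver sensitivity are hypothetical illustrative values.

```python
import math

def free_space_path_loss_db(distance_m: float, frequency_hz: float) -> float:
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 299_792_458.0
    return 20 * math.log10(4 * math.pi * distance_m * frequency_hz / c)

def received_power_dbm(tx_power_dbm: float, tx_gain_dbi: float, rx_gain_dbi: float,
                       distance_m: float, frequency_hz: float) -> float:
    """Simple link budget: transmit power plus antenna gains minus free-space loss."""
    return tx_power_dbm + tx_gain_dbi + rx_gain_dbi - free_space_path_loss_db(distance_m, frequency_hz)

FREQ = 2.4e9             # 2.4 GHz band
SENSITIVITY_DBM = -80.0  # assumed receiver sensitivity at a low data rate

# Hypothetical 10 km link: 24 dBi dishes versus 2 dBi omnidirectional antennas.
for gain, label in ((24.0, "directional (24 dBi each end)"),
                    (2.0, "omnidirectional (2 dBi each end)")):
    rx = received_power_dbm(20.0, gain, gain, 10_000, FREQ)
    margin = rx - SENSITIVITY_DBM
    print(f"{label:34s} rx {rx:6.1f} dBm, margin {margin:+5.1f} dB")
```

Under these assumed numbers the directional link closes with a comfortable margin while the omnidirectional one falls below the receiver's sensitivity, matching the point above that focusing the signal is what makes multi-kilometre 802.11 links workable.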
Local Multipoint Distribution Service (LMDS) is a broadband wireless access technology that uses microwave signals operating between 26 GHz and 29 GHz. Originally designed for digital television transmission (DTV), it is conceived as a fixed wireless, point-to-multipoint technology for utilization in the last mile. Data rates range from 64 kbit/s to 155 Mbit/s. Distance is typically limited to about 1.5 miles (2.4 km), but links of up to 5 miles (8 km) from the base station are possible in some circumstances. LMDS has been surpassed in both technological and commercial potential by the LTE and WiMAX standards. In some regions, notably in rural areas, the length of the copper lines makes it difficult for network operators to provide high-bandwidth services. One alternative is to combine a fixed-access network, typically XDSL, with a wireless network, typically LTE. The Broadband Forum has standardized an architecture for such Hybrid Access Networks. Deploying multiple adjacent Wi-Fi access points is sometimes used to create city-wide wireless networks. It is usually ordered by the local municipality from commercial WISPs. Grassroots efforts have also led to wireless community networks widely deployed in numerous countries, both developing and developed ones. Rural wireless-ISP installations are typically not commercial in nature and are instead a patchwork of systems built up by hobbyists mounting antennas on radio masts and towers, agricultural storage silos, very tall trees, or whatever other tall objects are available. Where radio spectrum regulation is not community-friendly, where the channels are crowded, or where equipment cannot be afforded by local residents, free-space optical communication can also be deployed in a similar manner for point-to-point transmission through the air (rather than in fiber optic cable). Packet radio connects computers or whole networks operated by radio amateurs with the option to access the Internet. Note that, as per the regulatory rules outlined in the amateur radio (ham) license, Internet access and email should be strictly related to the activities of radio amateurs. The term sneakernet, a tongue-in-cheek play on net(work) as in Internet or Ethernet, refers to the wearing of sneakers as the transport mechanism for data: information is moved by physically carrying storage media from one place to another. For those who do not have access to or cannot afford broadband at home, downloading large files and disseminating information can be done over workplace or library networks, with the data then taken home and shared with neighbors by sneakernet. The Cuban El Paquete Semanal is an organized example of this. There are various decentralized, delay-tolerant, peer-to-peer applications which aim to fully automate this using any available interface, including both wireless (Bluetooth, Wi-Fi mesh, P2P or hotspots) and physically connected ones (USB storage, Ethernet, etc.). Sneakernets may also be used in tandem with computer network data transfer to increase data security or overall throughput for big data use cases. Innovation continues in the area to this day; for example, AWS has recently announced Snowball, and bulk data processing is also done in a similar fashion by many research institutes and government agencies. Pricing and spending Internet access is limited by the relation between pricing and available resources to spend. Regarding the latter, it is estimated that 40% of the world's population has less than US$20 per year available to spend on information and communications technology (ICT). 
In Mexico, the poorest 30% of society spend an estimated US$35 per year (US$3 per month), and in Brazil the poorest 22% of the population has merely US$9 per year (US$0.75 per month) to spend on ICT. Data from Latin America suggest that the borderline between ICT as a necessity good and ICT as a luxury good is roughly the "magical number" of US$10 per person per month, or US$120 per year; this is the amount of ICT spending people consider a basic necessity. Current Internet access prices exceed the available resources by a large margin in many countries. Dial-up users pay the costs for making local or long-distance phone calls, usually pay a monthly subscription fee, and may be subject to additional per-minute or traffic-based charges and connect-time limits imposed by their ISP. Though less common today than in the past, some dial-up access is offered for "free" in return for watching banner ads as part of the dial-up service. NetZero, BlueLight, Juno, Freenet (NZ), and Free-nets are examples of services providing free access. Some wireless community networks continue the tradition of providing free Internet access. Fixed broadband Internet access is often sold under an "unlimited" or flat-rate pricing model, with price determined by the maximum data rate chosen by the customer, rather than a per-minute or traffic-based charge. Per-minute and traffic-based charges and traffic caps are common for mobile broadband Internet access. Internet services like Facebook, Wikipedia, and Google have built special programs to partner with mobile network operators (MNOs) to zero-rate the data traffic of their services, as a means of providing those services more broadly in developing markets. With increased consumer demand for streaming content such as video on demand and peer-to-peer file sharing, demand for bandwidth has increased rapidly, and for some ISPs the flat-rate pricing model may become unsustainable. However, with fixed costs estimated to represent 80–90% of the cost of providing broadband service, the marginal cost to carry additional traffic is low. Most ISPs do not disclose their costs, but the cost to transmit a gigabyte of data in 2011 was estimated to be about $0.03. Some ISPs estimate that a small number of their users consume a disproportionate portion of the total bandwidth. In response, some ISPs are considering, are experimenting with, or have implemented combinations of traffic-based pricing, time-of-day or "peak" and "off-peak" pricing, and bandwidth or traffic caps. Others counter that, because the marginal cost of extra bandwidth is very small and 80 to 90 percent of the costs are fixed regardless of usage level, such steps are unnecessary or motivated by concerns other than the cost of delivering bandwidth to the end user. In Canada, Rogers Hi-Speed Internet and Bell Canada have imposed bandwidth caps. In 2008 Time Warner began experimenting with usage-based pricing in Beaumont, Texas. In 2009 an effort by Time Warner to expand usage-based pricing into the Rochester, New York area met with public resistance, however, and was abandoned. On August 1, 2012, in Nashville, Tennessee, and on October 1, 2012, in Tucson, Arizona, Comcast began tests that impose data caps on area residents. In Nashville, exceeding the 300 GB cap requires a temporary purchase of an additional 50 GB of data. Digital divide Despite its tremendous growth, Internet access is not distributed equally within or between countries.
The digital divide refers to "the gap between people with effective access to information and communications technology (ICT), and those with very limited or no access". The gap between people with Internet access and those without is one of many aspects of the digital divide. Whether someone has access to the Internet can depend greatly on financial status, geographical location, and government policies. "Low-income, rural, and minority populations have received special scrutiny as the technological 'have-nots'." Government policies play a tremendous role in bringing Internet access to or limiting access for underserved groups, regions, and countries. For example, in Pakistan, which is pursuing an aggressive IT policy aimed at boosting its drive for economic modernization, the number of Internet users grew from 133,900 (0.1% of the population) in 2000 to 31 million (17.6% of the population) in 2011. In North Korea there is relatively little access to the Internet due to the government's fear of the political instability that might accompany the benefits of access to the global Internet. The U.S. trade embargo is a barrier limiting Internet access in Cuba. Access to computers is a dominant factor in determining the level of Internet access. In 2011, in developing countries, 25% of households had a computer and 20% had Internet access, while in developed countries 74% of households had a computer and 71% had Internet access. The majority of people in developing countries do not have Internet access; in total, about 4 billion people are without it. When buying computers was legalized in Cuba in 2007, the private ownership of computers soared (there were 630,000 computers available on the island in 2008, a 23% increase over 2007). Internet access has changed the way in which many people think and has become an integral part of people's economic, political, and social lives. The United Nations has recognized that providing Internet access to more people in the world will allow them to take advantage of the "political, social, economic, educational, and career opportunities" available over the Internet. Several of the 67 principles adopted at the World Summit on the Information Society, convened by the United Nations in Geneva in 2003, directly address the digital divide. To promote economic development and a reduction of the digital divide, national broadband plans have been and are being developed to increase the availability of affordable high-speed Internet access throughout the world. The Global Gateway, the EU's initiative to assist infrastructure development throughout the world, plans to raise €300 billion for connectivity projects, including those in the digital sector, between 2021 and 2027. Access to the Internet grew from an estimated 10 million people in 1993, to almost 40 million in 1995, to 670 million in 2002, and to 2.7 billion in 2013. With market saturation, growth in the number of Internet users is slowing in industrialized countries, but continues in Asia, Africa, Latin America, the Caribbean, and the Middle East. Across Africa, an estimated 900 million people are still not connected to the internet; for those who are, connectivity fees remain generally expensive, and bandwidth is severely constrained in many locations. The number of mobile customers in Africa, however, is expanding faster than anywhere else. Mobile financial services also allow for immediate payment for products and services.
There were roughly 0.6 billion fixed broadband subscribers and almost 1.2 billion mobile broadband subscribers in 2011. In developed countries people frequently use both fixed and mobile broadband networks. In developing countries mobile broadband is often the only access method available. Traditionally the divide has been measured in terms of the existing numbers of subscriptions and digital devices ("have and have-not of subscriptions"). Recent studies have measured the digital divide not in terms of technological devices, but in terms of the existing bandwidth per individual (in kbit/s per capita). Measured this way, the digital divide in kbit/s is not monotonically decreasing but re-opens with each new innovation. For example, "the massive diffusion of narrow-band Internet and mobile phones during the late 1990s" increased digital inequality, as did "the initial introduction of broadband DSL and cable modems during 2003–2004". This is because a new kind of connectivity is never introduced instantaneously and uniformly to society as a whole, but diffuses slowly through social networks. During the mid-2000s, communication capacity was more unequally distributed than during the late 1980s, when only fixed-line phones existed. The most recent increase in digital equality stems from the massive diffusion of the latest digital innovations (i.e. fixed and mobile broadband infrastructures such as 3G and fiber-optic FTTH). Internet access in terms of bandwidth was more unequally distributed in 2014 than it was in the mid-1990s. For example, only 0.4% of the African population has a fixed-broadband subscription; the majority of African Internet users get access through mobile broadband. One of the great challenges for Internet access in general and for broadband access in particular is to provide service to potential customers in areas of low population density, such as farmers, ranchers, and small towns. In cities where the population density is high, it is easier for a service provider to recover equipment costs, but each rural customer may require expensive equipment to get connected. While 66% of Americans had an Internet connection in 2010, that figure was only 50% in rural areas, according to the Pew Internet & American Life Project. Virgin Media advertised over 100 towns across the United Kingdom "from Cwmbran to Clydebank" that have access to their 100 Mbit/s service. Wireless Internet service providers (WISPs) are rapidly becoming a popular broadband option for rural areas. The technology's line-of-sight requirements may hamper connectivity in some areas with hilly and heavily foliated terrain. However, the Tegola project, a successful pilot in remote Scotland, demonstrates that wireless can be a viable option. The Broadband for Rural Nova Scotia initiative, a Canadian public-private partnership, is the first program in North America to guarantee access to "100% of civic addresses" in a region. It is based on Motorola Canopy technology. As of November 2011, under 1000 households had reported access problems. Deployment of a new cell network by one Canopy provider (Eastlink) was expected to provide the alternative of 3G/4G service, possibly at a special unmetered rate, for areas that are harder to serve with Canopy. In New Zealand, a fund has been formed by the government to improve rural broadband and mobile phone coverage.
Current proposals include: (a) extending fiber coverage and upgrading copper to support VDSL, (b) focusing on improving the coverage of cellphone technology, or (c) regional wireless. Several countries have started Hybrid Access Networks to provide faster Internet services in rural areas by enabling network operators to efficiently combine their XDSL and LTE networks. The actions, statements, opinions, and recommendations outlined below have led to the suggestion that Internet access itself is or should become a civil or perhaps a human right. Several countries have adopted laws requiring the state to work to ensure that Internet access is broadly available, or preventing the state from unreasonably restricting an individual's access to information and the Internet. In December 2003, the World Summit on the Information Society (WSIS) was convened under the auspices of the United Nations. After lengthy negotiations between governments, businesses, and civil society representatives, the WSIS Declaration of Principles was adopted, reaffirming the importance of the Information Society to maintaining and strengthening human rights. The Declaration also makes specific reference to the importance of the right to freedom of expression in the "Information Society". A poll of 27,973 adults in 26 countries, including 14,306 Internet users, conducted for the BBC World Service between 30 November 2009 and 7 February 2010 found that almost four in five Internet users and non-users around the world felt that access to the Internet was a fundamental right: 50% strongly agreed, 29% somewhat agreed, 9% somewhat disagreed, 6% strongly disagreed, and 6% gave no opinion. The 88 recommendations made by the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, in a May 2011 report to the Human Rights Council of the United Nations General Assembly, include several that bear on the question of the right to Internet access. Network neutrality (also net neutrality, Internet neutrality, or net equality) is the principle that Internet service providers and governments should treat all data on the Internet equally, not discriminating or charging differentially by user, content, site, platform, application, type of attached equipment, or mode of communication. Advocates of net neutrality have raised concerns about the ability of broadband providers to use their last mile infrastructure to block Internet applications and content (e.g. websites, services, and protocols), and even to block out competitors. Opponents claim net neutrality regulations would deter investment in improving broadband infrastructure and try to fix something that isn't broken. In April 2017, a new attempt to roll back net neutrality in the United States was under consideration by the newly appointed FCC chairman, Ajit Varadaraj Pai. The vote on whether to abolish net neutrality was held on December 14, 2017, and ended in a 3–2 split in favor of repeal. Natural disasters and access Natural disasters disrupt internet access in profound ways. This matters not only to the telecommunication companies that own the networks and the businesses that use them, but also to emergency crews and displaced citizens. The situation is worsened when hospitals or other buildings necessary for disaster response lose their connection. Knowledge gained from studying past internet disruptions by natural disasters could be put to use in planning or recovery.
Additionally, because of both natural and man-made disasters, studies in network resiliency are now being conducted to prevent large-scale outages. One way natural disasters impact internet connection is by damaging end sub-networks (subnets), making them unreachable. A study on local networks after Hurricane Katrina found that 26% of subnets within the storm coverage were unreachable. At Hurricane Katrina's peak intensity, almost 35% of networks in Mississippi were without power, while around 14% of Louisiana's networks were disrupted. Of those unreachable subnets, 73% were disrupted for four weeks or longer and 57% were at "network edges where important emergency organizations such as hospitals and government agencies are mostly located". Extensive infrastructure damage and inaccessible areas were two explanations for the long delay in returning service. The company Cisco has revealed a Network Emergency Response Vehicle (NERV), a truck that makes portable communications possible for emergency responders despite traditional networks being disrupted. A second way natural disasters destroy internet connectivity is by severing submarine cables, the fiber-optic cables placed on the ocean floor that provide international internet connection. A sequence of undersea earthquakes cut six out of seven international cables connected to Taiwan and caused a tsunami that wiped out one of its cable landing stations. The impact slowed or disabled internet connection for five days within the Asia-Pacific region as well as between the region and the United States and Europe. With the rise in popularity of cloud computing, concern has grown over access to cloud-hosted data in the event of a natural disaster. Amazon Web Services (AWS) has been in the news for major network outages in April 2011 and June 2012. AWS, like other major cloud hosting companies, prepares for typical outages and large-scale natural disasters with backup power as well as backup data centers in other locations. AWS divides the globe into five regions and then splits each region into availability zones. A data center in one availability zone should be backed up by a data center in a different availability zone. Theoretically, a natural disaster would not affect more than one availability zone. This theory plays out as long as human error is not added to the mix. The major storm in June 2012 disabled only the primary data center, but human error disabled the secondary and tertiary backups, affecting companies such as Netflix, Pinterest, Reddit, and Instagram. |
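As an illustration of the region and availability-zone layering described above, the short sketch below lists the availability zones of one region so that a primary resource and its backup can be placed in different zones. It is a minimal sketch assuming the boto3 SDK and configured AWS credentials; the region name is only an example, and the zones returned depend on the account and region.

```python
import boto3

def list_availability_zones(region: str) -> list[str]:
    """Return the availability-zone names visible in one AWS region."""
    ec2 = boto3.client("ec2", region_name=region)
    response = ec2.describe_availability_zones()
    return [zone["ZoneName"] for zone in response["AvailabilityZones"]]

if __name__ == "__main__":
    zones = list_availability_zones("us-east-1")  # example region
    print(zones)
    # Placing a primary data store in zones[0] and its replica in zones[1]
    # follows the design intent described above: a single localized disaster
    # should not take out both copies.
```

The June 2012 incident shows the limit of this design: the zone separation held, but operator error in the failover path still propagated the outage.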
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Ancient_Egypt] | [TOKENS: 16076] |
Contents Ancient Egypt Ancient Egypt was a cradle of civilization concentrated along the lower reaches of the Nile River in Northeast Africa. It emerged from prehistoric Egypt around 3150 BC (according to conventional Egyptian chronology), when Upper and Lower Egypt were united by Menes, who is believed by the majority of Egyptologists to have been the same person as Narmer. The history of ancient Egypt unfolded as a series of stable kingdoms interspersed by the "Intermediate Periods" of relative instability. These stable kingdoms existed in one of three periods: the Old Kingdom of the Early Bronze Age; the Middle Kingdom of the Middle Bronze Age; or the New Kingdom of the Late Bronze Age. The pinnacle of ancient Egyptian power was achieved during the New Kingdom, which extended its rule to much of Nubia and a considerable portion of the Levant. After this period, Egypt entered an era of slow decline. Over the course of its history, it was invaded or conquered by a number of foreign civilizations, including the Hyksos, the Kushites, the Assyrians, the Persians, and the Greeks and then the Romans. The end of ancient Egypt is variously defined as occurring with the end of the Late Period during the Wars of Alexander the Great in 332 BC or with the end of the Greek-ruled Ptolemaic Kingdom during the Roman conquest of Egypt in 30 BC. In AD 642, the Arab conquest of Egypt brought an end to the region's millennium-long Greco-Roman period. The success of ancient Egyptian civilization came partly from its ability to adapt to the Nile's conditions for agriculture. The predictable flooding of the Nile and controlled irrigation of its fertile valley produced surplus crops, which supported a more dense population, and thereby substantial social and cultural development. With resources to spare, the administration sponsored the mineral exploitation of the valley and its surrounding desert regions, the early development of an independent writing system, the organization of collective construction and agricultural projects, trade with other civilizations, and a military to assert Egyptian dominance throughout the Near East. Motivating and organizing these activities was a bureaucracy of elite scribes, religious leaders, and administrators under the control of the reigning pharaoh, who ensured the cooperation and unity of the Egyptian people in the context of an elaborate system of religious beliefs. Among the many achievements of ancient Egypt are: the quarrying, surveying, and construction techniques that supported the building of monumental pyramids, temples, and obelisks; a system of mathematics; a practical and effective system of medicine; irrigation systems and agricultural production techniques; the first known planked boats; Egyptian faience and glass technology; new forms of literature; and the earliest known peace treaty, which was ratified with the Anatolia-based Hittite Empire. Its art and architecture were widely copied and its antiquities were carried off to be studied, admired, or coveted in the far corners of the world. Likewise, its monumental ruins inspired the imaginations of travelers and writers for millennia. A newfound European and Egyptian respect for antiquities and excavations that began in earnest in the early modern period has led to much scientific investigation of ancient Egypt and its society, as well as a greater appreciation of its cultural legacy. History The Nile has been the lifeline of its region for much of human history. 
The fertile floodplain of the Nile gave humans the opportunity to develop a settled agricultural economy and a more sophisticated, centralized society that became a cornerstone in the history of human civilization. In Predynastic and Early Dynastic times, the Egyptian climate was much less arid than it is today. Large regions of Egypt were savanna and traversed by herds of grazing ungulates. Foliage and fauna were far more prolific in all environs, and the Nile region supported large populations of waterfowl. Hunting would have been common for Egyptians, and this is also the period when many animals were first domesticated. By about 5500 BC, small tribes living in the Nile valley had developed into a series of cultures demonstrating firm control of agriculture and animal husbandry, and identifiable by their pottery and personal items, such as combs, bracelets, and beads. The largest of these early cultures in upper (Southern) Egypt was the Badarian culture, which probably originated in the Western Desert; it was known for its high-quality ceramics, stone tools, and its use of copper. The Badari was followed by the Naqada culture: the Naqada I (Amratian), the Naqada II (Gerzeh), and Naqada III (Semainean). These brought a number of technological improvements. As early as the Naqada I Period, predynastic Egyptians imported obsidian from Ethiopia, used to shape blades and other objects from flakes. Mutual trade with the Levant was established during Naqada II (c. 3600–3350 BC); this period was also the beginning of trade with Mesopotamia, which continued into the early dynastic period and beyond. Over a period of about 1,000 years, the Naqada culture developed from a few small farming communities into a powerful civilization whose leaders were in complete control of the people and resources of the Nile valley. Establishing a power center at Nekhen, and later at Abydos, Naqada III leaders expanded their control of Egypt northwards along the Nile. They also traded with Nubia to the south, the oases of the western desert to the west, and the cultures of the eastern Mediterranean and Near East to the east. The Naqada culture manufactured a diverse selection of material goods, reflective of the increasing power and wealth of the elite, as well as societal personal-use items, which included combs, small statuary, painted pottery, high quality decorative stone vases, cosmetic palettes, and jewelry made of gold, lapis, and ivory. They also developed a ceramic glaze known as faience, which was used well into the Roman Period to decorate cups, amulets, and figurines. During the last predynastic phase, the Naqada culture began using written symbols that eventually were developed into a full system of hieroglyphs for writing the ancient Egyptian language. The Early Dynastic Period was approximately contemporary to the early Sumerian-Akkadian civilization of Mesopotamia and of ancient Elam. The third-century BC Egyptian priest Manetho grouped the long line of kings from Menes to his own time into 30 dynasties, a system still used today. He began his official history with the king named "Meni" (or Menes in Greek), who was believed to have united the two kingdoms of Upper and Lower Egypt. The transition to a unified state happened more gradually than ancient Egyptian writers represented, and there is no contemporary record of Menes. 
Some scholars now believe, however, that the mythical Menes may have been the king Narmer, who is depicted wearing royal regalia on the ceremonial Narmer Palette, in a symbolic act of unification. In the Early Dynastic Period, which began about 3000 BC, the first of the Dynastic kings solidified control over Lower Egypt by establishing a capital at Memphis, from which he could control the labor force and agriculture of the fertile delta region, as well as the lucrative and critical trade routes to the Levant. The increasing power and wealth of the kings during the early dynastic period was reflected in their elaborate mastaba tombs and mortuary cult structures at Abydos, which were used to celebrate the deified king after his death. The strong institution of kingship developed by the kings served to legitimize state control over the land, labor, and resources that were essential to the survival and growth of ancient Egyptian civilization. Major advances in architecture, art, and technology were made during the Old Kingdom, fueled by the increased agricultural productivity and resulting population growth, made possible by a well-developed central administration. Some of ancient Egypt's crowning achievements, the Giza pyramids and Great Sphinx, were constructed during the Old Kingdom. Under the direction of the vizier, state officials collected taxes, coordinated irrigation projects to improve crop yield, and drafted peasants to work on construction projects. With the rise of central administration in Egypt, a new class of educated scribes and officials emerged and were granted estates by the king as payment for their services. Kings also made land grants to their mortuary cults and local temples, to ensure that these institutions had the resources to worship the king after his death. Scholars believe that five centuries of these practices slowly eroded the economic vitality of Egypt, and that the economy could no longer afford to support a large centralized administration. As the power of the kings diminished, regional governors called nomarchs began to challenge the supremacy of the office of king. This, coupled with severe droughts between 2200 and 2150 BC, is believed to have caused the country to enter the 140-year period of famine and strife known as the First Intermediate Period. After Egypt's central government collapsed at the end of the Old Kingdom, the administration could no longer support or stabilize the country's economy. The ensuing food shortages and political disputes escalated into famines and small-scale civil wars. Yet despite difficult problems, local leaders, owing no tribute to the king, used their new-found independence to establish a thriving culture in the provinces. Once in control of their own resources, the provinces became economically richer—which was demonstrated by larger and better burials among all social classes. Free from their loyalties to the king, local rulers began competing with each other for territorial control and political power. By 2160 BC, rulers in Herakleopolis controlled Lower Egypt in the north, while a rival clan based in Thebes, the Intef family, took control of Upper Egypt in the south. As the Intefs grew in power and expanded their control northward, a clash between the two rival dynasties became inevitable. Around 2055 BC the northern Theban forces under Nebhepetre Mentuhotep II finally defeated the Herakleopolitan rulers, reuniting the Two Lands. 
They inaugurated a period of economic and cultural renaissance known as the Middle Kingdom. The kings of the Middle Kingdom restored the country's stability, which saw a resurgence of art and monumental building projects, and a new flourishing of literature. Mentuhotep II and his Eleventh Dynasty successors ruled from Thebes, but the vizier Amenemhat I, upon assuming the kingship at the beginning of the Twelfth Dynasty around 1985 BC, shifted the kingdom's capital to the city of Itjtawy, located in Faiyum. From Itjtawy, the kings of the Twelfth Dynasty undertook a far-sighted land reclamation and irrigation scheme to increase agricultural output in the region. Moreover, the military reconquered territory in Nubia that was rich in quarries and gold mines, while laborers built a defensive structure in the Eastern Delta, called the "Walls of the Ruler", to defend against foreign attack. With the kings having secured the country militarily and politically and with vast agricultural and mineral wealth at their disposal, the nation's population, arts, and religion flourished. The Middle Kingdom displayed an increase in expressions of personal piety toward the gods. Middle Kingdom literature featured sophisticated themes and characters written in a confident, eloquent style. The relief and portrait sculpture of the period captured subtle, individual details that reached new heights of technical sophistication. Around 1785 BC, as the power of the Middle Kingdom kings weakened, a Western Asian people called the Hyksos, who had already settled in the Delta, seized control of Egypt and established their capital at Avaris, forcing the former central government to retreat to Thebes. The king was treated as a vassal and expected to pay tribute. The Hyksos ('foreign rulers') retained Egyptian models of government and identified as kings, thereby integrating Egyptian elements into their culture. After retreating south, the native Theban kings found themselves trapped between the Canaanite Hyksos ruling the north and the Hyksos' Nubian allies, the Kushites, to the south. After years of vassalage, Thebes gathered enough strength to challenge the Hyksos in a conflict that lasted more than 30 years, until 1555 BC. Ahmose I waged a series of campaigns that permanently eradicated the Hyksos' presence in Egypt. He is considered the founder of the Eighteenth Dynasty, and the military became a central priority for his successors, who sought to expand Egypt's borders and attempted to gain mastery of the Near East. The New Kingdom pharaohs established a period of unprecedented prosperity by securing their borders and strengthening diplomatic ties with their neighbours, including the Mitanni Empire, Assyria, and Canaan. Military campaigns waged under Tuthmosis I and his grandson Tuthmosis III extended the influence of the pharaohs to the largest empire Egypt had ever seen. Between their reigns, Hatshepsut, a queen who established herself as pharaoh, launched many building projects, including the restoration of temples damaged by the Hyksos, and sent trading expeditions to Punt and the Sinai. When Tuthmosis III died in 1425 BC, Egypt had an empire extending from Niya in north west Syria to the Fourth Cataract of the Nile in Nubia, cementing loyalties and opening access to critical imports such as bronze and wood. The New Kingdom pharaohs began a large-scale building campaign to promote the god Amun, whose growing cult was based in Karnak. 
They also constructed monuments to glorify their own achievements, both real and imagined. The Karnak temple is the largest Egyptian temple ever built. Around 1350 BC, the stability of the New Kingdom was threatened when Amenhotep IV ascended the throne and instituted a series of radical and chaotic reforms. Changing his name to Akhenaten, he touted the previously obscure sun deity Aten as the supreme deity, suppressed the worship of most other deities, and moved the capital to the new city of Akhetaten (modern-day Amarna). He was devoted to his new religion and artistic style. After his death, the cult of the Aten was quickly abandoned and the traditional religious order restored. The subsequent pharaohs, Tutankhamun, Ay, and Horemheb, worked to erase all mention of Akhenaten's heresy, now known as the Amarna Period. Around 1279 BC, Ramesses II, also known as Ramesses the Great, ascended the throne, and went on to build more temples, erect more statues and obelisks, and sire more children than any other pharaoh in history.[c] A bold military leader, Ramesses II led his army against the Hittites in the Battle of Kadesh (in modern Syria) and, after fighting to a stalemate, finally agreed to the first recorded peace treaty, around 1258 BC. Egypt's wealth, however, made it a tempting target for invasion, particularly by the Libyan Berbers to the west, and the Sea Peoples, a conjectured confederation of seafarers from the Aegean Sea.[d] Initially, the military was able to repel these invasions, but Egypt eventually lost control of its remaining territories in southern Canaan, much of it falling to the Assyrians. The effects of external threats were exacerbated by internal problems such as corruption, tomb robbery, and civil unrest. After regaining their power, the high priests at the temple of Amun in Thebes accumulated vast tracts of land and wealth, and their expanded power splintered the country during the Third Intermediate Period. Following the death of Ramesses XI in 1078 BC, Smendes assumed authority over the northern part of Egypt, ruling from the city of Tanis. The south was effectively controlled by the High Priests of Amun at Thebes, who recognized Smendes in name only. During this time, Libyans had been settling in the western delta, and chieftains of these settlers began increasing their autonomy. Libyan princes took control of the delta under Shoshenq I in 945 BC, founding the so-called Libyan or Bubastite dynasty that would rule for some 200 years. Shoshenq also gained control of southern Egypt by placing his family members in important priestly positions. Libyan control began to erode as a rival dynasty in the delta arose in Leontopolis, and Kushites threatened from the south. Around 727 BC the Kushite king Piye invaded northward, seizing control of Thebes and eventually the Delta, which established the 25th Dynasty. During the 25th Dynasty, Pharaoh Taharqa created an empire nearly as large as the New Kingdom's. Twenty-fifth Dynasty pharaohs built, or restored, temples and monuments throughout the Nile valley, including at Memphis, Karnak, Kawa, and Jebel Barkal. During this period, the Nile valley saw the first widespread construction of pyramids (many in modern Sudan) since the Middle Kingdom. Egypt's far-reaching prestige declined considerably toward the end of the Third Intermediate Period. Its foreign allies had fallen into the Assyrian sphere of influence, and by 700 BC war between the two states became inevitable. 
Between 671 and 667 BC the Assyrians began the Assyrian conquest of Egypt. The reigns of both Taharqa and his successor, Tanutamun, were filled with frequent conflict with the Assyrians. Ultimately, the Assyrians pushed the Kushites back into Nubia, occupied Memphis, and sacked the temples of Thebes. The Assyrians left control of Egypt to a series of vassals who became known as the Saite kings of the Twenty-Sixth Dynasty. By 653 BC, the Saite king Psamtik I was able to oust the Assyrians with the help of Greek mercenaries, who were recruited to form Egypt's first navy. Greek influence expanded greatly as the city-state of Naucratis became the home of Greeks in the Nile Delta. The Saite kings based in the new capital of Sais witnessed a brief but spirited resurgence in the economy and culture, but in 525 BC, the Persian Empire, led by Cambyses II, began its conquest of Egypt, eventually defeating the pharaoh Psamtik III at the Battle of Pelusium. Cambyses II then assumed the formal title of pharaoh, but ruled Egypt from Iran, leaving Egypt under the control of a satrap. A few revolts against the Persians marked the 5th century BC, but Egypt did not succeed in permanently overthrowing the Persians until the end of the century. Following its annexation by Persia, Egypt was joined with Cyprus and Phoenicia in the sixth satrapy of the Achaemenid Persian Empire. This first period of Persian rule over Egypt, also known as the Twenty-Seventh Dynasty, ended in 402 BC, when Egypt regained independence under a series of native dynasties. The last of these dynasties, the Thirtieth, proved to be the last native royal house of ancient Egypt, ending with the kingship of Nectanebo II. A brief restoration of Persian rule, sometimes known as the Thirty-First Dynasty, began in 343 BC, but shortly after, in 332 BC, the Persian ruler Mazaces handed Egypt over to Alexander the Great without a fight. In 332 BC, Alexander the Great conquered Egypt with little resistance from the Persians and was welcomed by the Egyptians as a deliverer. The administration established by Alexander's successors, the Macedonian Ptolemaic Kingdom, followed an Egyptian model and was based in the new capital city of Alexandria. The city showcased the power and prestige of Hellenistic rule, and became a centre of learning and culture that included the famous Library of Alexandria and the Mouseion. The Lighthouse of Alexandria lit the way for the many ships that kept trade flowing through the city, as the Ptolemies made commerce and revenue-generating enterprises, such as papyrus manufacturing, their top priority. Hellenistic culture did not supplant native Egyptian culture, as the Ptolemies supported time-honored traditions in an effort to secure the loyalty of the populace. They built new temples in Egyptian style, supported traditional cults, and portrayed themselves as pharaohs. Some traditions merged, as Greek and Egyptian gods were syncretized into composite deities, such as Serapis, and classical Greek forms of sculpture influenced traditional Egyptian motifs. Despite their efforts to appease the Egyptians, the Ptolemies were challenged by native rebellion, bitter family rivalries, and frequent mob violence in Alexandria. In addition, as Rome relied more heavily on imports of grain from Egypt, the Romans took great interest in the political situation in the country.
Continued Egyptian revolts, ambitious politicians, and powerful opponents from the Near East made this situation unstable, leading Rome to send forces to secure the country as a province of its empire. Egypt became a province of the Roman Empire in 30 BC, following the defeat of Mark Antony and Ptolemaic Queen Cleopatra VII by Octavian (later Emperor Augustus) in the Battle of Actium. The Romans relied heavily on grain shipments from Egypt, and the Roman army, under the control of a prefect appointed by the emperor, quelled rebellions, strictly enforced the collection of heavy taxes, and prevented attacks by bandits, which had become a notorious problem during the period. Alexandria became an increasingly important center on the trade route with the Orient, as exotic luxuries were in high demand in Rome. Although the Romans had a more hostile attitude than the Greeks towards the Egyptians, some traditions such as mummification and worship of the traditional gods continued. The art of mummy portraiture flourished, and some Roman emperors had themselves depicted as pharaohs, though not to the extent that the Ptolemies had. The Roman emperors, however, lived outside Egypt and did not perform the ceremonial functions of Egyptian kingship. Local administration became Roman in style and closed to native Egyptians. From the mid-first century AD, Christianity took root in Egypt and was originally seen as just another cult that could be accepted. However, it was an uncompromising religion that sought to win converts from the pagan Egyptian and Greco-Roman religions and threatened popular religious traditions. This led to the persecution of converts to Christianity, culminating in the great purges of Diocletian starting in 303, but eventually Christianity won out. In 391, the Christian emperor Theodosius introduced legislation that banned pagan rites and closed temples. Alexandria became the scene of great anti-pagan riots, with public and private religious imagery destroyed. As a consequence, Egypt's native religious culture was in continual decline. While the native population continued to speak their language, the ability to read hieroglyphic writing slowly disappeared as the role of the Egyptian temple priests and priestesses diminished. The temples themselves were sometimes converted to churches or abandoned to the desert. Government and economy The pharaoh was the absolute monarch of the country and, at least in theory, wielded complete control of the land and its resources. The king was the supreme military commander and head of the government, who relied on a bureaucracy of officials to manage his affairs. In charge of the administration was his second in command, the vizier, who acted as the king's representative and coordinated land surveys, the treasury, building projects, the legal system, and the archives. At a regional level, the country was divided into as many as 42 administrative regions called nomes, each governed by a nomarch, who was accountable to the vizier for his jurisdiction. The temples formed the backbone of the economy. Not only were they places of worship, but they were also responsible for collecting and storing the kingdom's wealth in a system of granaries and treasuries administered by overseers, who redistributed grain and goods. Much of the economy was centrally organized and strictly controlled.
Although the ancient Egyptians did not use coinage until the Late Period, they did use a type of money-barter system, with standard sacks of grain and the deben, a weight of roughly 91 g (3 oz) of copper or silver, forming a common denominator. Workers were paid in grain: a simple laborer might earn 5½ sacks (about 200 kg or 440 lb) of grain per month, while a foreman might earn 7½ sacks (roughly 250 kg or 550 lb). Prices were fixed across the country and recorded in lists to facilitate trading; for example, a shirt cost five copper deben, while a cow cost 140 deben. Grain could be traded for other goods, according to the fixed price list. During the fifth century BC, coined money was introduced into Egypt from abroad. At first the coins were used as standardized pieces of precious metal rather than true money, but in the following centuries international traders came to rely on coinage. Egyptian society was highly stratified, and social status was expressly displayed. Farmers made up the bulk of the population, but agricultural produce was owned directly by the state, temple, or noble family that owned the land. Farmers were also subject to a labor tax and were required to work on irrigation or construction projects in a corvée system. Artists and craftsmen were of higher status than farmers, but they were also under state control, working in the shops attached to the temples and paid directly from the state treasury. Scribes and officials formed the upper class in ancient Egypt, known as the "white kilt class" in reference to the bleached linen garments that served as a mark of their rank. The upper class prominently displayed their social status in art and literature. Below the nobility were the priests, physicians, and engineers with specialized training in their field. It is unclear whether slavery as understood today existed in ancient Egypt; authors differ in their opinions. The ancient Egyptians viewed men and women, including people from all social classes, as essentially equal under the law, and even the lowliest peasant was entitled to petition the vizier and his court for redress. Although slaves were mostly used as indentured servants, they were able to buy and sell their servitude, work their way to freedom or nobility, and were usually treated by doctors in the workplace. Both men and women had the right to own and sell property, make contracts, marry and divorce, receive inheritance, and pursue legal disputes in court. Married couples could own property jointly and protect themselves from divorce by agreeing to marriage contracts, which stipulated the financial obligations of the husband to his wife and children should the marriage end. Compared with their counterparts in ancient Greece, Rome, and even more modern places around the world, ancient Egyptian women had a greater range of personal choices, legal rights, and opportunities for achievement. Women such as Hatshepsut and Cleopatra VII even became pharaohs, while others wielded power as Divine Wives of Amun. Despite these freedoms, ancient Egyptian women did not often take part in official roles in the administration (aside from the royal high priestesses), apparently served only secondary roles in the temples (though data are sparse for many dynasties), and were probably not as educated as men. The head of the legal system was officially the pharaoh, who was responsible for enacting laws, delivering justice, and maintaining law and order, a concept the ancient Egyptians referred to as Ma'at.
Although no legal codes from ancient Egypt survive, court documents show that Egyptian law was based on a common-sense view of right and wrong that emphasized reaching agreements and resolving conflicts rather than strictly adhering to a complicated set of statutes. Local councils of elders, known as Kenbet in the New Kingdom, were responsible for ruling in court cases involving small claims and minor disputes. More serious cases involving murder, major land transactions, and tomb robbery were referred to the Great Kenbet, over which the vizier or pharaoh presided. Plaintiffs and defendants were expected to represent themselves and were required to swear an oath that they had told the truth. In some cases, the state took on both the role of prosecutor and judge, and it could torture the accused with beatings to obtain a confession and the names of any co-conspirators. Whether the charges were trivial or serious, court scribes documented the complaint, testimony, and verdict of the case for future reference. Punishment for minor crimes involved either imposition of fines, beatings, facial mutilation, or exile, depending on the severity of the offense. Serious crimes such as murder and tomb robbery were punished by execution, carried out by decapitation, drowning, or impaling the criminal on a stake. Punishment could also be extended to the criminal's family. Beginning in the New Kingdom, oracles played a major role in the legal system, dispensing justice in both civil and criminal cases. The procedure was to ask the god a "yes" or "no" question concerning the right or wrong of an issue. The god, carried by a number of priests, rendered judgement by choosing one or the other, moving forward or backward, or pointing to one of the answers written on a piece of papyrus or an ostracon. A combination of favorable geographical features contributed to the success of ancient Egyptian culture, the most important of which was the rich fertile soil resulting from annual inundations of the Nile River. The ancient Egyptians were thus able to produce an abundance of food, allowing the population to devote more time and resources to cultural, technological, and artistic pursuits. Land management was crucial in ancient Egypt because taxes were assessed based on the amount of land a person owned. Farming in Egypt was dependent on the cycle of the Nile River. The Egyptians recognized three seasons: Akhet (flooding), Peret (planting), and Shemu (harvesting). The flooding season lasted from June to September, depositing on the river's banks a layer of mineral-rich silt ideal for growing crops. After the floodwaters had receded, the growing season lasted from October to February. Farmers plowed and planted seeds in the fields, which were irrigated with ditches and canals. Egypt received little rainfall, so farmers relied on the Nile to water their crops. From March to May, farmers used sickles to harvest their crops, which were then threshed with a flail to separate the straw from the grain. Winnowing removed the chaff from the grain, and the grain was then ground into flour, brewed to make beer, or stored for later use. The ancient Egyptians cultivated emmer and barley, and several other cereal grains, all of which were used to make the two main food staples of bread and beer. Flax plants, uprooted before they started flowering, were grown for the fibers of their stems. These fibers were split along their length and spun into thread, which was used to weave sheets of linen and to make clothing. 
Papyrus growing on the banks of the Nile River was used to make paper. Vegetables and fruits were grown in garden plots, close to habitations and on higher ground, and had to be watered by hand. Vegetables included leeks, garlic, melons, squashes, pulses, lettuce, and other crops, in addition to grapes that were made into wine. The Egyptians believed that a balanced relationship between people and animals was an essential element of the cosmic order; thus humans, animals and plants were believed to be members of a single whole. Animals, both domesticated and wild, were therefore a critical source of spirituality, companionship, and sustenance to the ancient Egyptians. Cattle were the most important livestock; the administration collected taxes on livestock in regular censuses, and the size of a herd reflected the prestige and importance of the estate or temple that owned them. In addition to cattle, the ancient Egyptians kept sheep, goats, and pigs. Poultry, such as ducks, geese, and pigeons, were captured in nets and bred on farms, where they were force-fed with dough to fatten them. The Nile provided a plentiful source of fish. Bees were also domesticated from at least the Old Kingdom, and provided both honey and wax. The ancient Egyptians used donkeys and oxen as beasts of burden, and they were responsible for plowing the fields and trampling seed into the soil. The slaughter of a fattened ox was also a central part of an offering ritual. Horses were introduced by the Hyksos in the Second Intermediate Period. Camels, although known from the New Kingdom, were not used as beasts of burden until the Late Period. There is also evidence to suggest that elephants were briefly used in the Late Period but largely abandoned due to lack of grazing land. Cats, dogs, and monkeys were common family pets, while more exotic pets imported from the heart of Africa, such as Sub-Saharan African lions, were reserved for royalty. Herodotus observed that the Egyptians were the only people to keep their animals with them in their houses. During the Late Period, the worship of the gods in their animal form was extremely popular, such as the cat goddess Bastet and the ibis god Thoth, and these animals were kept in large numbers for the purpose of ritual sacrifice. Egypt is rich in building and decorative stone, copper and lead ores, gold, and semiprecious stones. These natural resources allowed the ancient Egyptians to build monuments, sculpt statues, make tools, and fashion jewelry. Embalmers used salts from the Wadi Natrun for mummification, which also provided the gypsum needed to make plaster. Ore-bearing rock formations were found in distant, inhospitable wadis in the Eastern Desert and the Sinai, requiring large, state-controlled expeditions to obtain natural resources found there. There were extensive gold mines in Nubia, and one of the first maps known is of a gold mine in this region. The Wadi Hammamat was a notable source of granite, greywacke, and gold. Flint was the first mineral collected and used to make tools, and flint handaxes are the earliest pieces of evidence of habitation in the Nile valley. Nodules of the mineral were carefully flaked to make blades and arrowheads of moderate hardness and durability even after copper was adopted for this purpose. Ancient Egyptians were among the first to use minerals such as sulfur as cosmetic substances. The Egyptians worked deposits of the lead ore galena at Gebel Rosas to make net sinkers, plumb bobs, and small figurines. 
Copper was the most important metal for toolmaking in ancient Egypt and was smelted in furnaces from malachite ore mined in the Sinai. Workers collected gold by washing the nuggets out of sediment in alluvial deposits, or by the more labor-intensive process of grinding and washing gold-bearing quartzite. Iron deposits found in upper Egypt were used in the Late Period. High-quality building stones were abundant in Egypt; the ancient Egyptians quarried limestone all along the Nile valley, granite from Aswan, and basalt and sandstone from the wadis of the Eastern Desert. Deposits of decorative stones such as porphyry, greywacke, alabaster, and carnelian dotted the Eastern Desert and were collected even before the First Dynasty. In the Ptolemaic and Roman Periods, miners worked deposits of emeralds in Wadi Sikait and amethyst in Wadi el-Hudi. The ancient Egyptians engaged in trade with their foreign neighbors to obtain rare, exotic goods not found in Egypt. In the Predynastic Period, they established trade with Nubia to obtain gold and incense. They also established trade with Palestine, as evidenced by Palestinian-style oil jugs found in the burials of the First Dynasty pharaohs. An Egyptian colony stationed in southern Canaan dates to slightly before the First Dynasty. Tell es-Sakan in present-day Gaza was established as an Egyptian settlement in the late 4th millennium BC, and is theorised to have been the main Egyptian colonial site in the region. Narmer had Egyptian pottery produced in Canaan and exported back to Egypt. By the Second Dynasty at latest, ancient Egyptian trade with Byblos yielded a critical source of quality timber not found in Egypt. By the Fifth Dynasty, trade with Punt provided gold, aromatic resins, ebony, ivory, and wild animals such as monkeys and baboons. Egypt relied on trade with Anatolia for essential quantities of tin as well as supplementary supplies of copper, both metals being necessary for the manufacture of bronze. The ancient Egyptians prized the blue stone lapis lazuli, which had to be imported from far-away Afghanistan. Egypt's Mediterranean trade partners also included Greece and Crete, which provided, among other goods, supplies of olive oil. Language The Egyptian language is a northern Afro-Asiatic language closely related to the Berber and Semitic languages. The Ancient Egyptian language likewise shared linguistic ties with other Afro-Asiatic languages such as the Chadic languages of west and central Africa, the Cushitic languages of northeast Africa, and the Ethio-Semitic languages, which are found in Ethiopia and Eritrea. Many scholars have accepted an African phylum language origin since five of the six Afro-Asiatic subfamilies, including the Egyptian language, are spoken on the African continent, and only one in Asia. It has the longest known history of any language having been written from c. 3200 BC to the Middle Ages and remaining as a spoken language for longer. The phases of ancient Egyptian are Old Egyptian, Middle Egyptian (Classical Egyptian), Late Egyptian, Demotic and Coptic. Egyptian writings do not show dialect differences before Coptic, but it was probably spoken in regional dialects around Memphis and later Thebes. Ancient Egyptian was a synthetic language, but it became more analytic later on. Late Egyptian developed prefixal definite and indefinite articles, which replaced the older inflectional suffixes. There was a change from the older verb–subject–object word order to subject–verb–object. 
The Egyptian hieroglyphic, hieratic, and demotic scripts were eventually replaced by the more phonetic Coptic alphabet. Coptic is still used in the liturgy of the Egyptian Orthodox Church, and traces of it are found in modern Egyptian Arabic. Ancient Egyptian has 25 consonants similar to those of other Afro-Asiatic languages. These include pharyngeal and emphatic consonants, voiced and voiceless stops, voiceless fricatives, and voiced and voiceless affricates. It has three long and three short vowels, which expanded in Late Egyptian to about nine. The basic word in Egyptian, similar to Semitic and Berber, is a triliteral or biliteral root of consonants and semiconsonants. Suffixes are added to form words. The verb conjugation corresponds to the person. For example, the triconsonantal skeleton S-Ḏ-M is the semantic core of the word 'hear'; its basic conjugation is sḏm, 'he hears'. If the subject is a noun, suffixes are not added to the verb: sḏm ḥmt, 'the woman hears'. Adjectives are derived from nouns through a process that Egyptologists call nisbation because of its similarity with Arabic. The word order is predicate–subject in verbal and adjectival sentences, and subject–predicate in nominal and adverbial sentences. The subject can be moved to the beginning of sentences if it is long and is followed by a resumptive pronoun. Verbs and nouns are negated by the particle n, but nn is used for adverbial and adjectival sentences. Stress falls on the ultimate or penultimate syllable, which can be open (CV) or closed (CVC). Hieroglyphic writing dates from c. 3000 BC, and is composed of hundreds of symbols. A hieroglyph can represent a word, a sound, or a silent determinative; and the same symbol can serve different purposes in different contexts. Hieroglyphs were a formal script, used on stone monuments and in tombs, that could be as detailed as individual works of art. In day-to-day writing, scribes used a cursive form of writing, called hieratic, which was quicker and easier. While formal hieroglyphs may be read in rows or columns in either direction (though typically written from right to left), hieratic was always written from right to left, usually in horizontal rows. A new form of writing, Demotic, became the prevalent writing style, and it is this form of writing, along with formal hieroglyphs, that accompanies the Greek text on the Rosetta Stone. Around the first century AD, the Coptic alphabet started to be used alongside the Demotic script. Coptic is a modified Greek alphabet with the addition of some Demotic signs. Although formal hieroglyphs were used in a ceremonial role until the fourth century, towards the end only a small handful of priests could still read them. As the traditional religious establishments were disbanded, knowledge of hieroglyphic writing was mostly lost. Attempts to decipher hieroglyphs date to the Byzantine and Islamic periods in Egypt, but only in the 1820s, after the discovery of the Rosetta Stone and years of research by Thomas Young and Jean-François Champollion, were hieroglyphs substantially deciphered. Writing first appeared in association with kingship on labels and tags for items found in royal tombs. It was primarily an occupation of the scribes, who worked out of the Per Ankh institution or the House of Life. The latter comprised offices, libraries (called House of Books), laboratories and observatories.
Some of the best-known pieces of ancient Egyptian literature, such as the Pyramid and Coffin Texts, were written in Classical Egyptian, which continued to be the language of writing until about 1300 BC. Late Egyptian was spoken from the New Kingdom onward and is represented in Ramesside administrative documents, love poetry and tales, as well as in Demotic and Coptic texts. During this period, the tradition of writing had evolved into the tomb autobiography, such as those of Harkhuf and Weni. The genre known as Sebayt ('instructions') was developed to communicate teachings and guidance from famous nobles; the Ipuwer papyrus, a poem of lamentations describing natural disasters and social upheaval, is a famous example. The Story of Sinuhe, written in Middle Egyptian, might be the classic of Egyptian literature. Also written at this time was the Westcar Papyrus, a set of stories told to Khufu by his sons relating the marvels performed by priests. The Instruction of Amenemope is considered a masterpiece of Near Eastern literature. Towards the end of the New Kingdom, the vernacular language was more often employed to write popular pieces such as the Story of Wenamun and the Instruction of Any. The former tells the story of a noble who is robbed on his way to buy cedar from Lebanon and of his struggle to return to Egypt. From about 700 BC, narrative stories and instructions, such as the popular Instructions of Onchsheshonqy, as well as personal and business documents were written in the demotic script and phase of Egyptian. Many stories written in demotic during the Greco-Roman period were set in previous historical eras, when Egypt was an independent nation ruled by great pharaohs such as Ramesses II. Culture Most ancient Egyptians were farmers tied to the land. Their dwellings were restricted to immediate family members, and were constructed of mudbrick designed to remain cool in the heat of the day. Each home had a kitchen with an open roof, which contained a grindstone for milling grain and a small oven for baking the bread. Ceramics served as household wares for the storage, preparation, transport, and consumption of food, drink, and raw materials. Walls were painted white and could be covered with dyed linen wall hangings. Floors were covered with reed mats, while wooden stools, beds raised from the floor and individual tables comprised the furniture. The ancient Egyptians placed a great value on hygiene and appearance. Most bathed in the Nile and used a pasty soap made from animal fat and chalk. Men shaved their entire bodies for cleanliness; perfumes and aromatic ointments covered bad odors and soothed skin. Clothing was made from simple linen sheets that were bleached white, and both men and women of the upper classes wore wigs, jewelry, and cosmetics. Children went without clothing until maturity, at about age 12, and at this age males were circumcised and had their heads shaved. Mothers were responsible for taking care of the children, while the father provided the family's income. Music and dance were popular entertainments for those who could afford them. Early instruments included flutes and harps, while instruments similar to trumpets, oboes, and pipes developed later and became popular. In the New Kingdom, the Egyptians played on bells, cymbals, tambourines, drums, and imported lutes and lyres from Asia. The sistrum was a rattle-like musical instrument that was especially important in religious ceremonies. 
The ancient Egyptians enjoyed a variety of leisure activities, including games and music. Senet, a board game where pieces moved according to random chance, was particularly popular from the earliest times; another similar game was mehen, which had a circular gaming board. "Hounds and Jackals" also known as 58 holes is another example of board games played in ancient Egypt. The first complete set of this game was discovered from a Theban tomb of the Egyptian pharaoh Amenemhat IV that dates to the 13th Dynasty. Juggling and ball games were popular with children, and wrestling is also documented in a tomb at Beni Hasan. The wealthy members of ancient Egyptian society enjoyed hunting, fishing, and boating as well. The excavation of the workers' village of Deir el-Medina has resulted in one of the most thoroughly documented accounts of community life in the ancient world, which spans almost four hundred years. There is no comparable site in which the organization, social interactions, and working and living conditions of a community have been studied in such detail. Egyptian cuisine remained remarkably stable over time; indeed, the cuisine of modern Egypt retains some striking similarities to the cuisine of the ancients. The staple diet consisted of bread and beer, supplemented with vegetables such as onions and garlic, and fruit such as dates and figs. Wine and meat were enjoyed by all on feast days while the upper classes indulged on a more regular basis. Fish, meat, and fowl could be salted or dried, and could be cooked in stews or roasted on a grill. The architecture of ancient Egypt includes some of the most famous structures in the world: the Great Pyramids of Giza and the temples at Thebes. Building projects were organized and funded by the state for religious and commemorative purposes, but also to reinforce the wide-ranging power of the pharaoh. The ancient Egyptians were skilled builders; using only simple but effective tools and sighting instruments, architects could build large stone structures with great accuracy and precision that is still envied today. The domestic dwellings of elite and ordinary Egyptians alike were constructed from perishable materials such as mudbricks and wood, and have not survived. Peasants lived in simple homes, while the palaces of the elite and the pharaoh were more elaborate structures. A few surviving New Kingdom palaces, such as those in Malkata and Amarna, show richly decorated walls and floors with scenes of people, birds, water pools, deities and geometric designs. Important structures such as temples and tombs that were intended to last forever were constructed of stone instead of mudbricks. The architectural elements used in the world's first large-scale stone building, Djoser's mortuary complex, include post and lintel supports in the papyrus and lotus motif.[citation needed] The earliest preserved ancient Egyptian temples, such as those at Giza, consist of single, enclosed halls with roof slabs supported by columns. In the New Kingdom, architects added the pylon, the open courtyard, and the enclosed hypostyle hall to the front of the temple's sanctuary, a style that was standard until the Greco-Roman period. The earliest and most popular tomb architecture in the Old Kingdom was the mastaba, a flat-roofed rectangular structure of mudbrick or stone built over an underground burial chamber. The step pyramid of Djoser is a series of stone mastabas stacked on top of each other. 
Pyramids were built during the Old and Middle Kingdoms, but most later rulers abandoned them in favor of less conspicuous rock-cut tombs. The use of the pyramid form continued in private tomb chapels of the New Kingdom and in the royal pyramids of Nubia. The ancient Egyptians produced art to serve functional purposes. For over 3500 years, artists adhered to artistic forms and iconography that were developed during the Old Kingdom, following a strict set of principles that resisted foreign influence and internal change. These artistic standards—simple lines, shapes, and flat areas of color combined with the characteristic flat projection of figures with no indication of spatial depth—created a sense of order and balance within a composition. Images and text were intimately interwoven on tomb and temple walls, coffins, stelae, and even statues. The Narmer Palette, for example, displays figures that can also be read as hieroglyphs. Because of the rigid rules that governed its highly stylized and symbolic appearance, ancient Egyptian art served its political and religious purposes with precision and clarity. Ancient Egyptian artisans used stone as a medium for carving statues and fine reliefs, but used wood as a cheap and easily carved substitute. Paints were obtained from minerals such as iron ores (red and yellow ochres), copper ores (blue and green), soot or charcoal (black), and limestone (white). Paints could be mixed with gum arabic as a binder and pressed into cakes, which could be moistened with water when needed. Pharaohs used reliefs to record victories in battle, royal decrees, and religious scenes. Common citizens had access to pieces of funerary art, such as shabti statues and books of the dead, which they believed would protect them in the afterlife. During the Middle Kingdom, wooden or clay models depicting scenes from everyday life became popular additions to the tomb. In an attempt to duplicate the activities of the living in the afterlife, these models show laborers, houses, boats, and even military formations that are scale representations of the ideal ancient Egyptian afterlife. Despite the homogeneity of ancient Egyptian art, the styles of particular times and places sometimes reflected changing cultural or political attitudes. After the invasion of the Hyksos in the Second Intermediate Period, Minoan-style frescoes were found in Avaris. The most striking example of a politically driven change in artistic forms comes from the Amarna Period, where figures were radically altered to conform to Akhenaten's revolutionary religious ideas. This style, known as Amarna art, was quickly abandoned after Akhenaten's death and replaced by the traditional forms. Beliefs in the divine and in the afterlife were ingrained in ancient Egyptian civilization from its inception; pharaonic rule was based on the divine right of kings. The Egyptian pantheon was populated by gods who had supernatural powers and were called on for help or protection. However, the gods were not always viewed as benevolent, and Egyptians believed they had to be appeased with offerings and prayers. The structure of this pantheon changed continually as new deities were promoted in the hierarchy, but priests made no effort to organize the diverse and sometimes conflicting myths and stories into a coherent system. These various conceptions of divinity were not considered contradictory but rather layers in the multiple facets of reality. 
Gods were worshiped in cult temples administered by priests acting on the king's behalf. At the center of the temple was the cult statue in a shrine. Temples were not places of public worship or congregation, and only on select feast days and celebrations was a shrine carrying the statue of the god brought out for public worship. Normally, the god's domain was sealed off from the outside world and was only accessible to temple officials. Common citizens could worship private statues in their homes, and amulets offered protection against the forces of chaos. After the New Kingdom, the pharaoh's role as a spiritual intermediary was de-emphasized as religious customs shifted to direct worship of the gods. As a result, priests developed a system of oracles to communicate the will of the gods directly to the people. The Egyptians believed that every human being was composed of physical and spiritual parts or aspects. In addition to the body, each person had a šwt (shadow), a ba (personality or soul), a ka (life-force), and a name. The heart, rather than the brain, was considered the seat of thoughts and emotions. After death, the spiritual aspects were released from the body and could move at will, but they required the physical remains (or a substitute, such as a statue) as a permanent home. The ultimate goal of the deceased was to rejoin his ka and ba and become one of the "blessed dead", living on as an akh, or "effective one". For this to happen, the deceased had to be judged worthy in a trial, in which the heart was weighed against a "feather of truth". If deemed worthy, the deceased could continue their existence on earth in spiritual form. If they were not deemed worthy, their heart was eaten by Ammit the Devourer and they were erased from the Universe.[citation needed] The ancient Egyptians maintained an elaborate set of burial customs that they believed were necessary to ensure immortality after death. These customs involved preserving the body by mummification, performing burial ceremonies, and interring with the body goods the deceased would use in the afterlife. Before the Old Kingdom, bodies buried in desert pits were naturally preserved by desiccation. The arid, desert conditions were a boon throughout the history of ancient Egypt for burials of the poor, who could not afford the elaborate burial preparations available to the elite. Wealthier Egyptians began to bury their dead in stone tombs and use artificial mummification, which involved removing the internal organs, wrapping the body in linen, and burying it in a rectangular stone sarcophagus or wooden coffin. Beginning in the Fourth Dynasty, some parts were preserved separately in canopic jars. By the New Kingdom, the ancient Egyptians had perfected the art of mummification; the best technique took 70 days and involved removing the internal organs, removing the brain through the nose, and desiccating the body in a mixture of salts called natron. The body was then wrapped in linen with protective amulets inserted between layers and placed in a decorated anthropoid coffin. Mummies of the Late Period were also placed in painted cartonnage mummy cases. Actual preservation practices declined during the Ptolemaic and Roman eras, while greater emphasis was placed on the outer appearance of the mummy, which was decorated. Wealthy Egyptians were buried with larger quantities of luxury items, but all burials, regardless of social status, included goods for the deceased. 
Funerary texts were often included in the grave, and, beginning in the New Kingdom, so were shabti statues that were believed to perform manual labor for them in the afterlife. Rituals in which the deceased was magically re-animated accompanied burials. After burial, living relatives were expected to occasionally bring food to the tomb and recite prayers on behalf of the deceased. Military The ancient Egyptian military was responsible for defending Egypt against foreign invasion, and for maintaining Egypt's domination in the ancient Near East. The military protected mining expeditions to the Sinai during the Old Kingdom and fought civil wars during the First and Second Intermediate Periods. The military was responsible for maintaining fortifications along important trade routes, such as those found at the city of Buhen on the way to Nubia. Forts also were constructed to serve as military bases, such as the fortress at Sile, which was a base of operations for expeditions to the Levant. In the New Kingdom, a series of pharaohs used the standing Egyptian army to attack and conquer Kush and parts of the Levant. Typical military equipment included bows and arrows, spears, and round-topped shields made by stretching animal skin over a wooden frame. In the New Kingdom, the military began using chariots that had earlier been introduced by the Hyksos invaders. Weapons and armor continued to improve after the adoption of bronze: shields were now made from solid wood with a bronze buckle, spears were tipped with a bronze point, and the khopesh was adopted from Asiatic soldiers. The pharaoh was usually depicted in art and literature riding at the head of the army; it has been suggested that at least a few pharaohs, such as Seqenenre Tao II and his sons, did do so. However, it has also been argued that "kings of this period did not personally act as frontline war leaders, fighting alongside their troops". Soldiers were recruited from the general population, but during, and especially after, the New Kingdom, mercenaries from Nubia, Kush, and Libya were hired to fight for Egypt. Technology, medicine and mathematics In technology, medicine, and mathematics, ancient Egypt achieved a relatively high standard of productivity and sophistication. Traditional empiricism, as evidenced by the Edwin Smith and Ebers papyri (c. 1600 BC), is first credited to Egypt. The Egyptians created their own alphabet and decimal system. Even before the Old Kingdom, the ancient Egyptians had developed a glassy material known as faience, which they treated as a type of artificial semi-precious stone. Faience is a non-clay ceramic made of silica, small amounts of lime and soda, and a colorant, typically copper. The material was used to make beads, tiles, figurines, and small wares. Several methods can be used to create faience, but typically production involved application of the powdered materials in the form of a paste over a clay core, which was then fired. By a related technique, the ancient Egyptians produced a pigment known as Egyptian blue, also called blue frit, which is produced by fusing (or sintering) silica, copper, lime, and an alkali such as natron. The product can be ground up and used as a pigment. The ancient Egyptians could fabricate a wide variety of objects from glass with great skill, but it is not clear whether they developed the process independently. It is also unclear whether they made their own raw glass or merely imported pre-made ingots, which they melted and finished. 
However, they did have technical expertise in making objects, as well as adding trace elements to control the color of the finished glass. A range of colors could be produced, including yellow, red, green, blue, purple, and white, and the glass could be made either transparent or opaque. The medical problems of the ancient Egyptians stemmed directly from their environment. Living and working close to the Nile brought hazards from malaria and debilitating schistosomiasis parasites, which caused liver and intestinal damage. Dangerous wildlife such as crocodiles and hippos were also a common threat. The lifelong labors of farming and building put stress on the spine and joints, and traumatic injuries from construction and warfare all took a significant toll on the body. The grit and sand from stone-ground flour abraded teeth, leaving them susceptible to abscesses (though caries were rare). The diets of the wealthy were rich in sugars, which promoted periodontal disease. Despite the flattering physiques portrayed on tomb walls, the overweight mummies of many of the upper class show the effects of a life of overindulgence. Adult life expectancy was about 35 for men and 30 for women, but reaching adulthood was difficult as about one-third of the population died in infancy.[e] Ancient Egyptian physicians were renowned in the ancient Near East for their healing skills, and some, such as Imhotep, remained famous long after their deaths. Herodotus remarked that there was a high degree of specialization among Egyptian physicians, with some treating only the head or the stomach, while others were eye-doctors and dentists. Training of physicians took place at the Per Ankh or "House of Life" institution, most notably those headquartered in Per-Bastet during the New Kingdom and at Abydos and Saïs in the Late period. Medical papyri show empirical knowledge of anatomy, injuries, and practical treatments. Wounds were treated by bandaging with raw meat, white linen, sutures, nets, pads, and swabs soaked with honey to prevent infection, while opium, thyme, and belladona were used to relieve pain. The earliest records of burn treatment describe burn dressings that use the milk from mothers of male babies. Prayers were made to the goddess Isis. Moldy bread, honey, and copper salts were also used to prevent infection from dirt in burns. Garlic and onions were used regularly to promote good health and were thought to relieve asthma symptoms. Ancient Egyptian surgeons stitched wounds, set broken bones, and amputated diseased limbs, but they recognized that some injuries were so serious that they could only make the patient comfortable until death occurred. Early Egyptians knew how to assemble planks of wood into a ship hull and had mastered advanced forms of shipbuilding as early as 3000 BC. The Archaeological Institute of America reports that the oldest planked ships known are the Abydos boats. A group of 14 discovered ships in Abydos were constructed of wooden planks "sewn" together. Discovered by Egyptologist David O'Connor of New York University, woven straps were found to have been used to lash the planks together, and reeds or grass stuffed between the planks helped to seal the seams. Because the ships are all buried together and near a mortuary belonging to Pharaoh Khasekhemwy, originally they were all thought to have belonged to him, but one of the 14 ships dates to 3000 BC, and the associated pottery jars buried with the vessels also suggest earlier dating. 
The ship dating to 3000 BC was 75 feet (23 m) long and is now thought to have belonged to an earlier pharaoh, perhaps one as early as Hor-Aha. Early Egyptians also knew how to assemble planks of wood with treenails to fasten them together, using pitch for caulking the seams. The "Khufu ship", a 43.6-metre (143 ft) vessel sealed into a pit in the Giza pyramid complex at the foot of the Great Pyramid of Giza in the Fourth Dynasty around 2500 BC, is a full-size surviving example that may have filled the symbolic function of a solar barque. Early Egyptians also knew how to fasten the planks of this ship together with mortise and tenon joints. Large seagoing ships are known to have been heavily used by the Egyptians in their trade with the city-states of the eastern Mediterranean, especially Byblos (on the coast of modern-day Lebanon), and in several expeditions down the Red Sea to the Land of Punt. In fact, one of the earliest Egyptian words for a seagoing ship is a "Byblos Ship", which originally defined a class of Egyptian seagoing ships used on the Byblos run; however, by the end of the Old Kingdom, the term had come to include large seagoing ships, whatever their destination. In 1977, an ancient north–south canal was discovered extending from Lake Timsah to the Ballah Lakes. It was dated to the Middle Kingdom of Egypt by extrapolating dates of ancient sites constructed along its course.[f] In 2011, archaeologists from Italy, the United States, and Egypt, excavating a dried-up lagoon known as Mersa Gawasis, unearthed traces of an ancient harbor that once launched early voyages, such as Hatshepsut's Punt expedition, onto the open ocean. Some of the site's most evocative evidence for the ancient Egyptians' seafaring prowess includes large ship timbers and hundreds of feet of ropes, made from papyrus, coiled in huge bundles. In 2013, a team of Franco-Egyptian archaeologists discovered what is believed to be the world's oldest port, dating back about 4500 years, from the time of King Khufu, on the Red Sea coast, near Wadi el-Jarf (about 110 miles south of Suez). The earliest attested examples of mathematical calculations date to the predynastic Naqada period, and show a fully developed numeral system.[g] The importance of mathematics to an educated Egyptian is suggested by a New Kingdom fictional letter in which the writer proposes a scholarly competition between himself and another scribe regarding everyday calculation tasks such as accounting of land, labor, and grain. Texts such as the Rhind Mathematical Papyrus and the Moscow Mathematical Papyrus show that the ancient Egyptians could perform the four basic mathematical operations—addition, subtraction, multiplication, and division—use fractions, calculate the areas of rectangles, triangles, and circles, and compute the volumes of boxes, columns and pyramids. They understood basic concepts of algebra and geometry, and could solve systems of equations. Nubians also exercised a trigonometric methodology comparable to their Egyptian counterparts. Mathematical notation was decimal, and based on hieroglyphic signs for each power of ten up to one million. Each of these could be written as many times as necessary to add up to the desired number; so to write the number eighty or eight hundred, the symbol for ten or one hundred was written eight times respectively. Because their methods of calculation could not handle most fractions with a numerator greater than one, they had to write fractions as the sum of several fractions. 
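To make the two ideas just described concrete, here is a minimal Python sketch: an additive decimal notation in which the sign for each power of ten is simply repeated, and the decomposition of a general fraction into a sum of distinct unit fractions. The helper names (POWERS, sign_counts, unit_fractions) are illustrative inventions, and the greedy decomposition is a modern shorthand rather than the scribes' actual procedure (they relied on reference tables), but it reproduces the two-fifths example discussed next.

```python
from fractions import Fraction

# Hieroglyphic numerals were additive: a sign existed for each power of ten
# up to one million, repeated as many times as needed.
POWERS = [1_000_000, 100_000, 10_000, 1_000, 100, 10, 1]

def sign_counts(n: int) -> dict:
    """How many times each power-of-ten sign is repeated to write n."""
    counts = {}
    for p in POWERS:
        counts[p], n = divmod(n, p)
    return {p: c for p, c in counts.items() if c}

def unit_fractions(x: Fraction) -> list:
    """Greedy decomposition of a proper fraction (0 < x < 1) into distinct
    unit fractions; a stand-in for the scribes' tables, not their method."""
    parts = []
    while x > 0:
        d = -(-x.denominator // x.numerator)   # ceiling of 1/x
        parts.append(Fraction(1, d))
        x -= Fraction(1, d)
    return parts

print(sign_counts(800))                 # {100: 8} -> the hundred-sign written eight times
print(unit_fractions(Fraction(2, 5)))   # [Fraction(1, 3), Fraction(1, 15)]
```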
For example, they resolved the fraction two-fifths into the sum of one-third + one-fifteenth. Standard tables of values facilitated this. Some common fractions, however, were written with a special glyph, such as the one for the modern two-thirds. Ancient Egyptian mathematicians knew the Pythagorean theorem as an empirical formula. They were aware, for example, that a triangle had a right angle opposite the hypotenuse when its sides were in a 3–4–5 ratio. They were able to estimate the area of a circle by subtracting one-ninth of its diameter and squaring the result: a reasonable approximation of the formula πr². Population Estimates of the size of the population range from 1–1.5 million in the 3rd millennium BC to possibly 2–3 million by the 1st millennium BC, before growing significantly towards the end of that millennium. Historical scholarship has generally regarded the peopling of the Egyptian Nile Valley, based on archaeological and biological data, to be the result of interaction between coastal northern Africans, "neolithic" Saharans, Nilotic hunters, and riverine proto-Nubians, with some influence and migration from the Levant. International scholarship reflected in the UNESCO General History of Africa book series has expressed a similar position. A majority of the scholars that contributed to the Volume II edition (1981) considered Egypt an indigenous African civilisation with a mixed population that originated largely in the Sahara and featured a variety of skin colours from north and south of the Saharan region. In the view of Egyptian scholar and featured editor Gamal Mokhtar, Upper Egypt and Nubia held "similar ethnic composition" with comparable material culture. An updated Volume IX publication, launched in 2025, reaffirmed the view that Egypt had African and Eurasian populations. The review section which focused on the 1974 "Peopling of Egypt" symposium stated that accumulated research over three decades had confirmed the migration of southerly African and Saharan populations into the early Nile Valley. Upper Egypt was now positioned as an origin point of Pharaonic unification, with supporting archaeological, anthropological, genetic and linguistic sources of evidence having identified close affinities between Upper Egypt and other Sub-Saharan African populations. According to historian William Stiebling and archaeologist Susan N. Helft, conflicting DNA analysis on recent genetic samples such as the Amarna royal mummies has led to a lack of consensus on the genetic makeup of the ancient Egyptians and their geographic origins. The genetic history of Ancient Egypt remains a developing field, and is relevant for the understanding of population demographic events connecting Africa and Eurasia. To date, genome-wide aDNA analyses on ancient specimens from Egypt and Sudan remain scarce, although studies on uniparental haplogroups in ancient individuals have been carried out several times, pointing broadly to affinities with other African and Eurasian groups. The most advanced full-genome analysis to date was published in a 2025 article in the scientific journal Nature: a whole-genome genetic study of an Old Kingdom adult male Egyptian of relatively high status, codenamed "Old Kingdom individual (NUE001)", who was radiocarbon-dated to 2855–2570 BC, with funerary practices archaeologically attributed to the Third and Fourth Dynasties, excavated in Nuwayrat (Nuerat, نويرات), in a cliff 265 km south of Cairo. 
Before this study, whole-genome sequencing of ancient Egyptians from the early periods of Egyptian Dynastic history had not yet been accomplished, mainly because of the problematic DNA preservation conditions in Egypt. The corpse had been placed intact in a large circular clay pot without embalming, and then installed inside a cliff tomb, which accounts for the comparatively good level of conservation of the skeleton and its DNA. Most of his genome was found to be associated with North African Neolithic ancestry, but about 20% of his genetic ancestry could be sourced to the eastern Fertile Crescent, including Mesopotamia. Overall, the 2025 study "provides direct evidence of genetic ancestry related to the eastern Fertile Crescent in ancient Egypt". This genetic connection suggests that there had been ancient migration flows from the eastern Fertile Crescent to Egypt, in addition to the exchanges of objects and imagery (domesticated animals and plants, writing systems...) already observed. This suggests a pattern of wide cultural and demographic expansion from the Mesopotamian region, which affected both Anatolia and Egypt during this period. The authors acknowledged some limitations of the study, such as the results deriving from a single Egyptian genome and known limitations in predicting specific phenotypic traits in understudied populations. The analysis also excluded any substantial ancestry in the Nuwayrat genome related to a previously published 4,500-year-old hunter-gatherer genome from the Mota cave in Ethiopia, or other individuals in central, eastern, or southern Africa. An earlier partial genomic analysis had been made on much later specimens recovered from the Nile River Valley, Abusir el-Meleq, Egypt, dating from 787 BC to 23 AD. Two of the individuals were dated to the Pre-Ptolemaic Period (New Kingdom to Late Period), and one individual to the Ptolemaic Period. These results point to a genetic continuity of Ancient Egyptians with modern Egyptians. The results further point to a close genetic affinity between ancient Egyptians and Middle Eastern populations, especially ancient groups from the Levant. Ancient Egyptians also displayed affinities to Nubians to the south of Egypt, in modern-day Sudan. Archaeological and historical evidence supports interactions between Egyptian and Nubian populations more than 5000 years ago, with socio-political dynamics between Egyptians and Nubians ranging from peaceful coexistence to variably successful attempts at conquest. A study on sixty-six ancient Nubian individuals revealed significant contact with ancient Egyptians, characterized by the presence of c. 57% Neolithic/Bronze Age Levantine ancestry in these individuals. Such gene flow of Levantine-like ancestry corresponds with archaeological and botanic evidence, pointing to a Neolithic movement around 7,000 years ago. Modern Egyptians, like modern Nubians, also underwent subsequent admixture events since the Roman period, contributing both "Sub-Saharan" African-like and West Asian-like ancestries, notably in connection with the African slave trade and the spread of Islam. Genetic analysis of a modern Upper Egyptian population in Adaima by Eric Crubézy identified genetic markers common across Africa, with 71% of the Adaima samples carrying the E1b1 haplogroup and 3% carrying the L0f mitochondrial haplogroup. 
A secondary review, published in UNESCO General History of Africa Volume IX in 2025, noted that the results were preliminary and needed to be confirmed by other laboratories with new sequencing methods. This was supported by an anthropological study which found the notable presence of dental markers, characteristic of Khoisan people, in a predynastic-era cemetery at Adaïma. The genetic marker E1b1 was identified in a number of genetic studies as having a wide distribution across Egypt: "P2/215/M35.1 (E1b1b), for short M35, likely also originated in eastern tropical Africa, and is predominantly distributed in an arc from the Horn of Africa up through Egypt". Multiple STR analyses of the Amarna royal mummies (including Rameses III, Tutankhamun and Amenhotep III), deployed to estimate their ethnicity, found that they had strong affinities with modern Sub-Saharan populations. Nonetheless, these forms of analysis were not exhaustive, as only 8 of the 13 CODIS markers were used. Some scholars, such as Christopher Ehret, caution that a wider sampling area is needed and argue that the current data is inconclusive on the origin of ancient Egyptians. They also point out issues with the previously used methodology, such as the sampling size, comparative approach and a "biased interpretation" of the genetic data. They argue in favor of a link between Ancient Egypt and the northern Horn of Africa. This latter view has been attributed to the corresponding archaeological, genetic, linguistic and biological anthropological sources of evidence which broadly indicate that the earliest Egyptians and Nubians were the descendants of populations in northeast Africa. Mainstream scholars have situated the ethnicity and origins of predynastic southern Egypt as a foundational community rooted primarily in northeast Africa, which included the Sudan, tropical Africa and the Sahara, whilst recognising the population variability that became characteristic of the pharaonic period. Pharaonic Egypt featured a physical gradation across the regional populations, with Upper Egyptians having shared more biological affinities with Sudanese and southerly African populations, whereas Lower Egyptians had closer genetic links with Levantine and Mediterranean populations. In the view of William Stiebling and Susan Helft, "some ancient Egyptians looked more Middle Eastern and others looked more Sudanese or Ethiopians of today, and some may even have looked like other groups in Africa". Overall, the authors reached the conclusion that Ancient Egypt was a heterogeneous civilization with bio-cultural connections across Africa and Eurasia. Recent studies have emphasized that Ancient Egypt contained much greater ethnic diversity than traditionally assumed in earlier historical approaches. Egyptian notions of identity were formed mainly through social constructs, rather than adhering to fixed biological groups. Riggs and Baines note that the common ideological contrast between "Egyptian" and "Other" oversimplifies population groups which varied according to local traditions, dialects, and naming practices. Archaeological evidence also shows that foreigners lived in Egypt, particularly Nubians, Libyans and Asiatics. They would frequently participate in activities that crossed the social divide described by Riggs and Baines. 
Furthermore, Smith highlights that while state ideology portrayed outsiders through negative stereotypes, evidence regarding everyday interactions, intermarriage, shared foodways, and visual self-presentations demonstrates that ethnic boundaries were flexible across social and political contexts. A number of scholars have argued that Ancient Egyptians shared cultural connections and origins with the Land of Punt. This has been attributed to the Egyptian textual descriptions of Puntland as Ta Nejter, which translates as "God's Land", along with temple reliefs which depicted Puntites with reddish-brown skin complexions similar to their Egyptian counterparts. Legacy The culture and monuments of ancient Egypt have left a lasting legacy on the world. Egyptian civilization significantly influenced the Kingdom of Kush and Meroë, with both adopting Egyptian religious and architectural norms (hundreds of pyramids (6–30 meters high) were built in Egypt/Sudan), as well as using Egyptian writing as the basis of the Meroitic script. Meroitic is the oldest written language in Africa, other than Egyptian, and was used from the 2nd century BC until the early 5th century AD. The cult of the goddess Isis, for example, became popular in the Roman Empire, as obelisks and other relics were transported back to Rome. The Romans also imported building materials from Egypt to erect Egyptian-style structures. Early historians such as Herodotus, Strabo, and Diodorus Siculus studied and wrote about the land, which Romans came to view as a place of mystery. During the Middle Ages and the Renaissance, Egyptian pagan culture was in decline after the rise of Christianity and later Islam, but interest in Egyptian antiquity continued in the writings of medieval scholars such as Dhul-Nun al-Misri and al-Maqrizi. In the seventeenth and eighteenth centuries, European travelers and tourists brought back antiquities and wrote stories of their journeys, leading to a wave of Egyptomania across Europe, as evident in symbolism such as the Eye of Providence and the Great Seal of the United States. This renewed interest sent collectors to Egypt, who took, purchased, or were given many important antiquities. Napoleon arranged the first studies in Egyptology when he brought some 150 scientists and artists to study and document Egypt's natural history, which was published in the Description de l'Égypte. In the 20th century, the Egyptian Government and archaeologists alike recognized the importance of cultural respect and integrity in excavations. Since the 2010s, the Ministry of Tourism and Antiquities has overseen excavations and the recovery of artifacts. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Senior_Advisor_to_the_President_of_the_United_States] | [TOKENS: 280] |
Contents Senior Advisor to the President of the United States Senior Advisor to the President is a title used by high-ranking political advisors to the president of the United States. Senior advisors to the president do not have formal government decision-making authority, but they can have significant influence over decisions. Their role is to provide strategic advice, analysis, and recommendations to the president on key issues. White House senior advisors are senior members of the White House Office. The title has been formally used since 1993. Responsibilities Over time, a senior advisor has had responsibility for various groups, including White House departments previously headed by a senior advisor in past administrations. Prior administrations In prior administrations before 1993, the position of "senior advisor" was a title used for various other purposes. Numerous examples of the position also exist throughout the executive departments and in the branch's independent agencies. For example, the Food and Drug Administration includes a position with the title Senior Advisor for Science; the Department of the Interior has a position with the title Senior Advisor for Alaskan Affairs. Examples of people who had the responsibilities and/or influence of a senior advisor without the title included Edward M. House (to Woodrow Wilson) and Louis Howe (to Franklin D. Roosevelt). |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Flag_of_the_United_States] | [TOKENS: 10291] |
Contents Flag of the United States The national flag of the United States, often referred to as the American flag or the U.S. flag, consists of thirteen horizontal stripes, alternating red and white, with a blue rectangle in the canton bearing fifty small, white, five-pointed stars arranged in nine offset horizontal rows, where rows of six stars alternate with rows of five stars. The 50 stars on the flag form a constellation, representing the 50 U.S. states united, while the 13 stripes represent the thirteen colonies that won independence from Great Britain in the American Revolutionary War. The flag was created as an item of military equipment to identify U.S. ships and forts. It evolved gradually during early American history, and was not designed by any one person. The flag exploded in popularity in 1861 as a symbol of opposition to the Confederate attack on Fort Sumter. It came to symbolize the Union in the American Civil War; Union victory solidified its status as a national flag. Because of the country's emergence as a superpower in the 20th century, the flag is now among the most widely recognized symbols in the world. Well-known nicknames for the flag include "the Stars and Stripes", "Old Glory", "the Star-Spangled Banner", and "the Red, White, and Blue". The Pledge of Allegiance and the holiday Flag Day are dedicated to it. The number of stars on the flag is increased as new states join the United States. The last adjustment was made in 1960, following the admission of Hawaii. History The current design of the U.S. flag is its 27th; the design of the flag has been modified officially 26 times since 1777. The 48-star flag was in effect for 47 years until the 49-star version became official on July 4, 1959. The 50-star flag was ordered by then-President Eisenhower on August 21, 1959, and was adopted in July 1960. It is the longest-used version of the U.S. flag and has been in use for over 65 years. The first official flag resembling the "Stars and Stripes" was the Continental Navy ensign (often referred to as the Continental Union Flag, Continental Colours, the first American flag, Cambridge Flag, and Grand Union Flag), which was used from 1775 to 1777. It consisted of 13 red-and-white stripes, with the British Union Flag in the canton. It first appeared on December 3, 1775, when Continental Navy Lieutenant John Paul Jones flew it aboard Captain Esek Hopkins' flagship Alfred on the Delaware River. Prospect Hill was the location of George Washington's command post during the Siege of Boston in the American Revolution. On New Year's Day in 1776, Washington conducted a flag-raising ceremony to raise the morale of the men of the Continental Army. The standard account features the Continental Union Flag flying, although in 2006, Peter Ansoff advanced a theory that it was actually a British Union Flag instead. Others, such as Byron DeLear, have argued in favor of the traditional version of events. The Continental Union Flag remained the national flag until June 14, 1777. At the time of the Declaration of Independence in July 1776, there were no flags with any stars on them; the Continental Congress did not adopt flags with "stars, white in a blue field" for another year. It has historically been referred to as the first flag of the United States. It is colloquially referred to as the Cambridge Flag and the Grand Union Flag, but these terms did not come into use until the 19th century. 
Although it has been claimed the more recent moniker, Grand Union Flag, was first applied to the Continental Union Flag by G. Henry Preble in his Reconstruction era book Our Flag; the first substantiated use of the name came from Philadelphia resident T. Westcott in 1852 when replying to an inquiry made in Notes and Queries, a London periodical, as to the origin of the U.S. flag. The flag very closely resembles the East India Company flag of the era. Sir Charles Fawcett argued in 1937 that the company flag inspired the design of the U.S. flag. Both flags could easily have been constructed by adding white stripes to a Red Ensign, one of the three maritime flags used throughout the British Empire at the time. However, the East India Company flag could have from nine to 13 stripes and was not allowed to be flown outside the Indian Ocean. Benjamin Franklin once gave a speech endorsing the adoption of the East India Company flag by the United Colonies. He said to George Washington, "While the field of your flag must be new in the details of its design, it need not be entirely new in its elements. There is already in use a flag, I refer to the flag of the East India Company." This was a way of symbolizing American loyalty to the Crown as well as the colonies' aspirations to be self-governing, as was the East India Company. The theory that the Continental Union Flag was a direct descendant of the East India Company flag has been criticized as lacking written evidence; on the other hand, the resemblance to the company flag is obvious, and some of the Founding Fathers of the United States were aware of the East India Company's activities and of their free administration of India under Company rule. On June 14, 1777, the Continental Congress passed the Flag Resolution which stated: "Resolved, That the flag of the thirteen United States be thirteen stripes, alternate red and white; that the union be thirteen stars, white in a blue field, representing a new constellation." Flag Day is now observed on June 14 of each year. While scholars still argue about this, tradition holds that the new flag was first hoisted in June 1777 by the Continental Army at the Middlebrook encampment. Both the stripes (barry) and the stars (mullets) have precedents in classical heraldry. Mullets were comparatively rare in early modern heraldry. However, an example of mullets representing territorial divisions predating the U.S. flag is the Valais 1618 coat of arms, where seven mullets stood for seven districts. Another widely repeated theory is that the design was inspired by the coat of arms of George Washington's family, which includes three red stars over two horizontal red bars on a white field. Despite the similar visual elements, there is "little evidence" or "no evidence whatsoever" to support the claimed connection with the flag design. The Digital Encyclopedia of George Washington, published by the Fred W. Smith National Library for the Study of George Washington at Mount Vernon, calls it an "enduring myth" backed by "no discernible evidence." The story seems to have originated with the 1876 play Washington: A Drama in Five Acts, by the English poet Martin Farquhar Tupper, and was further popularized through repetition in the children's magazine St. Nicholas. The first official U.S. flag flown during battle was on August 3, 1777, at Fort Schuyler (Fort Stanwix) during the Siege of Fort Stanwix. Massachusetts reinforcements brought news of the adoption by Congress of the official flag to Fort Schuyler. 
Soldiers cut up their shirts to make the white stripes; scarlet material to form the red was secured from red flannel petticoats of officers' wives, while material for the blue union was secured from Capt. Abraham Swartwout's blue cloth coat. A voucher is extant that Congress paid Capt. Swartwout of Dutchess County for his coat for the flag. The 1777 resolution was probably meant to define a naval ensign. In the late 18th century, the notion of a national flag did not yet exist or was only nascent. The flag resolution appears between other resolutions from the Marine Committee. On May 10, 1779, Secretary of the Board of War Richard Peters expressed concern that "it is not yet settled what is the Standard of the United States." However, the term "Standard" referred to a national standard for the Army of the United States. Each regiment was to carry the national standard in addition to its regimental standard. The national standard was not a reference to the national or naval flag. The Flag Resolution did not specify any particular arrangement, number of points, nor orientation for the stars and the arrangement or whether the flag had to have seven red stripes and six white ones or vice versa. The appearance was up to the maker of the flag. Some flag makers arranged the stars into one big star, in a circle or in rows and some replaced a state's star with its initial. One arrangement features 13 five-pointed stars arranged in a circle, with the stars arranged pointing outwards from the circle (as opposed to up), the Betsy Ross flag. Experts have dated the earliest known example of this flag to be 1792 in a painting by John Trumbull. Despite the 1777 resolution, the early years of American independence featured many different, hand-crafted flags. As late as 1779, Captain John Manley believed that the United States "had no national colors" so each ship flew whatever flag pleased the captain. Some of the early flags included blue stripes as well as red and white. Benjamin Franklin and John Adams, in an October 3, 1778, letter to Ferdinand I of the Two Sicilies, described the American flag as consisting of "13 stripes, alternately red, white, and blue, a small square in the upper angle, next to the flagstaff, is a blue field, with 13 white stars, denoting a new Constellation." John Paul Jones used a variety of 13-star flags on his U.S. Navy ships including the well-documented 1779 flags of the Serapis and the Alliance. The Serapis flag had three rows of eight-pointed stars with red, white, and blue stripes. However, the flag for the Alliance had five rows of eight-pointed stars with 13 red and white stripes, and the white stripes were on the outer edges. Both flags were documented by the Dutch government in October 1779, making them two of the earliest known flags of 13 stars. Francis Hopkinson of New Jersey, a naval flag designer and a signer of the Declaration of Independence, designed a flag in 1777 while he was the chairman of the Continental Navy Board's Middle Department, sometime between his appointment to that position in November 1776 and the time that the flag resolution was adopted in June 1777. The Navy Board was under the Continental Marine Committee. Not only did Hopkinson claim that he designed the U.S. flag, but he also claimed that he designed a flag for the U.S. Navy. Hopkinson was the only person to have made such a claim during his own life when he sent a letter and several bills to Congress for his work. 
These claims are documented in the Journals of the Continental Congress and George Hasting's biography of Hopkinson. Hopkinson initially wrote a letter to Congress, via the Continental Board of Admiralty, on May 25, 1780. In this letter, he asked for a "Quarter Cask of the Public Wine" as payment for designing the U.S. flag, the seal for the Admiralty Board, the seal for the Treasury Board, Continental currency, the Great Seal of the United States, and other devices. However, in three subsequent bills to Congress, Hopkinson asked to be paid in cash, but he did not list his U.S. flag design. Instead, he asked to be paid for designing the "great Naval Flag of the United States" in the first bill; the "Naval Flag of the United States" in the second bill; and "the Naval Flag of the States" in the third, along with the other items. The flag references were generic terms for the naval ensign that Hopkinson had designed: a flag of seven red stripes and six white ones. The predominance of red stripes made the naval flag more visible against the sky on a ship at sea. By contrast, Hopkinson's flag for the United States had seven white stripes and six red ones – in reality, six red stripes laid on a white background. Hopkinson's sketches have not been found, but we can make these conclusions because Hopkinson incorporated different stripe arrangements in the Admiralty (naval) Seal that he designed in the Spring of 1780 and the Great Seal of the United States that he proposed at the same time. His Admiralty Seal had seven red stripes; whereas his second U.S. Seal proposal had seven white ones. Remnants of Hopkinson's U.S. flag of seven white stripes can be found in the Great Seal of the United States and the President's seal. The stripe arrangement would have been consistent with other flags of the period that had seven stripes below the canton, or blue area with stars. For example, two of the earliest known examples of Stars and Stripes flags were painted by a Dutch artist who witnessed the arrival of Navy Lieutenant John Paul Jones' squadron in Texel, The Netherlands, in 1779. The two flags have seven stripes below the canton. When Hopkinson was chairman of the Navy Board, his position was like that of today's Secretary of the Navy. The payment was not made, most likely, because other people had contributed to designing the Great Seal of the United States, and because it was determined he already received a salary as a member of Congress. This contradicts the legend of the Betsy Ross flag, which suggests that she sewed the first Stars and Stripes flag at the request of the government in the Spring of 1776. On May 10, 1779, a letter from the War Board to George Washington stated that there was still no design established for a national standard, on which to base regimental standards, but also referenced flag requirements given to the board by General von Steuben. On September 3, Richard Peters submitted to Washington "Drafts of a Standard" and asked for his "Ideas of the Plan of the Standard," adding that the War Board preferred a design they viewed as "a variant for the Marine Flag." Washington agreed that he preferred "the standard, with the Union and Emblems in the center." The drafts are lost to history but are likely to be similar to the first Jack of the United States. The origin of the stars and stripes design has been muddled by a story disseminated by the descendants of Betsy Ross. 
The apocryphal story credits Betsy Ross for sewing one of the first flags from a pencil sketch handed to her by George Washington. No such evidence exists either in George Washington's diaries or the Continental Congress's records. Indeed, nearly a century passed before Ross's grandson, William Canby, first publicly suggested the story in 1870. By her family's own admission, Ross ran an upholstery business, and she had never made a flag as of the supposed visit in June 1776. Furthermore, her grandson admitted that his own search through the Journals of Congress and other official records failed to find corroborating evidence for his grandmother's story. George Henry Preble states in his 1882 text that no combined stars and stripes flag was in common use prior to June 1777, and that no one knows who designed the 1777 flag. Historian Laurel Thatcher Ulrich argues that there was no "first flag" worth arguing over. Researchers accept that the United States flag evolved, and did not have one design. Marla Miller writes, "The flag, like the Revolution it represents, was the work of many hands." The family of Rebecca Young claimed that she sewed the first flag. Young's daughter was Mary Pickersgill, who made the Star-Spangled Banner Flag. She was assisted by Grace Wisher, a 13-year-old African American girl. In 1795, the number of stars and stripes was increased from 13 to 15 (to reflect the entry of Vermont and Kentucky as states of the Union). For a time the flag was not changed when subsequent states were admitted, probably because it was thought that this would cause too much clutter. It was the 15-star, 15-stripe flag that inspired Francis Scott Key to write "Defence of Fort M'Henry", later known as "The Star-Spangled Banner", which is now the American national anthem. The flag is currently on display in the exhibition "The Star-Spangled Banner: The Flag That Inspired the National Anthem" at the Smithsonian Institution National Museum of American History in a two-story display chamber that protects the flag while it is on view. On April 4, 1818, a plan was passed by Congress at the suggestion of U.S. Naval Captain Samuel C. Reid in which the flag was changed to have 20 stars, with a new star to be added when each new state was admitted, but the number of stripes would be reduced to 13 so as to honor the original colonies. The act specified that new flag designs should become official on the first July 4 (Independence Day) following the admission of one or more new states. In 1912, the 48-star flag was adopted. This was the first time that a flag act specified an official arrangement of the stars in the canton, namely six rows of eight stars each, where each star would point upward. The U.S. Army and U.S. Navy, however, had already been using standardized designs. Throughout the 19th century, different star patterns, both rectangular and circular, had been abundant in civilian use.[citation needed] In 1960, the current 50-star flag was adopted, incorporating the most recent change, from 49 stars to 50, when the present design was chosen, after Hawaii gained statehood in August 1959. Before that, the admission of Alaska in January 1959 had prompted the debut of a short-lived 49-star flag. When Alaska and Hawaii were being considered for statehood in the 1950s, more than 3,000 designs from the public were submitted to President Dwight D. Eisenhower's administration for consideration. Although some were 49-star versions, the vast majority were 50-star proposals. 
The earliest 50-star flag design was submitted in 1953, with most submissions arriving after the admission of Alaska in 1958 (the designs were diverse in media, from simple pencil sketches to professionally constructed flags). For the states admitted in 1912, a joint Army-Navy board submitted recommendations on designs for a new flag to the President, who eventually made the final choice; in that instance the War Department admitted to having received some 150 designs in 1912 from the public for consideration. For the 49- and 50-star flags, on July 14, 1953, President Eisenhower declared his preferred method to select a flag design: by a joint committee with six members, three representatives from the Armed Forces and one each from the Interior Department, State Department and Commission on Fine Arts. (In late 1958, the White House issued a press release stating that the Secretaries of State, Defense, and Treasury, along with the Chairman of the Commission on Fine Arts, were appointed to informally propose the new flag designs to the President; this committee was responsible for formally submitting designs for the 50-star flag to the President on August 17, 1959.) On January 3, 1959, President Eisenhower issued Executive Order 10798 establishing the design of the 49-star flag, and on August 21, 1959, President Eisenhower issued Executive Order 10834 establishing the design of the (current) 50-star flag. At the time, credit was given by the executive department to the United States Army Institute of Heraldry for the design. The 49- and 50-star flags were each flown for the first time at Fort McHenry on Independence Day, in 1959 and 1960 respectively. There is a popular account of Ohio teenager and later mayor of Napoleon, Ohio, Robert G. Heft as the original designer of the 50-star flag; however, no official record of this is known to exist. Like other informal submissions to the government, his submission closely resembled the design of the eventual 50-star American flag, and by the time Heft submitted his design, the final design probably had already been chosen. Biographer Alec Nevala-Lee investigated Heft's story, concluding that the main details behind the story seem entirely fabricated. On July 4, 2007, the 50-star flag became the version of the flag in the longest use, surpassing the 48-star flag that was used from 1912 to 1959. The U.S. flag was brought to the city of Canton (Guǎngzhōu) in China in 1784 by the merchant ship Empress of China, which carried a cargo of ginseng. There it gained the designation "Flower Flag" (Chinese: 花旗; pinyin: huāqí; Cantonese Yale: fākeì). According to a pseudonymous account first published in the Boston Courier and later retold by author and U.S. naval officer George H. Preble: When the thirteen stripes and stars first appeared at Canton, much curiosity was excited among the people. News was circulated that a strange ship had arrived from the further end of the world, bearing a flag "as beautiful as a flower". Every body went to see the kwa kee chuen [花旗船; Fākeìsyùhn], or "flower flagship". This name at once established itself in the language, and America is now called the kwa kee kwoh [花旗國; Fākeìgwok], the "flower flag country"—and an American, kwa kee kwoh yin [花旗國人; Fākeìgwokyàhn]—"flower flag countryman"—a more complimentary designation than that of "red headed barbarian"—the name first bestowed upon the Dutch. In the above quote, the Chinese words are written phonetically based on spoken Cantonese. 
The names given were common usage in the nineteenth and early twentieth centuries. Chinese now refer to the United States as Měiguó from Mandarin (simplified Chinese: 美国; traditional Chinese: 美國). Měi is short for Měilìjiān (simplified Chinese: 美利坚; traditional Chinese: 美利堅, phono-semantic matching of "American") and "guó" means "country", so this name is unrelated to the flag. However, the "flower flag" terminology persists in some places today: for example, American ginseng is called flower flag ginseng (simplified Chinese: 花旗参; traditional Chinese: 花旗參) in Chinese, and Citibank, which opened a branch in China in 1902, is known as Flower Flag Bank (花旗银行). Similarly, Vietnamese uses a term borrowed from Chinese, with the Sino-Vietnamese reading Hoa Kỳ from 花旗 ("Flower Flag"), for the United States. Although the United States is also colloquially called nước Mỹ (or simply Mỹ) in Vietnamese, a usage that predates the popularity of the name Měiguó among Chinese speakers, Hoa Kỳ is recognized as the formal name for the United States, and the Vietnamese state officially designates it as Hợp chúng quốc Hoa Kỳ (chữ Hán: 合眾國花旗, lit. 'United states of the Flower Flag'). Accordingly, in Vietnam the U.S. is also nicknamed xứ Cờ Hoa ("land of the Flower Flag"), based on the Hoa Kỳ designation. Additionally, the seal of the Shanghai Municipal Council in the Shanghai International Settlement from 1869 included the U.S. flag as part of the top left-hand shield near the flag of the UK, as the U.S. participated in the creation of this enclave in the Chinese city of Shanghai. It is also included in the badge of the Gulangyu Municipal Police in the International Settlement of Gulangyu, Amoy. President Richard Nixon presented a U.S. flag and Moon rocks to Mao Zedong during his visit to China in 1972. They are now on display at the National Museum of China.[citation needed] The U.S. flag took its first trip around the world in 1787–1790 on board the Columbia. William Driver, who coined the phrase "Old Glory", took the U.S. flag around the world in 1831–32. The flag attracted the notice of the Japanese when an oversized version was carried to Yokohama by the steamer Great Republic as part of a round-the-world journey in 1871. Prior to the Civil War, the American flag was rarely seen outside of military forts, government buildings and ships. This changed following the Battle of Fort Sumter in 1861. The flag flying over the fort was allowed to leave with the Union troops as they surrendered. It was taken across Northern cities, which spurred a wave of "Flagmania". The Stars and Stripes, which had had no real place in the public consciousness, suddenly became a part of the national identity. The flag became a symbol of the Union, and the sale of flags exploded at this time. Historian Adam Goodheart wrote: For the first time American flags were mass-produced rather than individually stitched and even so, manufacturers could not keep up with demand. As the long winter of 1861 turned into spring, that old flag meant something new. The abstraction of the Union cause was transfigured into a physical thing: strips of cloth that millions of people would fight for, and many thousands die for. In the Civil War, the flag was allowed to be carried into battle, reversing the 1847 regulation which prohibited this. (During the American Revolutionary War and War of 1812 the army was not officially sanctioned to carry the United States flag into battle. 
It was not until 1834 that the artillery was allowed to carry the American flag; the army was granted the same right in 1841. However, in 1847, in the middle of the war with Mexico, the flag was limited to camp use and not allowed to be brought into battle.) Some wanted to remove the stars of the states which had seceded but Abraham Lincoln was opposed, believing it would give legitimacy to the Confederate states. In the following table depicting the 28 various designs of the United States flag, the star patterns for the flags are merely the usual patterns, often associated with the United States Navy. Canton designs, prior to the proclamation of the 48-star flag, had no official arrangement of the stars. Symbolism The flag of the United States is the nation's most widely recognized symbol. Within the United States, flags are frequently displayed not only on public buildings but on private residences. The flag is a common motif on decals for car windows, and on clothing ornamentation such as badges and lapel pins. Owing to the United States's emergence as a superpower in the 20th century, the flag is among the most widely recognized symbols in the world, and is used to represent the United States. The flag has become a powerful symbol of Americanism, and is flown on many occasions, with giant outdoor flags used by retail outlets to draw customers. Reverence for the flag has at times reached religion-like fervor: in 1919 William Norman Guthrie's book The Religion of Old Glory discussed "the cult of the flag" and formally proposed vexillolatry. Despite a number of attempts to ban the practice, desecration of the flag remains protected as free speech under the First Amendment to the United States Constitution. Scholars have noted the irony that "[t]he flag is so revered because it represents the land of the free, and that freedom includes the ability to use or abuse that flag in protest". Comparing practice worldwide, Testi noted in 2010 that the United States was not unique in adoring its banner, for the flags of Scandinavian countries are also "beloved, domesticated, commercialized and sacralized objects". When the flag was officially adopted in 1777, the colors of red, white, and blue were not given an official meaning. However, when Charles Thomson, Secretary of the Continental Congress, presented a proposed U.S. seal in 1782, he explained its center section in this way: The colours of the pales are those used in the flag of the United States of America; White signifies purity and innocence, Red, hardiness & valor, and Blue, the colour of the Chief signifies vigilance, perseverance & justice. These meanings have broadly been accepted as official, with some variation, but there are other extant interpretations as well: The stars that redeem the night from darkness, and the beams of red light that beautify the morning, have been united upon its folds. As long as the sun endures, or the stars, may it wave over a nation neither enslaved nor enslaving. The colors of our flag signify the qualities of the human spirit we Americans cherish. Red for courage and readiness to sacrifice; white for pure intentions and high ideals; and blue for vigilance and justice. We take the stars from heaven, the red from our mother country, separating it by white stripes, thus showing that we have separated from her, and the white stripes shall go down to posterity, representing our liberty. Design The basic design of the current flag is specified by 4 U.S.C. 
§ 1 (1947): "The flag of the United States shall be thirteen horizontal stripes, alternate red and white; and the union of the flag shall be forty-eight stars, white in a blue field." 4 U.S.C. § 2 outlines the addition of new stars to represent new states, with no distinction made for the shape, size, or arrangement of the stars. Executive Order 10834 (1959) specifies a 50-star design for use after Hawaii was added as a state, and Federal Specification DDD-F-416F (2005) provides additional details about the production of physical flags for use by federal agencies. The executive order establishing these specifications directly governs only flags made for or by the federal government, but it is also used as the definition of the flag in the Flag Code. In practice, most U.S. national flags available for sale to the public follow the federal star arrangement, but have a different width-to-height ratio; common sizes are 2 × 3 ft. or 4 × 6 ft. (flag ratio 1.5), 2.5 × 4 ft. or 5 × 8 ft. (1.6), or 3 × 5 ft. or 6 × 10 ft. (1.667). Even flags flown over the U.S. Capitol for sale to the public through Representatives or Senators are provided in these sizes. Flags that are made to the prescribed 1.9 ratio are often referred to as "G-spec" (for "government specification") flags. The red, white, and blue colors are derived from the flag of the United Kingdom. The flag colors are not standardized by law, and there are no legally specified shades of red, white, and blue. Despite this, some government agencies have specified the use of certain shades. Federal Specification DDD-F-416E specifies shades of red, white, and blue to be used for physical flags procured by federal agencies with reference to the Standard Color Card of America 10th edition, a set of dyed silk fabric samples produced by The Color Association of the United States. The colors are "White" No. 70001, "Old Glory Red" No. 70180, and "Old Glory Blue" No. 70075. There are no easy or proper methods for converting these colors to digital colors. According to the California flag code, "Old Glory Red" No. 70180 should be used as red on the California state flag. In 2002, the California Military Department suggested PMS 200C as a Pantone color equivalent. Several government websites have given Pantone (PMS) equivalents for the flag colors. These colors are "Old Glory Red" PMS 193C and "Old Glory Blue" PMS 281C. When converted to RGB, the colors are "Old Glory Red" #BF0A30, "Old Glory Blue" #00205B, and #FFFFFF for white. The specific shade of Pantone blue varies in government sources. Blue is either listed as PMS 281C, or as PMS 282C. As early as 1996, the website of the U.S. embassy in London listed red PMS 193C and blue PMS 282C. They later changed blue to be PMS 281C. The website of the U.S. embassy in Stockholm claimed in 2001 that those colors had been suggested by Pantone, and that the U.S. Government Printing Office preferred a different set. Red PMS 186C and blue PMS 288C are preferred by the U.S. Government Printing Office. In 2001, the Texas legislature specified that the colors of the Texas flag should be "(1) the same colors used in the United States flag; and (2) defined as numbers 193 (red) and 281 (dark blue) of the Pantone Matching System." The U.S. Millennium Challenge Corporation on their website listed red PMS 193C and blue PMS 281C as the flag colors. The United States Department of State recognizes red PMS 193C and blue PMS 282C, though they also suggest alternate shades of red and blue and give RGB and CMYK conversions. 
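As a rough illustration of the proportions and colors discussed above, the following sketch computes the main "G-spec" dimensions from a chosen hoist and prints the widely quoted digital color equivalents cited in this article. It is not an official tool: the function name and the 5-foot example are illustrative, and the proportions used are those commonly published for the Executive Order 10834 design (overall ratio 1:1.9, union 7/13 of the hoist by 0.76 of the hoist, thirteen equal stripes).

# Illustrative sketch (assumed helper, not an official specification tool).
# Proportions as commonly published for Executive Order 10834; colors are the
# RGB equivalents quoted in the text above.

OLD_GLORY_RED = "#BF0A30"
OLD_GLORY_BLUE = "#00205B"
WHITE = "#FFFFFF"

def g_spec_dimensions(hoist: float) -> dict:
    """Return the key flag dimensions in the same unit as the hoist."""
    return {
        "hoist (A)": hoist,
        "fly (B)": hoist * 1.9,              # overall width-to-height ratio 1.9
        "union hoist (C)": hoist * 7 / 13,   # union covers the top seven stripes
        "union fly (D)": hoist * 0.76,
        "stripe width (L)": hoist / 13,      # thirteen equal stripes
    }

if __name__ == "__main__":
    for name, value in g_spec_dimensions(5.0).items():  # e.g. a flag with a 5 ft hoist
        print(f"{name}: {value:.3f} ft")
    print("Colors:", OLD_GLORY_RED, WHITE, OLD_GLORY_BLUE)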
The current internal style guide of the State Department Bureau of Educational and Cultural Affairs recognizes red PMS 193C and blue PMS 282C, though it also suggests alternate shades of red and blue and gives RGB and CMYK conversions generated by Adobe InDesign. Other colors are often used for mass-market flags, printed reproductions, and other products intended to evoke flag colors. The practice of using more saturated colors than the official cloth is not new. As Taylor, Knoche, and Granville wrote in 1950: "The color of the official wool bunting [of the blue field] is a very dark blue, but printed reproductions of the flag, as well as merchandise supposed to match the flag, present the color as a deep blue much brighter than the official wool." Traditionally, the flag may be decorated with golden fringe surrounding the perimeter of the flag as long as it does not deface the flag proper. Ceremonial displays of the flag, such as those in parades or on indoor posts, often use fringe to enhance the flag's appearance. Traditionally, the Army and Air Force use a fringed flag for parades, color guard and indoor display, while the Navy, Marine Corps, and Coast Guard use a fringeless flag for all occasions.[citation needed] The first recorded use of fringe on a flag dates from 1835, and the Army used it officially in 1895. No specific law governs the legality of fringe. Still, a 1925 opinion of the United States Attorney General states that the use of fringe (and the number of stars) "is at the discretion of the Commander in Chief of the Army and Navy", as quoted from a footnote in previous volumes of Title 4 of the United States Code law books. This opinion is a source for claims that a flag with fringe is a military ensign rather than a civilian flag. However, according to the Army Institute of Heraldry, which has official custody of U.S. flag designs and makes any change ordered, there are no implications of symbolism in using fringe. Individuals associated with the sovereign citizen movement and tax protester conspiracy arguments have claimed, based on the military usage, that the presence of a fringed flag in a civilian courtroom changes the nature or jurisdiction of the court. Federal and state courts have rejected this contention. Display and use The flag is customarily flown year-round at most public buildings, and it is not unusual to find private houses flying full-size (3 by 5 feet (0.91 by 1.52 m)) flags. Some private use is year-round, but becomes widespread on civic holidays like Memorial Day, Veterans Day, Presidents' Day, Flag Day, and on Independence Day. On Memorial Day, it is common to place small flags by war memorials and next to the graves of U.S. war veterans. Also, on Memorial Day, it is common to fly the flag at half staff until noon to remember those who lost their lives fighting in U.S. wars. The United States Flag Code outlines certain guidelines for the flag's use, display, and disposal. For example, the flag should never be dipped to any person or thing, unless it is the ensign responding to a salute from a ship of a foreign nation. This tradition may come from the 1908 Summer Olympics in London, where countries were asked to dip their flag to King Edward VII: the American flag bearer did not. Team captain Martin Sheridan is famously quoted as saying, "this flag dips to no earthly king", though the true provenance of this quotation is unclear. The flag should never be allowed to touch the ground and should be illuminated if flown at night. 
The flag should be repaired or replaced if the edges become tattered through wear. When a flag is so tattered that it can no longer serve as a symbol of the United States, it should be destroyed in a dignified manner, preferably by burning. The American Legion and other organizations regularly conduct flag retirement ceremonies, often on Flag Day, June 14. (The Boy Scouts of America recommends that modern nylon or polyester flags be recycled instead of burned due to hazardous gases produced when such materials are burned.) The Flag Code prohibits using the flag "for any advertising purpose" and also states that the flag "should not be embroidered, printed, or otherwise impressed on such articles as cushions, handkerchiefs, napkins, boxes, or anything intended to be discarded after temporary use". Both of these codes are generally ignored, almost always without comment. Section 8, entitled "Respect For Flag", states in part: "The flag should never be used as wearing apparel, bedding, or drapery", and "No part of the flag should ever be used as a costume or athletic uniform". Section 3 of the Flag Code defines "the flag" as anything "by which the average person seeing the same without deliberation may believe the same to represent the flag of the United States of America". An additional provision that is frequently violated at sporting events is part (c) "The flag should never be carried flat or horizontally, but always aloft and free." Although the Flag Code is U.S. federal law, there is no penalty for a private citizen or group failing to comply with the Flag Code, and it is not widely enforced—punitive enforcement would conflict with the First Amendment right to freedom of speech. Passage of the proposed Flag Desecration Amendment would overrule the legal precedent that has been established. When the flag is affixed to the right side of a vehicle of any kind (e.g., cars, boats, planes, any physical object that moves), it should be oriented so that the canton is towards the front of the vehicle, as if the flag were streaming backward from its hoist as the vehicle moves forward. Therefore, U.S. flag decals on the right sides of vehicles may appear to be reversed, with the union to the observer's right instead of left as more commonly seen.[citation needed] The flag has been displayed on every U.S. spacecraft designed for crewed flight starting from John Glenn's Friendship 7 flight in 1962, including Mercury, Gemini, Apollo Command/Service Module, Apollo Lunar Module, and the Space Shuttle. The flag also appeared on the S-IC first stage of the Saturn V launch vehicle used for Apollo. Nevertheless, Mercury, Gemini, and Apollo were launched and landed vertically instead of horizontally as the Space Shuttle did on its landing approach, so the streaming convention was not followed. These flags were oriented with the stripes running horizontally, perpendicular to the direction of flight. On some U.S. military uniforms, flag patches are worn on the right shoulder, following the vehicle convention with the union toward the front. This rule dates back to the Army's early history when mounted cavalry and infantry units would designate a standard-bearer who carried the Colors into battle. As he charged, his forward motion caused the flag to stream back. Since the Stars and Stripes are mounted with the canton closest to the pole, that section stayed to the right, while the stripes flew to the left. Several U.S. 
military uniforms, such as flight suits worn by members of the United States Air Force and Navy, have the flag patch on the left shoulder. Other organizations that wear flag patches on their uniforms can have the flag facing in either direction. The congressional charter of the Boy Scouts of America stipulates that Boy Scout uniforms should not imitate U.S. military uniforms; consequently, the flags are displayed on the right shoulder with the stripes facing front, the reverse of the military style. Law enforcement officers often wear a small flag patch, either on a shoulder or above a shirt pocket. Every U.S. astronaut since the crew of Gemini 4 has worn the flag on the left shoulder of their space suits, except for the crew of Apollo 1, whose flags were worn on the right shoulder. In this case, the canton was on the left. The flag did not appear on U.S. postal stamp issues until the Battle of White Plains Issue was released in 1926, depicting the flag with a circle of 13 stars. The 48-star flag first appeared on the General Casimir Pulaski issue of 1931, though in a small monochrome depiction. The first U.S. postage stamp to feature the flag as the sole subject was issued July 4, 1957, Scott catalog number 1094. Since then, the flag has frequently appeared on U.S. stamps. In 1907, Eben Appleton, New York stockbroker and grandson of Lieutenant Colonel George Armistead (the commander of Fort McHenry during the 1814 bombardment), loaned the Star-Spangled Banner Flag to the Smithsonian Institution. In 1912 he converted the loan into a gift. Appleton donated the flag with the wish that it would always be on view to the public. In 1994, the National Museum of American History determined that the Star-Spangled Banner Flag required further conservation treatment to remain on public display. In 1998 teams of museum conservators, curators, and other specialists helped move the flag from its home in the Museum's Flag Hall into a new conservation laboratory. Following the reopening of the National Museum of American History on November 21, 2008, the flag is now on display in a special exhibition, "The Star-Spangled Banner: The Flag That Inspired the National Anthem," where it rests at a 10-degree angle in dim light for conservation purposes. U.S. flags are displayed continuously at certain locations by presidential proclamation, acts of Congress, and custom. The flag should especially be displayed on the following days: The flag is displayed at half-staff (half-mast in naval usage) as a sign of respect or mourning. Nationwide, this action is proclaimed by the president; statewide or territory-wide, the proclamation is made by the governor. In addition, there is no prohibition against municipal governments, private businesses, or citizens flying the flag at half-staff as a local sign of respect and mourning. However, many flag enthusiasts feel this type of practice has somewhat diminished the meaning of the original intent of lowering the flag to honor those who held high positions in federal or state offices. President Dwight D. Eisenhower issued the first proclamation on March 1, 1954, standardizing the dates and periods for flying the flag at half-staff from all federal buildings, grounds, and naval vessels; other congressional resolutions and presidential proclamations ensued. 
However, these are only guidelines for all other entities: they are typically followed at state and local government facilities and encouraged for private businesses and citizens.[citation needed] To properly fly the flag at half-staff, one should first briefly hoist it to the top of the staff, then lower it to the half-staff position, halfway between the top and bottom of the staff. Similarly, when the flag is to be lowered from half-staff, it should first be briefly hoisted to the top of the staff. Federal statutes provide that the flag should be flown at half-staff on the following dates: The flag of the United States is sometimes burned as a cultural or political statement, in protest of the policies of the U.S. government, or for other reasons, both within the U.S. and abroad. The United States Supreme Court in Texas v. Johnson, 491 U.S. 397 (1989), and reaffirmed in U.S. v. Eichman, 496 U.S. 310 (1990), has ruled that due to the First Amendment to the United States Constitution, it is unconstitutional for a government (whether federal, state, or municipal) to prohibit the desecration of a flag, because of its status as "symbolic speech." However, content-neutral restrictions may still be imposed to regulate the time, place, and manner of such expression. If the flag that was burned was someone else's property (as it was in the Johnson case, since Johnson had stolen the flag from a Texas bank's flagpole), the offender could be charged with petty larceny, or with destruction of private property, or possibly both.[citation needed] The original meaning of displaying a U.S. flag upside down is "a signal of dire distress in instances of extreme danger to life or property." More recently, it has been used by extension to make a statement about distress in civic, political, or other areas. It is most often meant as political protest, and is usually interpreted as such. The musical group Rage Against the Machine, a group known for songs expressing revolutionary political views, displayed two upside-down American flags from their amplifiers on the April 13, 1996, episode of Saturday Night Live. This was intended as a protest against the host, billionaire businessman Steve Forbes. The flags were ripped down by stagehands about 20 seconds before the group's performance of "Bulls on Parade". Afterward, show officials asked band members to leave the building as they were waiting in their dressing room to perform "Bullet in the Head" later in the show. Flying flags upside down has been used as a sign of protest against U.S. presidents. In 2020, as protests spread across the U.S. demanding an end to police brutality, some U.S. citizens chose to fly their flags upside down as part of the protests. In 2020–21, some individuals in the "Stop the Steal" movement flew upside-down flags to protest the 2020 presidential election amid claims it was rigged against Donald Trump. Such a flag was flown at the home of Supreme Court Justice Samuel Alito in 2021. The upside-down flag was frequently flown by Trump's right-wing supporters in response to his conviction on 34 felony counts. On February 22, 2025, a giant upside-down flag was displayed in Yosemite National Park by staff recently fired by the Trump administration. Rallies held on March 1, 2025, also saw upside-down flags displayed in iconic spots during a day of action in numerous national parks opposing reductions in staff and protections for public lands. 
Folding for storage Though not part of the official Flag Code, according to military custom, flags should be folded into a triangular shape when not in use. There is no specific meaning assigned to each fold of the flag. However, there are scripts read by non-government organizations and also by the Air Force that are used during the flag folding ceremony. These scripts range from historical timelines of the flag to religious themes. Use in funerals Traditionally, the flag of the United States plays a role in military funerals, and occasionally in funerals of other civil servants (such as law enforcement officers, fire fighters, and U.S. presidents). A burial flag is draped over the deceased's casket as a pall during services. Just prior to the casket being lowered into the ground, the flag is ceremonially folded and presented to the deceased's next of kin as a token of respect. Surviving historical flags This is a list of surviving flags that have been displayed at or otherwise associated with notable historical battles or events. Related flags The U.S. flag has inspired many other flags for regions, political movements, and cultural groups, resulting in a stars and stripes flag family. The other national flags belonging to this family are those of Chile, Cuba, Greece, Liberia, Malaysia, Togo, and Uruguay. Unicode The flag of the United States is represented as the Unicode emoji sequence U+1F1FA 🇺 REGIONAL INDICATOR SYMBOL LETTER U and U+1F1F8 🇸 REGIONAL INDICATOR SYMBOL LETTER S, making "🇺🇸". Platforms also use the flag of the United States to represent the United States Minor Outlying Islands in the sequence U+1F1FA 🇺 REGIONAL INDICATOR SYMBOL LETTER U and U+1F1F2 🇲 REGIONAL INDICATOR SYMBOL LETTER M, making 🇺🇲. See also References Further reading External links |
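The regional-indicator construction described in the Unicode paragraph above can be sketched in a few lines of Python. The helper name below is illustrative rather than part of any standard library; the mapping itself (letters A–Z to code points U+1F1E6–U+1F1FF) is the standard emoji flag mechanism.

# Minimal sketch: build flag emoji from ISO 3166-1 alpha-2 codes using the
# Unicode regional indicator symbols described above.

REGIONAL_INDICATOR_A = 0x1F1E6  # regional indicator symbol letter A

def flag_emoji(alpha2: str) -> str:
    """Map a two-letter country code to its pair of regional indicator symbols."""
    return "".join(chr(REGIONAL_INDICATOR_A + ord(c) - ord("A")) for c in alpha2.upper())

print(flag_emoji("US"))  # U+1F1FA U+1F1F8 -> 🇺🇸
print(flag_emoji("UM"))  # U+1F1FA U+1F1F2 -> 🇺🇲 (often rendered with the U.S. flag)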
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Orbital_period] | [TOKENS: 1545] |
Contents Orbital period The orbital period (also revolution period) is the amount of time a given astronomical object takes to complete one orbit around another object. In astronomy, it usually applies to planets or asteroids orbiting the Sun, moons orbiting planets, exoplanets orbiting other stars, or binary stars. It may also refer to the time it takes a satellite orbiting a planet or moon to complete one orbit. For celestial objects in general, the orbital period is determined by a 360° revolution of one body around its primary, e.g. Earth around the Sun. Periods in astronomy are expressed in units of time, usually hours, days, or years. Its reciprocal is the orbital frequency, a kind of revolution frequency, in units of hertz. Small body orbiting a central body According to Kepler's Third Law, the orbital period T of two point masses orbiting each other in a circular or elliptic orbit is T = 2π√(a³ / (GM)), where: a is the orbit's semi-major axis, G is the gravitational constant, and M is the mass of the more massive body. For all ellipses with a given semi-major axis the orbital period is the same, regardless of eccentricity. Inversely, for calculating the distance where a body has to orbit in order to have a given orbital period T: a = ∛(GMT² / (4π²)). For instance, for completing an orbit every 24 hours around a mass of 100 kg, a small body has to orbit at a distance of 1.08 meters from the central body's center of mass. In the special case of perfectly circular orbits, the semimajor axis a is equal to the radius of the orbit, and the orbital velocity is constant and equal to v = √(GM / a), where: M is the mass of the central body and a is the radius of the orbit. This corresponds to 1⁄√2 times (≈ 0.707 times) the escape velocity. For a perfect sphere of uniform density, it is possible to rewrite the first equation without measuring the mass as T = √((3π / (Gρ)) × (a / r)³), where: r is the sphere's radius, a is the orbit's semi-major axis in the same units, and ρ is the sphere's density. For instance, a small body in circular orbit 10.5 cm above the surface of a sphere of tungsten half a metre in radius would travel at slightly more than 1 mm/s, completing an orbit every hour. If the same sphere were made of lead the small body would need to orbit just 6.7 mm above the surface for sustaining the same orbital period. When a very small body is in a circular orbit barely above the surface of a sphere of any radius and mean density ρ (in kg/m3), the above equation simplifies to T = √(3π / (Gρ)) (since r now nearly equals a). Thus the orbital period in low orbit depends only on the density of the central body, regardless of its size. So, for the Earth as the central body (or any other spherically symmetric body with the same mean density, about 5,515 kg/m3, e.g. Mercury with 5,427 kg/m3 and Venus with 5,243 kg/m3) we get T ≈ 1.41 hours, and for a body made of water (ρ ≈ 1,000 kg/m3), or bodies with a similar density, e.g. Saturn's moons Iapetus with 1,088 kg/m3 and Tethys with 984 kg/m3, we get T ≈ 3.30 hours. Thus, as an alternative for using a very small number like G, the strength of universal gravity can be described using some reference material, such as water: the orbital period for an orbit just above the surface of a spherical body of water is 3 hours and 18 minutes. Conversely, this can be used as a kind of "universal" unit of time if we have a unit of density.[citation needed][original research?] Two bodies orbiting each other In celestial mechanics, when both orbiting bodies' masses have to be taken into account, the orbital period T can be calculated as T = 2π√(a³ / (G(M₁ + M₂))), where: a is the sum of the semi-major axes of the two bodies' orbits about their common barycenter (equivalently, the semi-major axis of one body's orbit relative to the other), and M₁ and M₂ are the masses of the two bodies. In a parabolic or hyperbolic trajectory, the motion is not periodic, and the duration of the full trajectory is infinite. 
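A minimal Python sketch of the relations above (the function names are illustrative and the constants approximate) reproduces the two worked figures in this section: the roughly 1.08 m orbital radius for a 24-hour period around a 100 kg mass, and the roughly 3 h 18 min low orbit around a body of water-like density.

# Minimal sketch of Kepler's third law and the low-orbit density formula above.
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2 (approximate)

def orbital_period(a: float, M: float) -> float:
    """T = 2*pi*sqrt(a^3 / (G*M)); a in metres, M in kg, result in seconds."""
    return 2 * math.pi * math.sqrt(a**3 / (G * M))

def semi_major_axis(T: float, M: float) -> float:
    """Inverse relation: a = (G*M*T^2 / (4*pi^2))^(1/3), in metres."""
    return (G * M * T**2 / (4 * math.pi**2)) ** (1 / 3)

def low_orbit_period(rho: float) -> float:
    """Period just above the surface of a uniform sphere: T = sqrt(3*pi / (G*rho))."""
    return math.sqrt(3 * math.pi / (G * rho))

# A 24-hour orbit around a 100 kg mass requires an orbital radius of about 1.08 m.
print(semi_major_axis(24 * 3600, 100))   # ~1.08 (metres)

# A low orbit around a body with the density of water takes about 3.3 hours.
print(low_orbit_period(1000) / 3600)     # ~3.30 (hours), i.e. about 3 h 18 min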
Related periods For celestial objects in general, the orbital period typically refers to the sidereal period, determined by a 360° revolution of one body around its primary relative to the fixed stars projected in the sky. For the case of the Earth orbiting around the Sun, this period is referred to as the sidereal year. This is the orbital period in an inertial (non-rotating) frame of reference. Orbital periods can be defined in several ways. The tropical period is measured with respect to the apparent position of the parent star (for Earth, the cycle of the seasons). It is the basis for the solar year, and respectively the calendar year. The synodic period refers not to the orbital relation to the parent star, but to other celestial objects, making it not merely a different approach to the orbit of an object around its parent, but a period of orbital relations with other objects, normally Earth, and their orbits around the Sun. It applies to the elapsed time where planets return to the same kind of phenomenon or location, such as when any planet returns between its consecutive observed conjunctions with or oppositions to the Sun. For example, Jupiter has a synodic period of 398.8 days from Earth; thus, Jupiter's opposition occurs once roughly every 13 months. There are many periods related to the orbits of objects, each of which is often used in various fields of astronomy and astrophysics; in particular, they must not be confused with other revolving periods such as rotational periods. Examples of common orbital periods include the sidereal, synodic, draconic, anomalistic, and tropical periods. Periods can also be defined under different specific astronomical definitions that are mostly caused by the small complex external gravitational influences of other celestial objects. Such variations also include the true placement of the centre of gravity between two astronomical bodies (barycenter), perturbations by other planets or bodies, orbital resonance, general relativity, etc. Most are investigated by detailed complex astronomical theories using celestial mechanics and precise positional observations of celestial objects via astrometry. One of the observable characteristics of two bodies which orbit a third body in different orbits, and thus have different orbital periods, is their synodic period, which is the time between conjunctions. If the orbital periods of the two bodies around the third are called T1 and T2, so that T1 < T2, their synodic period is given by 1/Tsyn = 1/T1 − 1/T2. Examples of sidereal and synodic periods Table of synodic periods in the Solar System, relative to Earth:[citation needed] In the case of a planet's moon, the synodic period usually means the Sun-synodic period, namely, the time it takes the moon to complete its illumination phases, completing the solar phases for an astronomer on the planet's surface. The Earth's motion does not determine this value for other planets because an Earth observer is not orbited by the moons in question. 
For example, Deimos's synodic period is 1.2648 days, 0.18% longer than Deimos's sidereal period of 1.2624 d.[citation needed] The concept of synodic period applies not just to the Earth but to other planets as well;[citation needed] the computation of synodic periods applies the same formula as above.[citation needed] The following table lists the synodic periods of some planets relative to each other:[original research?][citation needed] See also Notes Bibliography External links |
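A small sketch of the synodic relation 1/Tsyn = 1/T1 − 1/T2 given above (the function name is illustrative and the sidereal periods are approximate) reproduces the roughly 398.9-day Earth–Jupiter synodic period quoted earlier in this entry.

# Minimal sketch of the synodic-period relation above: 1/T_syn = 1/T1 - 1/T2.

def synodic_period(t1: float, t2: float) -> float:
    """Synodic period of two bodies with orbital periods t1 < t2 (same units)."""
    return 1.0 / (1.0 / t1 - 1.0 / t2)

# Approximate sidereal orbital periods in days.
EARTH = 365.256
JUPITER = 4332.59

print(synodic_period(EARTH, JUPITER))  # ~398.9 days, matching the figure quoted above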
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/File:Dall-e_3_(jan_%2724)_artificial_intelligence_icon.png] | [TOKENS: 284] |
File:Dall-e 3 (jan '24) artificial intelligence icon.png |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/IAU_designated_constellations_by_solid_angle] | [TOKENS: 233] |
Contents IAU designated constellations by solid angle The International Astronomical Union (IAU) designates 88 constellations of stars. In the table below, they are ranked by the solid angle that they subtend in the sky, measured in square degrees and millisteradians. These solid angles depend on arbitrary boundaries between the constellations: the list below is based on constellation boundaries drawn up by Eugène Delporte in 1930 on behalf of the IAU and published in Délimitation scientifique des constellations (Cambridge University Press). Before Delporte's work, there was no standard list of the boundaries of each constellation. Delporte drew the boundaries along vertical and horizontal lines of right ascension and declination; however, he did so for the epoch B1875.0, which means that due to precession of the equinoxes, the borders on a modern star map (e.g., for epoch J2000) are already somewhat skewed and no longer perfectly vertical or horizontal. This skew will increase over the centuries to come. However, this does not change the solid angle of any constellation. See also Sources |
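Because the table's values are quoted both in square degrees and in millisteradians, the conversion is simply the square of the degrees-to-radians factor. The sketch below is illustrative; the roughly 1,303-square-degree figure used for Hydra, the largest constellation, is approximate.

# Illustrative conversion between the two solid-angle units used in the table:
# 1 square degree = (pi/180)^2 steradian ≈ 0.3046 millisteradian.
import math

SQ_DEG_TO_MSR = (math.pi / 180) ** 2 * 1000  # ≈ 0.30462 msr per square degree

def sq_deg_to_msr(area_sq_deg: float) -> float:
    """Convert an area on the celestial sphere from square degrees to millisteradians."""
    return area_sq_deg * SQ_DEG_TO_MSR

# The whole sky is 4*pi sr ≈ 41,252.96 square degrees.
print(sq_deg_to_msr(41252.96))  # ≈ 12566.4 msr (= 4*pi steradians)
print(sq_deg_to_msr(1303))      # ≈ 396.9 msr (roughly the area of Hydra)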
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Korean_birthday_celebrations] | [TOKENS: 1948] |
Contents Korean birthday celebrations Korean birthday celebrations are one of the important facets of Korean culture. When a person reaches an important age in his or her life, Koreans have unique celebrations to mark these milestones. Dol means it has been 365 days since the baby's birth. Dol (돌) Dol (doljanchi, or tol) is probably one of the best-known of the Korean birthday celebrations. Dol is celebrated for the first year of a child. The first part of the dol celebration is prayer. Traditionally, Koreans would pray to two of the many Korean gods: Sansin (the mountain god) and Samsin (the birth goddess). Koreans would prepare the praying table with specific foods: a bowl of steamed white rice, seaweed soup (miyeok-guk) and a bowl of pure water. Layered red bean rice cakes (samsin siru) were placed next to the prayer table. The rice cakes were not shared outside the family; it was believed that sharing this particular item with people outside the family would bring bad luck to the child. After everything on the praying table was ready the mother (or grandmother) of the child would pray to Sansin and Samsin, placing her hands together and rubbing her palms. She would ask for her child's longevity, wish luck to the mountain god, and give thanks to the birth goddess. After she finished her prayer, she bowed to Samsin several times. Women were the only ones allowed to participate in this ceremony; men were forbidden to be part of the praying. When the praying ceremony commenced depended on the region. People from Seoul would pray in the early morning of the child's birthday; other regions prayed the night before. Today this part of the celebrations is usually skipped, because Muism (the religion that worshiped the Korean gods) is rarely practiced. Before the main part of the celebration, the baby is dressed in very colorful, ornate clothing called dol-bok. The dol-bok that the child wears differs according to the child's sex. A boy would traditionally wear a pink or striped jeogori (jacket) with purple or gray baji (pants), a striped durumagi (long jacket), a blue vest printed with a gold or silver pattern or a striped magoja (jacket), a jeonbok (long blue vest) with a gold or silver pattern, a bokgeon (black hat with long tail), and tarae-beoseon (traditional socks). A girl would wear a striped jeogori, a long red chima (skirt), a gold-and-silver printed jobawi (hat) and tarae-beoseon. In addition to their dol-bok, boys and girls would wear a long dol-tti (belt that wraps around the body twice) for longevity and a dol-jumeoni (pouch) for luck. The dol-jumeoni would be made of fine silk, with a thread to open and close it. Buttons were not used in the dol-bok, to symbolize longevity. The doljabi is the main celebration of dol. A large table is prepared with over a dozen different types of rice cakes or tteok (the main food). Some types of tteok are baekseolgi (white steamed rice cakes), susu-gyeongdan (rice cakes coated with rough red bean powder), mujigae-tteok (rainbow-colored steamed rice cakes) and gyepi-tteok (puffed-air rice cakes). Along with the tteok, fruit is also served; the fruit on the table varies, depending on the season. There is also a bowl of rice and various other foods placed on the table. Food is not the only thing on the table, however; there is also a large spool of thread, a brush, a Korean calligraphy set, a pencil, a book, money (10,000-won bills) and a bow and arrow (or a needle, ruler and pair of scissors for girls). 
After the table is set, the parents sit the baby on a traditional Korean mattress (boryo) and Korean cushions (bangseok). This is done so that relatives can get better pictures of the infant. There is also a traditional screen in the background. The doljabi then begins. The baby picks up various items on the table that attract him or her. The items that the child picks up are said to predict the child's future. If the child picks up the thread, the child will have a long life. A child who picks up the pencil, book or calligraphy set is forecast to be a good scholar. A child who picks the rice, rice cakes, or money will become rich; some say that choosing the rice (or a rice cake) means the child is unintelligent, or that they will never be hungry. If the ruler, pair of scissors or needle is chosen, it is said that the child will be dexterous. If the child chooses the knife, they will become a good chef. In the modern era, people often prepare modern objects such as an airplane, ice skates, a microphone, a stethoscope or a computer mouse, to symbolize current successful occupations. In the past, families would use items they had in their household, but in modern times people purchase either a modern or traditional doljabi set from Korean stores that specialize in Korean traditions. Seire (세이레) The baby's well-being is celebrated 21 days after the birth with a meal of white rice, miyeok-guk (seaweed soup), and baekseolgi (white rice cake tteok). The baekseolgi symbolizes sacredness. At this point the baby and mother are still recovering from the birth, so visitors are generally not allowed to see them. However, close family members may visit on this day to pray for the healthy recovery of the baby's mother. Baek Il (백일) Another birthday celebration is baegil (100th-day celebration). The 100th day celebration originates from a time in pre-modern Korea when infant mortality was high and families waited until a child's 100th day of life to celebrate their birth. Making it past the 100th day of life was an indicator that a child would live until at least their first birthday. Modern celebrations are a time to congratulate the parents and family on the birth of their child. Typically celebrations include special food items, especially rice cakes, and are an opportunity for numerous family photos to be taken with the infant. During this celebration, the family worships Samsin. They make her offerings of rice and soup for having cared for the infant and the mother, and for having helped them live through a difficult period. They give thanks to Samsin and also pray for jae-ak (wealth), longevity, and chobok (traditional word for "luck"). After the prayer the family, relatives and friends celebrate with rice cakes, wine, and other delicacies such as red and black bean cakes sweetened with sugar or honey. In order to protect the child, red bean rice cakes are placed at the four compass points of the house. This not only brought protection, but was also believed to bring good fortune and happiness. It is widely believed that if 100 people share the rice cakes the child will live a long life, so the family would also send rice cakes to neighbors and others. Those who receive rice cakes return the dishes with lengths of thread (expressing the hope for longevity), rice and money (symbolizing future wealth). Hwangap (환갑) When a person turns 60, a celebration known as hwangap is held. This is considered an auspicious year, since at age 60 the cycle of the Korean zodiac is complete. 
Each person is born under one of the twelve zodiac animals. It takes 60 years for the zodiac animal and the element under which one is born to align. Another reason that hwangap is so important is that many years ago (before the advent of modern medicine), it was uncommon for a person to live 60 years. There is a celebration; children honor their parents with a feast and merrymaking. Part of the celebration involves the children of the birthday celebrant; starting with the eldest, they bow and offer wine to their parents. After the children pay their respects to their parents, the grandchildren pay their respects in the same way, again starting with the eldest. While these rituals are being carried out, traditional music is played and professional entertainers sing songs, encouraging people to drink. In order to make the recipient of the hwangap feel young, adults and teens dress in children's clothing. They also sing children's songs and dance children's dances. Coming-of-age rites A less well-known birthday celebration is when a boy or girl reaches adult age (20 for boys and 15 for girls). When a boy became an adult, he would tie his hair into a topknot and be given a gat (traditional cylindrical Korean hat made of horsehair). He would be required to lift a heavy rock as a test of his strength. If he could lift and move the rock, he was considered a man. A girl would become an adult when she married and showed her non-single status by rolling her braided hair into a chignon bun and fixing it with a binyeo, a long ornamental hairpin. See also References |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Social_structure] | [TOKENS: 2078] |
Contents Social structure In the social sciences, social structure is the aggregate of patterned social arrangements in society that are both emergent from and determinant of the actions of individuals. Likewise, society is believed to be grouped into structurally related groups or sets of roles, with different functions, meanings, or purposes. Examples of social structure include family, religion, law, economy, and class. It contrasts with "social system", which refers to the parent structure in which these various structures are embedded. Thus, social structures significantly influence larger systems, such as economic systems, legal systems, political systems, cultural systems, etc. Social structure can also be said to be the framework upon which a society is established. It determines the norms and patterns of relations between the various institutions of the society. Since the 1920s, the term has been in general use in social science, especially as a variable whose sub-components needed to be distinguished in relationship to other sociological variables, as well as in academic literature, as a result of the rising influence of structuralism. The concept of "social stratification", for instance, uses the idea of social structure to explain that most societies are separated into different strata (levels), guided (if only partially) by the underlying structures in the social system. There are three conditions for a social class to be stable: class cohesiveness, the self-consciousness of classes, and awareness of one's own class. It is also important in the modern study of organizations, as an organization's structure may determine its flexibility, capacity to change, and success. In this sense, structure is an important issue for management. On the macro scale, social structure pertains to the system of socioeconomic stratification (most notably the class structure), social institutions, or other patterned relations between large social groups. On the meso scale, it concerns the structure of social networks between individuals or organizations. On the micro scale, "social structure" includes the ways in which 'norms' shape the behavior of individuals within the social system. These scales are not always kept separate. Social norms are the shared standards of acceptable behavior by a group. When norms are internalized, they take on a taken-for-granted quality and are difficult to alter on the individual and societal levels. History The early study of social structures has considerably informed the study of institutions, culture and agency, social interaction, and history. Alexis de Tocqueville was supposedly the first to use the term "social structure". Later, Karl Marx, Herbert Spencer, Ferdinand Tönnies, Émile Durkheim, and Max Weber would all contribute to structural concepts in sociology. The latter, for example, investigated and analyzed the institutions of modern society: market, bureaucracy (private enterprise and public administration), and politics (e.g. democracy). One of the earliest and most comprehensive accounts of social structure was provided by Karl Marx, who related political, cultural, and religious life to the mode of production (an underlying economic structure). Marx argued that the economic base substantially determined the cultural and political superstructure of a society. 
Subsequent Marxist accounts, such as that of Louis Althusser, proposed a more complex relationship that asserted the relative autonomy of cultural and political institutions, and a general determination by economic factors only "in the last instance." In 1905, German sociologist Ferdinand Tönnies published his study The Present Problems of Social Structure, in which he argues that only the constitution of a multitude into a unity creates a "social structure", basing his approach on his concept of social will. Émile Durkheim, drawing on the analogies between biological and social systems popularized by Herbert Spencer and others, introduced the idea that diverse social institutions and practices played a role in assuring the functional integration of society through assimilation of diverse parts into a unified and self-reproducing whole. In this context, Durkheim distinguished two forms of structural relationship: mechanical solidarity and organic solidarity. The former describes structures that unite similar parts through a shared culture, while the latter describes differentiated parts united through social exchange and material interdependence. As did Marx and Weber, Georg Simmel, more generally, developed a wide-ranging approach that provided observations and insights into domination and subordination; competition; division of labor; formation of parties; representation; inner solidarity and external exclusiveness; and many similar features of the state, religious communities, economic associations, art schools, and of family and kinship networks. However diverse the interests that give rise to these associations, the forms in which interests are realized may yet be identical. The notion of social structure was extensively developed in the 20th century with key contributions from structuralist perspectives drawing on theories of Claude Lévi-Strauss, as well as feminist, marxist, functionalist (e.g. those developed by Talcott Parsons and followers), and a variety of other analytic perspectives. Some follow Marx in trying to identify the basic dimensions of society that explain the other dimensions, most emphasizing either economic production or political power. Others follow Lévi-Strauss in seeking logical order in cultural structures. Still others, notably Peter Blau, follow Simmel in attempting to base a formal theory of social structure on numerical patterns in relationships—analyzing, for example, the ways in which factors like group size shape intergroup relations. The notion of social structure is intimately related to a variety of central topics in social science, including the relation of structure and agency. The most influential attempts to combine the concept of social structure with agency are Anthony Giddens' theory of structuration and Pierre Bourdieu's practice theory. Giddens emphasizes the duality of structure and agency, in the sense that structures and agency cannot be conceived apart from one another. This permits him to argue that structures are neither independent of actors nor determining of their behavior, but rather sets of rules and competencies on which actors draw, and which, in the aggregate, they reproduce. Giddens's analysis, in this respect, closely parallels Jacques Derrida's deconstruction of the binaries that underlie classic sociological and anthropological reasoning (notably the universalizing tendencies of Lévi-Strauss's structuralism). 
Bourdieu's practice theory also seeks a more subtle account of social structure as embedded in, rather than determinative of, individual behavior. Other recent work by Margaret Archer (morphogenesis theory), Tom R. Burns and Helena Flam (actor-system dynamics theory and social rule system theory), and Immanuel Wallerstein (World Systems Theory) provide elaborations and applications of the sociological classics in structural sociology. Definitions and concepts As noted above, social structure has been conceptualized as: Furthermore, Lopez and Scott (2000) distinguish between two types of structure: Social structure can also be divided into microstructure and macrostructure: Sociologists also distinguish between: Modern sociologists sometimes differentiate between three types of social structures: Social rule system theory reduces the structures of (3) to particular rule system arrangements, i.e. the types of basic structures of (1 and 2). It shares with role theory, organizational and institutional sociology, and network analysis the concern with structural properties and developments and at the same time provides detailed conceptual tools needed to generate interesting, fruitful propositions and models and analyses. Origin and development of structures Some believe that social structure develops naturally, caused by larger systemic needs (e.g. the need for labour, management, professional, and military functions), or by conflicts between groups (e.g. competition among political parties or élites and masses). Others believe that structuring is not a result of natural processes, but of social construction. Research from scholars Nicole M. Stephens and Sarah Townsend demonstrated that the cultural mismatch between institutions' ideals of independence and the interdependence common among working-class individuals can hinder workers' opportunities to succeed. In this sense, social structure may be created by the power of élites who seek to retain their power, or by economic systems that place emphasis upon competition or upon cooperation. Ethnography has contributed to understandings about social structure by revealing local practices and customs that differ from Western practices of hierarchy and economic power in its construction. Social structures can be influenced by individuals, but individuals are often influenced by agents of socialization (e.g., the workplace, family, religion, and school). The way these agents of socialization influence individualism varies on each separate member of society; however, each agent is critical in the development of self-identity. Critical implications The notion of social structure may mask systematic biases, as it involves many identifiable sub-variables (e.g. gender). Some argue that men and women who have otherwise equal qualifications receive different treatment in the workplace because of their gender, which would be termed a "social structural" bias, but other variables (such as time on the job or hours worked) might be masked. Modern social structural analysis takes this into account through multivariate analysis and other techniques, but the analytic problem of how to combine various aspects of social life into a whole remains. Development of Individualism Sociologists such as Georges Palante have written on how social structures coerce our individuality and social groups by shaping the actions, thoughts, and beliefs of every individual human being. 
Social structures are influenced only slightly by individuals, while individuals are shaped far more strongly by them, chiefly through agents of socialization such as the workplace, family, religion, and school. The influence of each agent varies from person to person, but all play a significant role in the development of self-identity, including whether people see themselves primarily as individuals or as part of a collective. Identities are constructed through the social influences encountered in daily life, and the way a person is raised to view their individuality can cap their abilities and hinder their chances of success, or become an obstacle in environments where individuality is embraced, such as colleges or friend groups. References Further reading |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Thud_(media_company)] | [TOKENS: 447] |
Contents Thud (media company) Thud was a satirical media company founded by Ben Berkley, Cole Bolton, and Elon Musk in 2017. The company launched satirical websites and products. After Musk pulled his funding, the company went defunct in May 2019. Background In 2014, Elon Musk expressed interest in purchasing the satirical news site The Onion; however, the purchase did not make it past preliminary negotiations. In 2017, two former Onion editors who had left the site due to creative differences, Ben Berkley and Cole Bolton, were offered US$2,000,000 in funding by Musk to start a satirical media company focused on real-world events. Berkley and Bolton began building out the company by hiring several other Onion writers and editors. In March 2018, Musk formally announced the venture by tweeting "Thud!" followed by "That's the name of my new intergalactic media empire, exclamation point optional." Musk claimed that the name was chosen because "It’s the sound something thick and dull makes when it hits the ground." Sometime around the end of 2018, Musk sold the company to Berkley and Bolton, citing concerns that Thud might satirize his own companies. While the two were able to launch several satirical projects over the course of six months, they were unable to find new investors. Berkley and Bolton spent the remaining funds from Musk's investment on web hosting and the company shut down in May 2019. Reception Thud's public reception was mixed. Upon its public launch in 2019, a review in Vulture praised the fact that Thud credited its contributors and the darker outlook its satire took, but criticized some projects as playing into outdated tropes. Several outlets praised "DNA Friend," a project satirizing at-home DNA testing. After the company shut down, Bolton told The Verge that the investors they reached out to generally did not understand what Thud was satirizing. In a 2021 retrospective in Mic, Amanda Silberling criticized Thud's projects, describing them as failing "to live up to the hype of an Elon Musk-funded ex-Onion powerhouse." Projects References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Ada_(programming_language)] | [TOKENS: 4259] |
Contents Ada (programming language) Ada is a structured, statically typed, imperative, and object-oriented high-level programming language, inspired by Pascal and other languages. It has built-in language support for design by contract (DbC), extremely strong typing, explicit concurrency, tasks, synchronous message passing, protected objects, and non-determinism. Ada improves code safety and maintainability by using the compiler to find errors in favor of runtime errors. Ada is an international technical standard, jointly defined by the International Organization for Standardization (ISO), and the International Electrotechnical Commission (IEC). As of May 2023[update], the standard, ISO/IEC 8652:2023, is called Ada 2022 informally. Ada was originally designed by a team led by French computer scientist Jean Ichbiah of Honeywell under contract to the United States Department of Defense (DoD) from 1977 to 1983 to supersede over 450 programming languages then used by the DoD. Ada was named after Ada Lovelace (1815–1852), who has been credited as the first computer programmer. Features Ada was originally designed for embedded and real-time systems. The Ada 95 revision, designed by S. Tucker Taft of Intermetrics between 1992 and 1995, improved support for systems, numerical, financial, and object-oriented programming (OOP). Features of Ada include strong typing, modular programming mechanisms (packages), run-time checking, parallel processing (tasks, synchronous message passing, protected objects, and nondeterministic select statements), exception handling, and generics. Ada 95 added support for object-oriented programming, including dynamic dispatch. The syntax of Ada minimizes choices of ways to perform basic operations, and prefers English keywords (such as or else and and then) to symbols (such as || and &&). Ada uses the basic arithmetical operators +, -, *, and /, but avoids using other symbols. Code blocks are delimited by words such as 'declare', 'begin', and 'end', where the 'end' (in most cases) is followed by the keyword of the block that it closes (e.g., if ... end if, loop ... end loop). In the case of conditional blocks this avoids a dangling else that could pair with the wrong nested 'if'-expression in other languages such as C or Java. Ada is designed for developing very large software systems. Ada packages can be compiled separately. Ada package specifications (the package interface) can also be compiled separately without the implementation to check for consistency. This makes it possible to detect problems early during the design phase, before implementation starts. A large number of compile-time checks are supported to help avoid bugs that would not be detectable until run time in some other languages or would require explicit checks to be added to the source code. For example, the syntax requires explicitly named closing of blocks to prevent errors due to mismatched end tokens. The adherence to strong typing allows detecting many common software errors (wrong parameters, range violations, invalid references, mismatched types, etc.) either during compile time, or otherwise during run time. As concurrency is part of the language specification, the compiler can in some cases detect potential deadlocks. Compilers also commonly check for misspelled identifiers, visibility of packages, redundant declarations, etc. and can provide warnings and useful suggestions on how to fix the error. 
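To make the block-closing and keyword conventions described above concrete, here is a minimal sketch of an Ada procedure; it is not taken from the article or the standard, and the procedure and variable names are hypothetical.

with Ada.Text_IO; use Ada.Text_IO;

procedure Classify (Count : Integer) is
begin
   --  "and then" and "or else" are Ada's short-circuit forms of "and" and "or"
   if Count > 0 and then Count < 10 then
      Put_Line ("small positive");
   elsif Count = 0 or else Count < 0 then
      Put_Line ("zero or negative");
   else
      Put_Line ("ten or more");
   end if;      --  every "if" must be closed by a matching "end if"

   for I in 1 .. Count loop
      Put_Line (Integer'Image (I));
   end loop;    --  loops are closed by "end loop"
end Classify;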
Ada also supports run-time checks to protect against access to unallocated memory, buffer overflow errors, range violations, off-by-one errors, array access errors, and other detectable bugs. These checks can be disabled in the interest of runtime efficiency, but can often be compiled efficiently. It also includes facilities to help program verification. For these reasons, Ada is sometimes used in critical systems, where any anomaly might lead to very serious consequences, e.g., accidental death, injury or severe financial loss. Examples of systems where Ada is used include avionics, air traffic control, railways, banking, military and space technology. Ada's dynamic memory management is high-level and type-safe. Ada has no generic or untyped pointers, nor does it implicitly declare any pointer type. Instead, all dynamic memory allocation and deallocation must occur via explicitly declared access types. Each access type has an associated storage pool that handles the low-level details of memory management; the programmer can either use the default storage pool or define new ones (this is particularly relevant for non-uniform memory access). It is even possible to declare several different access types that all designate the same type but use different storage pools. Also, the language provides for accessibility checks, both at compile time and at run time, that ensures that an access value cannot outlive the type of the object it points to. Though the semantics of the language allow automatic garbage collection of inaccessible objects, most implementations do not support it by default, as it would cause unpredictable behaviour in real-time systems. Ada supports a limited form of region-based memory management, and in Ada, destroying a storage pool also destroys all the objects in the pool. A double dash (--), resembling an em dash, denotes comment text. Comments stop at end of line; there is intentionally no way to make a comment span multiple lines, to prevent unclosed comments from accidentally voiding whole sections of source code. Disabling a whole block of code therefore requires the prefixing of each line (or column) individually with --. While this clearly denotes disabled code by creating a column of repeated '--' down the page, it also renders the experimental dis/re-enablement of large blocks a more drawn-out process in editors without block commenting support. The semicolon (;) is a statement terminator, and the null or no-operation statement is null;. A single ; without a statement to terminate is not allowed. Unlike most ISO standards, the Ada language definition (known as the Ada Reference Manual or ARM, or sometimes the Language Reference Manual or LRM) is free content. Thus, it is a common reference for Ada programmers, not only programmers implementing Ada compilers. Apart from the reference manual, there is also an extensive rationale document which explains the language design and the use of various language constructs. This document is also widely used by programmers. When the language was revised, a new rationale document was written. One notable free software tool that is used by many Ada programmers to aid them in writing Ada source code is the GNAT Programming Studio, and GNAT which is part of the GNU Compiler Collection. Alire is a package and toolchain management tool for Ada. 
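As a rough illustration of the typed dynamic allocation and comment syntax just described, the following sketch declares a record type, an access type that can designate only that record type, and allocates one object through it; the names Node, Node_Access and Access_Demo are illustrative assumptions, not part of any standard library.

with Ada.Text_IO;

procedure Access_Demo is
   type Node is record
      Value : Integer := 0;
   end record;

   --  An access type: it designates objects of type Node only and
   --  allocates them from its (default) storage pool.
   type Node_Access is access Node;

   P : constant Node_Access := new Node'(Value => 42);
begin
   Ada.Text_IO.Put_Line (Integer'Image (P.Value));
   null;  --  the explicit no-operation statement
end Access_Demo;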
History In the 1970s the US Department of Defense (DoD) became concerned by the number of different programming languages being used for its embedded computer system projects, many of which were obsolete or hardware-dependent, and none of which supported safe modular programming. In 1975, a working group, the High Order Language Working Group (HOLWG), was formed with the intent to reduce this number by finding or creating a programming language generally suitable for the department's and the UK Ministry of Defence's requirements. After many iterations beginning with an original straw-man proposal the eventual programming language was named Ada. The total number of high-level programming languages in use for such projects fell from over 450 in 1983 to 37 by 1996. HOLWG crafted the Steelman language requirements, a series of documents stating the requirements they felt a programming language should satisfy. Many existing languages were formally reviewed, but the team concluded in 1977 that no existing language met the specifications. The requirements were created by the United States Department of Defense in The Department of Defense Common High Order Language program in 1978. The predecessors of this document were called, in order, "Strawman", "Woodenman", "Tinman" and "Ironman". The requirements focused on the needs of embedded computer applications, and emphasised reliability, maintainability, and efficiency. Notably, they included exception handling facilities, run-time checking, and parallel computing. It was concluded that no existing language met these criteria to a sufficient extent, so a contest was called to create a language that would be closer to fulfilling them. The design that won this contest became the Ada programming language. The resulting language followed the Steelman requirements closely, though not exactly. Requests for proposals for a new programming language were issued and four contractors were hired to develop their proposals under the names of Red (Intermetrics led by Benjamin Brosgol), Green (Honeywell, led by Jean Ichbiah), Blue (SofTech, led by John Goodenough) and Yellow (SRI International, led by Jay Spitzen). In April 1978, after public scrutiny, the Red and Green proposals passed to the next phase. In May 1979, the Green proposal, designed by Jean Ichbiah at Honeywell, was chosen and given the name Ada—after Augusta Ada King, Countess of Lovelace, usually known as Ada Lovelace. This proposal was influenced by the language LIS that Ichbiah and his group had developed in the 1970s. The preliminary Ada reference manual was published in ACM SIGPLAN Notices in June 1979. The Military Standard reference manual was approved on December 10, 1980 (Ada Lovelace's birthday), and given the number MIL-STD-1815 in honor of Ada Lovelace's birth year. In 1981, Tony Hoare took advantage of his Turing Award speech to criticize Ada for being overly complex and hence unreliable, but subsequently seemed to recant in the foreword he wrote for an Ada textbook. Ada attracted much attention from the programming community as a whole during its early days. Its backers and others predicted that it might become a dominant language for general purpose programming and not only defense-related work. Ichbiah publicly stated that within ten years, only two programming languages would remain: Ada and Lisp. Early Ada compilers struggled to implement the large, complex language, and both compile-time and run-time performance tended to be slow and tools primitive. 
Compiler vendors expended most of their efforts in passing the massive, language-conformance-testing, government-required Ada Compiler Validation Capability (ACVC) validation suite that was required in another novel feature of the Ada language effort. The first validated Ada implementation was the NYU Ada/Ed translator, certified on April 11, 1983. NYU Ada/Ed is implemented in the high-level set language SETL. Several commercial companies began offering Ada compilers and associated development tools, including Alsys, TeleSoft, DDC-I, Advanced Computer Techniques, Tartan Laboratories, Irvine Compiler, TLD Systems, and Verdix. Computer manufacturers who had a significant business in the defense, aerospace, or related industries, also offered Ada compilers and tools on their platforms; these included Concurrent Computer Corporation, Cray Research, Inc., Digital Equipment Corporation, Harris Computer Systems, and Siemens Nixdorf Informationssysteme AG. In 1991, the US Department of Defense began to require the use of Ada (the Ada mandate) for all software, though exceptions to this rule were often granted. The Department of Defense Ada mandate was effectively removed in 1997, as the DoD began to embrace commercial off-the-shelf (COTS) technology. Similar requirements existed in other NATO countries: Ada was required for NATO systems involving command and control and other functions, and Ada was the mandated or preferred language for defense-related applications in countries such as Sweden, Germany, and Canada. By the late 1980s and early 1990s, Ada compilers had improved in performance, but there were still barriers to fully exploiting Ada's abilities, including a tasking model that was different from what most real-time programmers were used to. Because of Ada's safety-critical support features, it is now used not only for military applications, but also in commercial projects where a software bug can have severe consequences, e.g., avionics and air traffic control, commercial rockets such as the Ariane 4 and 5, satellites and other space systems, railway transport and banking. For example, the Primary Flight Control System, the fly-by-wire system software in the Boeing 777, was written in Ada, as were the fly-by-wire systems for the aerodynamically unstable Eurofighter Typhoon, Saab Gripen, Lockheed Martin F-22 Raptor and the DFCS replacement flight control system for the Grumman F-14 Tomcat. The Canadian Automated Air Traffic System was written in 1 million lines of Ada (SLOC count). It featured advanced distributed processing, a distributed Ada database, and object-oriented design. Ada is also used in other air traffic systems, e.g., the UK's next-generation Interim Future Area Control Tools Support (iFACTS) air traffic control system is designed and implemented using SPARK Ada. It is also used in the French TVM in-cab signalling system on the TGV high-speed rail system, and the metro suburban trains in Paris, London, Hong Kong and New York City. The Ada 95 revision of the language went beyond the Steelman requirements, targeting general-purpose systems in addition to embedded ones, and adding features supporting object-oriented programming. Standardization Preliminary Ada can be found in ACM Sigplan Notices Vol 14, No 6, June 1979 Ada was first published in 1980 as an ANSI standard ANSI/MIL-STD 1815. As this very first version held many errors and inconsistencies,[a] the revised edition was published in 1983 as ANSI/MIL-STD 1815A. 
Without any further changes, it became an ISO standard in 1987. This version of the language is commonly known as Ada 83, from the date of its adoption by ANSI, but is sometimes referred to also as Ada 87, from the date of its adoption by ISO. There is also a French translation; DIN translated it into German as DIN 66268 in 1988. Ada 95, the joint ISO/IEC/ANSI standard ISO/IEC 8652:1995, was published in February 1995, making it the first ISO standard object-oriented programming language. To help with the standard revision and future acceptance, the US Air Force funded the development of the GNAT Compiler. Presently, the GNAT Compiler is part of the GNU Compiler Collection. Work has continued on improving and updating the technical content of the Ada language. A Technical Corrigendum to Ada 95 was published in October 2001, and a major Amendment, ISO/IEC 8652:1995/Amd 1:2007, was published on March 9, 2007, commonly known as Ada 2005 because work on the new standard was finished that year. At the Ada-Europe 2012 conference in Stockholm, the Ada Resource Association (ARA) and Ada-Europe announced the completion of the design of the latest version of the Ada language and the submission of the reference manual to ISO/IEC JTC 1/SC 22/WG 9 of the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) for approval. ISO/IEC 8652:2012 (see Ada 2012 RM) was published in December 2012, known as Ada 2012. A technical corrigendum, ISO/IEC 8652:2012/COR 1:2016, was published (see RM 2012 with TC 1). The Ada-based SPARK technology has been made possible by the enabling characteristics of the Ada language, including its separation of specification and implementation via packages, its support for user-defined scalar types, and its support for composite type usage without resorting to pointers. The Ada 2012 revision was especially important for SPARK, as its support for contracts as a part of the language permitted SPARK to be redesigned from the beginning towards fulfilling its goal of co-developing programs alongside their proofs of correctness. On May 2, 2023, the Ada community saw the formal approval of publication of the Ada 2022 edition of the programming language standard. Despite the names Ada 83, 95 etc., legally there is only one Ada standard, the last ISO/IEC standard: with the acceptance of a new standard version, the previous one becomes withdrawn. The other names are informal ones referencing a certain edition. Other related standards include ISO/IEC 8651-3:1988 Information processing systems—Computer graphics—Graphical Kernel System (GKS) language bindings—Part 3: Ada. Language constructs Ada is an ALGOL-like programming language featuring control structures with reserved words such as if, then, else, while, for, and so on. However, Ada also has many data structuring facilities and other abstractions which were not included in the original ALGOL 60, such as type definitions, records, pointers, and enumerations. Such constructs were in part inherited from or inspired by Pascal. A common example of the language's syntax is the "Hello, World!" program (hello.adb), which can be compiled by using the freely available open-source compiler GNAT; a representative listing and build command are sketched below.
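The listing below is a representative version of the hello.adb program referred to above; it is a sketch rather than the article's original listing, and the build command assumes the GNAT toolchain.

--  hello.adb
with Ada.Text_IO;

procedure Hello is
begin
   Ada.Text_IO.Put_Line ("Hello, World!");
end Hello;

With GNAT installed, the program can typically be built with gnatmake hello.adb and then run as ./hello (exact commands vary by platform and toolchain).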
Ada's type system is not based on a set of predefined primitive types but allows users to declare their own types. This declaration in turn is not based on the internal representation of the type but on describing the goal which should be achieved. This allows the compiler to determine a suitable memory size for the type, and to check for violations of the type definition at compile time and run time (i.e., range violations, buffer overruns, type consistency, etc.). Ada supports numerical types defined by a range, modulo types, aggregate types (records and arrays), and enumeration types. Access types define a reference to an instance of a specified type; untyped pointers are not permitted. Special types provided by the language are task types and protected types. For example, a date might be represented as a record whose components have distinct types such as Day_type, Month_type and Year_type (declarations of this kind are sketched below). Because Day_type, Month_type, Year_type and Hours are incompatible types, an expression that mixes them is illegal: the predefined plus operator can only add values of the same type. Types can be refined by declaring subtypes. Types can have modifiers such as limited, abstract, private etc. Private types do not show their inner structure; objects of limited types cannot be copied. Ada 95 adds further features for object-oriented extension of types. Ada is a structured programming language, meaning that the flow of control is structured into standard statements. All standard constructs and deep-level early exits are supported, so the use of the (also supported) "go to" statement is seldom needed. Among the parts of an Ada program are packages, procedures and functions. Functions differ from procedures in that they must return a value. Function calls cannot be used "as a statement", and their result must be assigned to a variable. However, since Ada 2012, functions are not required to be pure and may mutate their suitably declared parameters or the global state. A typical example consists of a package specification (example.ads) and a package body (example.adb); such a program can be compiled, e.g., by using the freely available open-source compiler GNAT (a sketch of the two files and the build command is given below). Packages, procedures and functions can nest to any depth, and each can also be the logical outermost block. Each package, procedure or function can have its own declarations of constants, types, variables, and other procedures, functions and packages, which can be declared in any order. A pragma is a compiler directive that conveys information to the compiler to allow specific manipulation of the compiled output. Certain pragmas are built into the language, while others are implementation-specific. Common uses of compiler pragmas include disabling certain features, such as run-time type checking or array subscript boundary checking, or instructing the compiler to insert object code in place of a function call (as C/C++ does with inline functions). Ada has had generics since it was first designed in 1977–1980. The standard library uses generics to provide many services. Ada 2005 adds a comprehensive generic container library to the standard library, which was inspired by C++'s Standard Template Library. A generic unit is a package or a subprogram that takes one or more generic formal parameters. A generic formal parameter is a value, a variable, a constant, a type, a subprogram, or even an instance of another, designated, generic unit. For generic formal types, the syntax distinguishes between discrete, floating-point, fixed-point, access (pointer) types, etc. Some formal parameters can have default values. To instantiate a generic unit, the programmer passes actual parameters for each formal. The generic instance then behaves just like any other unit. It is possible to instantiate generic units at run time, for example, inside a loop.
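The date declarations described above might look roughly like the following sketch; the type names, ranges and the subtype are illustrative assumptions rather than the article's original listing.

package Dates is
   type Day_type   is range 1 .. 31;
   type Month_type is range 1 .. 12;
   type Year_type  is range 1800 .. 2100;
   type Hours      is mod 24;   --  a modular (wrap-around) type

   type Date is record
      Day   : Day_type;
      Month : Month_type;
      Year  : Year_type;
   end record;

   --  Illegal: "+" is predefined only between values of the same type,
   --  so an expression mixing Day_type and Month_type, such as
   --  Day_type'(1) + Month_type'(2), is rejected at compile time.

   subtype Working_Hours is Hours range 0 .. 11;   --  a constrained subtype of Hours
end Dates;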
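A minimal sketch of the separately compiled specification and body mentioned above (example.ads and example.adb), together with a small client; the package contents (a Double function) are hypothetical and chosen only to show the structure.

--  example.ads : the package specification (the interface)
package Example is
   function Double (X : Integer) return Integer;
end Example;

--  example.adb : the package body (the implementation)
package body Example is
   function Double (X : Integer) return Integer is
   begin
      return 2 * X;
   end Double;
end Example;

--  main.adb : a client of the package
with Ada.Text_IO, Example;

procedure Main is
begin
   Ada.Text_IO.Put_Line (Integer'Image (Example.Double (21)));
end Main;

With each unit in its own file, a GNAT command such as gnatmake main.adb would compile the specification, the body and the client together; the specification alone can also be compiled first to check the interface before the body exists.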
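Finally, a sketch of a generic unit and its instantiation, consistent with the description above; the generic function Minimum, its formal parameters and the file names are illustrative assumptions, not standard-library entities.

--  minimum.ads : a generic function specification
generic
   type Element is private;                             --  generic formal type
   with function "<" (L, R : Element) return Boolean;   --  generic formal subprogram
function Minimum (A, B : Element) return Element;

--  minimum.adb : the corresponding body
function Minimum (A, B : Element) return Element is
begin
   if A < B then
      return A;
   end if;
   return B;
end Minimum;

--  use_minimum.adb : instantiating the generic and calling the instance
with Minimum;

procedure Use_Minimum is
   function Int_Min is new Minimum (Element => Integer, "<" => "<");
   Smallest : constant Integer := Int_Min (3, 7);   --  yields 3
begin
   null;
end Use_Minimum;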
See also Notes References These documents have been published in various forms, including print. Further reading External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Windows_Central] | [TOKENS: 1063] |
Contents Future plc Future plc is a British publishing company founded in 1985 by Chris Anderson. It is listed on the London Stock Exchange and is a constituent of the FTSE 250 Index. History The company was founded by Chris Anderson as Future Publishing in Somerton, Somerset, England, with the sole magazine Amstrad Action in 1985. An early innovation was the inclusion of free software on magazine covers. It acquired GP Publications and established what would become Future US in 1994. Anderson sold the company to Pearson plc for £52.7m in 1994, but bought it back in 1998, for £142 million. The company was floated on the London Stock Exchange in 1999. Anderson left the company in 2001. In 2004, the company was accused of corruption when it published positive reviews for the video game Driver 3 in two of its owned magazines, Xbox World and PSM2. Future published the official magazines for the consoles of all three major games console manufacturers (Microsoft, Nintendo, and Sony); however PlayStation: The Official Magazine ceased publishing in November 2012, and Official Nintendo Magazine ceased publishing in October 2014. The chief executive and finance director both resigned at short notice after a profit warning in October 2011. It was noted that a re-structuring would be necessary as the company moved to a digital model. Future announced it would cut 55 jobs from its UK operation as part of a restructuring to adapt "more effectively to the company's rapid transition to a primarily digital business model." The company announced in March 2014 that it would close all of its U.S.-based print publications and shift U.S. print support functions such as consumer marketing, production and editorial leadership for Future's international print brands to the UK. Later in 2014, Future sold its sport and craft titles to Immediate Media, and its auto titles to Kelsey Media. In April 2014, Zillah Byng-Thorne (then finance director) was appointed chief executive to replace Mark Wood, who had been in the position since 2011. In 2018, Future made further major acquisitions. It bought the What Hi-Fi?, FourFourTwo, Practical Caravan, and Practical Motorhome brands from Haymarket; and it acquired NewBay Media, publisher of numerous broadcast, professional-video, and systems-integration trade titles, as well as several consumer music magazines. This acquisition returned most of the U.S. consumer music magazines to Future, with the exception of Revolver which had been sold to Project M Group in 2017. It bought the Purch Group for $132m by September 2018, and in February 2019 bought Mobile Nations including the titles Android Central, iMore, Windows Central and Thrifter for $115 million. Future also acquired Procycling and Cyclingnews.com from Immediate Media. In July 2019 the company bought SmartBrief, a digital media publisher, for an initial sum of $45 million. In November 2019, the company bought Barcroft Studios for £23.5 million in a combination of cash and shares. It renamed it Future Studios and announced the launch of "Future Originals", an anthology gaming series, a "factual" series focusing on the paranormal, and a new true-crime show, in partnership with Marie Claire. In April 2020, it acquired TI Media with 41 brands for £140 million. In November, it agreed to a £594m takeover of GoCo plc, known for its Gocompare.com price-comparison website. In August 2021, it acquired Dennis Publishing and its 12 magazines, for £300 million. 
The company was criticised in February 2022 for the size of the remuneration package being offered to Zillah Byng-Thorne, the chief executive. It was noted that she could receive £40 million if the company performed well. Byng-Thorne resigned with effect from 3 April 2023 and was replaced as chief executive by Jon Steinberg. In April 2023, the company sold its shooting magazines, including Shooting Times and Sporting Gun, to Fieldsports Press. In August 2024, the company announced that its American trade papers Broadcasting & Cable and Multichannel News would be closing after more than 90 years, with the main title Broadcasting having been first published in 1931 and the merged title Multichannel News dating from 1980. In October 2024, the company closed a number of consumer titles in the United Kingdom, including Play, All About Space, Total 911, and 3D World, with the monthly movie magazine Total Film ceasing publication after 27 years. Kevin Li Ying took over the position of CEO on 31 March 2025. Organisation In addition to media and magazines, the company has two other businesses. Brands Future's portfolio of brands includes TechRadar, PC Gamer, Tom's Guide, Tom's Hardware, Marie Claire, GamesRadar+, MusicRadar, How it Works, Digital Camera World, Creative Bloq, CinemaBlend, Android Central, IT Pro, BikePerfect, Truly, Windows Central, Chat, and the website GoodToKnow.co.uk. References External links
======================================== |
[SOURCE: https://techcrunch.com/2026/02/20/threads-posts-can-now-be-shared-directly-to-your-instagram-story-without-leaving-the-app/] | [TOKENS: 744] |
Threads posts can now be shared directly to your Instagram Story without leaving the app Threads already has over 400 million monthly users, but Meta wants to push that number even higher. With the launch of a new Threads feature this week, the company is making it easier for Threads users to share posts on the app to their Instagram Stories — a move that could capitalize on Instagram’s larger user base to bring more people to Meta’s X competitor. The company announced on Thursday a new feature that lets you share a Threads post to your Instagram Story without having to leave the Threads app, instead previewing how the post would look on your Story directly within the Threads app. The app previously allowed you to share anyone’s Threads post to your Instagram Story similar to how you would reshare someone’s Instagram post to your Story. It also already offered tools for sharing posts to your Instagram Feed or DMs. Meta’s text-first, Twitter-like app Threads first launched in July 2023 and benefited from its ties with Instagram to rapidly grow its initial user base. To sign up, users had to authenticate with their Instagram credentials, which allowed Threads to populate with account details, like username, bio, and photo, as well as verification status and followers. With one tap, users could immediately follow the accounts they already followed on Instagram — and those not on Threads would get a notification that someone had added them. In the months and years following its launch, Meta has heavily leaned on its other, bigger social platforms to continue to grow Threads, including by displaying popular Threads posts on Facebook and adding a similar carousel of Threads posts to Instagram users. The company also made it easy for users to cross-post from Instagram and Facebook to Threads, which also helped boost adoption. These moves have paid off. Data from market intelligence provider Similarweb last month indicated that Threads is now seeing more daily usage than Elon Musk’s X on mobile devices. (X still dominates on the web, however.) Threads’ numbers overall have been steadily growing as well, doubling usage from 200 million monthly active users in August 2024 to 400 million monthly users as of August 2025. The company announced in October that Threads reached 150 million daily active users, as well.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Great_Seal_of_the_United_States#Obverse] | [TOKENS: 6971] |
Contents Great Seal of the United States The Great Seal of the United States is the seal of the United States of America. The phrase is used both for the impression device itself, which is kept by the United States secretary of state, and more generally for the impression it produces. The obverse of the Great Seal depicts the national coat of arms of the United States while the reverse features a truncated pyramid topped by an Eye of Providence. The year of the U.S. Declaration of Independence, 1776, is noted in Roman numerals at the base of the pyramid. The seal contains three Latin phrases: E Pluribus Unum ("Out of many, one"), Annuit cœptis ("He has favored our undertakings"), and Novus ordo seclorum ("A new order of the ages"). Largely designed by Charles Thomson, then secretary of the Continental Congress, and William Barton, and first used in 1782, the seal is used to authenticate certain documents issued by the federal government of the United States. Since 1935, both sides of the Great Seal have appeared on the reverse of the one-dollar bill. The coat of arms is used on official documents—including United States passports—military insignia, embassy placards, and various flags. The seal of the president is directly based on the Great Seal, and its elements are used in numerous government agency and state seals. Today's official versions from the Department of State are largely unchanged from the 1885 designs. The current rendering of the reverse was made by Teagle & Little of Norfolk, Virginia, in 1972. It is nearly identical to previous versions, which in turn were based on Lossing's 1856 version. Obverse The obverse (or front) of the seal depicts the full achievement of the national coat of arms. The 1782 resolution of Congress adopting the arms, still in force, legally blazoned the shield as: Paleways of 13 pieces, argent and gules; a chief, azure. As the designers recognized, this is a technically incorrect blazon under traditional English heraldic rules, since in English practice a vertically striped shield would be described as "paly", not "paleways", and it would not have had an odd number of stripes. A more technically proper blazon would have been argent, six pallets gules ... (six red stripes on a white field), but the phrase used was chosen to preserve the reference to the 13 original states. The escutcheon (shield) bears resemblance to the United States flag, with two exceptions in particular: The supporter of the shield is a bald eagle with its wings outstretched (or "displayed", in heraldic terms). From the eagle's perspective, it holds a bundle of 13 arrows in its left talon, and an olive branch in its right talon. Although not specified by law, the olive branch is usually depicted with 13 leaves and 13 olives. In its beak, the eagle clutches a scroll with the motto E pluribus unum ("Out of Many, One"). Over its head there appears a glory with 13 mullets (stars) on a blue field. The recurring number 13 refers to the 13 original states. The arrows and olive branch together symbolize that the United States has "a strong desire for peace, but will always be ready for war" (see Olive Branch Petition). E Pluribus Unum contains 13 letters. The eagle has its head turned towards the olive branch, on its right side, to symbolize a preference for peace. The primary official explanation of the symbolism of the great seal was given by Charles Thomson upon presenting the final design for adoption by Congress. 
He wrote: The Escutcheon is composed of the chief & pale, the two most honorable ordinaries. The Pieces, paly, represent the several states all joined in one solid compact entire, supporting a Chief, which unites the whole & represents Congress. The Motto alludes to this union. The pales in the arms are kept closely united by the chief and the Chief depends upon that union & the strength resulting from it for its support, to denote the Confederacy of the United States of America & the preservation of their union through Congress. The colours of the pales are those used in the flag of the United States of America; White signifies purity and innocence, Red, hardiness & valor, and Blue, the colour of the Chief signifies vigilance, perseverance & justice. The Olive branch and arrows denote the power of peace & war which is exclusively vested in Congress. The Constellation denotes a new State taking its place and rank among other sovereign powers. The Escutcheon is born on the breast of an American Eagle without any other supporters to denote that the United States of America ought to rely on their own Virtue. Thomson took the symbolism for the colors from Elements of Heraldry, by Antoine Pyron du Martre, which William Barton had lent him. That book said that argent (white) "signifies Purity, Innocence, Beauty, and Genteelness", gules (red) "denotes martial Prowess, Boldness, and Hardiness", and azure (blue) "signifies Justice, Perseverance, and Vigilance". A brief and official explanation of the symbolism was much later published in the form of a historical sketch, or pamphlet, entitled: The Seal of the United States: How it was Developed and Adopted. It was written by Gaillard Hunt in 1892 under the direction of then Secretary of State James G. Blaine. When the copyright on the pamphlet expired, Hunt expounded upon its information in more detail in a 1909 book entitled The History of the Seal of the United States. This work was largely based on a two-volume work written in 1897 by Charles A. L. Totten titled Our Inheritance in the Great Seal of Manasseh, the United States of America: Its History and Heraldry; and Its Signification unto the 'Great People' thus Sealed. Hunt's 1909 account details: how the seal was chosen; sketches of other suggestions which were made but not chosen, such as Franklin's suggested motto ("Rebellion to tyrants is obedience to God"); iterations and changes made to the seal; information on the illegal seal; and the symbology of the seal (such as that provided by Charles Thomson). The colors (tinctures) of the coat of arms are either reproduced directly, or represented monochromatically by means of heraldic hatching. The latter applies when the seal is affixed to paper. In the Department of State, the term "Great Seal" refers to a physical mechanism which is used by the department to affix the seal to official government documents. This mechanism includes not only the die (metal engraved with a raised inverse image of the seal), but also the counterdie (also known as a counter-seal), the press, and cabinet in which it is housed. There have been several presses used since the seal was introduced, but none of the mechanisms used from 1782 through 1904 have survived. The seal and its press were saved when Washington, D.C. was burned in 1814, though no one knows who rescued the pieces. The press in use today was made in 1903 by R. Hoe & Co's chief cabinetmaker Frederick S. Betchley in conjunction with the 1904 die, with the cabinet being made of mahogany. 
It is marked with the contracted completion date of June 15, 1903, but delays and reworking pushed final delivery into early 1904. From 1945 to 1955, the Great Seal changed quarters almost once a year. In 1955, the seal was put on public display for the first time in a central location in the department's main building. In 1961 the Seal became the focus of the new Department Exhibit Hall, where it resides today in a glass enclosure. The enclosure remains locked at all times, even during the sealing of a document. The seal can only be affixed by an officer of the Department of State, under the authority of the secretary of state. To seal a document, first a blank paper wafer is glued onto its front in a space provided for it. The document is then placed between the die and counterdie, with the wafer lined up between them. Holding the document with one hand, the weighted arm of the press is pulled with the other, driving the die down onto the wafer, impressing the seal in relief. When envelopes containing letters need to be sealed, the wafer is imprinted first and then glued to the sealed envelope. It is used approximately 2,000 to 3,000 times a year. Documents which require the seal include treaty ratifications, international agreements, appointments of ambassadors and civil officers, and communications from the President to heads of foreign governments. The seal was once required on presidential proclamations, and on some now-obsolete documents such as exequaturs and Mediterranean passports. The metallic die of the obverse side of the Great Seal is what actually embosses the design onto documents. These dies eventually wear down, requiring replacements to be made. The current die is the seventh engraving of the seal, and the actual design on the dies has evolved over time. The first die depicts a relatively crude crested eagle, thin-legged and somewhat awkward. There is no fruit on the olive branch, and the engraver added a border of acanthus leaves. Depicting an eagle with a crest is typical in heraldry, but is at odds with the official blazon of the seal which specifies a bald eagle (which have no crests). The blazon does not specify the arrangement of the stars (which were randomly placed in Thomson's sketch) nor the number of points; the engraver chose six-pointed stars (typical of U.S. heraldry), and arranged them in a larger six-pointed star. No drawing made by the engraver has ever been found, and it is not known if Thomson provided any. This first die was used until 1841, and is now on display in the National Archives in Washington, D.C. There was no die made of the reverse side of the seal (and in fact, one has never been made). The intended use was for pendant seals, which are discs of wax attached to the document by a cord or ribbon, and thus have two sides. However, the United States did not use pendant seals at the time, and there was no need for a die of the reverse. In an essay published in Harper's from 1856 Bernard Lossing alluded to a version half the size for the purpose of impressing wax and paper. More recent research has not been able to verify this claim, with no record of this seal being found (although the second seal committee of 1780 had recommended a half-size seal). These seals were transported in metallic boxes called skippets, which protected the actual wax seal from damage. The skippets themselves also were engraved with the seal design. Several skippets were made at a time, which the State Department used as needed. 
Usually skippets were made out of sterling silver, though for the Japanese treaty following Commodore Perry's mission a golden box was used (the ratification of that treaty, made later in 1854, had an even more elaborate and expensive seal and heavy gold skippet). The Masi treaty die was used until 1871, almost exclusively for treaties, at which point the U.S. government discontinued the use of pendant seals. The die is also currently on display at the National Archives. Masi's company made most of the skippets for almost twenty years, after which the State Department switched to nearly identical versions made by Samuel Lewis. At least one 1871 treaty seal was actually made using a Lewis skippet mold instead of the Masi die, meaning it too is technically an official die. The seal was 2+1⁄8 inches (5.4 cm) in diameter. In 1866, the first counter die was made, which is the same design in opposite relief. The paper was placed between the die and counter die, resulting in a sharper impression in the paper than from one die alone. The use of counterdies continues to this day. The new die was engraved by Herman Baumgarten of Washington, D.C. His version followed the 1841 die very closely, including the errors, and was the same size. The most notable differences were slightly larger stars and lettering. The workmanship on the die was relatively poor, with no impression being very clear, and it is considered the poorest of all Great Seal die. It was the one in use during the seal's centennial in 1882. Theodore F. Dwight, Chief of the Bureau of Rolls and Library of the Department of State, supervised the process. He brought in several consultants to consider design from historical, heraldic, and artistic points of view. These included Justin Winsor, a historical scholar, Charles Eliot Norton, a Harvard professor, William H. Whitmore, author of Elements of Heraldry, John Denison Chaplin, Jr., an expert on engraving and associate editor of American Cyclopædia, the sculptor Augustus Saint-Gaudens, the Unitarian minister Edward Everett Hale, and even the botanist Asa Gray to help with the olive branch. Tiffany's chief designer, James Horton Whitehouse, was the artist responsible for the actual design. On December 13, 1884, following much research and discussion among the group, Whitehouse submitted his designs. The result was a much more formal and heraldic look, completely different from previous dies, and has remained essentially unchanged since. The eagle is a great deal more robust and clutches the olive branch and arrows from behind. The 13 arrows were restored, in accordance with the original law, and the olive branch was depicted with 13 leaves and 13 olives. The clouds surrounding the constellation were made a complete circle for the first time. The resulting die was made of steel, was 3 inches (76 mm) wide, and weighed one pound six ounces. In a letter accompanying their designs, Tiffany gave their reasonings behind various elements. The eagle was made as realistic as the rules of heraldry would permit, and the scroll style was chosen to least interfere with the eagle. There were no stars in the chief (the area at the top of the shield), as is sometimes seen, as there are none specified in the blazon and thus including them would violate the rules of heraldry. 
Some had suggested allowing the rays of the sun to extend through the clouds, as appears to be specified in the original law and sometimes seen in other versions, but Whitehouse rejected that idea and kept with the traditional die representation. He also considered adding flowers to the olive branch, but decided against it, as "the unspecified number of flowers would be assumed to mean something when it would not". Tiffany also submitted a design for the reverse of the seal, but even though Congress had ordered one a die was not created. The members of the consulting group were somewhat disparaging of the design of even the obverse, but especially critical of the reverse, and suggested not making it at all. Dwight eventually agreed and did not order the die, though he said it was "not improper" that one eventually be made. To this day, there has never been an official die made of the reverse. The die was engraved by Max Zeitler of the Philadelphia firm of Baily Banks & Biddle in 1903 (and is thus sometimes called the 1903 die), but final delivery was delayed until January 1904 due to issues with the press. There were slight differences; the impressions were sharper, the feathers more pointed, and the talons have shorter joints. Also, two small heraldic errors which had persisted on all previous seal dies were fixed: the rays of the glory were drawn with dots to indicate the tincture gold, and the background of the stars was drawn with horizontal lines to indicate azure. The die was first used on January 26, 1904, and was used for 26 years. All dies made since have followed exactly the same design, and in 1986 the Bureau of Engraving and Printing made a master die from which all future dies will be made. The current die is the seventh and was made in 1986. Reverse The 1782 resolution adopting the seal blazons the image on the reverse as "A pyramid unfinished. In the zenith an eye in a triangle, surrounded by a glory, proper." The pyramid is conventionally shown as consisting of 13 courses to refer to the thirteen original states. The adopting resolution provides that it is inscribed on its base with the date MDCCLXXVI (1776, the year of the United States Declaration of Independence) in Roman numerals. Where the top of the pyramid should be, the Eye of Providence watches over it. Two mottos appear: Annuit cœptis signifies that Providence has "approved of (our) undertakings." Novus ordo seclorum, freely taken from Virgil, is Latin for "a new order of the ages." The reverse has never been cut (as a seal) but appears, for example, on the back of the one-dollar bill. The primary official explanation of the symbolism of the great seal was given by Charles Thomson upon presenting the final design for adoption by Congress. About the elements on the seal's reverse, he wrote: The pyramid signifies Strength and Duration: The Eye over it & the Motto allude to the many signal interpositions of providence in favour of the American cause. The date underneath is that of the Declaration of Independence and the words under it signify the beginning of the new American Æra, which commences from that date. Some conspiracy theories state that the Great Seal shows a sinister influence by Freemasonry in the founding of the United States. Such theories usually claim that the Eye of Providence (found, in the Seal, above the pyramid) is a common Masonic emblem, and that the Great Seal was created by Freemasons. These claims, however, misstate the facts. 
While the Eye of Providence is today a common Masonic motif, this was not the case during the 1770s and 1780s, when the Great Seal was designed and approved. According to David Barrett, a Masonic researcher, the Eye seems to have been used only sporadically by the Masons in those decades, and was not adopted as a common Masonic symbol until 1797, several years after the Great Seal of the United States had already been designed. The Eye of Providence was, on the other hand, a fairly common Christian motif throughout the Middle Ages and Renaissance, and was commonly used as such in Europe as well as America throughout the 18th century. It is still found in Catholic, Orthodox, and Protestant churches, and it symbolizes the Holy Trinity (the triangle) and God's omniscience (the eye) surrounded by rays of glory, denoting God's divinity. Furthermore, contrary to the claims of these conspiracy theories, the Great Seal was not created by Freemasons. While Benjamin Franklin was a Mason, he was the only member of any of the various Great Seal committees definitively known to be so, and his ideas were not adopted. Of the four men whose ideas were adopted, neither Charles Thomson, Pierre du Simitière nor William Barton was a Mason and, while Francis Hopkinson has been alleged to have had Masonic connections, there is no firm evidence to support the claim. Origin On July 4, 1776, the same day that independence from Great Britain was declared by the thirteen colonies, the Continental Congress named the first committee to design a Great Seal, or national emblem, for the country. Similar to other nations, the United States needed an official symbol of sovereignty to formalize and seal (or sign) international treaties and transactions. It took six years, three committees, and the contributions of fourteen men before the Congress finally accepted a design (which included elements proposed by each of the three committees) in 1782. The first committee consisted of Benjamin Franklin, Thomas Jefferson, and John Adams. While they were three of the five primary authors of the Declaration of Independence, they had little experience in heraldry and sought the help of Pierre Eugene du Simitiere, an artist living in Philadelphia who would later also design the state seals of Delaware and New Jersey and start a museum of the Revolutionary War. Each of these men proposed a design for the seal. Franklin chose an allegorical scene from Exodus, described in his notes as "Moses standing on the Shore, and extending his Hand over the Sea, thereby causing the same to overwhelm Pharaoh who is sitting in an open Chariot, a Crown on his Head and a Sword in his Hand. Rays from a Pillar of Fire in the Clouds reaching to Moses, to express that he acts by Command of the Deity." Motto, "Rebellion to Tyrants is Obedience to God." Jefferson suggested a depiction of the Children of Israel in the wilderness, led by a cloud by day and a pillar of fire by night for the front of the seal; and Hengest and Horsa, the two brothers who were the legendary leaders of the first Anglo-Saxon settlers in Britain, for the reverse side of the seal. Adams chose a painting known as the "Judgment of Hercules" where the young Hercules must choose to travel either on the flowery path of self-indulgence or the rugged, more difficult, uphill path of duty to others and honor to himself. In August 1776, du Simitière showed his design, which was more along conventional heraldic lines. 
The shield had six sections, each representing "the Countries from which these States have been peopled" (using the symbols for England, Scotland, Ireland, France, Germany, and Holland), surrounded by the initials of all thirteen states. The supporters were a female figure representing Liberty holding an anchor of hope and a spear with a cap, and on the other side an American soldier holding a rifle and tomahawk. The crest was the "Eye of Providence in a radiant Triangle whose Glory extends over the Shield and beyond the Figures", and the motto E Pluribus Unum (Out of Many, One) in a scroll at the bottom. On August 20, 1776, the committee presented their report to Congress. The committee members chose du Simitière's design, though it was changed to remove the anchor of hope and replace the soldier with Lady Justice holding a sword and a balance. Surrounding the main elements was the inscription "Seal of the United States of America MDCCLXXVI". For the reverse, Franklin's design of Moses parting the Red Sea was used. Congress was however not impressed, and on the same day ordered that the report "lie on the table", ending the work of the committee. While the designs in their entirety were not used, the E Pluribus Unum motto was chosen for the final seal, and the reverse used the Roman numeral for 1776 and the Eye of Providence. Jefferson also liked Franklin's motto so much, he ended up using it on his personal seal. The motto was almost certainly taken from the title page of Gentleman's Magazine, a monthly magazine published in London which had used it from its first edition in 1731, and was well known in the colonies. The motto alluded to the magazine being a collection of articles obtained from other newspapers, and was used in most of its editions until 1833. The motto was taken in turn from Gentleman's Journal, a similar magazine which ran briefly from 1692 to 1694. While variants turn up in other places (for example a poem often ascribed to Virgil called Moretum contains the phrase E Pluribus Unus), this is the oldest known use of the exact phrase. Another source was some of the Continental currency issued earlier in 1776; these were designed by Franklin and featured the motto We Are One surrounded by thirteen rings, each with the name of a colony. This design is echoed in the seal submitted by the first committee, and the motto was quite possibly a Latin version of this concept. The Eye of Providence had been a well-known classical symbol of the deity since at least the Renaissance, which du Simitiere was familiar with. For three and a half years no further action was taken, during which time the Continental Congress was forced out of Philadelphia before returning in 1778. On March 25, 1780, a second committee to design a great seal was formed, which consisted of James Lovell, John Morin Scott, and William Churchill Houston. Like the first committee, they sought the help of someone more experienced in heraldry, this time Francis Hopkinson, who did most of the work. Hopkinson, a signer of the Declaration of Independence, designed the American flag, and also helped design state and other government seals. He made two similar proposals, each having an obverse and reverse side, with themes of war and peace. Hopkinson's first design had a shield with thirteen diagonal red and white stripes, supported on one side by a figure bearing an olive branch and representing peace, and on the other an Indian warrior holding a bow and arrow, and holding a quiver. 
The crest was a radiant constellation of thirteen stars. The motto was Bello vel pace paratus, meaning "prepared in war or in peace". The reverse, in Hopkinson's words, was "Liberty is seated in a chair holding an olive branch and her staff is topped by a Liberty cap. The motto 'Virtute perennis' means 'Everlasting because of virtue.' The date in Roman numerals is 1776." In his second proposal, the Indian warrior was replaced by a soldier holding a sword, and the motto was shortened to Bello vel paci, meaning "For war or for peace". The committee chose the second version, and reported back to Congress on May 10, 1780, six weeks after being formed. Their final blazon, printed in Congress journals on May 17, was: "The Shield charged on the Field Azure with 13 diagonal stripes alternate rouge and argent. Supporters; dexter, a Warriour holding a Sword; sinister, a Figure representing Peace bearing an Olive Branch. The Crest; a radiant Constellation of 13 Stars. The motto, Bella vel Paci." Once again, Congress did not find the result acceptable. They referred the matter back to the committee, which did no further work on the matter. As with the first design, several elements were eventually used in the final seal; the thirteen stripes on the shield with their colors, the constellation of stars surrounded by clouds, the olive branch, and the arrows (from Hopkinson's first proposal). Hopkinson had previously used the constellation and clouds on a $40 Continental currency note he designed in 1778. The same note also used an Eye of Providence, taken from the first committee's design. The shield of the Great Seal has seven white stripes and six red ones—essentially, a white background with six red stripes. Hopkinson incorporated this stripe arrangement into the Great Seal from the Flag of the United States that he had designed. Hopkinson also designed a seal for the Admiralty (Navy), which incorporated a chevron consisting of seven red stripes and six white ones. The seven red stripes in his Admiralty seal reflected the number of red stripes in his Naval flag. When Hopkinson designed these flags, he was running the Navy as chairman of the Continental Navy Board. After two more years, Congress formed a third committee on May 4, 1782, this time consisting of John Rutledge, Arthur Middleton, and Elias Boudinot. Arthur Lee replaced Rutledge, although he was not officially appointed. As with the previous two committees, most of the work was delegated to a heraldic expert, this time 28-year-old William Barton. Barton drew a design very quickly, using a rooster on the crest, but it was much too complex. No drawing of this design seems to have survived. Barton then came up with another design, which the committee submitted back to Congress on May 9, 1782, just five days after being formed. This time, the figures on each side of the shield were the "Genius of the American Confederated Republic" represented by a maiden, and on the other side an American warrior. At the top is an eagle and on the pillar in the shield is a "Phoenix in Flames". The mottos were In Vindiciam Libertatis (In Defense of Liberty) and Virtus sola invicta (Only virtue unconquered). For the reverse, Barton used a pyramid of thirteen steps, with the radiant Eye of Providence overhead, and used the mottos Deo Favente ("With God favoring") and Perennis (Everlasting). The pyramid had come from another Continental currency note designed in 1778 by Hopkinson, this time the $50 note, which had a nearly identical pyramid and the motto Perennis. 
Barton had at first specified "on the Summit of it a Palm Tree, proper", with the explanation that "The Palm Tree, when burnt down to the very Root, naturally rises fairer than ever," but later crossed it out and replaced it with the Eye of Providence, taken from the first committee's design. Congress again took no action on the submitted design. On June 13, 1782, the Congress turned to its Secretary Charles Thomson, and provided all material submitted by the first three committees. Thomson was 53 years old, and had been a Latin master at a Philadelphia academy. Thomson took elements from all three previous committees, coming up with a new design which provided the basis for the final seal. Thomson used the eagle—this time specifying an American bald eagle—as the sole supporter on the shield. The shield had thirteen stripes, this time in a chevron pattern, and the eagle's claws held an olive branch and a bundle of thirteen arrows. For the crest, he used Hopkinson's constellation of thirteen stars. The motto was E Pluribus Unum, taken from the first committee, and was on a scroll held in the eagle's beak. An eagle holding symbols of war and peace has a long history, and also echoed the second committee's themes. Franklin owned a 1702 emblem book, which included an eagle with olive branch and arrows near its talons, which may have been a source for Thomson. The arrows also mirror those in the arms of the Dutch Republic, the only country in Europe with a representative government at the time, which depicted a lion holding seven arrows representing their seven provinces. State currency may have provided further inspiration; a 1775 South Carolina bill showed a bundle of 13 arrows and a 1775 Maryland note depicted a hand with an olive branch of 13 leaves. Finally, it has been suggested that the bundle of arrows is a reference to the symbol of the bundle of five arrows in the Iroquois Great Law of Peace, which represents strength in unity, as one arrow can be broken easily while a bundle of them cannot. For the reverse, Thomson essentially kept Barton's design, but re-added the triangle around the Eye of Providence and changed the mottos to Annuit Cœptis and Novus Ordo Seclorum. Thomson sent his designs back to Barton, who made some final alterations. The stripes on the shield were changed again, this time to "palewise" (vertical), and the eagle's wing position was changed to "displayed" (wingtips up) instead of "rising". Barton also wrote a more properly heraldic blazon. The design was submitted to Congress on June 20, 1782, and was accepted the same day. Thomson included a page of explanatory notes, but no drawing was submitted. This remains the official definition of the Great Seal today. The first brass die was cut sometime between June and September, and placed in the State House in Philadelphia. It was first used by Thomson on September 16, 1782, to verify signatures on a document which authorized George Washington to negotiate an exchange of prisoners. Charles Thomson, as the secretary of Congress, remained the keeper of the seal until the federal government was formed in 1789. On July 24, 1789, President Washington asked Thomson to deliver the seal to the Department of Foreign Affairs in the person of Roger Alden, who kept it until the Department of State was created. All subsequent secretaries of state have been responsible for applying the seal to diplomatic documents. 
On September 15, 1789, the United States Congress ordered "that the seal heretofore used by the United States in Congress assembled, shall be, and hereby is declared to be, the seal of the United States." Notable depictions The Great Seal very quickly became a popular symbol of the country. It inspired both the flag of North Dakota and that of the US Virgin Islands (adopted in 1911 and 1921, respectively). Combined with the heraldic tradition of artistic freedom so long as the particulars of the blazon are followed, a wide variety of official and unofficial emblazonments appeared, especially in the first hundred years. This is evident even in the different versions of the seal die. The quality of the 1885 design, coupled with a spirit of bureaucratic standardization that characterized that era, has driven most of these out of official use. The Great Seal symbol (or a close variant) has been used by former presidents after leaving office. As of February 2021, the Seal featured in the logo of the Office of Barack and Michelle Obama and in the logo of the Office of George W. Bush.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Annum] | [TOKENS: 5358] |
Contents Year A year is a unit of time based on how long it takes the Earth to orbit the Sun. In scientific use, the tropical year (approximately 365 solar days, 5 hours, 48 minutes, 45 seconds) and the sidereal year (about 20 minutes longer) are more exact. The modern calendar year, as reckoned according to the Gregorian calendar, approximates the tropical year by using a system of leap years. The term 'year' is also used to indicate other periods of roughly similar duration, such as the lunar year (a roughly 354-day cycle of twelve of the Moon's phases – see lunar calendar), as well as periods loosely associated with the calendar or astronomical year, such as the seasonal year, the fiscal year, the academic year, etc. Due to the Earth's axial tilt, the course of a year sees the passing of the seasons, marked by changes in weather, the hours of daylight, and, consequently, vegetation and soil fertility. In temperate and subpolar regions around the planet, four seasons are generally recognized: spring, summer, autumn, and winter. In tropical and subtropical regions, several geographical sectors do not present defined seasons; but in the seasonal tropics, the annual wet and dry seasons are recognized and tracked. By extension, the term 'year' can also be applied to the time taken for the orbit of any astronomical object around its primary – for example the Martian year of roughly 1.88 Earth years. The term can also be used in reference to any long period or cycle, such as the Great Year. Calendar year A calendar year is an approximation of the number of days of the Earth's orbital period, as counted in a given calendar. The Gregorian calendar, or modern calendar, presents its calendar year to be either a common year of 365 days or a leap year of 366 days, as do the Julian calendars. For the Gregorian calendar, the average length of the calendar year (the mean year) across the complete leap cycle of 400 years is 365.2425 days (97 out of 400 years are leap years). Abbreviation In English, the unit of time for year is commonly abbreviated as "y" or "yr". The symbol "a" (for Latin: annus, year) is sometimes used in scientific literature, though its exact duration may be inconsistent.[citation needed] Etymology English year (via West Saxon ġēar (/jɛar/), Anglian ġēr) continues Proto-Germanic *jǣran (*jē₁ran). Cognates are German Jahr, Old High German jār, Old Norse ár and Gothic jer, from the Proto-Indo-European noun **yeh₁r-om "year, season". Cognates also descended from the same Proto-Indo-European noun (with variation in suffix ablaut) are Avestan yārǝ "year", Greek ὥρα (hṓra) "year, season, period of time" (whence "hour"), Old Church Slavonic jarŭ, and Latin hornus "of this year".[citation needed] Latin annus (a 2nd declension masculine noun; annum is the accusative singular; annī is genitive singular and nominative plural; annō the dative and ablative singular) is from a PIE noun *h₂et-no-, which also yielded Gothic aþn "year" (only the dative plural aþnam is attested). Although most languages treat the word as thematic *yeh₁r-o-, there is evidence for an original derivation with an *-r/n suffix, *yeh₁-ro-. Both Indo-European words for year, *yeh₁-ro- and *h₂et-no-, would then be derived from verbal roots meaning "to go, move", *h₁ey- and *h₂et-, respectively (compare Vedic Sanskrit éti "goes", atasi "thou goest, wanderest"). 
A number of English words are derived from Latin annus, such as annual, annuity, anniversary, etc.; per annum means "each year", annō Dominī means "in the year of the Lord". The Greek word for "year", ἔτος, is cognate with Latin vetus "old", from the PIE word *wetos- "year", also preserved in this meaning in Sanskrit vat-sa-ras "year" and vat-sa- "yearling (calf)", the latter also reflected in Latin vitulus "bull calf", English wether "ram" (Old English weðer, Gothic wiþrus "lamb"). In some languages, it is common to count years by reference to one season, as in "summers", or "winters", or "harvests". Examples include Chinese 年 "year", originally 秂, an ideographic compound of a person carrying a bundle of wheat denoting "harvest". Slavic besides godŭ "time period; year" uses lěto "summer; year". Intercalation Astronomical years do not have an integer number of days or lunar months. Any calendar that follows an astronomical year must have a system of intercalation such as leap years. In the Julian calendar, the average (mean) length of a year is 365.25 days. In a non-leap year there are 365 days; in a leap year there are 366 days. A leap year occurs every fourth year, during which a leap day is intercalated into the month of February. The name "Leap Day" is applied to the added day. In astronomy, the Julian year is a unit of time defined as 365.25 days, each of exactly 86400 seconds (SI base unit), totaling exactly 31,557,600 seconds in the Julian astronomical year. The Revised Julian calendar, proposed in 1923 and used in some Eastern Orthodox Churches, has 218 leap years every 900 years, for an average (mean) year length of 365.2422222 days, close to the length of the mean tropical year, 365.24219 days (relative error of 9·10⁻⁸). In the year 2800 CE, the Gregorian and Revised Julian calendars will begin to differ by one calendar day. The Gregorian calendar aims to ensure that the northward equinox falls on or shortly before March 21 and hence it follows the northward equinox year, or tropical year. Because 97 out of 400 years are leap years, the mean length of the Gregorian calendar year is 365.2425 days, with a relative error below one ppm (8·10⁻⁷) relative to the current length of the mean tropical year (365.242189 days) and even closer to the current March equinox year of 365.242374 days that it aims to match. Historically, lunisolar calendars intercalated entire leap months on an observational basis. Lunisolar calendars have mostly fallen out of use except for liturgical reasons (Hebrew calendar, various Hindu calendars). A modern adaptation of the historical Jalali calendar, known as the Solar Hijri calendar (1925), is a purely solar calendar with an irregular pattern of leap days based on observation (or astronomical computation), aiming to place the new year (Nowruz) on the day of the vernal equinox (for the time zone of Tehran), as opposed to using an algorithmic system of leap years. Year numbering A calendar era assigns a cardinal number to each sequential year, using a reference event in the past (called the epoch) as the beginning of the era. The Gregorian calendar era is that of the world's most widely used civil calendar. Its epoch is a 6th-century estimate of the date of birth of Jesus of Nazareth. Two notations are used to indicate year numbering in the Gregorian calendar: the Christian "Anno Domini" (meaning "in the year of the Lord"), abbreviated AD; and "Common Era", abbreviated CE, preferred by many people of other faiths and of none.
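To make the intercalation rules above concrete, here is a brief illustrative Python sketch (not part of the original article; the function names are ours) that implements the Julian, Gregorian, and Revised Julian leap-year tests and reproduces the mean year lengths quoted above.

    def is_leap_julian(year):
        # Julian rule: every fourth year is a leap year.
        return year % 4 == 0

    def is_leap_gregorian(year):
        # Gregorian rule: every fourth year, except century years not divisible by 400.
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    def is_leap_revised_julian(year):
        # Revised Julian rule: century years are leap years only when they leave
        # a remainder of 200 or 600 when divided by 900.
        if year % 100 == 0:
            return year % 900 in (200, 600)
        return year % 4 == 0

    def mean_year_length(rule, cycle_years):
        # Average calendar year length over one full leap cycle.
        leap_days = sum(1 for y in range(cycle_years) if rule(y))
        return 365 + leap_days / cycle_years

    print(mean_year_length(is_leap_julian, 4))            # 365.25
    print(mean_year_length(is_leap_gregorian, 400))       # 365.2425 (97 leap years per 400)
    print(mean_year_length(is_leap_revised_julian, 900))  # 365.2422... (218 leap years per 900)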
Year numbers are based on inclusive counting, so that there is no "year zero". Years before the epoch are abbreviated BC for Before Christ or BCE for Before the Common Era. In astronomical year numbering, positive numbers indicate years AD/CE, the number 0 designates 1 BC/BCE, −1 designates 2 BC/BCE, and so on. Other eras include that of Ancient Rome, Ab Urbe Condita ("from the foundation of the city"), abbreviated AUC; Anno Mundi ("year of the world"), used for the Hebrew calendar and abbreviated AM; and the Japanese imperial eras. The Islamic Hijri year (year of the Hijrah, Anno Hegirae, abbreviated AH) is based on a lunar calendar of twelve lunar months and thus is shorter than a solar year. Pragmatic divisions Financial and scientific calculations often use a 365-day calendar to simplify daily rates. A fiscal year or financial year is a 12-month period used for calculating annual financial statements in businesses and other organizations. In many jurisdictions, regulations regarding accounting require such reports once per twelve months, but do not require that the twelve months constitute a calendar year. For example, in Canada and India the fiscal year runs from April 1; in the United Kingdom it runs from April 1 for purposes of corporation tax and government financial statements, but from April 6 for purposes of personal taxation and payment of state benefits; in Australia it runs from July 1; while in the United States the fiscal year of the federal government runs from October 1. An academic year is the annual period during which a student attends an educational institution. The academic year may be divided into academic terms, such as semesters or quarters. The school year in many countries in the Northern Hemisphere starts in August or September and ends in May, June or July, providing a summer break from study between academic years. In Israel the academic year begins around October or November, aligned with the second month of the Hebrew calendar. Some schools in the UK, Canada and the United States divide the academic year into three roughly equal-length terms (called trimesters or quarters in the United States), roughly coinciding with autumn, winter, and spring. At some, a shortened summer session, sometimes considered part of the regular academic year, is attended by students on a voluntary or elective basis. Other schools break the year into two main semesters, a first (typically August through December) and a second semester (January through May). Each of these main semesters may be split in half by mid-term exams, and each of the halves is referred to as a quarter (or term in some countries). There may also be a voluntary summer session or a short January session. Some other schools, including some in the United States, have four marking periods. Some schools in the United States, notably Boston Latin School, may divide the year into five or more marking periods. Defenders of this practice argue that more frequent reporting may correlate positively with academic achievement. There are typically 180 days of teaching each year in schools in the US, excluding weekends and breaks, while there are 190 days for pupils in state schools in Canada, New Zealand and the United Kingdom, and 200 for pupils in Australia.[citation needed] In India the academic year normally starts from June 1 and ends on May 31.
Though schools start closing from mid-March, the actual academic closure is on May 31; in Nepal the academic year starts on July 15.[citation needed] Schools and universities in Australia typically have academic years that roughly align with the calendar year (i.e., starting in February or March and ending in October to December), as the southern hemisphere experiences summer from December to February. Astronomical years The Julian year, as used in astronomy and other sciences, is a time unit now defined as exactly 365.25 days of 86400 SI seconds each ("ephemeris days"). This is one meaning of the unit "year" used in various scientific contexts. The Julian century of 36525 ephemeris days and the Julian millennium of 365250 ephemeris days are used in astronomical calculations. Fundamentally, expressing a time interval in Julian years is a way to precisely specify an amount of time (not how many "real" years), for long time intervals where stating the number of ephemeris days would be unwieldy and unintuitive. By convention, the Julian year is used in the computation of the distance covered by a light-year. In the Unified Code for Units of Measure (but not according to the International Union of Pure and Applied Physics or the International Union of Geological Sciences, see below), the symbol 'a' (without subscript) always refers to the Julian year, 'aj', of exactly 31557600 seconds. The SI multiplier prefixes may be applied to it to form "ka", "Ma", etc. The scientific Julian year is not to be confused with a year in the Julian calendar. The scientific Julian year is a multiple of the SI second; it is today "astronomical" only in the sense "used in astronomy", whilst true astronomical years are based on the movements of celestial bodies. Each of the three years described next (the sidereal, tropical, and anomalistic years) can be loosely called an astronomical year. The sidereal year is the time taken for the Earth to complete one revolution of its orbit, as measured against a fixed frame of reference (such as the fixed stars, Latin sidera, singular sidus). Its average duration is 365.256363004 days (365 d 6 h 9 min 9.76 s) (at the epoch J2000.0 = January 1, 2000, 12:00:00 TT). Today the mean tropical year is defined as the period of time for the mean ecliptic longitude of the Sun to increase by 360 degrees. Since the Sun's ecliptic longitude is measured with respect to the equinox, the tropical year comprises a complete cycle of the seasons and is the basis of solar calendars such as the internationally used Gregorian calendar. The modern definition of the mean tropical year differs from the actual time between passages of, e.g., the northward equinox, by a minute or two, for several reasons explained below. Because of the Earth's axial precession, this year is about 20 minutes shorter than the sidereal year. The mean tropical year is approximately 365 days, 5 hours, 48 minutes, 45 seconds, using the modern definition (365.2421875 d × 86400 s/d = 31556925 s). The length of the tropical year varies slightly over thousands of years because the rate of axial precession is not constant. The anomalistic year is the time taken for the Earth to complete one revolution with respect to its apsides. The orbit of the Earth is elliptical; the extreme points, called apsides, are the perihelion, where the Earth is closest to the Sun, and the aphelion, where the Earth is farthest from the Sun. The anomalistic year is usually defined as the time between perihelion passages. Its average duration is 365.259636 days (365 d 6 h 13 min 52.6 s) (at the epoch J2011.0).
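As a small worked example (an addition for illustration, using only the figures quoted above), the day-hour-minute-second breakdowns follow directly from the decimal day values:

    def split_days(days):
        # Convert a decimal number of days into days, hours, minutes, seconds.
        d = int(days)
        rem = (days - d) * 24
        h = int(rem)
        rem = (rem - h) * 60
        m = int(rem)
        s = (rem - m) * 60
        return d, h, m, round(s, 2)

    print(split_days(365.256363004))  # sidereal year      -> (365, 6, 9, 9.76)
    print(split_days(365.2421875))    # mean tropical year -> (365, 5, 48, 45.0)
    print(split_days(365.259636))     # anomalistic year   -> (365, 6, 13, 52.55)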
The draconic year, draconitic year, eclipse year, or ecliptic year is the time taken for the Sun (as seen from the Earth) to complete one revolution with respect to the same lunar node (a point where the Moon's orbit intersects the ecliptic). The year is associated with eclipses: these occur only when both the Sun and the Moon are near these nodes; so eclipses occur within about a month of every half eclipse year. Hence there are two eclipse seasons every eclipse year. The average duration of the eclipse year is approximately 346.62 days. This term is sometimes erroneously used for the draconic or nodal period of lunar precession, that is, the period of a complete revolution of the Moon's ascending node around the ecliptic: 18.612815932 Julian years (6798.331019 days; at the epoch J2000.0). The full moon cycle is the time for the Sun (as seen from the Earth) to complete one revolution with respect to the perigee of the Moon's orbit. This period is associated with the apparent size of the full moon, and also with the varying duration of the synodic month. The duration of one full moon cycle is approximately 411.78 days. The lunar year comprises twelve full cycles of the phases of the Moon, as seen from Earth. It has a duration of approximately 354.37 days. Muslims use this for religious purposes, including calculating the date of the Hajj and the fasting month of Ramadan, and thus also the Eids. The Jewish calendar is also mainly lunar, but with the addition of an intercalary lunar month once every two or three years, designed to keep the calendar broadly synchronous with the solar cycle. Thus, a lunar year on the Jewish (Hebrew) calendar consists of either twelve or thirteen lunar months. The vague year, from annus vagus or wandering year, is an integral approximation to the year equaling 365 days, which wanders in relation to more exact years. Typically the vague year is divided into 12 schematic months of 30 days each plus 5 epagomenal days. The vague year was used in the calendars of Ethiopia, Ancient Egypt, Iran, Armenia and in Mesoamerica among the Aztecs and Maya. It is still used by many Zoroastrian communities. A heliacal year is the interval between the heliacal risings of a star. It differs from the sidereal year for stars away from the ecliptic due mainly to the precession of the equinoxes. The Sothic year is the heliacal year, the interval between heliacal risings, of the star Sirius. It is currently less than the sidereal year and its duration is very close to the Julian year of 365.25 days. The Gaussian year is the sidereal year for a planet of negligible mass (relative to the Sun), unperturbed by other planets, whose motion is governed by the Gaussian gravitational constant. Such a planet would be slightly closer to the Sun than Earth's mean distance. Its length is approximately 365.2569 days. The Besselian year is a tropical year that starts when the (fictitious) mean Sun reaches an ecliptic longitude of 280°. This is currently on or close to January 1. It is named after the 19th-century German astronomer and mathematician Friedrich Bessel. The following equation can be used to compute the current Besselian epoch (in years): B = 1900.0 + (JD_TT − 2415020.31352) / 365.242198781. The TT subscript indicates that for this formula, the Julian date should use the Terrestrial Time scale, or its predecessor, ephemeris time. The exact length of an astronomical year changes over time. Numerical value of year variation Mean year lengths in this section are calculated for 2000, and differences in year lengths, compared to 2000, are given for past and future years. In the tables a day is 86400 SI seconds long.
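For illustration (this example is an addition, not part of the article), the Besselian-epoch formula above can be evaluated for two standard reference instants; the Julian dates below are the conventional values for B1950.0 and J2000.0.

    def besselian_epoch(jd_tt):
        # B = 1900.0 + (JD_TT - 2415020.31352) / 365.242198781
        return 1900.0 + (jd_tt - 2415020.31352) / 365.242198781

    print(besselian_epoch(2433282.4235))  # ~1950.0000 (the standard epoch B1950.0)
    print(besselian_epoch(2451545.0))     # ~2000.0013 (J2000.0 expressed as a Besselian epoch)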
Some of the year lengths in this table are in average solar days, which are slowly getting longer (at a rate that cannot be exactly predicted in advance) and are now around 86400.002 SI seconds. An average Gregorian year may be said to be 365.2425 days (52.1775 weeks, and if an hour is defined as one twenty-fourth of a day, 8765.82 hours, 525949.2 minutes or 31556952 seconds). Note however that in absolute time the average Gregorian year is not adequately defined unless the period of the averaging (start and end dates) is stated, because each period of 400 years is longer (by more than 1000 seconds) than the preceding one as the rotation of the Earth slows. In this calendar, a common year is 365 days (8760 hours, 525600 minutes or 31536000 seconds), and a leap year is 366 days (8784 hours, 527040 minutes or 31622400 seconds). The 400-year civil cycle of the Gregorian calendar has 146097 days and hence exactly 20871 weeks. Greater astronomical years The Great Year, or equinoctial cycle, corresponds to a complete revolution of the equinoxes around the ecliptic. Its length is about 25,700 years. The Galactic year is the time it takes Earth's Solar System to revolve once around the Galactic Center. It comprises roughly 230 million Earth years. IUPAC–IUGS proposal In 2011, a task group of the IUPAC and the International Union of Geological Sciences (IUGS) jointly recommended defining the annus for geological purposes as 1 a = 31556925.445 seconds (approximately 365.24219265 ephemeris days). They chose a value close to the length of the tropical year for the epoch 2000.0 (which is roughly the length of the tropical year 2000; the length of the tropical year is slowly decreasing). However, the definition is as a multiple of the second, the SI base unit of time, and independent of astronomical definitions, since "[d]efinitions of the annus that are based on an intermediate relationship via the day, such as the Julian and Gregorian year, bear an inherent, pre-programmed obsolescence because of the variability of Earth's orbital movement". It differs from the Julian year of 365.25 days (3.15576 × 10⁷ s) by about 21 parts per million. As of April 2025, the IUPAC Green Book (4th edition) provides a definition of the year as a = 31556925.9747 seconds. Seasonal year A seasonal year is the time between successive recurrences of a seasonal event such as the flooding of a river, the migration of a species of bird, the flowering of a species of plant, the first frost, or the first scheduled game of a certain sport. All of these events can have wide variations of more than a month from year to year. Symbols and abbreviations A common symbol for the year as a unit of time is "a", taken from the Latin word annus. For example, the U.S. National Institute of Standards and Technology (NIST) Guide for the Use of the International System of Units (SI) supports the symbol "a" as the unit of time for a year. In English, the abbreviations "y" or "yr" are more commonly used in non-scientific literature. In some Earth sciences branches (geology and paleontology), "kyr, myr, byr" (thousands, millions, and billions of years, respectively) and similar abbreviations are used to denote intervals of time remote from the present. In astronomy the abbreviations kyr, Myr and Gyr are in common use for kiloyears, megayears and gigayears. The Unified Code for Units of Measure (UCUM) disambiguates the varying symbologies of ISO 1000, ISO 2955 and ANSI X3.50 by using: at for the mean tropical year, aj for the mean Julian year, and ag for the mean Gregorian year. In the UCUM, the symbol "a", without any qualifier, equals 1 aj.
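The calendar arithmetic quoted above is easy to check; the following lines (added for illustration) reproduce the figures for the 400-year Gregorian cycle and the average Gregorian year.

    days_per_400_years = 400 * 365 + 97       # 97 leap days per 400-year cycle
    print(days_per_400_years)                 # 146097
    print(days_per_400_years % 7)             # 0, so the cycle is exactly 20871 weeks
    mean_year = days_per_400_years / 400      # 365.2425 days
    print(mean_year * 24)                     # ~8765.82 hours
    print(mean_year * 1440)                   # ~525949.2 minutes
    print(mean_year * 86400)                  # 31556952.0 seconds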
The UCUM also minimizes confusion with are, a unit of area, by using the abbreviation "ar". Since 1989, the International Astronomical Union (IAU) has recognized the symbol "a" rather than "yr" for a year, notes the different kinds of year, and recommends adopting the Julian year of 365.25 days, unless otherwise specified (IAU Style Manual). Since 1987, the International Union of Pure and Applied Physics (IUPAP) has noted "a" as the general symbol for the time unit year (IUPAP Red Book). Since 1993, the International Union of Pure and Applied Chemistry (IUPAC) Green Book has also used the same symbol "a", noting the difference between the Gregorian year and the Julian year and adopting the former (a = 365.2425 days), as also noted in the IUPAC Gold Book. In 2011, a task group of IUPAC and IUGS recommended the use of a as the symbol for the annus (along with multiples such as Ma) for both time intervals and absolute ages. This proved controversial as it conflicts with an earlier convention among geoscientists to use "a" specifically for absolute age before the present (e.g. 1 Ma for 1 million years ago), and "y" or "yr" (and My, Myr etc) for a time interval or period of time. For the prefixed forms such as ka, Ma, and Ga, there are alternative forms that elide the consecutive vowels, such as kilannus, megannus, etc. The exponents and exponential notations are typically used for calculating and in displaying calculations, and for conserving space, as in tables of data. In geology and paleontology, a distinction sometimes is made between the abbreviation "yr" for years and "ya" for years ago, combined with prefixes for thousand, million, or billion. In archaeology, which deals with more recent periods, normally expressed dates, e.g. "10,000 BC", may be used as a more traditional form than Before Present ("BP"). These abbreviations include "kya" for thousands of years ago, as in "around 200 kya", "around 60 kya", "around 20 kya", and "around 10 kya". Use of "mya" and "bya" is deprecated in modern geophysics, the recommended usage being "Ma" and "Ga" for dates Before Present, but "m.y." for the durations of epochs. This ad hoc distinction between "absolute" time and time intervals is somewhat controversial amongst members of the Geological Society of America.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Internet_censorship] | [TOKENS: 6738] |
Contents Internet censorship Internet censorship is the legal control or suppression of what can be accessed, published, or viewed on the Internet. Censorship is most often applied to specific internet domains (such as Wikipedia.org) but exceptionally may extend to all Internet resources located outside the jurisdiction of the censoring state. Internet censorship may also restrict what information can be made accessible on the Internet. Organizations providing internet access – such as schools and libraries – may choose to preclude access to material that they consider undesirable, offensive, age-inappropriate or even illegal, and regard this as ethical behavior rather than censorship. Individuals and organizations may engage in self-censorship of material they publish, for moral, religious, or business reasons, to conform to societal norms, political views, due to intimidation, or out of fear of legal or other consequences. The extent of Internet censorship varies on a country-to-country basis. While some countries have moderate Internet censorship, other countries go as far as to limit access to information such as news and to suppress and silence discussion among citizens. Internet censorship also occurs in response to or in anticipation of events such as elections, protests, and riots. An example is the increased censorship due to the events of the Arab Spring. Other types of censorship include the use of copyrights, defamation, harassment, and various obscene material claims as a way to deliberately suppress content. Support for and opposition to Internet censorship also vary. In a 2012 Internet Society survey, 71% of respondents agreed that "censorship should exist in some form on the Internet". In the same survey, 83% agreed that "access to the Internet should be considered a basic human right" and 86% agreed that "freedom of expression should be guaranteed on the Internet". According to GlobalWebIndex, over 400 million people use virtual private networks to circumvent censorship or for increased user privacy. Overview Many of the challenges associated with Internet censorship are similar to those for offline censorship of more traditional media such as newspapers, magazines, books, music, radio, television, and film. One difference is that national borders are more permeable online: residents of a country that bans certain information can find it on websites hosted outside the country. Thus censors must work to prevent access to information even though they lack physical or legal control over the websites themselves. This in turn requires the use of technical censorship methods that are unique to the Internet, such as site blocking and content filtering. Views about the feasibility and effectiveness of Internet censorship have evolved in parallel with the development of the Internet and censorship technologies. Blocking and filtering can be based on relatively static blacklists or be determined more dynamically based on a real-time examination of the information being exchanged. Blacklists may be produced manually or automatically and are often not available to non-customers of the blocking software. Blocking or filtering can be done at a centralized national level, at a decentralized sub-national level, or at an institutional level, e.g., in libraries, universities or Internet cafés. Blocking and filtering may also vary within a country across different ISPs.
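The distinction drawn above between static blacklists and real-time examination of traffic can be sketched in a few lines of Python (an illustrative addition to this text; the domain and keyword entries are hypothetical):

    BLOCKED_DOMAINS = {"blocked.example"}     # hypothetical static blacklist
    BLOCKED_KEYWORDS = ["forbidden topic"]    # hypothetical keywords for dynamic filtering

    def blocked_by_blacklist(domain):
        # Static approach: the decision depends only on a pre-compiled list.
        return domain in BLOCKED_DOMAINS

    def blocked_by_inspection(page_text):
        # Dynamic approach: the decision depends on the content actually exchanged.
        text = page_text.lower()
        return any(keyword in text for keyword in BLOCKED_KEYWORDS)

    print(blocked_by_blacklist("blocked.example"))                    # True
    print(blocked_by_inspection("An article on a Forbidden Topic."))  # True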
Countries may filter sensitive content on an ongoing basis and/or introduce temporary filtering during key time periods such as elections. In some cases, the censoring authorities may surreptitiously block content to mislead the public into believing that censorship has not been applied. This is achieved by returning a fake "Not Found" error message when an attempt is made to access a blocked website. Unless the censor has total control over all Internet-connected computers, such as in North Korea (which employs an intranet that only privileged citizens can access) or Cuba, total censorship of information is very difficult or impossible to achieve due to the underlying distributed technology of the Internet. Pseudonymity and data havens (such as Hyphanet) protect free speech using technologies that guarantee material cannot be removed and prevent the identification of authors. Technologically savvy users can often find ways to access blocked content. Nevertheless, blocking remains an effective means of limiting access to sensitive information for most users when censors, such as those in China, are able to devote significant resources to building and maintaining a comprehensive censorship system. The term "splinternet" is sometimes used to describe the effects of national firewalls. The verb "rivercrab" colloquially refers to censorship of the Internet, particularly in Asia. Content suppression methods Various parties are using different technical methods of preventing public access to undesirable resources, with varying levels of effectiveness, costs and side effects. Entities mandating and implementing the censorship usually identify the targets by one of the following items: keywords, domain names, and IP addresses. Lists are populated from different sources, ranging from private suppliers through courts to specialized government agencies (Ministry of Industry and Information Technology of China, Islamic Guidance in Iran). As per Hoffmann, different methods are used to block certain websites or pages, including DNS spoofing, blocking access to IPs, analyzing and filtering URLs, inspecting and filtering packets, and resetting connections. Enforcement of the censor-nominated technologies can be applied at various levels of countries and Internet infrastructure: Internet content is subject to technical censorship methods, including: Technical censorship techniques are subject to both over- and under-blocking since it is often impossible to always block exactly the targeted content without blocking other permissible material or allowing some access to targeted material and so providing more or less protection than desired. An example is blocking an IP-address of a server that hosts multiple websites, which prevents access to all of the websites rather than just those that contain content deemed offensive. Writing in 2009, Ronald Deibert, professor of political science at the University of Toronto and co-founder and one of the principal investigators of the OpenNet Initiative, and, writing in 2011, Evgeny Morozov, a visiting scholar at Stanford University and an op-ed contributor to The New York Times, explain that companies in the United States, Finland, France, Germany, Britain, Canada, and South Africa are in part responsible for the increasing sophistication of online content filtering worldwide.
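The over-blocking problem described above, where blocking a single IP address affects every site hosted on it, can be demonstrated with a short, illustrative sketch using only the Python standard library (the domain names are placeholders, and the output depends on live DNS):

    import socket
    from collections import defaultdict

    domains = ["example.org", "example.net", "example.com"]  # placeholder domains

    by_ip = defaultdict(list)
    for name in domains:
        try:
            by_ip[socket.gethostbyname(name)].append(name)
        except socket.gaierror:
            pass  # the name did not resolve

    for ip, names in by_ip.items():
        if len(names) > 1:
            # Blocking this single address would block every listed site.
            print(ip, "is shared by:", ", ".join(names))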
While the off-the-shelf filtering software sold by Internet security companies is primarily marketed to businesses and individuals seeking to protect themselves and their employees and families, it is also used by governments to block what they consider sensitive content. Among the most popular filtering software programs is SmartFilter by Secure Computing in California, which was bought by McAfee in 2008. SmartFilter has been used by Tunisia, Saudi Arabia, Sudan, the UAE, Kuwait, Bahrain, Iran, and Oman, as well as the United States and the UK. Myanmar and Yemen have used filtering software from Websense. The Canadian-made commercial filter Netsweeper is used in Qatar, the UAE, and Yemen. The Canadian organization Citizen Lab has reported that Sandvine and Procera products are used in Turkey and Egypt. On 12 March 2013, in a Special Report on Internet Surveillance, Reporters Without Borders named five "Corporate Enemies of the Internet": Amesys (France), Blue Coat Systems (U.S.), Gamma (UK and Germany), HackingTeam (Italy), and Trovicor (Germany). The companies sell products that are liable to be used by governments to violate human rights and freedom of information. RWB said that the list is not exhaustive and will be expanded in the coming months. In a U.S. lawsuit filed in May 2011, Cisco is accused of helping the Chinese government build a firewall, known widely as the Golden Shield, to censor the Internet and keep tabs on dissidents. Cisco said it had made nothing special for China. Cisco is also accused of aiding the Chinese government in monitoring and apprehending members of the banned Falun Gong group. Many filtering programs allow blocking to be configured based on dozens of categories and sub-categories such as these from Websense: "abortion" (pro-life, pro-choice), "adult material" (adult content, lingerie and swimsuit, nudity, sex, sex education), "advocacy groups" (sites that promote change or reform in public policy, public opinion, social practice, economic activities, and relationships), "drugs" (abused drugs, marijuana, prescribed medications, supplements and unregulated compounds), "religion" (non-traditional religions, occult and folklore; traditional religions), .... The blocking categories used by the filtering programs may contain errors leading to the unintended blocking of websites. The blocking of Dailymotion in early 2007 by Tunisian authorities was, according to the OpenNet Initiative, due to Secure Computing wrongly categorizing Dailymotion as pornography for its SmartFilter filtering software. It was initially thought that Tunisia had blocked Dailymotion due to satirical videos about human rights violations in Tunisia, but after Secure Computing corrected the mistake access to Dailymotion was gradually restored in Tunisia. Organizations such as the Global Network Initiative, the Electronic Frontier Foundation, Amnesty International, and the American Civil Liberties Union have successfully lobbied some vendors such as Websense to make changes to their software, to refrain from doing business with repressive governments, and to educate schools that have inadvertently configured their filtering software too strictly. Nevertheless, regulations and accountability related to the use of commercial filters and services are often non-existent, and there is relatively little oversight from civil society or other independent groups.
Vendors often consider information about which sites and content are blocked to be valuable intellectual property that is not made available outside the company, sometimes not even to the organizations purchasing the filters. Thus, by relying upon out-of-the-box filtering systems, organizations may effectively outsource to the commercial vendors the detailed task of deciding what is or is not acceptable speech. Internet content is also subject to censorship methods similar to those used with more traditional media. For example: Deplatforming is a form of Internet censorship in which controversial speakers or speech are suspended, banned, or otherwise shut down by social media platforms and other service providers that generally provide a venue for free speech or expression. Banking and financial service providers, among other companies, have also denied services to controversial activists or organizations, a practice known as "financial censorship". Law professor Glenn Reynolds dubbed 2018 the "Year of Deplatforming" in an August 2018 article in The Wall Street Journal. According to Reynolds, in 2018 "the internet giants decided to slam the gates on a number of people and ideas they don't like." On 6 August 2018, for example, several major platforms, including YouTube and Facebook, executed a coordinated, permanent ban on all accounts and media associated with conservative talk show host Alex Jones and his media platform InfoWars, citing "hate speech" and "glorifying violence." Most major web service operators reserve to themselves broad rights to remove or pre-screen content, and to suspend or terminate user accounts, sometimes giving no specific reasons, or only a vague general list of reasons, for the removal. The phrases "at our sole discretion", "without prior notice", and "for other reasons" are common in Terms of service agreements. Circumvention Internet censorship circumvention is one of the processes used by technologically savvy Internet users to bypass the technical aspects of Internet filtering and gain access to the otherwise censored material. Circumvention is an inherent problem for those wishing to censor the Internet because filtering and blocking do not remove content from the Internet, but instead block access to it. Therefore, as long as there is at least one publicly accessible uncensored system, it will often be possible to gain access to the otherwise censored material. However, circumvention may not be possible for non-tech-savvy users, so blocking and filtering remain effective means of censoring the Internet access of large numbers of users. Different techniques and resources are used to bypass Internet censorship, including proxy websites, virtual private networks, sneakernets, the dark web and circumvention software tools. Solutions have differing ease of use, speed, security, and risks. Most, however, rely on gaining access to an Internet connection that is not subject to filtering, often in a different jurisdiction not subject to the same censorship laws. According to GlobalWebIndex, over 400 million people use virtual private networks to circumvent censorship or for an increased level of privacy. The majority of circumvention techniques are not suitable for day-to-day use. There are risks to using circumvention software or other methods to bypass Internet censorship. In some countries, individuals who gain access to otherwise restricted content may be violating the law and if caught can be expelled, fired, jailed, or subject to other punishments and loss of access.
In June 2011, The New York Times reported that the U.S. is engaged in a "global effort to deploy 'shadow' Internet and mobile phone systems that dissidents can use to undermine repressive governments that seek to silence them by censoring or shutting down telecommunications networks." Another way to circumvent Internet censorship is to physically go to an area where the Internet is not censored. In 2017, a so-called "Internet refugee camp" was established by IT workers in the village of Bonako, just outside an area of Cameroon where the Internet is regularly blocked. The adoption of HTTPS in place of the original HTTP for web traffic has made many sites that were formerly blocked or heavily monitored more accessible. Many social media sites, including Facebook, Google, and Twitter, have added automatic redirection to HTTPS as of 2017. With the wider adoption of HTTPS, censors are left with the cruder options of blocking either all of a site's content or none of it. The use of HTTPS does not inherently prevent the censorship of an entire domain, as the domain name is left unencrypted in the ClientHello of the TLS handshake. The Encrypted Client Hello TLS extension expands on HTTPS and encrypts the entire ClientHello, but this depends on both client and server support. Common targets There are several motives or rationales for Internet filtering: politics and power, social norms and morals, and security concerns. Protecting existing economic interests is an additional emergent motive for Internet filtering. In addition, networking tools and applications that allow the sharing of information related to these motives are themselves subjected to filtering and blocking. And while there is considerable variation from country to country, the blocking of web sites in a local language is roughly twice that of web sites available only in English or other international languages. Internet controls and censorship directed at political opposition to the ruling government have been associated with higher authoritarianism. Internet controls can be categorized into pervasive internet controls and more subtle internet influence operations. Examples include: Social filtering is censorship of topics that are held to be antithetical to accepted societal norms. In particular, censorship of child pornography and content deemed inappropriate for children enjoys very widespread public support and such content is subject to censorship and other restrictions in most countries. Examples include: Many organizations implement filtering as part of a defense-in-depth strategy to protect their environments from malware, and to protect their reputations in the event of their networks being used, for example, to carry out sexual harassment. Internet filtering related to threats to national security that targets the Web sites of insurgents, extremists, and terrorists often enjoys wide public support. Examples include: The protection of existing economic interests is sometimes the motivation for blocking new Internet services such as low-cost telephone services that use Voice over Internet Protocol (VoIP). These services can reduce the customer base of telecommunications companies, many of which enjoy entrenched monopoly positions and some of which are government sponsored or controlled.
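Returning to the TLS point made above: without Encrypted Client Hello, the server name travels as plain ASCII inside the ClientHello, so a middlebox can match it against a blocklist with a simple byte search. The sketch below is illustrative only; the captured bytes and domain names are made up.

    def sni_visible_and_blocked(client_hello_bytes, blocklist):
        # Without Encrypted Client Hello, the requested hostname appears in
        # cleartext in the ClientHello's server_name (SNI) extension.
        return any(domain.encode("ascii") in client_hello_bytes for domain in blocklist)

    captured = b"\x16\x03\x01...server_name...blocked.example..."  # toy stand-in for a capture
    print(sni_visible_and_blocked(captured, ["blocked.example", "another.example"]))  # True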
Anti-copyright activists Christian Engström, Rick Falkvinge and Oscar Swartz have alleged that censorship of child pornography is being used as a pretext by copyright lobby organizations to get politicians to implement similar site blocking legislation against copyright-related piracy. Examples include: Blocking the intermediate tools and applications of the Internet that can be used to assist users in accessing and sharing sensitive material is common in many countries. Examples include: The right to be forgotten is a concept that has been discussed and put into practice in the European Union. In May 2014, the European Court of Justice ruled against Google in Costeja, a case brought by a Spanish man who requested the removal of a link to a digitized 1998 article in La Vanguardia newspaper about an auction for his foreclosed home, for a debt that he had subsequently paid. He initially attempted to have the article removed by complaining to Spain's data protection agency—Agencia Española de Protección de Datos—which rejected the claim on the grounds that it was lawful and accurate, but accepted a complaint against Google and asked Google to remove the results. Google sued in Spain and the lawsuit was transferred to the European Court of Justice. The court ruled in Costeja that search engines are responsible for the content they point to and thus, Google was required to comply with EU data privacy laws. It began compliance on 30 May 2014 during which it received 12,000 requests to have personal details removed from its search engine. Index on Censorship claimed that "Costeja ruling ... allows individuals to complain to search engines about information they do not like with no legal oversight. This is akin to marching into a library and forcing it to pulp books. Although the ruling is intended for private individuals it opens the door to anyone who wants to whitewash their personal history....The Court's decision is a retrograde move that misunderstands the role and responsibility of search engines and the wider internet. It should send chills down the spine of everyone in the European Union who believes in the crucial importance of free expression and freedom of information." Resilience Various contexts influence whether or not an internet user will be resilient to censorship attempts. Users are more resilient to censorship if they are aware that information is being manipulated. This awareness of censorship leads to users finding ways to circumvent it. Awareness of censorship also allows users to factor this manipulation into their belief systems. Knowledge of censorship also offers some citizens incentive to try to discover information that is being concealed. In contrast, those that lack awareness of censorship cannot easily compensate for information manipulation. Other important factors for censorship resiliency are the demand for the information being concealed, and the ability to pay the costs to circumvent censorship. Entertainment content is more resilient to online censorship than political content, and users with more education, technology access, and wider, more diverse social networks are more resilient to censorship attempts. Around the world From 1995 to 2002, the government of South Korea passed the Telecommunications Business Act (TBA), the first internet censorship law in the world. As more people in more places begin using the Internet for important activities, there is an increase in online censorship, using increasingly sophisticated techniques. 
The motives, scope, and effectiveness of Internet censorship vary widely from country to country. The countries engaged in state-mandated filtering are clustered in three main regions of the world: east Asia, central Asia, and the Middle East/North Africa. Countries in other regions also practice certain forms of filtering. In the United States, state-mandated Internet filtering occurs on some computers in libraries and K–12 schools. Content related to Nazism or Holocaust denial is blocked in France and Germany. Child pornography and hate speech are blocked in many countries throughout the world. In fact, many countries throughout the world, including some democracies with long traditions of strong support for freedom of expression and freedom of the press, are engaged in some amount of online censorship, often with substantial public support. Internet censorship in China is among the most stringent in the world. The government blocks Web sites that discuss the Dalai Lama, the 1989 crackdown on Tiananmen Square protesters, the banned spiritual practice Falun Gong, as well as many general Internet sites. The government requires Internet search firms and state media to censor issues deemed officially "sensitive," and blocks access to foreign websites including Facebook, Twitter, and YouTube. According to a study in 2014, censorship in China is used to muzzle those outside government who attempt to spur the creation of crowds for any reason—in opposition to, in support of, or unrelated to the government. There are also international avenues for opposing internet censorship; for example, "Internet censorship is open to challenge at the World Trade Organization (WTO) as it can restrict trade in online services", a forthcoming study argues. Generally, national laws affecting content within a country only apply to services that operate within that country and do not affect international services, but this has not been established clearly by international case law. There are concerns that, due to the vast differences in freedom of speech between countries, the ability of one country to affect speech across the global Internet could have chilling effects. For example, Google won a case at the European Court of Justice in September 2019, which ruled that the EU's right to be forgotten only applied to services within the EU, and not globally. But in a contrary decision in October 2019, the same court ruled that Facebook was required to globally comply with a takedown request relating to defamatory material posted to Facebook by an Austrian user, which was libelous of another person and had been determined to be illegal under Austrian law. The case created a problematic precedent: the Internet may become subject to regulation under the strictest national defamation laws, limiting free speech that may be acceptable in other countries. Several governments have resorted to shutting down most or all Internet connections in all or part of the country. This appears to have been the case on 27 and 28 January 2011 during the 2011 Egyptian revolution, in what has been widely described as an "unprecedented" internet block. About 3500 Border Gateway Protocol (BGP) routes to Egyptian networks were shut down from about 22:10 to 22:35 UTC on 27 January. This full block was implemented without cutting off major intercontinental fibre-optic links, with Renesys stating on 27 January, "Critical European-Asian fiber-optic routes through Egypt appear to be unaffected for now."
Full blocks also occurred in Myanmar/Burma in 2007, Libya in 2011, Iran in 2019, and Syria during the Syrian civil war. Almost all Internet connections in Sudan were disconnected from 3 June to 9 July 2019, in response to a political opposition sit-in seeking civilian rule. Since the beginning of the civil war in April 2023 between the Sudanese Armed Forces (SAF) and the Rapid Support Forces (RSF), there has been a series of Internet shutdowns in Sudan. A near-complete shutdown in Ethiopia lasted for a week after the Amhara Region coup attempt. A week-long shutdown in Mauritania followed disputes over the 2019 Mauritanian presidential election. Other country-wide shutdowns in 2019 included Zimbabwe, after gasoline price protests triggered police violence; Gabon, during the 2019 Gabonese coup attempt; and shutdowns during or after elections in the Democratic Republic of the Congo, Benin, Malawi, and Kazakhstan. Local shutdowns are frequently ordered in India during times of unrest and security concerns. Some countries have used localized Internet shutdowns to combat cheating during exams, including Iraq, Ethiopia, India, Algeria, and Uzbekistan. The Iranian government imposed a total internet shutdown from 16 to 23 November 2019, in response to the fuel protests. Doug Madory, the director of Internet analysis at Oracle, has described the operation as "unusual in its scale" and far more advanced than past efforts. Beginning Saturday afternoon on 16 November 2019, the government of Iran ordered the disconnection of much of the country's internet connectivity as a response to widespread protests against the government's decision to raise gas prices. While Iran is no stranger to government-directed interference in its citizens' access to the internet, this outage is notable in how it differs from past events. Unlike previous efforts at censorship and bandwidth throttling, the internet of Iran is presently experiencing a multi-day wholesale disconnection for much of its population – arguably the largest such event ever for Iran. Detailed country-by-country information on Internet censorship is provided by the OpenNet Initiative, Reporters Without Borders, Freedom House, V-Dem Institute, Access Now and in the US State Department Bureau of Democracy, Human Rights, and Labor's Human Rights Reports. The ratings produced by several of these organizations are summarized in the Internet censorship and surveillance by country and the Censorship by country articles. Through 2010, the OpenNet Initiative had documented Internet filtering by governments in over forty countries worldwide. The level of filtering in 26 countries in 2007 and in 25 countries in 2009 was classified in the political, social, and security areas. Of the 41 separate countries classified, seven were found to show no evidence of filtering in all three areas (Egypt, France, Germany, India, Ukraine, United Kingdom, and United States), while one was found to engage in pervasive filtering in all three areas (China), 13 were found to engage in pervasive filtering in one or more areas, and 34 were found to engage in some level of filtering in one or more areas. Of the 10 countries classified in both 2007 and 2009, one reduced its level of filtering (Pakistan), five increased their level of filtering (Azerbaijan, Belarus, Kazakhstan, South Korea, and Uzbekistan), and four maintained the same level of filtering (China, Iran, Myanmar, and Tajikistan).
The Freedom on the Net reports from Freedom House provide analytical reports and numerical ratings regarding the state of Internet freedom for countries worldwide. The countries surveyed represent a sample with a broad range of geographical diversity and levels of economic development, as well as varying levels of political and media freedom. The surveys ask a set of questions designed to measure each country's level of Internet and digital media freedom, as well as the access and openness of other digital means of transmitting information, particularly mobile phones and text messaging services. Results are presented for three areas: Obstacles to Access, Limits on Content, and Violations of User Rights. The results from the three areas are combined into a total score for a country (from 0 for best to 100 for worst) and countries are rated as "Free" (0 to 30), "Partly Free" (31 to 60), or "Not Free" (61 to 100) based on the totals. Starting in 2009 Freedom House has produced nine editions of the report. There was no report in 2010. The reports generally cover the period from June through May. The 2014 report assessed 65 countries and reported that 36 countries experienced a negative trajectory in Internet freedom since the previous year, with the most significant declines in Russia, Turkey and Ukraine. According to the report, few countries demonstrated any gains in Internet freedom, and the improvements that were recorded reflected less vigorous application of existing controls rather than new steps taken by governments to actively increase Internet freedom. The year's largest improvement was recorded in India, where restrictions to content and access were relaxed from what had been imposed in 2013 to stifle rioting in the northeastern states. Notable improvement was also recorded in Brazil, where lawmakers approved the bill Marco Civil da Internet, which contains significant provisions governing net neutrality and safeguarding privacy protection. In 2006, Reporters without Borders (Reporters sans frontières, RSF), a Paris-based international non-governmental organization that advocates freedom of the press, started publishing a list of "Enemies of the Internet". The organization classifies a country as an enemy of the internet because "all of these countries mark themselves out not just for their capacity to censor news and information online but also for their almost systematic repression of Internet users." In 2007 a second list of countries "Under Surveillance" (originally "Under Watch") was added. Past Countries Under Surveillance: When the "Enemies of the Internet" list was introduced in 2006, it listed 13 countries. From 2006 to 2012 the number of countries listed fell to 10 and then rose to 12. The list was not updated in 2013. In 2014 the list grew to 19 with an increased emphasis on surveillance in addition to censorship. The list has not been updated since 2014. When the "Countries under surveillance" list was introduced in 2008, it listed 10 countries. Between 2008 and 2012 the number of countries listed grew to 16 and then fell to 11. The number grew to 12 with the addition of Norway in 2020. The list was last updated in 2020.[citation needed] On 12 March 2013, Reporters Without Borders published a Special report on Internet Surveillance. The report includes two new lists: The five "State Enemies of the Internet" named in March 2013 are: Bahrain, China, Iran, Syria, and Vietnam. 
The five "Corporate Enemies of the Internet" named in March 2013 are: Amesys (France), Blue Coat Systems (U.S.), Gamma Group (UK and Germany), HackingTeam (Italy), and Trovicor (Germany). The V-Dem Digital Societies Project measures a range of questions related to internet censorship, misinformation online, and internet shutdowns. This annual report includes 35 indicators assessing five areas: disinformation, digital media freedom, state regulation of digital media, polarization of online media, and online social cleavages. The data set uses V-Dem's methodology of aggregating surveys of experts from around the world. It has been updated each year starting in 2019, with data covering from 2000 to 2021. These ratings are more similar to other expert analyses like Freedom House than remotely sensed data from Access Now. Access Now maintains an annual list of internet shutdowns, throttling, and blockages as part of the #KeepItOn project. These data track several features of shutdowns including their location, their duration, the particular services impacted, the government's justification for the shutdown, and actual reasons for the shutdown as reported by independent media. Unlike Freedom House or V-Dem, Access Now detects shutdowns using remote sensing and then confirms these instances with reports from civil society, government, in-country volunteers, or ISPs. These methods have been found to be less prone to false positives. A poll of 27,973 adults in 26 countries, including 14,306 Internet users, was conducted for the BBC World Service by the international polling firm GlobeScan using telephone and in-person interviews between 30 November 2009 and 7 February 2010. GlobeScan Chairman Doug Miller felt, overall, that the poll showed that: Findings from the poll include: In July and August 2012, the Internet Society conducted online interviews of more than 10,000 Internet users in 20 countries. Some of the results relevant to Internet censorship are summarized below. Among the countries that filter or block online content, few openly admit to or fully disclose their filtering and blocking activities. States are frequently opaque and/or deceptive about the blocking of access to political information. For example: During the Arab Spring of 2011, media jihad (media struggle) was extensive. Internet and mobile technologies, particularly social networks such as Facebook and Twitter, played and are playing important new and unique roles in organizing and spreading the protests and making them visible to the rest of the world. An activist in Egypt tweeted, "we use Facebook to schedule the protests, Twitter to coordinate, and YouTube to tell the world". This successful use of digital media in turn led to increased censorship including the complete loss of Internet access for periods of time in Egypt and Libya in 2011. In Syria, the Syrian Electronic Army (SEA), an organization that operates with at least tacit support of the government, claims responsibility for defacing or otherwise compromising scores of websites that it contends spread news hostile to the Syrian government. SEA disseminates denial of service (DoS) software designed to target media websites including those of Al Jazeera, BBC News, Syrian satellite broadcaster Orient TV, and Dubai-based Al Arabiya TV. 
In response to the greater freedom of expression brought about by the Arab Spring revolutions in countries that were previously subject to very strict censorship, in March 2011 Reporters Without Borders moved Tunisia and Egypt from its "Internet enemies" list to its list of countries "under surveillance", and in 2012 it dropped Libya from the list entirely. At the same time, there were warnings that Internet censorship might increase in other countries following the events of the Arab Spring. However, in 2013, the Libyan communications company LTT blocked pornographic websites and even blocked the family-filtered videos of ordinary websites such as Dailymotion. During the Russian invasion of Ukraine in 2022, Russia was reported to have blocked access to Twitter and Facebook. Facebook was suspended over its policy of reviewing news stories produced by Russian state-backed media for authenticity before allowing them to be published on its platform; it was subject to a total ban, whereas Twitter was suspended regionally. Reports indicated that people were able to circumvent the restrictions by installing VPN software. It has also been reported that the European Union would seek to censor Russian media outlets regarded as producing propaganda.[citation needed] See also Sources References Further reading External links Media related to Internet censorship at Wikimedia Commons |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/GameCube] | [TOKENS: 6837] |
Contents GameCube The Nintendo GameCube[i][j] is a home video game console developed and marketed by Nintendo. It was released in Japan on September 14, 2001, in North America on November 18, 2001, and in Europe on May 3, 2002. It was Nintendo's fourth major home console, succeeding the Nintendo 64, and competed with Sony's PlayStation 2 and Microsoft's Xbox in the sixth generation of game consoles. Nintendo began developing the GameCube in 1998 after entering a partnership with ArtX to design a graphics processing unit. It was the first Nintendo console to use optical discs instead of ROM cartridges, supplemented by writable memory cards for saved games. Unlike its competitors, the GameCube was solely focused on games; most models cannot play DVDs or CDs. Its controller uses a handlebar design with a staggered analog stick layout. GameCube accessories include a link cable that enables connectivity with the Game Boy Advance (GBA) and e-Reader, the Game Boy Player add-on to run Game Boy, Game Boy Color, and GBA games, and the WaveBird Wireless Controller. Select games supported online gaming via a broadband or modem adapter. The GameCube received praise for its controller and exclusive games, but criticism for its toy-like design and lack of multimedia features. Though profitable, it sold far less than the PlayStation 2 and slightly less than the Xbox, only outselling Sega's Dreamcast. Nintendo sold 21.74 million GameCubes worldwide,[k] far fewer than anticipated. This has been attributed to a weak launch game lineup and Nintendo's focus on younger players, a minority of the gaming audience at the time, rather than teenagers and adults. Compared to its competitors, the GameCube's third-party support was limited; some developers skipped releasing multiplatform games on the GameCube, and others reduced support due to poor sales. Nintendo released a successor, the Wii, in November 2006; most Wiis are backward compatible with GameCube games and accessories. Nintendo discontinued the GameCube in February 2007. In retrospect, video game journalists have ranked the GameCube among the best game consoles. Its library includes acclaimed games such as Super Smash Bros. Melee (2001), Eternal Darkness (2002), Metroid Prime (2002), The Legend of Zelda: The Wind Waker (2002), Paper Mario: The Thousand-Year Door (2004), and Resident Evil 4 (2005). Several popular Nintendo franchises, including Animal Crossing, Luigi's Mansion, and Pikmin, began on the GameCube. The GameCube controller has been compatible with every subsequent Nintendo home console. History In 1997, the graphics hardware design company ArtX was founded, with twenty engineers who had previously worked at SGI. ArtX was led by Wei Yen, who had been SGI's head of Nintendo Operations and of Project Reality, which from 1993 to 1996 had scaled down SGI's supercomputer design to create the Nintendo 64 console. In May 1998, ArtX entered into a partnership with Nintendo to undertake the complete design of the system logic and graphics processor, codenamed "Flipper," for Nintendo's sixth-generation video game console. The console went through a series of codenames, including N2000, Star Cube, and Nintendo Advance. On May 12, 1999, Nintendo announced the console during a press conference, giving it the codename "Dolphin" and positioning it as the successor to the Nintendo 64. Nintendo also announced partnerships with IBM to create Dolphin's PowerPC-based CPU, codenamed "Gekko," and with Panasonic (Matsushita Electric Industrial Co., Ltd.)
for the development of its DVD drive and other Dolphin-based devices. Following this announcement, Nintendo began providing development kits to game developers, including Rare and Retro Studios. In April 2000, ArtX was acquired by ATI, by which point the Flipper graphics processor design had already been mostly completed by ArtX, so it was not overtly influenced by ATI. The ArtX cofounder Greg Buchner recalled that their portion of the console's hardware design timeline had arced from inception in 1998 to completion in 2000. Of the ArtX acquisition, an ATI spokesperson said, "ATI now becomes a major supplier to the game console market via Nintendo. The Dolphin platform is reputed to be king of the hill in terms of graphics and video performance with 128-bit architecture." The console was announced as the GameCube at a press conference in Japan on August 25, 2000, abbreviated as both "NGC" and "GC" in Japan and "GCN" in Europe and North America. Nintendo unveiled its software lineup at E3 2001, focusing on fifteen launch games, including Luigi's Mansion and Star Wars Rogue Squadron II: Rogue Leader. Several games originally scheduled to launch with the console were delayed. The GameCube was the first Nintendo home console since the Famicom not to have a Mario launch game. Long before the launch, Nintendo developed and patented an early prototype of motion controls for the GameCube, with which the developer Factor 5 had experimented for its launch games. Greg Thomas, Sega of America's VP of Development, said, "What does worry me is Dolphin's sensory controllers [which are rumored to include microphones and headphone jacks] because there's an example of someone thinking about something different." These motion control concepts would not be deployed to consumers for several years, until the Wii Remote. Prior to the GameCube's release, Nintendo focused resources on the launch of the Game Boy Advance (GBA), a handheld game console and successor to the original Game Boy and Game Boy Color. Several games planned for the Nintendo 64 were postponed to become early releases on the GameCube. Concurrently, Nintendo was developing GameCube software that would provide for future connectivity with the GBA. Certain games, such as The Legend of Zelda: Four Swords Adventures and Final Fantasy Crystal Chronicles, can use the GBA as a secondary screen and controller when connected to the GameCube via a link cable. Nintendo began its marketing campaign with the catchphrase "The Nintendo Difference" at its E3 2001 reveal. The goal was to distinguish itself from the competition as an entertainment company. Later advertisements had the slogan "Born to Play", and game ads featured a rotating cube animation that morphed into a GameCube logo and ended with a voice whispering "GameCube". On May 21, 2001, the launch price of US$199 was announced, $100 lower than that of the PlayStation 2 and Xbox. Nintendo spent $76 million on marketing. In September 2020, leaked documents included Nintendo's plans for a GameCube model that would be both portable with a built-in display and dockable to a TV, similar to its later console, the Nintendo Switch. Other leaks suggest plans for a GameCube successor codenamed Tako, with HD graphics and slots for SD and memory cards, apparently resulting from a partnership with ATI and scheduled for release in 2005. Tako was abandoned for Revolution (later revealed and released as the Wii in 2006), a non-HD console with motion controls.
Nintendo would later work on Project Cafe, an HD console that became the Wii U, released in 2012. The GameCube was launched in Japan on September 14, 2001. Approximately 500,000 units were shipped in time to retailers. The console was scheduled to launch two months later in North America on November 5, 2001, but this was delayed in an effort to increase the number of available units. The console eventually launched in North America on November 18, with over 700,000 units shipped. Other regions followed in 2002, beginning with Europe in the second quarter. On April 22, 2002, the third-party developer Factor 5 announced its 3D audio software development kit for the console, MusyX. Developed in collaboration with Dolby Laboratories, it provided motion-based surround sound encoded as Dolby Pro Logic II. Throughout the mid-2000s, GameCube hardware sales remained far behind the PlayStation 2 and slightly behind the Xbox, though there were brief periods when it outsold both. The console's family-friendly appeal and the lack of support from certain third-party developers skewed its audience toward a younger market, a minority of the gaming population at the time. Many third-party games popular with teenagers or adults, such as the blockbuster Grand Theft Auto series and several key first-person shooters, skipped the GameCube in favor of the PlayStation 2 and Xbox. Many journalists and analysts noted that Nintendo's focus on younger audiences and its family-friendly image was both its biggest advantage and its biggest disadvantage at a time when video games were aimed at more mature audiences,[l] although Nintendo did find success with games aimed at a more mature audience. As of June 2003, the GameCube had a 13% market share, tying with the Xbox in sales but far below the PlayStation 2's 60%. However, despite slow sales and tough competition, Nintendo's position improved by 2003 and 2004. The American market share for the GameCube increased from 19% to 37% in one year due to price cuts and high-quality games.[m] One article stated that by early 2004, the GameCube had a 39% market share in America. By Christmas of 2003, Nintendo of America's president, George Harrison, reported that price cuts down to just under $100 had quadrupled sales in the American market. The GameCube's profitability never reached that of the PlayStation 2 or the GBA; however, it was more profitable than the Xbox. Sales were slow in the GameCube's first two years, but they improved greatly in 2004 and 2005, reaching a 32% share of the hardware market in Europe. Thanks to price drops, which rescued its position in the American market, and well-reviewed games such as Pokémon Colosseum and Resident Evil 4, the GameCube came to outsell the Xbox. Its strongest European markets were the UK, France, and Germany, with more modest success in Spain and Italy.[n] Though falling behind the PlayStation 2 in Europe, the GameCube was successful and profitable there. Nintendo launched the Wii, the successor to the GameCube, on November 19, 2006, in North America and in December 2006 in other regions. In February 2007, Nintendo announced that it had ceased first-party support for the GameCube and that the console had been discontinued, as it was shifting its manufacturing and development efforts towards the Wii and its handheld console, the Nintendo DS. GameCube controllers, game discs, and certain accessories continued to be supported via the Wii's backward compatibility, although these features were removed in later iterations of the Wii console.
The final game officially released on the GameCube was Madden NFL 08,[citation needed] on August 14, 2007. Several games originally developed for the GameCube were reworked for a Wii release, such as Super Paper Mario, or released on both consoles, such as the Wii launch game The Legend of Zelda: Twilight Princess.[citation needed] Hardware Howard Cheng, technical director of Nintendo technology development, said the company's goal was to select a "simple RISC architecture" to help speed the development of games by making it easier on software developers. IGN reported that the system was "designed from the get-go to attract third-party developers by offering more power at a cheaper price. Nintendo's design document for the console specifies that cost is of utmost importance, followed by space." Hardware partner ArtX's Vice President Greg Buchner stated that their guiding thought on the console's hardware design was to target the developers rather than the players, and to "look into a crystal ball" and discern "what's going to allow the Miyamoto-sans of the world to develop the best games". He continued: "We thought about the developers as our main customers. In particular, for GameCube, we spent three years working with Nintendo of America and with all sorts of developers, trying to understand the challenges, needs, and problems they face. First among these is the rising cost of development. The GameCube can see high performance without too much trouble; it isn't a quirky design, but a very clean one. It was important we didn't require jumping through hoops for high performance to be achieved. On top of that, it is rich in features, and we worked to include a dream group of technical features that developers requested." Initiating the GameCube's design in 1998, Nintendo partnered with ArtX (acquired by ATI Technologies during development) for the system logic and the GPU, and with IBM for the CPU. IBM designed a 32-bit PowerPC-based processor with custom architectural extensions for the next-generation console, known as Gekko, which runs at 486 MHz and features a floating-point unit (FPU) with a throughput of 1.9 GFLOPS; the console as a whole was rated at a peak of 10.5 GFLOPS (see the estimate below). Described as "an extension of the IBM PowerPC architecture", the Gekko CPU is based on the PowerPC 750CXe with IBM's 0.18 μm CMOS technology, which features copper interconnects. Codenamed Flipper, the GPU runs at 162 MHz and, in addition to graphics, manages other tasks through its audio and input/output (I/O) processors. The GameCube is Nintendo's first home console not to use cartridges as its primary media; the Famicom Data Recorder, Famicom Disk System, SNES-CD, and 64DD represent past explorations of complementary storage technologies. The GameCube introduced a proprietary miniDVD optical disc format holding up to 1.5 GB of data. It was designed by Matsushita Electric Industrial (now Panasonic Corporation) with a proprietary copy-protection scheme distinct from the Content Scramble System (CSS) used in standard DVDs. The size is sufficient for most games, although a few multi-platform games require an extra disc, higher video compression, or removal of content. By comparison, the PlayStation 2 and Xbox use CDs and DVDs of up to 8.5 GB. Like its predecessor the Nintendo 64, the GameCube was produced in several different color motifs. The system launched in "Indigo", the primary color shown in advertising and on the logo, and in "Jet Black".
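The 1.9 GFLOPS figure quoted above for Gekko is consistent with a simple clock-rate estimate. The following is a minimal back-of-the-envelope sketch, not an official derivation; it assumes (the text does not state this) that Gekko's paired-singles FPU retires one two-wide fused multiply-add per cycle, i.e. four floating-point operations per clock. The 10.5 GFLOPS peak refers to the console as a whole, including Flipper, and is not derived here.

```python
# Rough sanity check of the Gekko FPU figure quoted above.
# Assumption (not stated in the text): one 2-wide fused multiply-add per
# cycle, i.e. 4 floating-point operations per clock.
CLOCK_HZ = 486e6        # Gekko clock speed from the text
FLOPS_PER_CYCLE = 4     # assumed: 2 SIMD lanes x (multiply + add)

gekko_gflops = CLOCK_HZ * FLOPS_PER_CYCLE / 1e9
print(f"Estimated Gekko FPU throughput: {gekko_gflops:.2f} GFLOPS")  # ~1.94
```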
One year later, Nintendo released a "Platinum" GameCube, which uses a silver color scheme for both the console and controller. A "Spice" orange-colored console was eventually released only in Japan, though controllers in that color scheme were released in other countries. A Platinum Pokémon XD: Gale of Darkness console was released in 2005 only in North America with a custom faceplate and a standard Platinum controller. Nintendo developed stereoscopic 3D technology for the GameCube, supported by one launch game, Luigi's Mansion. However, the feature never reached production: 3D televisions were not widespread, and it was deemed that compatible displays and crystals for the add-on accessories would be prohibitively expensive for the consumer. Two audio Easter eggs can be invoked at power-on: one by holding down the "Z" button on the Player 1 controller, and another by connecting four controllers and holding down all of their "Z" buttons. The GameCube features two memory card ports for saving game data. Nintendo released three memory card options: Memory Card 59 in gray (512 KB), Memory Card 251 in black (2 MB), and Memory Card 1019 in white (8 MB). These are often advertised in megabits instead: 4 Mb, 16 Mb, and 64 Mb, respectively (see the sketch below). Memory cards with larger capacities were released by third-party manufacturers. Nintendo learned from its experiences—both positive and negative—with the Nintendo 64's three-handled controller design and chose a two-handled, "handlebar" design for the GameCube. The shape was popularized by Sony's PlayStation controller, released in 1994, and its follow-up DualShock series, released in 1997 with vibration feedback and two analog sticks to improve the 3D experience. Nintendo and Microsoft designed similar features in the controllers for their sixth-generation consoles, but instead of having the analog sticks in parallel, they are staggered by swapping the positions of the directional pad (d-pad) and left analog stick. The GameCube controller features a total of eight buttons, two analog sticks, a d-pad, and a rumble motor. The primary analog stick is on the left with the d-pad located below and closer to the center. On the right are four buttons: a large, green "A" button in the center, a smaller red "B" button to the left, an "X" button to the right, and a "Y" button at the top. Below and to the inside is a yellow "C" analog stick, which often serves a variety of in-game functions, such as controlling the camera angle. The Start/Pause button is located in the middle, and the rumble motor is encased within the center of the controller. On the top are two "pressure-sensitive" trigger buttons marked "L" and "R". Each essentially provides two functions: one analog and one digital. As the trigger is depressed, it emits an increasing analog signal. Once fully depressed, the trigger "clicks" with a digital signal that a game can use for a separate function. There is also a purple, digital button on the right side marked "Z". The A button has a uniquely prominent size and placement, having been the primary action button in past Nintendo controller designs. The rubberized analog stick, together with the overall button layout, was intended to address "Nintendo thumb" pain. In 2002, Nintendo introduced the WaveBird Wireless Controller, the first wireless gamepad developed by a first-party console manufacturer. The RF-based wireless controller is similar in design to the standard controller. It communicates with the GameCube through a wireless receiver dongle. Powered by two AA batteries, it lacks vibration functionality.
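The memory card names and the advertised megabit figures above follow from simple unit arithmetic. The sketch below is a minimal illustration; the 8 KB save-block size and the five blocks reserved for the card's filesystem are assumptions not stated in the text, but they show how the 59/251/1019 block counts in the product names come out.

```python
# How the advertised megabit figures and the "59 / 251 / 1019" names relate to
# the memory card capacities quoted above. Assumptions (not in the text):
# each save block is 8 KB, and 5 blocks are reserved for the card's filesystem.
BLOCK_KB = 8
RESERVED_BLOCKS = 5

cards = [("Memory Card 59", 512), ("Memory Card 251", 2048), ("Memory Card 1019", 8192)]
for name, total_kb in cards:
    megabits = total_kb * 1024 * 8 / 2**20          # KB -> bits -> Mib
    usable_blocks = total_kb // BLOCK_KB - RESERVED_BLOCKS
    print(f"{name}: {total_kb} KB = {megabits:.0f} Mb, {usable_blocks} usable blocks")
# Memory Card 59:   512 KB = 4 Mb,  59 usable blocks
# Memory Card 251: 2048 KB = 16 Mb, 251 usable blocks
# Memory Card 1019: 8192 KB = 64 Mb, 1019 usable blocks
```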
The GameCube uses GameCube Game Discs, and the Game Boy Player accessory runs Game Pak cartridges for the Game Boy, Game Boy Color, and Game Boy Advance. The original version of the GameCube's successor, the Wii, supports backward compatibility with GameCube controllers, memory cards, and games but not the Game Boy Player or other hardware attachments. However, later revisions of the Wii—including the "Family Edition" released in 2011 and the Wii Mini released in 2012—do not support any GameCube hardware or software. The Panasonic Q[o] is a hybrid version of the GameCube with a standard DVD player, developed by Panasonic as part of the strategic alliance under which it developed the optical drive for the original GameCube hardware. Its stainless steel case is completely revised with a DVD-sized front-loading tray, a backlit LCD screen with playback controls, and a carrying handle like the GameCube's. Announced by Panasonic on October 19, 2001, it was released exclusively in Japan on December 14 at a suggested retail price of ¥39,800; however, low sales resulted in Panasonic announcing the discontinuation of the Q on December 18, 2003. The Q supports CDs, DVDs, and GameCube discs, but there is virtually no integration between the GameCube and DVD player modes. Games In its lifespan from 2001 to 2007, Nintendo licensed over 600 GameCube games. Nintendo bolstered the console's popularity by creating new franchises such as Pikmin and Animal Crossing (the latter of which began as a Japan-only Nintendo 64 game), and by reviving series that had skipped the Nintendo 64, such as Metroid with Metroid Prime. Longer-standing franchises were represented by the critically acclaimed The Legend of Zelda: The Wind Waker and Super Mario Sunshine, as well as Mario Kart: Double Dash. Other Nintendo games were successors to Nintendo 64 games, such as the GameCube's best-selling game, Super Smash Bros. Melee, with more than 7 million copies sold worldwide; F-Zero GX; Mario Golf: Toadstool Tour; Mario Party 4, 5, 6, and 7; Mario Power Tennis; and Paper Mario: The Thousand-Year Door. Though committed to its software library, Nintendo was still criticized for not releasing enough games in the launch window and for launching with Luigi's Mansion instead of a 3D Mario game. Nintendo had struggled with its family-friendly image during the late 1990s and most of the 2000s. However, during this period, it released more video games for a mature audience, with mostly successful results. While the video game industry was focusing on more mature audiences and online connections, Nintendo regained older players who had gravitated to the PlayStation 2 and Xbox during the early 2000s. Some games aimed at older audiences were critically and financially successful—more so than on the Dreamcast, though less so than on the PlayStation 2 and Xbox. Such examples include The Legend of Zelda: Twilight Princess, Super Smash Bros. Melee, Resident Evil 4, Metal Gear Solid: The Twin Snakes, Killer7, Star Wars Rogue Squadron II: Rogue Leader, Final Fantasy Crystal Chronicles, Resident Evil (2002), Metroid Prime, Metroid Prime 2: Echoes, Soulcalibur II, Resident Evil Zero, F-Zero GX, Star Fox Adventures, and Star Fox Assault. One of the most well-known GameCube games for mature audiences is Eternal Darkness: Sanity's Requiem, which underperformed financially but garnered critical acclaim and is now regarded as a cult classic. The GameCube is Nintendo's first home console with a system menu, activated by powering on without a valid game disc or by holding down the A button while one is loaded.
Early in Nintendo's history, the company had achieved considerable success with third-party developer support on the Nintendo Entertainment System and Super NES. Competition from the Sega Genesis and Sony PlayStation in the 1990s changed the market's landscape and reduced Nintendo's ability to obtain exclusive third-party support on the Nintendo 64. The Nintendo 64's Game Pak cartridge format increased the cost to manufacture software, as opposed to the cheaper, higher-capacity optical discs on the PlayStation. With the GameCube, Nintendo intended to reverse the trend, as evidenced by the number of third-party games available at launch. The new optical disc format increased game storage capacity significantly and reduced production costs. Successful exclusives include Star Wars Rogue Squadron II: Rogue Leader from Factor 5, Resident Evil 4 from Capcom, and Metal Gear Solid: The Twin Snakes from Konami. Sega discontinued its Dreamcast console to become a third-party developer, porting Dreamcast games such as Crazy Taxi and Sonic Adventure 2, and developing new franchises, such as Super Monkey Ball. Longtime Nintendo partner Rare, which had developed GoldenEye 007, Perfect Dark, Banjo-Kazooie, Conker's Bad Fur Day, and the Donkey Kong Country series, released Star Fox Adventures for the GameCube, its final Nintendo game before its acquisition by Microsoft in 2002. Several third-party developers were contracted to work on new games for Nintendo franchises, including Star Fox: Assault (which became a Player's Choice re-release) and Donkey Konga by Namco, and Wario World from Treasure. In November 2002, Capcom announced five games for the system, dubbed the Capcom Five; of these, Viewtiful Joe and Resident Evil 4 were later ported to other systems. Third-party GameCube support was some of the most extensive of any Nintendo console predating the Wii. Some third-party developers, such as Midway, Namco, Activision, Konami, Ubisoft, THQ, Disney Interactive Studios, Humongous Entertainment, Electronic Arts, and EA Sports, continued to release GameCube games into 2007. One of the biggest third-party GameCube developers was Sega, which had quit the console hardware market to become a third-party game developer after the failure of the Dreamcast. It partnered with long-time rival Nintendo, as well as with Microsoft and Sony, to recoup the losses from the Dreamcast. Sega was a successful third-party developer from the early 2000s, mostly with games for the family market, such as Super Monkey Ball, Phantasy Star Online, Sonic Adventure, Sonic Adventure 2: Battle, and Sonic Heroes. Nintendo did not put a heavy focus on online games early in the console's life. Only eight GameCube games support network connectivity, five with Internet support and three with local area network (LAN) support. The only Internet-capable games released in Western territories are three role-playing games (RPGs) in Sega's Phantasy Star series: Phantasy Star Online Episode I & II, Phantasy Star Online Episode I & II Plus, and Phantasy Star Online Episode III: C.A.R.D. Revolution. The official servers were decommissioned in 2007, but players can still connect to fan-maintained private servers. Japan received two additional games with Internet capabilities: a cooperative RPG, Homeland, and a baseball game with downloadable content, Jikkyō Powerful Pro Yakyū 10. Lastly, three racing games have LAN multiplayer modes: 1080° Avalanche, Kirby Air Ride, and Mario Kart: Double Dash.
Those can be forced over the Internet with third-party PC software capable of tunneling the GameCube's network traffic. Online play requires an official broadband or modem adapter because the GameCube lacks out-of-the-box network capabilities. Nintendo never commissioned any Internet services for the GameCube but allowed other publishers to manage custom online experiences. On June 5, 2025, several GameCube games were re-released on the Nintendo Classics service as part of the "Expansion Pack" tier of Nintendo Switch Online exclusively for the Nintendo Switch 2. Reception The GameCube received generally positive reviews following its launch. PC Magazine praised the overall hardware design and quality of games available at launch. CNET gave an average review rating, noting that though the console lacks a few features offered by its competition, it is relatively inexpensive, has a great controller design, and launched with a decent lineup of games. In later reviews, criticism of the console mounted, often centering on its overall look and feel, which was described as "toy-ish". With poor sales figures and the associated financial harm to Nintendo, a Time International article called the GameCube an "unmitigated disaster". Retrospectively, Joystiq compared the GameCube's launch window to that of its successor, the Wii, noting that the GameCube's "lack of games" resulted in a subpar launch, and that the console's limited selection of online games damaged its market share in the long run. Time International concluded that the system had low sales figures because it lacked "technical innovations". In Japan, between 280,000 and 300,000 GameCube consoles were sold during the first three days of its sale, out of an initial shipment of 450,000 units. During its launch weekend, $100 million worth of GameCube products were sold in North America. The console sold out in several stores, selling faster initially than either of its competitors, the Xbox and the PlayStation 2. Nintendo reported that the most popular launch game was Luigi's Mansion, which sold more at launch than Super Mario 64 had. Other popular games included Star Wars Rogue Squadron II: Rogue Leader and Wave Race: Blue Storm. By early December 2001, 600,000 units had been sold in the US. Nintendo predicted 50 million GameCube units sold by 2005, but it only sold 22 million GameCube units worldwide during its lifespan, placing it slightly behind the Xbox's 24 million (though it did manage to outsell the Xbox in Japan) and well behind the PlayStation 2's 155 million. Ars Technica articles from 2006 and a 2020 book state that Nintendo officially sold 24 million GameCube consoles worldwide, and one article from Seeking Alpha states that the GameCube sold 26 million consoles worldwide. Its sales exceeded those of the Xbox 360 in Japan. The GameCube's predecessor, the Nintendo 64, also outperformed it, at nearly 33 million units, while the GameCube in turn exceeded the Dreamcast, which sold 9.13 million units. In September 2009, IGN ranked the GameCube 16th in its list of best gaming consoles of all time, placing it behind all three of its sixth-generation competitors: the PlayStation 2 (3rd), the Dreamcast (8th), and the Xbox (11th). As of March 31, 2003, 9.55 million GameCube units had been sold worldwide, behind Nintendo's initial goal of 10 million consoles. Many of Nintendo's own first-party games, such as Super Smash Bros.
Melee, Pokémon Colosseum, and Mario Kart: Double Dash, had strong sales, though this did not typically benefit third-party developers or directly drive sales of their games. At the same time, these first-party games, along with second-party and third-party games, elevated the GameCube.[p] Sales of many cross-platform games—such as sports franchises released by Electronic Arts—were far below their PlayStation 2 and Xbox counterparts, eventually prompting some developers to scale back or completely cease support for the GameCube. Exceptions include Sega's family-friendly Sonic Adventure 2 and Super Monkey Ball, which reportedly yielded more sales on the GameCube than most of the company's games did on the PlayStation 2 and Xbox. In June 2003, Acclaim Entertainment CEO Rod Cousens said that the company would no longer support the GameCube, criticizing it as a system "that don't deliver profits". Acclaim later walked back these claims, saying the company would increase support for the system, though its plans became unclear after it filed for bankruptcy in August 2004. In September 2003, Eidos Interactive announced that it would end support for the GameCube, as the publisher was losing money developing for Nintendo's console; this led to several games in development being canceled for the system. Eidos CEO Mike McGarvey said that the GameCube was a "declining business". However, after the company's purchase by the SCi Entertainment Group in 2005, Eidos resumed development for the system and released Lego Star Wars: The Video Game and Tomb Raider: Legend. In March 2003, the British retailer Dixons removed all GameCube consoles, accessories, and games from its stores. That same month, another British retailer, Argos, cut the price of the GameCube in its stores to £78.99, which was more than £50 cheaper than Nintendo's SRP for the console at the time. However, in October of that year, Dixons eventually restocked its supply of consoles after a price drop was ordered, which caused GameCube sales to outpace the PlayStation 2's for a week. With sales sagging and millions of unsold consoles in stock, Nintendo halted GameCube production for the first nine months of 2003 to reduce surplus units. Sales rebounded slightly after a price drop to US$99.99 on September 24, 2003, and the release of The Legend of Zelda: Collector's Edition bundle. A demo disc, the GameCube Preview Disc, was also released in a bundle in 2003. From this period on, GameCube sales continued to be steady, particularly in Japan, but the console remained in third place in worldwide sales during the sixth-generation era because of weaker sales performance elsewhere, even though its fortunes improved in America and Europe. Iwata had forecast to investors that the company would sell 50 million GameCube units worldwide by March 2005, but by the end of 2006 it had only sold 21.74 million—fewer than half. However, the GameCube had the highest attach rate of any Nintendo console, at 9.59 games per console, and was profitable, more so than the Xbox despite the Xbox's higher sales. Many games that debuted on the GameCube, including the Pikmin series, Chibi-Robo!, Metroid Prime, and Luigi's Mansion, became popular and profitable Nintendo franchises or subseries.[q] GameCube controllers have limited support on the Wii U and Switch, for playing Super Smash Bros. for Wii U and Super Smash Bros. Ultimate respectively, via a USB adapter.
While on the Wii U the controller could only be used in Super Smash Bros., the Nintendo Switch recognizes it as a Pro Controller, so the GameCube controller can be used in any game where the Pro Controller is recognized. However, because the GameCube controller lacks motion controls and some buttons, it may not be fully playable in some Switch games. Regarding concerns about the correlation between violence and video games, a 2009 study by Iowa State University found that certain games, such as the GameCube exclusives Super Mario Sunshine and Chibi-Robo!, could help players learn prosocial skills such as helping others, empathy, and cooperation. Super Monkey Ball, originally a GameCube exclusive, was reported to help surgeons perform laparoscopic surgery better than surgeons who do not play video games. GamesRadar+ ranked the GameCube 11th on its list of The 20 best video game consoles and hardware of all time in 2021. Den of Geek placed it at number 12 on its list of The 25 Best Video Game Consoles Ever, Ranked, in 2023. See also Notes References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Adab_(city)] | [TOKENS: 3905] |
Contents Adab (city) Adab (Sumerian: 𒌓𒉣𒆠 Adabki, spelled UD.NUNKI) was an ancient Sumerian city between Girsu and Nippur, lying about 35 kilometers (22 miles) southeast of the latter. It was located at the site of modern Bismaya or Bismya in the Al-Qādisiyyah Governorate of Iraq. The site was occupied at least as early as the 3rd millennium BC, through the Early Dynastic, Akkadian Empire, and Ur III periods, into the Kassite period in the mid-2nd millennium BC. It is known that there were temples of Ninhursag/Digirmah, Iskur, Asgi, Inanna and Enki at Adab and that the city-god of Adab was Parag'ellilegarra (Panigingarra), "The Sovereign Appointed by Enlil". Bismaya is not to be confused with the small, later (Old Babylonian and Sassanian periods) archaeological site named Tell Bismaya, 9 kilometers (5.6 miles) east of the confluence of the Diyala and the Tigris rivers, excavated by Iraqi archaeologists in the 1980s, or with Tell Basmaya, southeast of modern Baghdad, excavated by Iraqi archaeologists in 2013-2014. Archaeology The 400-hectare site consists of a number of mounds distributed over an area about 1.5 kilometres (0.93 mi) long and 3 kilometres (1.9 mi) wide, forming a series of low ridges nowhere exceeding 12 metres (39 ft) in height, lying somewhat nearer to the Tigris than the Euphrates, about a day's journey to the southeast of Nippur. It was surrounded by a double wall. In total there are twelve mounds, of which two (Mounds X and XII) are the result of sand dredged from the Iturungal canal, though some rooms and 20 tablets were found on the northern extension of Mound X. The excavator Victor S. Persons reported working on Mound XIV and Mound XVI, but there is no record of where they lay. Some private houses were noted outside the east wall. Initial examinations of the site of Bismaya were made by William Hayes Ward of the Wolfe Expedition in 1885 and by John Punnett Peters of the University of Pennsylvania in 1890, each spending a day there and finding one cuneiform tablet and a few fragments. Walter Andrae visited Bismaya in 1902, found a tablet fragment, and produced a sketch map of the site. Excavations were conducted there on behalf of the University of Chicago, led by Edgar James Banks, for a total of six months beginning on Christmas Day of 1903 and lasting until May 25, 1904. Work resumed on September 19, 1904, but was stopped after 8 1/2 days by the Ottoman authorities. Excavation resumed on March 13, 1905, under the direction of Victor S. Persons and continued until the end of June 1905. During the excavation of a city gate, thousands of sling balls (some stone, most of baked clay), some flattened, were found, which the excavator interpreted as the result of a battle. While Banks was better trained than the earlier generation of antiquarians and treasure hunters and used more modern archaeological methods, the excavations suffered seriously from never having been properly published. The Banks expedition to Bismaya was well documented by the standards of the time, and many objects were photographed, though no final report was ever produced due to personal disputes. In 2012, the Oriental Institute re-examined the records and objects returned to the institute by Banks and produced a "re-excavation" report. One issue is that Banks and Persons purchased objects from Adab locally while there, and it is uncertain which objects held at the museum were excavated and which were bought.
On Mound V, on what was originally thought to be an island but has since been understood to have resulted from a shift in the canal bed, stood the temple E-mah, with a ziggurat. The temple had two occupational phases. E-sar, the first phase (the Earlier Temple), constructed of plano-convex bricks, was from the Early Dynastic period. That temple was later filled in with mud bricks and sealed off with a course of baked brick and bitumen pavement. A foundation deposit of the Adab ruler E-iginimpa'e, dated to Early Dynastic IIIa, was found on that pavement, containing an "inscribed adze-shaped copper object (A543) with a copper spike (A542) inserted into the hole at its end and two tablets, one of copper alloy (A1160) and one of white stone (A1159)". 𒀭𒈤 𒂍𒅆𒉏𒉺𒌓𒁺 𒃻𒑐𒋼𒋛 𒌓𒉣𒆠 𒂍𒈤 𒈬𒈾𒆕 𒌫𒁉𒆠𒂠 𒋼𒁀𒋛 d-mah/ e2-igi-nim-pa-e3/ GAR-ensi/ adab{ki}/ e2-mah mu-na-du/ ur2-be2 ki-sze3/ temen ba-si "For the goddess Digirmah, E-iginimpa'e, ensi-GAR of Adab, built the E-Mah for her, and buried foundation deposits below its base" The second temple (Later Temple) was faced with baked bricks, some bearing an inscription of the Ur III ruler Shulgi naming it the temple of the goddess Ninhursag. Adab was evidently once a city of considerable importance but was deserted at a very early period, since the ruins found close to the surface of the mounds belong to Shulgi and Ur-Nammu, kings of the Third Dynasty of Ur in the latter part of the third millennium BC, based on inscribed bricks excavated at Bismaya. Immediately below these, as at Nippur, were found artifacts dating to the reigns of Naram-Sin and Sargon of the Akkadian Empire, c. 2300 BC. Below these there were still 10.5 metres (34 ft) of stratified remains, constituting seven-eighths of the total depth of the ruins. A large palace was found in the central area with a very large well lined with plano-convex bricks, marking it as being from the Early Dynastic period. Besides the remains of buildings, walls, and graves, Banks discovered a large number of inscribed clay tablets of a very early period, bronze and stone tablets, bronze implements and the like. Of the tablets, 543 went to the Oriental Institute and roughly 1100, mostly purchased from the locals rather than excavated, went to the Istanbul Museum. The latter are still unpublished and are unavailable for study. Brick stamps found by Banks during his excavation of Adab state that the Akkadian ruler Naram-Sin built a temple to Inanna at Adab, but the temple was not found during the dig and is not known for certain to be E-shar. The two most notable discoveries were a complete statue in white marble, apparently the earliest yet found in Mesopotamia and now in the Istanbul Archaeology Museums, bearing an inscription translated by Banks as "E-mach, King Da-udu, King of, Ud-Nun" and now known as the statue of Lugal-dalu; and a temple refuse heap consisting of great quantities of fragments of vases in marble, alabaster, onyx, porphyry and granite, some of which were inscribed, and others engraved and inlaid with ivory and precious stones. Of the Adab tablets that ended up at the University of Chicago, sponsor of the excavations, all have been published and also made available in digital form online. After the end of excavation, on a later personal trip to the region in 1913, Banks purchased thousands of tablets from a number of sites, many from Adab, and sold them piecemeal to various owners over the years. Some have made their way into publication.
Many more have subsequently made their way into the antiquities market from illegal looting of the site, and some have also been published. A number ended up in the collection of Cornell University. In response to widespread looting, which began after the 1991 war, the Iraqi State Board of Antiquities and Heritage conducted an excavation at Adab in 2001. The site has now been largely destroyed by systematic looting, which increased after the war in 2003, so further excavation is unlikely. On the order of a thousand tablets from that looting, all from the Sargonic Period, have been sold to various collectors, and many are being published, though without archaeological context. Of the 9,000 published tablets from the Sargonic Period (Early Dynastic IIIb, Early Sargonic, Middle Sargonic and Classic Sargonic), about 2,300 came from Adab. From 2016 to 2019, the University of Bologna and the Iraqi State Board of Antiquities and Heritage, led by Nicolò Marchetti, conducted a program of coordinated remote sensing and surface surveys, the Qadis survey, in the Qadisiyah province, including at Bismaya (QD049). Results included a "Preliminary reconstruction of the urban layout and hydraulic landscape around Bismaya/Adab in the ED III and Akkadian periods". A previously unknown palace was discovered and the extent of looting identified. It was determined that the city was surrounded by canals. The overall occupation of the site in the Early Dynastic III period was determined to have been 462 hectares. The Qadis survey showed that Adab had a 24-hectare central harbor, with a maximum length of 240 meters and a maximum width of 215 meters. The harbor was connected to the Tigris River via a 100-meter-wide canal. In 2001, a statue inscribed "Temple Builder, of the goddess Nin-SU(?)-KID(?): Epa'e, King of Adab" became available to the Baghdad Museum. History Adab is mentioned in late 4th millennium BC texts found at Uruk, but no finds from that period have been recovered from the site. Adab was occupied from at least the Early Dynastic Period. According to the Sumerian text Inanna's Descent to the Netherworld, there was a temple of Inanna named E-shar at Adab during the reign of Dumuzid of Uruk. In another text in the same series, Dumuzid's Dream, Dumuzid of Uruk is toppled from his opulence by a hungry mob composed of men from the major cities of Sumer, including Adab. A king of Kish, Mesilim, appears to have ruled at Adab, based on inscriptions found at Bismaya. One inscription, on a bowl fragment, reads "Mesilim, king of Kish, to Esar has returned [this bowl], Salkisalsi being patesi of Adab". One king of Adab, Lugal-Anne-Mundu, appearing in the Sumerian King List, is mentioned in a few contemporary inscriptions; some much later copies claim that he established a vast but brief empire stretching from Elam all the way to Lebanon and the Amorite territories along the Jordan. Adab is also mentioned in some of the Ebla tablets from roughly the same era as a trading partner of Ebla in northern Syria, shortly before Ebla was destroyed by unknown forces. A marble statue was found at Bismaya inscribed with the name of another king of Adab, variously translated as Lugal-daudu, Da-udu, and Lugaldalu. An inscription of Eannatum, ruler of Lagash, was also found at Adab. Meskigal, governor of Adab under Lugalzagesi of Uruk, changed allegiance to Akkad and became governor under Sargon of Akkad.
He later joined other cities, including Zabalam, in a rebellion against Rimush, son of Sargon and second ruler of the Akkadian Empire, and was defeated and captured. About 380 of the published tablets from Adab date to the time of Meskigal (ED IIIB/Early Sargonic). This rebellion occurred during the first two regnal years of Rimush. A year name of Rimush reads "The year Adab was destroyed", and an inscription reads "Rimus, king of the world, was victorious over Adab and Zabala in battle and struck down 15,718 men. He took 14,576 captives". Various governors, including Lugal-gis, Sarru-alli, Ur-Tur, and Lugal-ajagu, then ruled Adab under direct Akkadian control. About 1000 tablets from this period (Middle Sargonic) have been published. In the time of Sargon's grandson Naram-Sin, Adab again joined a "Great Rebellion" against Akkad and was again defeated. In the succeeding period (Classical Sargonic), it is known that there were temples to Ninhursag/Digirmah (E-Mah), Iskur, Asgi, Inanna and Enki. By the end of the Akkadian period, Adab was occupied by the Gutians, who made it their capital. Enheduanna, daughter of Sargon and the first known poet, wrote a number of temple hymns, including one to the temple of the goddess Ninhursag and her son Ashgi at Adab. A cuneiform text lays out the adjudication of a boundary dispute between Adab and Umma by Naram-Sin of Akkad (c. 2255–2218 BC). It lists the governor of Umma under Akkad as Šubur-Nagarpae and the governor of Adab as Lugal-ša; this is the first attestation of both. The decision was witnessed by the "city elders" of Adab, Tallani, Ibrat, and Pašime. Several governors of the city under Ur III are also known, including Ur-Asgi and Habaluge under the Ur III ruler Shulgi (and Amar-Sin) and Ur-Asgi II under Shu-Sin. A brick inscription found at Adab records Shulgi dedicating a weir to the goddess Ninhursag. Inscribed bricks of Amar-Sin were also found at Adab. A temple for the deified Shu-Sin was built at Adab by Habaluge. "Sü-Sín, beloved of the god Enlil, the king whom the god Enlil lovingly chose in his (own) heart, mighty king, king of Ur, king of the four quarters, his beloved god, Habaluge, governor of Adab, his servant, built for him his beloved temple." About 200 inscribed objects, mainly tablets but also a few bricks and clay sealings, from the Old Babylonian period of the early 2nd millennium BC are known from Adab. The city of Adab is also mentioned in the Code of Hammurabi (c. 1792 – c. 1750 BC). There is a Sumerian-language comic tale, dating to the Old Babylonian period, of the Three Ox-drivers from Adab. Inscribed bricks of the Kassite dynasty ruler Kurigalzu I (c. 1375 BC) were found at Adab, marking the last verified occupation of the site. List of rulers The Sumerian King List (SKL) names only one ruler of Adab (Lugalannemundu). The following list should not be considered complete: "Then Ur was defeated and the kingship was taken to Adab." — SKL "1 king; he ruled for 90 years. Then Adab was defeated and the kingship was taken to Mari."
— SKL Gallery See also References Further reading External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Jython] | [TOKENS: 351] |
Contents Jython Jython is an implementation of the Python programming language designed to run on the Java platform. It was known as JPython until 1999. Overview Jython programs can import and use any Java class. Except for some standard modules, Jython programs use Java classes instead of Python modules. Jython includes almost all of the modules in the standard Python programming language distribution, lacking only some of the modules implemented originally in C. For example, a user interface in Jython could be written with Swing, AWT or SWT. Jython compiles Python source code to Java bytecode (an intermediate language) either on demand or statically. History Jython was initially created in late 1997 to replace C with Java for performance-intensive code accessed by Python programs, moving to SourceForge in October 2000. The Python Software Foundation awarded a grant in January 2005. Jython 2.5 was released in June 2009. Status and roadmap The most recent release is Jython 2.7.4. It was released on August 18, 2024, and is compatible with Python 2.7. Python 3-compatible changes are planned as part of the Jython 3 roadmap. Although Jython implements the Python language specification, it has some differences and incompatibilities with CPython, which is the reference implementation of Python. License terms From version 2.2 on, Jython (including the standard library) is released under the Python Software Foundation License (v2). Older versions are covered by the Jython 2.0, 2.1 license and the JPython 1.1.x Software License. The command-line interpreter is available under the Apache Software License. Usage See also References External links |
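As an illustration of the statement above that Jython programs can import and use any Java class, here is a minimal sketch; it assumes a working Jython 2.7 installation, the file name is illustrative, and only classes from the standard JDK are used.

```python
# Run with: jython hello_java.py (file name is illustrative)
# A Python program that uses Java classes directly from the JDK.
from java.util import ArrayList, Collections
from java.lang import System

names = ArrayList()           # a java.util.ArrayList, not a Python list
for n in ["Charlie", "Alice", "Bob"]:
    names.add(n)
Collections.sort(names)       # sort via the Java Collections API

System.out.println(names)     # prints: [Alice, Bob, Charlie]
```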
======================================== |