[SOURCE: https://en.wikipedia.org/wiki/Torato_Umanuto] | [TOKENS: 1566]
Torato Umanuto (Hebrew: תּוֹרָתוֹ אֻמָּנוּתוֹ, lit. 'Torah study is his job') was a special government arrangement in Israel allowing young Haredi Jewish men enrolled in yeshivas to complete their studies before being conscripted into the Israeli military. Historically, Israeli law has made military service mandatory for male and female Jews, male Druze, and male Circassians once they turn 18 years of age, with male conscripts required to serve for three years and female Jewish conscripts for two. Haredi Jews maintain that the practice of studying or reciting the Torah, when undertaken by great Torah scholars or their disciples, is crucial in defending the Israeli people from threats, akin to an additional "praying division" of the military. In practice, the Torato Umanuto arrangement provided a legal route whereby Haredi rabbis and their disciples could either enroll for a shortened service period of four months or be exempted from compulsory military service altogether. In June 2024, the Supreme Court of Israel declared any continued exemption from IDF conscription unlawful, and the army sent draft orders to thousands of Haredi men in the following months.

Etymology

The phrase Torato Umanuto is taken from the Talmud: For it was taught: If companions [scholars] are engaged in studying, they must break off for the reading of the shema, but not for prayer. R. Johanan said: This was taught only of such as R. Simeon b. Yohai and his companions, whose Torah study was their occupation. — b.
Talmud, tractate Shabbat, 11a

History

During the 1948 Arab–Israeli War, Israeli prime minister David Ben-Gurion reached a special arrangement with Israel's Haredi Jews (represented by Agudat Yisrael and Yitzhak-Meir Levin) under which yeshiva students would be temporarily exempted from serving in the Israel Defense Forces, but only as long as their sole occupation was studying the Torah, a practice to which many Haredi Jews devote the majority of their day as a religious commandment. The arrangement's original purpose was to reach a comprehensive accommodation,[citation needed] later called the secular–religious status quo, between the secular community and the Haredi population, who had been living under the British Mandate for Palestine, and by extension to prevent an internal conflict within the Palestinian Jewish community (the Yishuv) amidst high tensions with the region's Arabs. By contrast, Israelis who belong to the Religious Zionist community are conscripted, often under the yeshiva system of the Hesder program, which combines Torah study with military service. Over the years, as the Israeli population grew, the number of Haredi men eligible for exemption under Torato Umanuto grew significantly, from 400 men originally to tens of thousands as of 2024, and many non-Haredi Israeli Jews began to complain about the uneven burden of military service. For many years the Torato Umanuto arrangement had the status of a regulation under the jurisdiction of the Ministry of Defense (Prime Minister David Ben-Gurion also held the Defense portfolio). In the 1990s the High Court of Justice of Israel ruled that the Defense Minister had no authority to determine the extent of these exemptions. The Supreme Court postponed the application of the ruling to give the government time to resolve the matter.[citation needed] In accordance with the judicial ruling, Prime Minister Ehud Barak set up the Tal committee in 1999.
The Tal committee reported in April 2000, and its recommendations were approved by the Knesset in July 2002. The new Tal Law, as it came to be known, was passed with 51 votes in favour and 41 against. It provided for a continuation of the Torato Umanuto arrangement under specific conditions laid down in the law; it was hoped that the number of exemptions would gradually decline. The new law did not, however, put an end to controversies and disagreements.[citation needed] In 2005, Justice Minister Tzipi Livni stated that the Tal Law, which by then had yet to be fully implemented, did not provide an adequate solution to the problem of Haredi conscription: only 1,115 of the 41,450 yeshiva students covered by the arrangement had taken the "decision year" provided by the law, and of these only 31 had later enlisted in the Israel Defense Forces. In 2007 the Tal Law was extended until August 2012. In January 2012, Defense Minister Ehud Barak said his ministry was preparing an alternative to the Tal Law. Dozens of IDF reserve soldiers put up what they called "the suckers' camp" near the Tel Aviv Savidor Central Railway Station to protest a possible extension of the Tal Law; several politicians, public figures, disabled IDF veterans, and high school and university students visited the protest encampment. In February 2012 the High Court of Justice ruled that the Tal Law in its current form was unconstitutional and could not be extended beyond August. Prime Minister Benjamin Netanyahu said that the government would formulate a new bill that would guarantee a more equal sharing of the burden by all parts of Israeli society. The issue was also part of the political crisis that led into the 2012–2013 election. The Supreme Court ruled in 2017 that blanket military service exemptions for Haredi yeshiva students were illegal and discriminatory.
In March 2024, Attorney General Gali Baharav-Miara instructed both the Education Ministry and the Defense Ministry to begin the drafting process for Haredi men. On the night of 1 April, the coalition government stated that it had not agreed to an extension of the exemption, which had thus expired. Following a number of delays, a letter from Prime Minister Netanyahu requesting a one-month extension, and the passing of the deadline on 1 April 2024, the Supreme Court decided that the Haredim would no longer receive an exemption from military service and that yeshivas could no longer receive the associated government subsidies. The Times of Israel reported that, per government figures, 1,257 yeshivas would lose subsidies for the 49,485 students receiving the exemption. Haredi lawmakers, members of the political parties United Torah Judaism and Shas, and supporters of the coalition government stated their intention to walk out if the removal of the exemption were enforced; Anshel Pfeffer, a journalist for the newspaper Haaretz, argued that these threats were hollow. Haredi youth in the ultra-Orthodox neighbourhood of Mea Shearim in Jerusalem burned Israeli flags and military uniforms in protest. Following the 1 April lapse, the Supreme Court stated that it would convene on 2 June 2024 to hear a case regarding the conscription of Haredi men. The case was presided over by an expanded nine-judge panel, as opposed to the standard three-judge panel. On 25 June 2024, the panel unanimously agreed to allow the compulsory conscription of Haredi Jews. In July 2024, the army began drafting 3,000 Haredi men, but fewer than 10% showed up at recruitment centers. In November, 7,000 additional draft orders for Haredi men were approved.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Drug_test] | [TOKENS: 6691]
A drug test (also often toxicology screen or tox screen) is a technical analysis of a biological specimen (for example urine, hair, blood, breath, sweat, or oral fluid/saliva) to determine the presence or absence of specified parent drugs or their metabolites. Major applications of drug testing include detecting performance-enhancing steroids in sport, screening by employers and parole/probation officers for drugs prohibited by law (such as cocaine, methamphetamine, and heroin), and police testing for the presence and concentration of alcohol (ethanol) in the blood, commonly referred to as BAC (blood alcohol content). BAC tests are typically administered via a breathalyzer, while urinalysis is used for the vast majority of drug testing in sports and the workplace. Numerous other methods with varying degrees of accuracy, sensitivity (detection threshold/cutoff), and detection periods exist. A drug test may also refer to a test that provides quantitative chemical analysis of an illegal drug, typically intended to help with responsible drug use.

Detection periods

The detection window depends on multiple factors: drug class, amount and frequency of use, metabolic rate, body mass, age, overall health, and urine pH. For ease of use, the detection times of metabolites have been incorporated into those of each parent drug. For example, heroin and cocaine can only be detected for a few hours after use, but their metabolites can be detected for several days in urine; the chart depicts the longer detection times of the metabolites. In the case of hair testing, the metabolites are permanently embedded in the hair, and the detection time is determined by the length of the hair sample used in the analysis. The standard length of head hair used in the test is 1.5", which corresponds to about 3 months of growth. Body/pubic hair grows more slowly, so the same 1.5" would cover a longer detection time.
Oral fluid or saliva testing results for the most part mimic those of blood. The only exceptions are THC (tetrahydrocannabinol) and benzodiazepines: oral fluid will likely detect THC from ingestion for a maximum period of 6–12 hours, which continues to cause difficulty in oral fluid detection of THC and benzodiazepines. Breath air for the most part mimics blood tests as well; due to the very low levels of substances in breath, liquid chromatography–mass spectrometry has to be used to analyze the sample, according to a recent publication in which 12 analytes were investigated. Rapid oral fluid products are not approved for use in workplace drug testing programs and are not FDA cleared. Using rapid oral fluid drug tests in the workplace is prohibited in only: The following chart gives approximate detection periods for each substance by test type.

Types

Urine analysis is primarily used because of its low cost, and urine drug testing is one of the most common testing methods. The enzyme multiplied immunoassay technique is the most frequently used urinalysis; complaints have been made about its relatively high rate of false positives. Urine drug tests screen the urine for the presence of a parent drug or its metabolites; the level of drug or metabolite is not predictive of when the drug was taken or how much the patient used. Urine drug testing is an immunoassay based on the principle of competitive binding: drugs which may be present in the urine specimen compete against their respective drug conjugate for binding sites on their specific antibody. During testing, a urine specimen migrates upward by capillary action. A drug, if present in the urine specimen below its cut-off concentration, will not saturate the binding sites of its specific antibody; the antibody will then react with the drug-protein conjugate, and a visible colored line will show up in the test line region of the specific drug strip.
A common misconception is that a drug test targeting a class of drugs, for example opioids, will detect all drugs of that class. However, most opioid tests will not reliably detect oxycodone, oxymorphone, meperidine, or fentanyl; likewise, most benzodiazepine drug tests will not reliably detect lorazepam. Urine drug screens that test for a specific drug, rather than an entire class, are often available. When an employer requests a drug test from an employee, or a physician requests a drug test from a patient, the employee or patient is typically instructed to go to a collection site or to collect the sample at home. The urine sample goes through a specified 'chain of custody' to ensure that it is not tampered with or invalidated through lab or employee error. The urine is collected at a remote location in a specially designed secure cup, sealed with tamper-resistant tape, and sent to a testing laboratory to be screened for drugs (typically the Substance Abuse and Mental Health Services Administration 5-panel). The first step at the testing site is to split the urine into two aliquots. One aliquot is screened for drugs using an analyzer that performs an immunoassay as the initial screen. To ensure specimen integrity and to detect possible adulterants, additional parameters are tested: some test the properties of normal urine, such as urine creatinine, pH, and specific gravity, while others are intended to catch substances added to the urine to alter the test result, such as oxidants (including bleach), nitrites, and glutaraldehyde. If the urine screen is positive, the second aliquot of the sample is used to confirm the findings by gas chromatography–mass spectrometry (GC-MS) or liquid chromatography–mass spectrometry.
If requested by the physician or employer, certain drugs are screened for individually; these are generally drugs that are part of a chemical class considered, for one of many reasons, more habit-forming or of greater concern. For instance, oxycodone and diamorphine, both opioid analgesics, may be tested individually. If such a test is not requested specifically, the more general test (in the preceding case, the test for opioids) will detect most of the drugs of the class, but the employer or physician will not have the benefit of knowing the identity of the drug. Employment-related test results are relayed to a medical review officer (MRO), a physician who reviews the results. If the result of the screen is negative, the MRO informs the employer that the employee has no detectable drug in the urine, typically within 24 hours. However, if the results of the immunoassay and GC-MS are non-negative and show a concentration of parent drug or metabolite above the established limit, the MRO contacts the employee to determine whether there is a legitimate reason, such as a medical treatment or prescription. On-site instant drug testing is a more cost-efficient method of effectively detecting substance use amongst employees, as well as in rehabilitation programs to monitor patient progress. These instant tests can be used for both urine and saliva testing. Although the accuracy of such tests varies with the manufacturer, some kits have accuracy rates correlating closely with laboratory test results. The breath test is a widespread method for quickly determining alcohol intoxication: it measures the alcohol concentration in the body from a deep-lung breath. There are different instruments for measuring an individual's alcohol content through their breath. The Breathalyzer is a widely known instrument, developed in 1954, which relied on chemical reactions, unlike later breath-testing instruments.
More modern instruments include infrared light-absorption devices and fuel cell detectors; both are microprocessor-controlled, so the operator only has to press the start button. To get an accurate reading on a breath-testing device, the individual must blow for approximately 6 seconds, providing roughly 1.1 to 1.5 liters of breath. For a breath test to be accurate, the operator must take steps such as avoiding the measurement of "mouth alcohol", which results from regurgitation, belching, or recent intake of an alcoholic beverage; to avoid this, the operator must not allow the individual taking the test to consume anything for at least fifteen minutes before the breath test. In the United States, if an individual pulled over for a driving violation refuses to take a breath test, that individual's driver's license can be suspended for 6 to 12 months. Hair analysis to detect addictive substances has been used by court systems in the United States, United Kingdom, Canada, and other countries worldwide. In the United States, hair testing has been accepted in court cases as forensic evidence following the Frye Rule, the Federal Rules of Evidence, and the Daubert Rule; as such, hair testing results are legally and scientifically recognized as admissible evidence. Hair testing is commonly used in the US as a pre-employment drug test. The detection time for this test is roughly 3 months, which is approximately the time it takes head hair to grow the ca. 1.5 inches collected as a specimen; longer detection times are possible with longer hair samples. A 2014 collaborative US study of 359 adults with moderate-risk drug use found that a large number of participants who reported drug use in the last 3 months had negative hair tests. The tests were done using an immunoassay followed by confirmatory GC-MS. For marijuana, only about half of self-disclosed users had a positive hair test.
Under-identification of drug use by hair testing (or over-reporting of use) was also widespread for cocaine, amphetamines, and opioids. Because such under-identification was more common among participants who self-reported infrequent use, the authors suggested that the immunoassay did not have the sensitivity required for such infrequent use. Most earlier studies, by contrast, had reported that hair tests found a roughly 50-fold higher prevalence of illicit drug use than self-reports. In late 2022 the US Federal Motor Carrier Safety Administration denied a petition to recognize hair samples as an alternative drug-testing method for truckers (in place of the currently used urine samples). The agency did not comment on the test's validity, but rather stated that it lacks the statutory authority to adopt new analytical methods. Although some lower courts may have accepted hair test evidence, there is no controlling judicial ruling in either the federal or any state system declaring any type of hair test reliable. Hair testing is nevertheless used in both the UK and US judicial systems. Guidelines for hair testing have been published by the Society of Hair Testing (a private organization based in France) that specify the markers to be tested for and the cutoff concentrations to be applied. Addictive substances that can be detected include cannabis, cocaine, amphetamines, and drugs new to the UK such as mephedrone. In contrast to other drugs consumed, alcohol is not deposited directly in the hair; for this reason the investigative procedure looks for direct products of ethanol metabolism. Most alcohol is oxidized in the human body, being released as water and carbon dioxide, but one part of the alcohol reacts with fatty acids to produce esters. The sum of the concentrations of four of these fatty acid ethyl esters (FAEEs: ethyl myristate, ethyl palmitate, ethyl oleate, and ethyl stearate) is used as an indicator of alcohol consumption.
The amounts found in hair are measured in nanograms (one nanogram equals one billionth of a gram); with the benefit of modern technology it is possible to detect such small amounts, and in the detection of ethyl glucuronide (EtG), testing can detect amounts in picograms (one picogram equals 0.001 nanograms). However, there is one major difference between most drugs and alcohol metabolites in the way they enter the hair. On the one hand, like other drugs, FAEEs enter the hair via the keratinocytes, the cells responsible for hair growth; these cells form the hair in the root and then grow through the skin surface, taking any substances with them. On the other hand, the sebaceous glands produce FAEEs in the scalp, and these migrate together with the sebum along the hair shaft (Auwärter et al., 2001; Pragst et al., 2004). These glands thus lubricate not only the part of the hair that is just growing at about 0.3 mm per day at the skin surface, but also the more mature hair, providing it with a protective layer of fat. FAEE concentrations in hair are on the order of nanograms (one billionth of a gram), while EtG is detected at the picogram level (one trillionth of a gram). It has been technically possible to measure FAEEs since 1993, and the first study reporting the detection of EtG in hair was done by Sachs in 1993. In practice, most hair sent for analysis has been cosmetically treated in some way (bleached, permed, etc.). It has been shown that FAEEs are not significantly affected by such treatments (Hartwig et al., 2003a), and FAEE concentrations in hair from other body sites can be interpreted in a similar fashion to scalp hair (Hartwig et al., 2003b). Presumptive substance tests attempt to identify a suspicious substance, material, or surface where traces of drugs are thought to be, instead of testing individuals through biological methods such as urine or hair testing.
The test involves mixing the suspicious material with a chemical in order to trigger a color change indicating whether a drug is present. Most such tests are now available over-the-counter for consumer use and do not require a lab to read results. Benefits of this method include that the person suspected of drug use does not need to be confronted or made aware of the testing, and that only a very small amount of material is needed to obtain results; the tests can be used on powder, pills, capsules, crystals, or organic material, and can detect illicit material even when mixed with other, non-illicit materials. The tests are used for general screening purposes, offering a generic result for the presence of a wide range of drugs, including heroin, cocaine, methamphetamine, amphetamine, ecstasy/MDMA, methadone, ketamine, PCP, PMA, DMT, and MDPV, and may detect rapidly evolving synthetic designer drugs; separate tests for marijuana/hashish are also available. There are five primary color-test reagents used for general screening purposes. The Marquis reagent turns a variety of colors in the presence of different substances. The Dille-Koppanyi reagent uses two chemical solutions that turn violet-blue in the presence of barbiturates. The Duquenois-Levine reagent is a series of chemical solutions that turn purple when marijuana plant material is added. The Van Urk reagent turns blue-purple in the presence of LSD. The Scott test's chemical solution turns faint blue in the presence of cocaine base. In recent years, the use of presumptive test kits in the criminal justice system has come under great scrutiny due to the lack of forensic studies, questions about reliability, false positives triggered by legal substances, and wrongful arrests. Saliva / oral fluid-based drug tests can generally detect use during the previous few days and are better at detecting very recent use of a substance; THC may only be detectable for 2–24 hours in most cases.
On-site drug tests are allowed per the Department of Labor. Detection in saliva tests begins almost immediately upon use of the following substances and lasts for approximately the following times: A disadvantage of saliva-based drug testing is that it is not approved by the FDA or SAMHSA for use with DOT/federally mandated drug testing. Oral fluid is not considered a biohazard unless there is visible blood; however, it should be treated with care. Sweat patches are attached to the skin to collect sweat over a long period of time (up to 14 days). These are used by child protective services, parole departments, and other government institutions concerned with drug use over long periods, when urine testing is not practical. There are also surface drug tests that test for metabolites of parent drug groups in the residue of drugs left in sweat. An example of a rapid, non-invasive, sweat-based drug test is fingerprint drug screening. This 10-minute fingerprint test is in use by a variety of organisations in the UK and beyond, including in workplaces, in drug treatment and family safeguarding services, at airport border control (to detect drug mules), and in mortuaries to assist investigations into cause of death. Drug-testing a blood sample measures whether or not a drug or a metabolite is in the body at a particular time. These tests are considered the most accurate way of telling whether a person is intoxicated, but blood drug tests are not used very often because they require specialized equipment and medically trained administrators. Depending on how much marijuana was consumed, it can usually be detected in blood tests within six hours of consumption; after six hours have passed, the concentration of marijuana in the blood decreases significantly, and it generally disappears completely within 30 days.
Such testing can occur at any time, usually when the investigator has reason to believe, based on behavior or an employee-related incident during work hours, that a substance may be in use by the subject. Testing protocol typically conforms to the national medical standard: candidates are given up to 120 minutes from the time of commencement to reasonably produce a urine sample (in some instances this time frame may be extended at the examiner's discretion). In the case of life-threatening symptoms, unconsciousness, or bizarre behavior in an emergency situation, screening for common drugs and toxins may help find the cause; this is called a toxicology test or tox screen to denote the broader range of possible substances beyond self-administered drugs. These tests can also be done post-mortem during an autopsy in cases where a death was not expected. The test is usually done within 96 hours (4 days) of the need for it being identified. Both a urine sample and a blood sample may be tested. A blood sample is routinely used to detect ethanol/methanol and ASA/paracetamol intoxication. Various panels are used for screening urine samples for common substances, e.g. Triage 8, which detects amphetamines, benzodiazepines, cocaine, methadone, opiates, cannabis, barbiturates, and tricyclic antidepressants; results are given in 10–15 minutes. Similar screenings may be used to evaluate the possible use of date rape drugs, usually on a urine sample. Drug checks/tests (also known as pill testing) are provided at some events such as concerts and music festivals: attendees can voluntarily hand over a sample of any drug in their possession to be tested to check what the drug is and its purity. The scheme is used as a harm-reduction technique so people are more aware of what they are taking and the potential risks. Drug and alcohol impairment while at work increases the risk of workplace accidents and decreases productivity.
Employers such as the commercial driving and airline industries may conduct random drug tests on employees with the goal of deterring use and improving safety. There is some evidence that increased random drug testing in the airline industry reduces the percentage of people who test positive; however, it is unclear whether this decrease is associated with a corresponding decrease in fatal or non-fatal injuries, other accidents, or the number of days absent from work. It is also not clear whether there are other unwanted side effects of random drug and alcohol testing in the workplace.

Commonly tested substances

Anabolic steroids are used to enhance performance in sports, and as they are prohibited in most high-level competitions, drug testing is used extensively to enforce this prohibition. This is particularly so in individual (rather than team) sports such as athletics and cycling.

Methodologies

Before testing samples, the tamper-evident seal is checked for integrity. If it appears to have been tampered with or damaged, the laboratory rejects the sample and does not test it. Next, the sample must be made testable. Urine and oral fluid can be used "as is" for some tests, but other tests require the drugs to be extracted from urine. Strands of hair, patches, and blood must be prepared before testing. Hair is washed in order to eliminate second-hand sources of drugs on the surface of the hair, then the keratin is broken down using enzymes. Blood plasma may need to be separated by centrifuge from blood cells prior to testing. Sweat patches are opened, and the sweat-collection component is removed and soaked in a solvent to dissolve any drugs present. Laboratory-based drug testing is done in two steps. The first step is the screening test, an immunoassay-based test applied to all samples.
The second step, known as the confirmation test, is usually undertaken by a laboratory using highly specific chromatographic techniques and is only applied to samples that test positive during the screening test. Screening tests are usually done by immunoassay (EMIT, ELISA, and RIA are the most common). A "dipstick" drug-testing method that could provide screening capabilities to field investigators has been developed at the University of Illinois. After a suspected positive sample is detected during screening, it is tested using a confirmation test; samples that are negative on the screening test are discarded and reported as negative. The confirmation test in most laboratories (and all SAMHSA-certified labs) is performed using mass spectrometry, and is precise but expensive. False-positive samples from the screening test will almost always be negative on the confirmation test. Samples testing positive during both the screening and confirmation tests are reported as positive to the entity that ordered the test, and most laboratories save positive samples for a period of months or years in the event of a disputed result or lawsuit. For workplace drug testing, a positive result is generally not confirmed without a review by a Medical Review Officer, who will normally interview the subject of the drug test. Urine drug test kits are available as on-site tests or for laboratory analysis. Urinalysis is the most common test type, is used by federally mandated drug testing programs, and is considered the gold standard of drug testing. Urine-based tests have been upheld in most courts for more than 30 years. However, urinalysis conducted by the Department of Defense has been challenged over the reliability of testing for the metabolite of cocaine.
There are two associated metabolites of cocaine, benzoylecgonine (BZ) and ecgonine methyl ester (EME). The first (BZ) is created by the presence of cocaine in an aqueous solution with a pH greater than 7.0, while the second (EME) results from the actual human metabolic process. The presence of EME confirms actual ingestion of cocaine by a human being, while the presence of BZ is indicative only; BZ without EME is evidence of sample contamination. However, the US Department of Defense has chosen not to test for EME in its urinalysis program.[relevant?] A number of different analytes (the unknown substances being tested for) are available on urine drug screens. Spray (sweat) drug test kits are non-invasive: it is a simple process to collect the required specimen, no bathroom or laboratory is needed, and the tests themselves are difficult to manipulate and relatively tamper-resistant. The detection window is long, and the tests can detect recent drug use within several hours. There are also some disadvantages to spray or sweat testing: there is not much variety in these drug tests, only a limited number of drugs can be detected, prices tend to be higher, and inconclusive results can be produced by variations in donors' sweat production rates. They also have a relatively long specimen-collection period and are more vulnerable to contamination than other common forms of testing. Hair drug testing is a method that can detect drug use over a much longer period than saliva, sweat, or urine tests, and is also more robust with respect to tampering. Thus, hair sampling is preferred by the US military and by many large corporations subject to the Drug-Free Workplace Act of 1988. Head hair normally grows at a rate of about 0.5 inches per month, so the most common hair sample length of 1.5" from the scalp would detect drug use within the last 90–100 days. 80–120 strands of hair are sufficient for the test.
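The growth-rate arithmetic above can be sketched as a quick back-of-the-envelope calculation; the growth rate and the 30-days-per-month figure are the rough approximations stated in the text, not precise physiology.

```python
# Approximate detection window for a hair sample, using the rule of
# thumb that scalp hair grows about 0.5 inches per month.

GROWTH_IN_PER_MONTH = 0.5  # approximate scalp hair growth rate

def detection_window_days(sample_length_in: float) -> float:
    """Approximate detection window, in days, covered by a hair sample."""
    months = sample_length_in / GROWTH_IN_PER_MONTH
    return months * 30  # rough days-per-month conversion

print(detection_window_days(1.5))  # -> 90.0, i.e. roughly the 90-100 days cited
```

The same formula explains why slower-growing body hair yields longer windows: a smaller growth rate in the denominator stretches the months covered by the same sample length.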
In the absence of hair on the head, body hair can be used as an acceptable substitute. This includes facial hair, the underarms, arms, and legs, or even pubic hair. Because body hair usually grows more slowly than head hair, drugs can often be detected in body hair for longer periods, e.g. up to 12 months. Most drugs are analysed in hair samples not as the original psychoactive molecules, but rather as their metabolites. For example, ethanol is determined as ethyl glucuronide, while cocaine use is confirmed using ecgonine. Testing for metabolites reduces the likelihood of false positive results due to contamination. One disadvantage of hair testing is that it cannot detect recent drug use, because it takes at least a week after drug intake for the metabolites to show up in growing hair above the skin. Urine tests are better suited for detecting recent (within a week) drug use. In a practical test, the hair sample is usually washed with a low-polarity solvent (such as dichloromethane) to remove surface contamination. Then, the sample is pulverized and extracted with a more polar solvent, such as methanol. Although thousands of different substances can be determined in a single gas chromatography–mass spectrometry or liquid chromatography–mass spectrometry experiment, due to the low concentration of analytes, practical measurements (see selected ion monitoring) are limited to a smaller number (10-20) of analytes. Designer drugs are usually missed in such measurements, because the analyst must know in advance what chemicals to look for. Most hair testing laboratories use the aforementioned chromatography–mass spectrometry methods for confirmation or for rarely tested drugs only. Mass screening (preliminary or final) is usually done with immunoassays, because of their lower cost. 
Legality, ethics and politics The results of federally mandated drug testing were similar to the effects of simply extending to the trucking industry the right to perform drug tests, and it has been argued that the latter approach would have been as effective at lower cost. Psychologist Tony Buon has criticized the use of workplace drug testing on a number of grounds. Buon has also been reported by the CIPD as stating that "drug testing captures the stupid—experienced drug users know how to beat the tests". From a penological standpoint, one purpose of drug testing is to help classify the people taking the drug test within risk groups so that those who pose more of a danger to the public can be incapacitated through incarceration or other restrictions on liberty. Thus, drug testing serves a crime control purpose even if there is no expectation of rehabilitating the drug user through treatment, deterring drug use through sanctions, or sending a message that drug use is a deviant behavior that will not be tolerated. A study in 2004 by the Independent Inquiry into Drug Testing at Work found that attempts by employers to force employees to take drug tests could potentially be challenged as a violation of privacy under the Human Rights Act 1998 and Article 8 of the European Convention on Human Rights. However, this does not apply to industries where drug testing is a matter of personal and public safety or security rather than productivity. In consultation with Dr. Carlton Turner, President Ronald Reagan issued Executive Order 12564. In doing so, he instituted mandatory drug testing for all safety-sensitive executive-level and civil-service Federal employees. This was challenged in the courts by the National Treasury Employees Union. In 1988, this challenge was considered by the US Supreme Court. A similar challenge resulted in the Court extending the drug-free workplace concept to the private sector. 
These decisions were then incorporated into the White House Drug Control Strategy directive issued by President George H.W. Bush in 1989. All defendants serving on federal probation or federal supervised release are required to submit to at least three drug tests. Failing a drug test can be construed as possession of a controlled substance, resulting in mandatory revocation and imprisonment. There have been inconsistent evaluation results as to whether continued pretrial drug testing has beneficial effects. Testing positive can lead to bail not being granted, or if bail has already been granted, to bail revocation or other sanctions. Arizona also adopted a law in 1987 authorizing mandatory drug testing of felony arrestees for the purpose of informing the pretrial release decision, and the District of Columbia has had a similar law since the 1970s. It has been argued that one of the problems with such testing is that there is often not enough time between the arrest and the bail decision to confirm positive results using GC/MS technology. It has also been argued that such testing potentially implicates the Fifth Amendment privilege against self-incrimination, the right to due process (including the prohibition against gathering evidence in a manner that shocks the conscience or constitutes outrageous government conduct), and the prohibition against unreasonable searches and seizures contained in the Fourth Amendment. According to Henriksson, the anti-drug appeals of the Reagan administration "created an environment in which many employers felt compelled to implement drug testing programs because failure to do so might be perceived as condoning drug use. This fear was easily exploited by aggressive marketing and sales forces, who often overstated the value of testing and painted a bleak picture of the consequences of failing to use the drug testing product or service being offered." On March 10, 1986, the Commission on Organized Crime asked all U.S. 
companies to test employees for drug use. By 1987, nearly 25% of the Fortune 500 companies used drug tests. According to an uncontrolled self-report study done by DATIA and the Society for Human Resource Management in 2012 (a sample of 6,000 randomly selected human resource professionals), human resource professionals reported the following results after implementing a drug testing program: 19% of companies reported a subjective increase in employee productivity, 16% reported a decrease in employee turnover (8% reported an increase), and unspecified percentages reported decreases in absenteeism and improvements in workers' compensation incidence rates. According to the US Chamber of Commerce, 70% of all illicit drug users are employed. Some industries have high rates of employee drug use, such as construction (12.8%), repair (11.1%), and hospitality (7.9-16.3%). A person conducting a business or undertaking (PCBU, a term that includes employers) has duties under the work health and safety (WHS) legislation to ensure a worker affected by alcohol or other drugs does not place themselves or other persons at risk of injury while at work. Workplace policies and prevention programs can help change the norms and culture around substance use. All organisations, large and small, can benefit from an agreed policy on alcohol and drug misuse that applies to all workers. Such a policy should form part of an organisation's overall health and safety management system. PCBUs are encouraged to establish a policy and procedure, in consultation with workers, to constructively manage alcohol and other drug related hazards in their workplace. A comprehensive workplace alcohol and other drug policy should apply to everyone in the workplace and include prevention, education, counselling and rehabilitation arrangements. In addition, the roles and responsibilities of managers and supervisors should be clearly outlined. 
All Australian workplace drug testing must comply with Australian standard AS/NZS4308:2008. In Victoria, roadside saliva tests screen for a specified set of drugs. In February 2016, a New South Wales magistrate "acquitted a man who tested positive for cannabis". He had been arrested and charged after testing positive during a roadside drug test, despite not having smoked for nine days. He was relying on advice previously given to him by police. Refusal In the United States federal criminal system, refusing to take a drug test triggers an automatic revocation of probation or supervised release. In Victoria, Australia, the driver of the car has the option to refuse the drug test. Refusing to undergo a drug test, or refusing to undergo a secondary drug test after the first one, triggers an automatic suspension and disqualification for a period of two years and a fine of AUD$1000. A second refusal triggers an automatic suspension and disqualification for a period of four years and an even larger fine.
========================================
[SOURCE: https://en.wikipedia.org/wiki/OpenAI#cite_ref-nature-maths_174-0] | [TOKENS: 8773]
Contents OpenAI OpenAI is an American artificial intelligence research organization comprising both a non-profit foundation and a controlled for-profit public benefit corporation (PBC), headquartered in San Francisco. It aims to develop "safe and beneficial" artificial general intelligence (AGI), which it defines as "highly autonomous systems that outperform humans at most economically valuable work". OpenAI is widely recognized for its development of the GPT family of large language models, the DALL-E series of text-to-image models, and the Sora series of text-to-video models, which have influenced industry research and commercial applications. Its release of ChatGPT in November 2022 has been credited with catalyzing widespread interest in generative AI. The organization was founded in 2015 in Delaware but evolved a complex corporate structure. As of October 2025, following restructuring approved by California and Delaware regulators, the non-profit OpenAI Foundation holds 26% of the for-profit OpenAI Group PBC, with Microsoft holding 27% and employees/other investors holding 47%. Under its governance arrangements, the OpenAI Foundation holds the authority to appoint the board of the for-profit OpenAI Group PBC, a mechanism designed to align the entity’s strategic direction with the Foundation’s charter. Microsoft previously invested over $13 billion into OpenAI, and provides Azure cloud computing resources. In October 2025, OpenAI conducted a $6.6 billion share sale that valued the company at $500 billion. In 2023 and 2024, OpenAI faced multiple lawsuits for alleged copyright infringement against authors and media companies whose work was used to train some of OpenAI's products. In November 2023, OpenAI's board removed Sam Altman as CEO, citing a lack of confidence in him, but reinstated him five days later following a reconstruction of the board. 
Throughout 2024, roughly half of then-employed AI safety researchers left OpenAI, citing the company's prominent role in an industry-wide problem. Founding In December 2015, OpenAI was founded as a not-for-profit organization by Sam Altman, Elon Musk, Ilya Sutskever, Greg Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, John Schulman, Pamela Vagata, and Wojciech Zaremba, with Sam Altman and Elon Musk as the co-chairs. A total of $1 billion in capital was pledged by Sam Altman, Greg Brockman, Elon Musk, Reid Hoffman, Jessica Livingston, Peter Thiel, Amazon Web Services (AWS), and Infosys. However, the actual capital collected significantly lagged the pledges. According to company disclosures, only $130 million had been received by 2019. In its founding charter, OpenAI stated an intention to collaborate openly with other institutions by making certain patents and research publicly available, but later restricted access to its most capable models, citing competitive and safety concerns. OpenAI was initially run from Brockman's living room. It was later headquartered at the Pioneer Building in the Mission District, San Francisco. According to OpenAI's charter, its founding mission is "to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity." Musk and Altman stated in 2015 that they were partly motivated by concerns about AI safety and existential risk from artificial general intelligence. OpenAI stated that "it's hard to fathom how much human-level AI could benefit society", and that it is equally difficult to comprehend "how much it could damage society if built or used incorrectly". 
The startup also wrote that AI "should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible", and that "because of AI's surprising history, it's hard to predict when human-level AI might come within reach. When it does, it'll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest." Co-chair Sam Altman expected a decades-long project that would eventually surpass human intelligence. Brockman met with Yoshua Bengio, one of the "founding fathers" of deep learning, and drew up a list of great AI researchers. Brockman was able to hire nine of them as the first employees in December 2015. OpenAI did not pay AI researchers salaries comparable to those of Facebook or Google. It also did not offer stock options, which AI researchers typically receive. Nevertheless, OpenAI spent $7 million on its first 52 employees in 2016. OpenAI's potential and mission drew these researchers to the firm; a Google employee said he was willing to leave Google for OpenAI "partly because of the very strong group of people and, to a very large extent, because of its mission." OpenAI co-founder Wojciech Zaremba stated that he turned down "borderline crazy" offers of two to three times his market value to join OpenAI instead. In April 2016, OpenAI released a public beta of "OpenAI Gym", its platform for reinforcement learning research. Nvidia gifted its first DGX-1 supercomputer to OpenAI in August 2016 to help it train larger and more complex AI models, with the capability of reducing processing time from six days to two hours. In December 2016, OpenAI released "Universe", a software platform for measuring and training an AI's general intelligence across the world's supply of games, websites, and other applications. Corporate structure In 2019, OpenAI transitioned from non-profit to "capped" for-profit, with the profit being capped at 100 times any investment. 
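As a toy illustration of the capped-profit rule just described, an investor's return is limited to 100 times the original investment, with any excess accruing to the nonprofit. The function below is hypothetical; the actual contractual mechanics are more complex:

```python
def capped_return(investment, gross_proceeds, cap_multiple=100):
    """Split proceeds between an investor and the nonprofit under a
    'capped-profit' rule: the investor keeps at most cap_multiple times
    the original investment; the remainder flows to the nonprofit."""
    cap = investment * cap_multiple
    investor_share = min(gross_proceeds, cap)
    nonprofit_share = gross_proceeds - investor_share
    return investor_share, nonprofit_share

# A $1M investment returning $250M: the investor is capped at $100M,
# and $150M accrues to the nonprofit.
```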
According to OpenAI, the capped-profit model allows OpenAI Global, LLC to legally attract investment from venture funds and, in addition, to grant employees stakes in the company. Many top researchers work for Google Brain, DeepMind, or Facebook, which offer equity that a nonprofit would be unable to match. Before the transition, OpenAI was legally required to publicly disclose the compensation of its top employees. The company then distributed equity to its employees and partnered with Microsoft, announcing an investment package of $1 billion into the company. Since then, OpenAI systems have run on an Azure-based supercomputing platform from Microsoft. OpenAI Global, LLC then announced its intention to commercially license its technologies. It planned to spend $1 billion "within five years, and possibly much faster". Altman stated that even a billion dollars may turn out to be insufficient, and that the lab may ultimately need "more capital than any non-profit has ever raised" to achieve artificial general intelligence. The nonprofit, OpenAI, Inc., is the sole controlling shareholder of OpenAI Global, LLC, which, despite being a for-profit company, retains a formal fiduciary responsibility to OpenAI, Inc.'s nonprofit charter. A majority of OpenAI, Inc.'s board is barred from having financial stakes in OpenAI Global, LLC. In addition, minority members with a stake in OpenAI Global, LLC are barred from certain votes due to conflict of interest. Some researchers have argued that OpenAI Global, LLC's switch to for-profit status is inconsistent with OpenAI's claims to be "democratizing" AI. On February 29, 2024, Elon Musk filed a lawsuit against OpenAI and CEO Sam Altman, accusing them of shifting focus from public benefit to profit maximization—a case OpenAI dismissed as "incoherent" and "frivolous," though Musk later revived legal action against Altman and others in August. 
On April 9, 2024, OpenAI countersued Musk in federal court, alleging that he had engaged in "bad-faith tactics" to slow the company's progress and seize its innovations for his personal benefit. OpenAI also argued that Musk had previously supported the creation of a for-profit structure and had expressed interest in controlling OpenAI himself. The countersuit seeks damages and legal measures to prevent further alleged interference. On February 10, 2025, a consortium of investors led by Elon Musk submitted a $97.4 billion unsolicited bid to buy the nonprofit that controls OpenAI, declaring willingness to match or exceed any better offer. The offer was rejected on February 14, 2025, with OpenAI stating that it was not for sale, but the offer complicated Altman's restructuring plan by suggesting a lower bar for how much the nonprofit should be valued. OpenAI, Inc. was originally designed as a nonprofit in order to ensure that AGI "benefits all of humanity" rather than "the private gain of any person". In 2019, it created OpenAI Global, LLC, a capped-profit subsidiary controlled by the nonprofit. In December 2024, OpenAI proposed a restructuring plan to convert the capped-profit subsidiary into a Delaware-based public benefit corporation (PBC), and to release it from the control of the nonprofit. The nonprofit would sell its control and other assets, receiving equity in return, and would use that equity to fund and pursue separate charitable projects, including in science and education. OpenAI's leadership described the change as necessary to secure additional investments, and claimed that the nonprofit's founding mission to ensure AGI "benefits all of humanity" would be better fulfilled. The plan has been criticized by former employees. A legal letter named "Not For Private Gain" asked the attorneys general of California and Delaware to intervene, stating that the restructuring is illegal and would remove governance safeguards from the nonprofit and the attorneys general. 
The letter argues that OpenAI's complex structure was deliberately designed to remain accountable to its mission, without the conflicting pressure of maximizing profits. It contends that the nonprofit is best positioned to advance its mission of ensuring AGI benefits all of humanity by continuing to control OpenAI Global, LLC, regardless of the amount of equity it could receive in exchange. PBCs can choose how they balance their mission with profit-making. Controlling shareholders have a large influence on how closely a PBC sticks to its mission. On October 28, 2025, OpenAI announced that it had adopted the new PBC corporate structure after receiving approval from the attorneys general of California and Delaware. Under the new structure, OpenAI's for-profit branch became a public benefit corporation known as OpenAI Group PBC, while the non-profit was renamed the OpenAI Foundation. The OpenAI Foundation holds a 26% stake in the PBC, while Microsoft holds a 27% stake and the remaining 47% is owned by employees and other investors. All members of the OpenAI Group PBC board of directors will be appointed by the OpenAI Foundation, which can remove them at any time. Members of the Foundation's board will also serve on the for-profit board. The new structure allows the for-profit PBC to raise investor funds like most traditional tech companies, including through an initial public offering, which Altman claimed was the most likely path forward. In January 2023, OpenAI Global, LLC was in talks for funding that would value the company at $29 billion, double its 2021 value. On January 23, 2023, Microsoft announced a new US$10 billion investment in OpenAI Global, LLC over multiple years, provided in part through access to Microsoft's cloud-computing service Azure. From September to December 2023, Microsoft rebranded all variants of its Copilot to Microsoft Copilot, added Microsoft Copilot to many installations of Windows, and released Microsoft Copilot mobile apps. 
Following OpenAI's 2025 restructuring, Microsoft owns a 27% stake in the for-profit OpenAI Group PBC, valued at $135 billion. In a deal announced the same day, OpenAI agreed to purchase $250 billion of Azure services, with Microsoft ceding its right of first refusal over OpenAI's future cloud computing purchases. As part of the deal, OpenAI will continue to share 20% of its revenue with Microsoft until it achieves AGI, which must now be verified by an independent panel of experts. The deal also loosened restrictions on both companies working with third parties, allowing Microsoft to pursue AGI independently and allowing OpenAI to develop products with other companies. In 2017, OpenAI spent $7.9 million, a quarter of its functional expenses, on cloud computing alone. In comparison, DeepMind's total expenses in 2017 were $442 million. In the summer of 2018, training OpenAI's Dota 2 bots required renting 128,000 CPUs and 256 GPUs from Google for multiple weeks. In October 2024, OpenAI completed a $6.6 billion capital raise at a $157 billion valuation, including investments from Microsoft, Nvidia, and SoftBank. On January 21, 2025, Donald Trump announced The Stargate Project, a joint venture between OpenAI, Oracle, SoftBank and MGX to build an AI infrastructure system in conjunction with the US government. The project takes its name from OpenAI's existing "Stargate" supercomputer project and is estimated to cost $500 billion. The partners planned to fund the project over the next four years. In July, the United States Department of Defense announced that OpenAI had received a $200 million contract for AI in the military, along with Anthropic, Google, and xAI. In the same month, the company made a deal with the UK Government to use ChatGPT and other AI tools in public services. OpenAI subsequently began a $50 million fund to support nonprofit and community organizations. 
In April 2025, OpenAI raised $40 billion at a $300 billion post-money valuation, the highest-value private technology deal in history. The financing round was led by SoftBank, with other participants including Microsoft, Coatue, Altimeter and Thrive. In July 2025, the company reported annualized revenue of $12 billion, up from $3.7 billion in 2024. The growth was driven by ChatGPT subscriptions, which reached 20 million paid subscribers by April 2025, up from 15.5 million at the end of 2024, alongside a rapidly expanding enterprise customer base that grew to five million business users. The company's cash burn remains high because of the intensive computational costs required to train and operate large language models; it projects an $8 billion operating loss in 2025. OpenAI reports revised long-term spending projections totaling approximately $115 billion through 2029, with annual expenditures projected to escalate significantly, reaching $17 billion in 2026, $35 billion in 2027, and $45 billion in 2028. These expenditures are primarily allocated toward expanding compute infrastructure, developing proprietary AI chips, constructing data centers, and funding intensive model training programs, with more than half of the spending through the end of the decade expected to support research-intensive compute for model training and development. The company's financial strategy prioritizes market expansion and technological advancement over near-term profitability, with OpenAI targeting cash-flow-positive operations by 2029 and projecting revenue of approximately $200 billion by 2030. This spending trajectory reflects the enormous capital requirements of scaling cutting-edge AI systems. In October 2025, OpenAI completed an employee share sale of up to $10 billion to existing investors which valued the company at $500 billion. 
The sale made OpenAI the world's most valuable privately held company, surpassing SpaceX. On November 17, 2023, Sam Altman was removed as CEO when OpenAI's board of directors (composed of Helen Toner, Ilya Sutskever, Adam D'Angelo and Tasha McCauley) cited a lack of confidence in him. Chief Technology Officer Mira Murati took over as interim CEO. Greg Brockman, the president of OpenAI, was also removed as chairman of the board and resigned from the company's presidency shortly thereafter. Three senior OpenAI researchers subsequently resigned: director of research and GPT-4 lead Jakub Pachocki, head of AI risk Aleksander Mądry, and researcher Szymon Sidor. On November 18, 2023, there were reportedly talks of Altman returning as CEO amid pressure placed upon the board by investors such as Microsoft and Thrive Capital, who objected to Altman's departure. Although Altman himself spoke in favor of returning to OpenAI, he has since stated that he considered starting a new company and bringing former OpenAI employees with him if talks to reinstate him did not work out. The board members agreed "in principle" to resign if Altman returned. On November 19, 2023, negotiations with Altman to return failed and Murati was replaced by Emmett Shear as interim CEO. The board initially contacted Anthropic CEO Dario Amodei (a former OpenAI executive) about replacing Altman, and proposed a merger of the two companies, but both offers were declined. On November 20, 2023, Microsoft CEO Satya Nadella announced Altman and Brockman would be joining Microsoft to lead a new advanced AI research team, but added that they were still committed to OpenAI despite recent events. Before the partnership with Microsoft was finalized, Altman gave the board another opportunity to negotiate with him. 
About 738 of OpenAI's 770 employees, including Murati and Sutskever, signed an open letter stating they would quit their jobs and join Microsoft if the board did not rehire Altman and then resign. This prompted OpenAI investors to consider legal action against the board as well. In response, OpenAI management sent an internal memo to employees stating that negotiations with Altman and the board had resumed and would take some time. On November 21, 2023, after continued negotiations, Altman and Brockman returned to the company in their prior roles along with a reconstructed board made up of new members Bret Taylor (as chairman) and Lawrence Summers, with D'Angelo remaining. According to subsequent reporting, shortly before Altman’s firing, some employees raised concerns to the board about how he had handled the safety implications of a recent internal AI capability discovery. On November 29, 2023, OpenAI announced that an anonymous Microsoft employee had joined the board as a non-voting member to observe the company's operations; Microsoft resigned from the board in July 2024. In February 2024, the Securities and Exchange Commission subpoenaed OpenAI's internal communication to determine if Altman's alleged lack of candor misled investors. In 2024, following the temporary removal of Sam Altman and his return, many employees gradually left OpenAI, including most of the original leadership team and a significant number of AI safety researchers. In August 2023, it was announced that OpenAI had acquired the New York-based start-up Global Illumination, a company that deploys AI to develop digital infrastructure and creative tools. In June 2024, OpenAI acquired Multi, a startup focused on remote collaboration. In March 2025, OpenAI reached a deal with CoreWeave to acquire $350 million worth of CoreWeave shares and access to AI infrastructure, in return for $11.9 billion paid over five years. Microsoft was already CoreWeave's biggest customer in 2024. 
Alongside their other business dealings, OpenAI and Microsoft were renegotiating the terms of their partnership to facilitate a potential future initial public offering by OpenAI, while ensuring Microsoft's continued access to advanced AI models. On May 21, 2025, OpenAI announced the $6.5 billion acquisition of io, an AI hardware start-up founded by former Apple designer Jony Ive in 2024. In September 2025, OpenAI agreed to acquire the product testing startup Statsig for $1.1 billion in an all-stock deal and appointed Statsig's founding CEO Vijaye Raji as OpenAI's chief technology officer of applications. The company also announced development of an AI-driven hiring service designed to rival LinkedIn. OpenAI acquired personal finance app Roi in October 2025. Also in October 2025, OpenAI acquired Software Applications Incorporated, the developer of Sky, a macOS-based natural language interface designed to operate across desktop applications. The Sky team joined OpenAI, and the company announced plans to integrate Sky's capabilities into ChatGPT. In December 2025, it was announced that OpenAI had agreed to acquire Neptune, an AI tooling startup that helps companies track and manage model training, for an undisclosed amount. In January 2026, it was announced that OpenAI had acquired healthcare technology startup Torch for approximately $60 million. The acquisition followed the launch of OpenAI's ChatGPT Health product and was intended to strengthen the company's medical data and healthcare artificial intelligence capabilities. OpenAI has been criticized for outsourcing the annotation of data sets to Sama, a company based in San Francisco that employed workers in Kenya. These annotations were used to train an AI model to detect toxicity, which could then be used to moderate toxic content, notably from ChatGPT's training data and outputs. However, these pieces of text usually contained detailed descriptions of various types of violence, including sexual violence. 
A Time investigation uncovered that OpenAI began sending snippets of data to Sama as early as November 2021. The four Sama employees interviewed by Time described themselves as mentally scarred. OpenAI paid Sama $12.50 per hour of work, of which Sama was redistributing the equivalent of between $1.32 and $2.00 per hour post-tax to its annotators. Sama's spokesperson said that the $12.50 also covered other implicit costs, among which were infrastructure expenses, quality assurance and management. In 2024, OpenAI began collaborating with Broadcom to design a custom AI chip capable of both training and inference, targeted for mass production in 2026 and to be manufactured by TSMC on a 3 nm process node. This initiative was intended to reduce OpenAI's dependence on Nvidia GPUs, which are costly and face high demand in the market. In January 2024, Arizona State University purchased ChatGPT Enterprise in OpenAI's first deal with a university. In June 2024, Apple Inc. signed a contract with OpenAI to integrate ChatGPT features into its products as part of its new Apple Intelligence initiative. In June 2025, OpenAI began renting Google Cloud's Tensor Processing Units (TPUs) to support ChatGPT and related services, marking its first meaningful use of non-Nvidia AI chips. In September 2025, it was revealed that OpenAI had signed a contract with Oracle to purchase $300 billion in computing power over the next five years. Also in September 2025, OpenAI and Nvidia announced a memorandum of understanding that included a potential deployment of at least 10 gigawatts of Nvidia systems and a $100 billion investment from Nvidia in OpenAI. OpenAI expected the negotiations to be completed within weeks. As of January 2026, this has not been realized, and the two sides are rethinking the future of their partnership. In October 2025, OpenAI announced a multi-billion dollar deal with AMD. OpenAI committed to purchasing six gigawatts worth of AMD chips, starting with the MI450. 
OpenAI will have the option to buy up to 160 million shares of AMD, about 10% of the company, depending on development, performance and share price targets. In December 2025, Disney said it would make a $1 billion investment in OpenAI, and signed a three-year licensing deal that will let users generate videos using Sora—OpenAI's short-form AI video platform. More than 200 Disney, Marvel, Star Wars and Pixar characters will be available to OpenAI users. In early 2026, Amazon entered advanced discussions to invest up to $50 billion in OpenAI as part of a potential artificial intelligence partnership. Under the proposed agreement, OpenAI’s models could be integrated into Amazon’s digital assistant Alexa and other internal projects. OpenAI provides LLMs to the Artificial Intelligence Cyber Challenge and to the Advanced Research Projects Agency for Health. In October 2024, The Intercept revealed that OpenAI's tools are considered "essential" for AFRICOM's mission and included in an "Exception to Fair Opportunity" contractual agreement between the United States Department of Defense and Microsoft. In December 2024, OpenAI said it would partner with defense-tech company Anduril to build drone defense technologies for the United States and its allies. In 2025, OpenAI's Chief Product Officer, Kevin Weil, was commissioned lieutenant colonel in the U.S. Army to join Detachment 201 as senior advisor. In June 2025, the U.S. Department of Defense awarded OpenAI a $200 million one-year contract to develop AI tools for military and national security applications. OpenAI announced a new program, OpenAI for Government, to give federal, state, and local governments access to its models, including ChatGPT. Services In February 2019, GPT-2 was announced, which gained attention for its ability to generate human-like text. In 2020, OpenAI announced GPT-3, a language model trained on large internet datasets. 
GPT-3 is aimed at natural-language question answering, but it can also translate between languages and coherently generate improvised text. OpenAI also announced that an associated API, simply named "the API", would form the heart of its first commercial product. Eleven employees left OpenAI, mostly between December 2020 and January 2021, to establish Anthropic. In 2021, OpenAI introduced DALL-E, a specialized deep-learning model adept at generating complex digital images from textual descriptions, utilizing a variant of the GPT-3 architecture. In December 2022, OpenAI received widespread media coverage after launching a free preview of ChatGPT, its new AI chatbot based on GPT-3.5. According to OpenAI, the preview received over a million signups within the first five days. According to anonymous sources cited by Reuters in December 2022, OpenAI Global, LLC was projecting $200 million of revenue in 2023 and $1 billion in revenue in 2024. After ChatGPT was launched, Google announced a similar chatbot, Bard, amid internal concerns that ChatGPT could threaten Google's position as a primary source of online information. On February 7, 2023, Microsoft announced that it was building AI technology based on the same foundation as ChatGPT into Microsoft Bing, Edge, Microsoft 365 and other products. On March 14, 2023, OpenAI released GPT-4, both as an API (with a waitlist) and as a feature of ChatGPT Plus. On November 6, 2023, OpenAI launched GPTs, allowing individuals to create customized versions of ChatGPT for specific purposes, further expanding the possibilities of AI applications across various industries. On November 14, 2023, OpenAI announced it had temporarily suspended new sign-ups for ChatGPT Plus due to high demand. Access for new subscribers re-opened a month later, on December 13. In December 2024, the company launched the Sora model. It also launched OpenAI o1, an early reasoning model internally codenamed Strawberry.
Additionally, ChatGPT Pro—a $200/month subscription service offering unlimited o1 access and enhanced voice features—was introduced, and preliminary benchmark results for the upcoming OpenAI o3 models were shared. On January 23, 2025, OpenAI released Operator, an AI agent and web-automation tool that accesses websites to execute goals defined by users. The feature was only available to Pro users in the United States. Nine days later, OpenAI released its deep research agent, which scored 27% accuracy on the benchmark Humanity's Last Exam (HLE). Altman later stated that GPT-4.5 would be the last model without full chain-of-thought reasoning. In July 2025, reports indicated that AI models from both OpenAI and Google DeepMind solved mathematics problems at the level of top-performing students in the International Mathematical Olympiad. OpenAI's large language model achieved gold-medal-level performance, reflecting significant progress in AI's reasoning abilities. On October 6, 2025, OpenAI unveiled its Agent Builder platform during the company's DevDay event. The platform includes a visual drag-and-drop interface that lets developers and businesses design, test, and deploy agentic workflows with limited coding. On October 21, 2025, OpenAI introduced ChatGPT Atlas, a browser integrating the ChatGPT assistant directly into web navigation, to compete with existing browsers such as Google Chrome and Apple Safari. On December 11, 2025, OpenAI announced GPT-5.2, which it said would be better at creating spreadsheets, building presentations, perceiving images, writing code and understanding long context. On January 27, 2026, OpenAI introduced Prism, a LaTeX-native workspace meant to help scientists with research and writing. The platform uses GPT-5.2 as a backend to automate the drafting of scientific papers, with features for managing citations, formatting complex equations, and real-time collaborative editing.
In March 2023, the company was criticized for disclosing particularly few technical details about products like GPT-4, contradicting its initial commitment to openness and making it harder for independent researchers to replicate its work and develop safeguards. OpenAI cited competitiveness and safety concerns to justify this shift. OpenAI's former chief scientist Ilya Sutskever argued in 2023 that open-sourcing increasingly capable models was increasingly risky, and that the safety reasons for not open-sourcing the most potent AI models would become "obvious" in a few years. In September 2025, OpenAI published a study on how people use ChatGPT for everyday tasks. The study found that "non-work tasks" (according to an LLM-based classifier) account for more than 72 percent of all ChatGPT usage, with only a minority of overall usage related to business productivity. In July 2023, OpenAI launched the superalignment project, aiming to determine within four years how to align future superintelligent systems. OpenAI promised to dedicate 20% of its computing resources to the project, although the team said it never received anything close to that share. OpenAI ended the project in May 2024 after its co-leaders Ilya Sutskever and Jan Leike left the company. In August 2025, OpenAI was criticized after thousands of private ChatGPT conversations were inadvertently exposed to public search engines like Google through an experimental "share with search engines" feature. The opt-in toggle, intended to let users make specific chats discoverable, resulted in some discussions, including personal details such as names, locations, and intimate topics, appearing in search results when users accidentally enabled it while sharing links. OpenAI announced the feature's permanent removal on August 1, 2025, and began coordinating with search providers to remove the exposed content, emphasizing that it was not a security breach but a design flaw that heightened privacy risks.
CEO Sam Altman acknowledged the issue in a podcast, noting users often treat ChatGPT as a confidant for deeply personal matters, which amplified concerns about AI handling sensitive data. Management In 2018, Musk resigned from his board seat, citing "a potential future conflict [of interest]" with his role as CEO of Tesla due to Tesla's AI development for self-driving cars. OpenAI stated that Musk's financial contributions were below $45 million. On March 3, 2023, Reid Hoffman resigned from his board seat, citing a desire to avoid conflicts of interest with his investments in AI companies via Greylock Partners, and his co-founding of the AI startup Inflection AI. Hoffman remained on the board of Microsoft, a major investor in OpenAI. In May 2024, Chief Scientist Ilya Sutskever resigned and was succeeded by Jakub Pachocki. Jan Leike, co-leader of the superalignment team, also departed amid concerns over safety and trust. OpenAI then signed deals with Reddit, News Corp, Axios, and Vox Media, and Paul Nakasone joined the board of OpenAI. In August 2024, cofounder John Schulman left OpenAI to join Anthropic, and OpenAI's president Greg Brockman took extended leave until November. In September 2024, CTO Mira Murati left the company. In November 2025, Lawrence Summers resigned from the board of directors. Governance and legal issues In May 2023, Sam Altman, Greg Brockman and Ilya Sutskever posted recommendations for the governance of superintelligence. They stated that superintelligence could happen within the next 10 years, allowing a "dramatically more prosperous future" and that "given the possibility of existential risk, we can't just be reactive". They proposed creating an international watchdog organization similar to the IAEA to oversee AI systems above a certain capability threshold, suggesting that relatively weak AI systems below that threshold should not be overly regulated.
They also called for more technical safety research for superintelligences, and asked for more coordination, for example through governments launching a joint project which "many current efforts become part of". In July 2023, the FTC issued a civil investigative demand to OpenAI to investigate whether the company's data security and privacy practices to develop ChatGPT were unfair or harmed consumers (including by reputational harm) in violation of Section 5 of the Federal Trade Commission Act of 1914. These are typically preliminary investigative matters and are nonpublic, but the FTC's document was leaked. In July 2023, the FTC launched an investigation into OpenAI over allegations that the company scraped public data and published false and defamatory information. They asked OpenAI for comprehensive information about its technology and privacy safeguards, as well as any steps taken to prevent the recurrence of situations in which its chatbot generated false and derogatory content about people. The agency also raised concerns about ‘circular’ spending arrangements—for example, Microsoft extending Azure credits to OpenAI while both companies shared engineering talent—and warned that such structures could negatively affect the public. In September 2024, OpenAI's global affairs chief endorsed the UK's "smart" AI regulation during testimony to a House of Lords committee. In February 2025, OpenAI CEO Sam Altman stated that the company is interested in collaborating with the People's Republic of China, despite regulatory restrictions imposed by the U.S. government. This shift comes in response to the growing influence of the Chinese artificial intelligence company DeepSeek, which has disrupted the AI market with open models, including DeepSeek V3 and DeepSeek R1. Following DeepSeek's market emergence, OpenAI enhanced security protocols to protect proprietary development techniques from industrial espionage. 
Some industry observers noted similarities between DeepSeek's model distillation approach and OpenAI's methodology, though no formal intellectual property claim was filed. According to Oliver Roberts, in March 2025 the United States had 781 state AI bills or laws. OpenAI advocated for preempting state AI laws with federal laws. According to Scott Kohler, OpenAI has opposed California's AI legislation and suggested that the state bill encroaches on matters the federal government is better placed to regulate. Public Citizen opposed federal preemption of state AI laws and pointed to OpenAI's growth and valuation as evidence that existing state laws have not hampered innovation. Before May 2024, OpenAI required departing employees to sign a lifelong non-disparagement agreement forbidding them from criticizing OpenAI or even acknowledging the agreement's existence. Daniel Kokotajlo, a former employee, publicly stated that he forfeited his vested equity in OpenAI in order to leave without signing the agreement. Sam Altman stated that he was unaware of the equity cancellation provision, and that OpenAI never enforced it to cancel any employee's vested equity. However, leaked documents and emails contradicted this claim. On May 23, 2024, OpenAI sent a memo releasing former employees from the agreement. OpenAI was sued for copyright infringement by authors Sarah Silverman, Matthew Butterick, Paul Tremblay and Mona Awad in July 2023. In September 2023, 17 authors, including George R. R. Martin, John Grisham, Jodi Picoult and Jonathan Franzen, joined the Authors Guild in filing a class action lawsuit against OpenAI, alleging that the company's technology was illegally using their copyrighted work. The New York Times also sued the company in late December 2023. In May 2024 it was revealed that OpenAI had destroyed its Books1 and Books2 training datasets, which were used in the training of GPT-3, and which the Authors Guild believed to have contained over 100,000 copyrighted books.
In 2021, OpenAI developed a speech recognition tool called Whisper. OpenAI used it to transcribe more than one million hours of YouTube videos into text for training GPT-4. The automated transcription of YouTube videos raised concerns among OpenAI employees regarding potential violations of YouTube's terms of service, which prohibit the use of videos for applications independent of the platform, as well as any type of automated access to its videos. Despite these concerns, the project proceeded with notable involvement from OpenAI's president, Greg Brockman. The resulting dataset proved instrumental in training GPT-4. In February 2024, The Intercept, along with Raw Story and Alternate Media Inc., filed a copyright-infringement lawsuit against OpenAI. The lawsuit is said to have charted a new legal strategy for digital-only publishers to sue OpenAI. On April 30, 2024, eight newspapers filed a lawsuit in the Southern District of New York against OpenAI and Microsoft, claiming illegal harvesting of their copyrighted articles. The suing publications included The Mercury News, The Denver Post, The Orange County Register, St. Paul Pioneer Press, Chicago Tribune, Orlando Sentinel, Sun Sentinel, and New York Daily News. In June 2023, a lawsuit claimed that OpenAI scraped 300 billion words online without consent and without registering as a data broker. It was filed in San Francisco, California, by sixteen anonymous plaintiffs. They also claimed that OpenAI and its partner and customer Microsoft continued to unlawfully collect and use personal data from millions of consumers worldwide to train artificial intelligence models. On May 22, 2024, OpenAI entered into an agreement with News Corp to integrate news content from The Wall Street Journal, the New York Post, The Times, and The Sunday Times into its AI platform.
Meanwhile, other publications like The New York Times chose to sue OpenAI and Microsoft for copyright infringement over the use of their content to train AI models. In November 2024, a coalition of Canadian news outlets, including the Toronto Star, Metroland Media, Postmedia, The Globe and Mail, The Canadian Press and CBC, sued OpenAI for using their news articles to train its software without permission. In October 2024 during a New York Times interview, Suchir Balaji accused OpenAI of violating copyright law in developing its commercial LLMs which he had helped engineer. He was a likely witness in a major copyright trial against the AI company, and was one of several of its current or former employees named in court filings as potentially having documents relevant to the case. On November 26, 2024, Balaji died by suicide. His death prompted the circulation of conspiracy theories alleging that he had been deliberately silenced. California Congressman Ro Khanna endorsed calls for an investigation. On April 24, 2025, Ziff Davis sued OpenAI in Delaware federal court for copyright infringement. Ziff Davis is known for publications such as ZDNet, PCMag, CNET, IGN and Lifehacker. In April 2023, the EU's European Data Protection Board (EDPB) formed a dedicated task force on ChatGPT "to foster cooperation and to exchange information on possible enforcement actions conducted by data protection authorities" based on the "enforcement action undertaken by the Italian data protection authority against OpenAI about the ChatGPT service". In late April 2024 NOYB filed a complaint with the Austrian Datenschutzbehörde against OpenAI for violating the European General Data Protection Regulation. A text created with ChatGPT gave a false date of birth for a living person without giving the individual the option to see the personal data used in the process. A request to correct the mistake was denied. 
Additionally, OpenAI claimed that neither the recipients of ChatGPT's output nor the sources used could be made available. OpenAI was criticized for lifting its ban on using ChatGPT for "military and warfare". Until January 10, 2024, its "usage policies" included a ban on "activity that has high risk of physical harm, including", specifically, "weapons development" and "military and warfare". Its new policies prohibit "[using] our service to harm yourself or others" and using it to "develop or use weapons". In August 2025, the parents of a 16-year-old boy who died by suicide filed a wrongful-death lawsuit against OpenAI (and CEO Sam Altman), alleging that months of conversations with ChatGPT about mental health and methods of self-harm contributed to their son's death and that safeguards were inadequate for minors. OpenAI expressed condolences and said it was strengthening protections (including updated crisis-response behavior and parental controls). Coverage described it as a first-of-its-kind wrongful-death case targeting the company's chatbot. The complaint was filed in California state court in San Francisco. In November 2025, the Social Media Victims Law Center and Tech Justice Law Project filed seven lawsuits against OpenAI, four of which alleged wrongful death. The suits were filed on behalf of Zane Shamblin, 23, of Texas; Amaurie Lacey, 17, of Georgia; Joshua Enneking, 26, of Florida; and Joe Ceccanti, 48, of Oregon, each of whom died by suicide after prolonged ChatGPT usage. In December 2025, Stein-Erik Soelberg, then 56 years old, allegedly murdered his mother, Suzanne Adams. In the months prior, Soelberg, who was paranoid and delusional, had often discussed his ideas with ChatGPT. Adams's estate then sued OpenAI, claiming that the company shared responsibility due to the risk of "chatbot psychosis", although chatbot psychosis is not a recognized medical diagnosis. OpenAI responded that it would make ChatGPT safer for users disconnected from reality.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Nitrate] | [TOKENS: 3850]
Nitrate Nitrate is a polyatomic ion with the chemical formula NO−3. Salts containing this ion are called nitrates. Nitrates are common components of fertilizers and explosives. Almost all inorganic nitrates are soluble in water. An example of an insoluble (inorganic) nitrate is bismuth oxynitrate. In nature, nitrates are produced by a number of species of nitrifying bacteria in the natural environment using ammonia or urea as a source of nitrogen and source of free energy. Nitrate compounds for gunpowder were historically produced, in the absence of mineral nitrate sources, by means of various fermentation processes using urine and dung. Modern nitrate production is mostly focused on fertilizer and chemical manufacturing for various applications, such as medicine synthesis, ceramics and preservation of meat. Annually, about 195 million metric tons of synthetic nitrogen fertilizers are used worldwide, with nitrates constituting a significant portion of this amount. Because nitrates are soluble and easily leached from the soil by precipitation, excessive agricultural use has been associated with nutrient runoff, water pollution, and the proliferation of aquatic dead zones. Direct human exposure to nitrates can also have health consequences: the excess consumption of nitrates in cured meats is associated with intestinal cancers. Chemical structure The nitrate anion is the conjugate base of nitric acid, consisting of one central nitrogen atom surrounded by three identically bonded oxygen atoms in a trigonal planar arrangement. The nitrate ion carries a formal charge of −1. This charge results from averaging the formal charges over the resonance structures, in which each of the three oxygens carries a −2⁄3 charge,[citation needed] whereas the nitrogen carries a +1 charge, these adding up to the −1 formal charge of the polyatomic nitrate ion.[citation needed] This arrangement is commonly used as an example of resonance.
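The formal-charge bookkeeping described above can be verified with a short script. This is an illustrative sketch using standard Lewis-structure electron counts; the function and variable names are ours, not from the article:

```python
from fractions import Fraction

def formal_charge(valence, nonbonding, bonding):
    # Formal charge = valence electrons - nonbonding electrons - bonding electrons / 2
    return Fraction(valence) - nonbonding - Fraction(bonding, 2)

# One resonance structure of NO3-: nitrogen forms one double and two single bonds.
n = formal_charge(5, 0, 8)         # nitrogen: 5 - 0 - 8/2 = +1
o_double = formal_charge(6, 4, 4)  # double-bonded oxygen: 6 - 4 - 4/2 = 0
o_single = formal_charge(6, 6, 2)  # single-bonded oxygen: 6 - 6 - 2/2 = -1

# Averaging each oxygen over the three equivalent resonance structures
# recovers the -2/3 per oxygen quoted in the text:
o_avg = (o_double + 2 * o_single) / 3
assert o_avg == Fraction(-2, 3)
assert n + 3 * o_avg == -1         # net charge of the ion
```

Exact rational arithmetic (`Fraction`) is used so the −2⁄3 comes out exactly rather than as a rounded float.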
Like the isoelectronic carbonate ion, the nitrate ion can be represented by three resonance structures: Chemical and biochemical properties In the NO−3 anion, the oxidation state of the central nitrogen atom is V (+5). This corresponds to the highest possible oxidation number of nitrogen. Nitrate is a potentially powerful oxidizer, as evidenced by its explosive behaviour at high temperature when it is detonated in ammonium nitrate (NH4NO3) or black powder ignited by the shock wave of a primary explosive. In contrast to red fuming nitric acid (HNO3/N2O4) or concentrated nitric acid (HNO3), nitrate in aqueous solution at neutral or high pH is only a weak oxidizing agent in redox reactions in which the reductant does not produce hydrogen ions (such as mercury going to calomel). However, it is still a strong oxidizer when the reductant does produce hydrogen ions, such as in the oxidation of hydrogen itself. Nitrate is stable in the absence of microorganisms or reductants such as organic matter. In fact, nitrogen gas is thermodynamically stable in the presence of 1 atm of oxygen only in very acidic conditions, and otherwise would combine with it to form nitrate. This can be shown by subtracting the oxidation half-reaction of water from that of nitrogen and rearranging the resulting potential difference (dividing by the Nernst slope of 0.0118 V for the five-electron transfer) into an equilibrium relation. However, in reality, nitrogen, oxygen, and water do not combine directly to form nitrate. Rather, a reductant such as hydrogen reacts with nitrogen to produce "fixed nitrogen" such as ammonia, which is then oxidized, eventually becoming nitrate. Nitrate does not accumulate to high levels in nature because it reacts with reductants in the process called denitrification (see Nitrogen cycle). Nitrate is used as a powerful terminal electron acceptor by denitrifying bacteria to deliver the energy they need to thrive. Under anaerobic conditions, nitrate is the strongest electron acceptor used by prokaryotic microorganisms (bacteria and archaea) to respire.
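The 0.0118 divisor in the derivation above is, on our reading, the Nernst slope RT ln 10 / (nF) at 25 °C for the five-electron oxidation of nitrogen (0 in N2) to nitrate (+5). A quick numerical check, with the physical constants and variable names supplied by us:

```python
import math

R = 8.314    # gas constant, J mol^-1 K^-1
T = 298.15   # 25 degC in kelvin
F = 96485.0  # Faraday constant, C mol^-1

# Nernst slope per decade of concentration: RT ln(10) / (n F)
slope_1e = R * T * math.log(10) / F  # ~0.0592 V for a one-electron step
slope_5e = slope_1e / 5              # five electrons per nitrogen atom

assert round(slope_5e, 4) == 0.0118  # matches the divisor in the text
```

The familiar 0.0592 V-per-decade slope divided by five electrons reproduces the article's 0.0118 figure.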
The redox couple NO−3/N2 is near the top of the redox scale for anaerobic respiration, just below the oxygen couple (O2/H2O), but above the couples Mn(IV)/Mn(II), Fe(III)/Fe(II), SO2−4/HS− and CO2/CH4. In natural waters inhabited by microorganisms, nitrate is a quite unstable and labile dissolved chemical species because it is metabolised by denitrifying bacteria. Water samples for nitrate/nitrite analyses need to be kept at 4 °C in a refrigerated room and analysed as quickly as possible to limit the loss of nitrate. In the first step of the denitrification process, dissolved nitrate (NO−3) is catalytically reduced into nitrite (NO−2) by the enzymatic activity of bacteria. In aqueous solution, dissolved nitrite, N(III), is a more powerful oxidizer than nitrate, N(V), because it has to accept fewer electrons and its reduction is less kinetically hindered than that of nitrate. Electrochemical reduction of nitrate is also well known, although its use for energy storage and denitrification remains underdeveloped. During the biological denitrification process, further nitrite reduction also gives rise to another powerful oxidizing agent: nitric oxide (NO). NO can bind to myoglobin, accentuating its red coloration. NO is an important biological signaling molecule and intervenes in the vasodilation process. Still, it can also produce free radicals in biological tissues, accelerating their degradation and aging. The reactive oxygen species (ROS) generated by NO contribute to oxidative stress, a condition involved in vascular dysfunction and atherogenesis. Detection in chemical analysis The nitrate anion is commonly analysed in water by ion chromatography (IC) along with the other anions present in the solution. The main advantage of IC is its ease of use and the simultaneous analysis of all the anions present in the aqueous sample.
Since the emergence of IC instruments in the 1980s, this separation technique, coupled with many detectors, has become commonplace in the chemical analysis laboratory and is the preferred and most widely used method for nitrate and nitrite analyses. Previously, nitrate determination relied on spectrophotometric and colorimetric measurements after a specific reagent is added to the solution to reveal a characteristic color (often red because it absorbs visible light in the blue). Because of interferences with the brown color of dissolved organic matter (DOM: humic and fulvic acids) often present in soil pore water, artefacts can easily affect the absorbance values. In case of weak interference, a blank measurement with only a naturally brown-colored water sample can be sufficient to subtract the undesired background from the measured sample absorbance. If the DOM brown color is too intense, the water samples must be pretreated, and inorganic nitrogen species must be separated before measurement. Meanwhile, for clear water samples, colorimetric instruments retain the advantage of being less expensive and sometimes portable, making them an affordable option for fast routine controls or field measurements. Colorimetric methods for the specific detection of nitrate (NO−3) often rely on its conversion to nitrite (NO−2) followed by nitrite-specific tests. The reduction of nitrate to nitrite can be effected by a copper-cadmium alloy, metallic zinc, or hydrazine. The most popular of these assays is the Griess test, whereby nitrite is converted to a deeply red colored azo dye suited for UV–vis spectrophotometry analysis. The method exploits the reactivity of nitrous acid (HNO2) derived from the acidification of nitrite. Nitrous acid selectively reacts with aromatic amines to give diazonium salts, which in turn couple with a second reagent to give the azo dye. The detection limit is 0.02 to 2 μM. Such methods have been highly adapted to biological samples and soil samples. 
In the dimethylphenol method, 1 mL of concentrated sulfuric acid (H2SO4) is added to 200 μL of the solution being tested for nitrate. Under strongly acidic conditions, nitrate ions react with 2,6-dimethylphenol, forming a yellow compound, 4-nitro-2,6-dimethylphenol. This occurs through electrophilic aromatic substitution where the intermediate nitronium (+NO2) ions attack the aromatic ring of dimethylphenol. The resulting product (ortho- or para-nitro-dimethylphenol) is analyzed using UV-vis spectrophotometry at 345 nm according to the Lambert-Beer law. Another colorimetric method based on the chromotropic acid (dihydroxynaphthalene-disulfonic acid) was also developed by West and Lyles in 1960 for the direct spectrophotometric determination of nitrate anions. If formic acid is added to a mixture of brucine (an alkaloid related to strychnine) and potassium nitrate (KNO3), its color instantly turns red. This reaction has been used for the direct colorimetric detection of nitrates. For direct online chemical analysis using a flow-through system, the water sample is introduced by a peristaltic pump in a flow injection analyzer, and the nitrate or resulting nitrite-containing effluent is then combined with a reagent for its colorimetric detection. Occurrence and production Nitrate salts are found naturally on earth in arid environments as large deposits, particularly of nitratine, a major source of sodium nitrate. Nitrates are produced by a number of species of nitrifying bacteria in the natural environment using ammonia or urea as a source of nitrogen and source of free energy. Nitrate compounds for gunpowder were historically produced, in the absence of mineral nitrate sources, by means of various fermentation processes using urine and dung. Lightning strikes in earth's nitrogen- and oxygen-rich atmosphere produce a mixture of oxides of nitrogen, which form nitrous ions and nitrate ions, which are washed from the atmosphere by rain or in occult deposition. 
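The spectrophotometric step of the dimethylphenol method comes down to the Beer–Lambert law, A = εlc, solved for concentration. A minimal sketch: the molar absorptivity below is a hypothetical placeholder, not a literature value for 4-nitro-2,6-dimethylphenol at 345 nm, and the function name is ours:

```python
def concentration_from_absorbance(absorbance, molar_absorptivity, path_length_cm=1.0):
    # Beer-Lambert law: A = epsilon * l * c  =>  c = A / (epsilon * l)
    return absorbance / (molar_absorptivity * path_length_cm)

EPSILON_345 = 10_000.0  # L mol^-1 cm^-1 -- hypothetical placeholder value
absorbance = 0.25       # measured at 345 nm in a 1 cm cuvette

c = concentration_from_absorbance(absorbance, EPSILON_345)
assert abs(c - 2.5e-5) < 1e-12  # mol/L
```

In practice the instrument is calibrated against standards of known concentration, so the effective εl is obtained from the calibration curve rather than assumed.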
Nitrates are produced industrially from nitric acid. Uses Nitrate serves as a primary form of nitrogen for many plants. This essential nutrient is used by plants to synthesize proteins, nucleic acids, and other vital organic molecules. The transformation of atmospheric nitrogen into nitrate is facilitated by certain bacteria and lightning in the nitrogen cycle, which exemplifies nature's ability to convert a relatively inert molecule into a form that is crucial for biological productivity. Nitrates are used as fertilizers in agriculture because of their high solubility and biodegradability. The main nitrate fertilizers are ammonium, sodium, potassium, calcium, and magnesium salts. Several billion kilograms are produced annually for this purpose. The significance of nitrate extends beyond its role as a nutrient since it acts as a signaling molecule in plants, regulating processes such as root growth, flowering, and leaf development. While nitrate is beneficial for agriculture since it enhances soil fertility and crop yields, its excessive use can lead to nutrient runoff, water pollution, and the proliferation of aquatic dead zones. Therefore, sustainable agricultural practices that balance productivity with environmental stewardship are necessary. Nitrate's importance in ecosystems is evident since it supports the growth and development of plants, contributing to biodiversity and ecological balance. Nitrates are used as oxidizing agents, most notably in explosives, where the rapid oxidation of carbon compounds liberates large volumes of gases (see gunpowder as an example). Sodium nitrate is used to remove air bubbles from molten glass and some ceramics. Mixtures of molten salts are used to harden the surface of some metals.
In the medical field, nitrate-derived organic esters, such as glyceryl trinitrate, isosorbide dinitrate, and isosorbide mononitrate, are used in the prophylaxis and management of acute coronary syndrome, myocardial infarction and acute pulmonary oedema. This class of drugs, to which amyl nitrite also belongs, is known as nitrovasodilators. Toxicity and safety The two areas of concern about the toxicity of nitrate are the following: One of the most common causes of methemoglobinemia in infants is the ingestion of nitrates and nitrites through well water or foods. In fact, nitrates (NO−3), often present at too high a concentration in drinking water, are only the precursor chemical species of nitrites (NO−2), the real culprits of methemoglobinemia. Nitrites produced by the microbial reduction of nitrate (directly in the drinking water, or after ingestion by the infant's digestive system) are more powerful oxidizers than nitrates and are the chemical agent actually responsible for the oxidation of Fe2+ into Fe3+ in the tetrapyrrole heme of hemoglobin. Indeed, because of kinetic limitations, nitrate anions are too weak as oxidizers in aqueous solution to directly, or at least sufficiently rapidly, oxidize Fe2+ into Fe3+. Infants younger than four months are at greater risk given that they drink more water per body weight, they have lower NADH-cytochrome b5 reductase activity, and they have a higher level of fetal hemoglobin, which converts more easily to methemoglobin. Additionally, infants are at increased risk after an episode of gastroenteritis due to the production of nitrites by bacteria. However, causes other than nitrates can also affect infants and pregnant women. Indeed, blue baby syndrome can also be caused by a number of other factors such as cyanotic heart disease, a congenital heart defect resulting in low levels of oxygen in the blood, or by gastric upset, such as diarrheal infection, protein intolerance, or heavy metal toxicity.
Through the Safe Drinking Water Act, the United States Environmental Protection Agency has set a maximum contaminant level of 10 mg/L (10 ppm) of nitrate in drinking water. An acceptable daily intake (ADI) for nitrate ions was established in the range of 0–3.7 mg (kg body weight)−1 day−1 by the Joint FAO/WHO Expert Committee on Food Additives (JECFA). In freshwater or estuarine systems close to land, nitrate can reach concentrations that are lethal to fish. While nitrate is much less toxic than ammonia, levels over 30 ppm of nitrate can inhibit growth, impair the immune system and cause stress in some aquatic species. Nitrate toxicity remains a subject of debate. In most cases of excess nitrate concentrations in aquatic systems, the primary sources are wastewater discharges, as well as surface runoff from agricultural or landscaped areas that have received excess nitrate fertilizer. The resulting eutrophication and algae blooms result in anoxia and dead zones. As a consequence, since nitrate forms a component of total dissolved solids, it is widely used as an indicator of water quality. Human impacts on ecosystems through nitrate deposition Nitrate deposition into ecosystems has markedly increased due to anthropogenic activities, notably the widespread application of nitrogen-rich fertilizers in agriculture and emissions from fossil fuel combustion. Annually, about 195 million metric tons of synthetic nitrogen fertilizers are used worldwide, with nitrates constituting a significant portion of this amount. In regions with intensive agriculture, such as parts of the U.S., China, and India, the use of nitrogen fertilizers can exceed 200 kilograms per hectare. The impact of increased nitrate deposition extends beyond plant communities to affect soil microbial populations.
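Combining the two regulatory figures above (the 10 mg/L EPA limit and the 3.7 mg/kg/day upper ADI) gives a feel for the margins involved. A simple sketch; the body weights are illustrative and the helper name is ours:

```python
def water_at_adi_litres(body_weight_kg, adi_mg_per_kg=3.7, mcl_mg_per_litre=10.0):
    # Daily water volume at which nitrate intake from water at the EPA
    # maximum contaminant level (10 mg/L) reaches the upper JECFA ADI
    # (3.7 mg per kg body weight per day).
    return body_weight_kg * adi_mg_per_kg / mcl_mg_per_litre

# A 70 kg adult would have to drink ~26 L/day of water at the limit to reach
# the ADI, while a 4 kg infant reaches it at ~1.5 L/day -- consistent with
# infants being the population of concern.
assert round(water_at_adi_litres(70), 1) == 25.9
assert round(water_at_adi_litres(4), 2) == 1.48
```

This back-of-the-envelope calculation ignores dietary nitrate from food, which in practice often dominates intake.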
The change in soil chemistry and nutrient dynamics can disrupt the natural processes of nitrogen fixation, nitrification, and denitrification, leading to altered microbial community structures and functions. This disruption can further impact nutrient cycling and overall ecosystem health. Dietary nitrate A source of nitrate in the human diet is the consumption of leafy green foods, such as spinach and arugula. NO−3 can be present in beetroot juice. Drinking water is also a primary source of nitrate intake. Nitrate ingestion rapidly increases the plasma nitrate concentration by a factor of 2 to 3, and this elevated concentration can be maintained for more than 2 weeks. Increased plasma nitrate enhances the production of nitric oxide, NO. Nitric oxide is a physiological signaling molecule involved in, among other things, the regulation of muscle blood flow and mitochondrial respiration. Nitrite (NO−2) consumption is primarily determined by the amount of processed meats eaten and the concentration of nitrates (NO−3) added to these meats (bacon, sausages…) for their curing. Although nitrites are the nitrogen species chiefly used in meat curing, nitrates are used as well and can be transformed into nitrite by microorganisms, or in the digestion process, starting with their dissolution in saliva and their contact with the microbiota of the mouth. Nitrites can lead to the formation of carcinogenic nitrosamines. The production of nitrosamines may be inhibited by the use of the antioxidants vitamin C and the alpha-tocopherol form of vitamin E during curing. Many meat processors claim their meats (e.g. 
bacon) are "uncured" – a marketing claim with no factual basis: there is no such thing as "uncured" bacon (as that would be, essentially, raw sliced pork belly).[better source needed] "Uncured" meat is in fact cured with nitrites, with virtually no distinction in process – the only difference being the USDA labeling requirement distinguishing nitrite of vegetable origin (such as from celery) from "synthetic" sodium nitrite. An analogy would be purified "sea salt" vs. sodium chloride – both are exactly the same chemical, and the only essential difference is the origin. Anti-hypertensive diets, such as the DASH diet, typically contain high levels of nitrates, which are first reduced to nitrite in the saliva, as detected in saliva testing, prior to forming nitric oxide (NO). Domestic animal feed Symptoms of nitrate poisoning in domestic animals include increased heart rate and respiration; in advanced cases blood and tissue may turn a blue or brown color. Feed can be tested for nitrate; treatment consists of supplementing or substituting existing supplies with lower-nitrate material. Safe levels of nitrate for various types of livestock have been tabulated on a dry (moisture-free) basis. Salts and covalent derivatives Nitrate forms salts and covalent derivatives with elements across the periodic table.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Internet#cite_note-169] | [TOKENS: 9291]
Contents Internet The Internet (or internet)[a] is the global system of interconnected computer networks that uses the Internet protocol suite (TCP/IP)[b] to communicate between networks and devices. It is a network of networks that comprises private, public, academic, business, and government networks of local to global scope, linked by electronic, wireless, and optical networking technologies. The Internet carries a vast range of information services and resources, such as the interlinked hypertext documents and applications of the World Wide Web (WWW), electronic mail, discussion groups, internet telephony, streaming media and file sharing. Most traditional communication media, including telephone, radio, television, paper mail, newspapers, and print publishing, have been transformed by the Internet, giving rise to new media such as email, online music, digital newspapers, news aggregators, and audio and video streaming websites. The Internet has enabled and accelerated new forms of personal interaction through instant messaging, Internet forums, and social networking services. Online shopping has also grown to occupy a significant market across industries, enabling firms to extend brick and mortar presences to serve larger markets. Business-to-business and financial services on the Internet affect supply chains across entire industries. The origins of the Internet date back to research that enabled the time-sharing of computer resources, the development of packet switching, and the design of computer networks for data communication. The set of communication protocols to enable internetworking on the Internet arose from research and development commissioned in the 1970s by the Defense Advanced Research Projects Agency (DARPA) of the United States Department of Defense in collaboration with universities and researchers across the United States and in the United Kingdom and France. 
The Internet has no single centralized governance in either technological implementation or policies for access and usage. Each constituent network sets its own policies. The overarching definitions of the two principal name spaces on the Internet, the Internet Protocol address (IP address) space and the Domain Name System (DNS), are directed by a maintainer organization, the Internet Corporation for Assigned Names and Numbers (ICANN). The technical underpinning and standardization of the core protocols is an activity of the non-profit Internet Engineering Task Force (IETF). Terminology The word internetted was used as early as 1849, meaning interconnected or interwoven. The word Internet was used in 1945 by the United States War Department in a radio operator's manual, and in 1974 as the shorthand form of Internetwork. Today, the term Internet most commonly refers to the global system of interconnected computer networks, though it may also refer to any group of smaller networks. The word Internet may be capitalized as a proper noun, although this is becoming less common. This reflects the tendency in English to capitalize new terms and move them to lowercase as they become familiar. The word is sometimes still capitalized to distinguish the global internet from smaller networks, though many publications, including the AP Stylebook since 2016, recommend the lowercase form in every case. In 2016, the Oxford English Dictionary found that, based on a study of around 2.5 billion printed and online sources, "Internet" was capitalized in 54% of cases. The terms Internet and World Wide Web are often used interchangeably; it is common to speak of "going on the Internet" when using a web browser to view web pages. However, the World Wide Web, or the Web, is only one of a large number of Internet services. It is the global collection of web pages, documents and other web resources linked by hyperlinks and URLs. 
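The IP address space that ICANN coordinates is organized hierarchically: large blocks are delegated downward until they resolve to individual addresses. A minimal sketch with Python's standard `ipaddress` module; the addresses come from the RFC 5737 documentation range and are chosen purely for illustration:

```python
import ipaddress

# A /24 block such as a registry might delegate, and one address inside it.
block = ipaddress.ip_network("203.0.113.0/24")
addr = ipaddress.ip_address("203.0.113.7")

print(addr in block)         # True: the address falls inside the block
print(block.num_addresses)   # 256: a /24 holds 2**(32-24) addresses
```

The containment test mirrors how routing and delegation treat the address space: an allocation is a prefix, and every more-specific address or sub-prefix belongs to it.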
History In the 1960s, computer scientists began developing systems for time-sharing of computer resources. J. C. R. Licklider proposed the idea of a universal network while working at Bolt Beranek & Newman and, later, leading the Information Processing Techniques Office at the Advanced Research Projects Agency (ARPA) of the United States Department of Defense. Research into packet switching,[c] one of the fundamental Internet technologies, started in the work of Paul Baran at RAND in the early 1960s and, independently, Donald Davies at the United Kingdom's National Physical Laboratory in 1965. After the Symposium on Operating Systems Principles in 1967, packet switching from the proposed NPL network was incorporated into the design of the ARPANET, an experimental resource sharing network proposed by ARPA. ARPANET development began with two network nodes which were interconnected between the University of California, Los Angeles and the Stanford Research Institute on 29 October 1969. The third site was at the University of California, Santa Barbara, followed by the University of Utah. By the end of 1971, 15 sites were connected to the young ARPANET. Thereafter, the ARPANET gradually developed into a decentralized communications network, connecting remote centers and military bases in the United States. Other user networks and research networks, such as the Merit Network and CYCLADES, were developed in the late 1960s and early 1970s. Early international collaborations for the ARPANET were rare. Connections were made in 1973 to Norway (NORSAR and, later, NDRE) and to Peter Kirstein's research group at University College London, which provided a gateway to British academic networks, the first internetwork for resource sharing. ARPA projects, the International Network Working Group and commercial initiatives led to the development of various protocols and standards by which multiple separate networks could become a single network, or a network of networks. 
In 1974, Vint Cerf at Stanford University and Bob Kahn at DARPA published a proposal for "A Protocol for Packet Network Intercommunication". Cerf and his graduate students used the term internet as a shorthand for internetwork in RFC 675. The Internet Experiment Notes and later RFCs repeated this use. The work of Louis Pouzin and Robert Metcalfe had important influences on the resulting TCP/IP design. National PTTs and commercial providers developed the X.25 standard and deployed it on public data networks. The ARPANET initially served as a backbone for the interconnection of regional academic and military networks in the United States to enable resource sharing. Access to the ARPANET was expanded in 1981 when the National Science Foundation (NSF) funded the Computer Science Network (CSNET). In 1982, the Internet Protocol Suite (TCP/IP) was standardized, which facilitated worldwide proliferation of interconnected networks. TCP/IP network access expanded again in 1986 when the National Science Foundation Network (NSFNet) provided access to supercomputer sites in the United States for researchers, first at speeds of 56 kbit/s and later at 1.5 Mbit/s and 45 Mbit/s. The NSFNet expanded into academic and research organizations in Europe, Australia, New Zealand and Japan in 1988–89. Although other network protocols such as UUCP and PTT public data networks had global reach well before this time, this marked the beginning of the Internet as an intercontinental network. Commercial Internet service providers emerged in 1989 in the United States and Australia. The ARPANET was decommissioned in 1990. The linking of commercial networks and enterprises by the early 1990s, as well as the advent of the World Wide Web, marked the beginning of the transition to the modern Internet. 
Steady advances in semiconductor technology and optical networking created new economic opportunities for commercial involvement in the expansion of the network in its core and for delivering services to the public. In mid-1989, MCI Mail and Compuserve established connections to the Internet, delivering email and public access products to the half million users of the Internet. Just months later, on 1 January 1990, PSInet launched an alternate Internet backbone for commercial use; one of the networks that added to the core of the commercial Internet of later years. In March 1990, the first high-speed T1 (1.5 Mbit/s) link between the NSFNET and Europe was installed between Cornell University and CERN, allowing much more robust communications than was possible with satellites. Later in 1990, Tim Berners-Lee began writing WorldWideWeb, the first web browser, after two years of lobbying CERN management. By Christmas 1990, Berners-Lee had built all the tools necessary for a working Web: the HyperText Transfer Protocol (HTTP) 0.9, the HyperText Markup Language (HTML), the first Web browser (which was also an HTML editor and could access Usenet newsgroups and FTP files), the first HTTP server software (later known as CERN httpd), the first web server, and the first Web pages that described the project itself. In 1991 the Commercial Internet eXchange was founded, allowing PSInet to communicate with the other commercial networks CERFnet and Alternet. Stanford Federal Credit Union was the first financial institution to offer online Internet banking services to all of its members in October 1994. In 1996, OP Financial Group, also a cooperative bank, became the second online bank in the world and the first in Europe. By 1995, the Internet was fully commercialized in the U.S. when the NSFNet was decommissioned, removing the last restrictions on use of the Internet to carry commercial traffic. 
As technology advanced and commercial opportunities fueled reciprocal growth, the volume of Internet traffic began to exhibit growth characteristics similar to those of the scaling of MOS transistors, exemplified by Moore's law: doubling every 18 months. This growth, formalized as Edholm's law, was catalyzed by advances in MOS technology, laser lightwave systems, and noise performance. Since 1995, the Internet has tremendously impacted culture and commerce, including the rise of near-instant communication by email, instant messaging, telephony (Voice over Internet Protocol or VoIP), two-way interactive video calls, and the World Wide Web. Increasing amounts of data are transmitted at higher and higher speeds over fiber-optic networks operating at 1 Gbit/s, 10 Gbit/s, or more. The Internet continues to grow, driven by ever-greater amounts of online information and knowledge, commerce, entertainment and social networking services. During the late 1990s, it was estimated that traffic on the public Internet grew by 100 percent per year, while the mean annual growth in the number of Internet users was thought to be between 20% and 50%. This growth is often attributed to the lack of central administration, which allows organic growth of the network, as well as the non-proprietary nature of the Internet protocols, which encourages vendor interoperability and prevents any one company from exerting too much control over the network. In November 2006, the Internet was included on USA Today's list of the New Seven Wonders. As of 31 March 2011[update], the estimated total number of Internet users was 2.095 billion (30% of the world population). It is estimated that in 1993 the Internet carried only 1% of the information flowing through two-way telecommunication. By 2000 this figure had grown to 51%, and by 2007 more than 97% of all telecommunicated information was carried over the Internet. 
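A doubling time such as the 18 months cited for traffic growth implies a fixed growth multiplier per year, which can be computed directly. A small sketch (the function name is mine):

```python
def growth_factor(doubling_months: float, period_months: float = 12.0) -> float:
    """Growth multiplier over a period, given a constant doubling time."""
    return 2.0 ** (period_months / doubling_months)

print(growth_factor(18))        # ~1.59: roughly 59% growth per year
print(growth_factor(18, 120))   # ~102x over a decade
print(growth_factor(12))        # 2.0: exactly 100 percent per year
```

Note that an 18-month doubling time sits below the late-1990s estimate of 100 percent traffic growth per year, which corresponds to a 12-month doubling time.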
Modern smartphones can access the Internet through cellular carrier networks, and internet usage by mobile and tablet devices exceeded desktop worldwide for the first time in October 2016. As of 2018[update], 80% of the world's population were covered by a 4G network. The International Telecommunication Union (ITU) estimated that, by the end of 2017, 48% of individual users regularly connect to the Internet, up from 34% in 2012. Mobile Internet connectivity has played an important role in expanding access in recent years, especially in Asia and the Pacific and in Africa. The number of unique mobile cellular subscriptions increased from 3.9 billion in 2012 to 4.8 billion in 2016, two-thirds of the world's population, with more than half of subscriptions located in Asia and the Pacific. The limits that users face on accessing information via mobile applications coincide with a broader process of fragmentation of the Internet. Fragmentation restricts access to media content and tends to affect the poorest users the most. One solution, zero-rating, is the practice of Internet service providers allowing users free connectivity to access specific content or applications without cost. Social impact The Internet has enabled new forms of social interaction, activities, and social associations, giving rise to the scholarly study of the sociology of the Internet. Between 2000 and 2009, the number of Internet users globally rose from 390 million to 1.9 billion. By 2010, 22% of the world's population had access to computers with 1 billion Google searches every day, 300 million Internet users reading blogs, and 2 billion videos viewed daily on YouTube. In 2014 the world's Internet users surpassed 3 billion or 44 percent of world population, but two-thirds came from the richest countries, with 78 percent of Europeans using the Internet, followed by 57 percent of the Americas. 
However, by 2018, Asia alone accounted for 51% of all Internet users, with 2.2 billion out of the 4.3 billion Internet users in the world. China's Internet users surpassed a major milestone in 2018, when the country's Internet regulatory authority, China Internet Network Information Centre, announced that China had 802 million users. China was followed by India, with some 700 million users, with the United States third with 275 million users. However, in terms of penetration, in 2022, China had a 70% penetration rate compared to India's 60% and the United States's 90%. In 2022, 54% of the world's Internet users were based in Asia, 14% in Europe, 7% in North America, 10% in Latin America and the Caribbean, 11% in Africa, 4% in the Middle East and 1% in Oceania. In 2019, Kuwait, Qatar, the Falkland Islands, Bermuda and Iceland had the highest Internet penetration by the number of users, with 93% or more of the population with access. As of 2022, it was estimated that 5.4 billion people use the Internet, more than two-thirds of the world's population. Early computer systems were limited to the characters in the American Standard Code for Information Interchange (ASCII), a subset of the Latin alphabet. After English (27%), the most requested languages on the World Wide Web are Chinese (25%), Spanish (8%), Japanese (5%), Portuguese and German (4% each), Arabic, French and Russian (3% each), and Korean (2%). Modern character encoding standards, such as Unicode, allow for development and communication in the world's widely used languages. However, some glitches such as mojibake (incorrect display of some languages' characters) still remain. 
Several neologisms exist that refer to Internet users: Netizen (as in "citizen of the net") refers to those actively involved in improving online communities, the Internet in general, or surrounding political affairs and rights such as free speech; Internaut refers to operators or technically highly capable users of the Internet; digital citizen refers to a person using the Internet in order to engage in society, politics, and government participation. The Internet allows greater flexibility in working hours and location, especially with the spread of unmetered high-speed connections. The Internet can be accessed almost anywhere by numerous means, including through mobile Internet devices. Mobile phones, datacards, handheld game consoles and cellular routers allow users to connect to the Internet wirelessly.[citation needed] Educational material at all levels from pre-school (e.g. CBeebies) to post-doctoral (e.g. scholarly literature through Google Scholar) is available on websites. The internet has facilitated the development of virtual universities and distance education, enabling both formal and informal education. The Internet allows researchers to conduct research remotely via virtual laboratories, with profound changes in the reach and generalizability of findings as well as in communication between scientists and in the publication of results. By the late 2010s the Internet had been described as "the main source of scientific information" for the majority of the global North population. Wikis have also been used in the academic community for sharing and dissemination of information across institutional and international boundaries. In those settings, they have been found useful for collaboration on grant writing, strategic planning, departmental documentation, and committee work. The United States Patent and Trademark Office uses a wiki to allow the public to collaborate on finding prior art relevant to the examination of pending patent applications. 
Queens, New York has used a wiki to allow citizens to collaborate on the design and planning of a local park. The English Wikipedia has the largest user base among wikis on the World Wide Web and ranks in the top 10 among all sites in terms of traffic. The Internet has been a major outlet for leisure activity since its inception, with entertaining social experiments such as MUDs and MOOs being conducted on university servers, and humor-related Usenet groups receiving much traffic. Many Internet forums have sections devoted to games and funny videos. Another area of leisure activity on the Internet is multiplayer gaming. This form of recreation creates communities, where people of all ages and origins enjoy the fast-paced world of multiplayer games. These range from MMORPG to first-person shooters, from role-playing video games to online gambling. While online gaming has been around since the 1970s, modern modes of online gaming began with subscription services such as GameSpy and MPlayer. Streaming media is the real-time delivery of digital media for immediate consumption or enjoyment by end users. Streaming companies (such as Netflix, Disney+, Amazon's Prime Video, Mubi, Hulu, and Apple TV+) now dominate the entertainment industry, eclipsing traditional broadcasters. Audio streamers such as Spotify and Apple Music also have significant market share in the audio entertainment market. Video sharing websites are also a major factor in the entertainment ecosystem. YouTube was founded on 15 February 2005 and is now the leading website for free streaming video with more than two billion users. It uses a web player to stream and show video files. YouTube users watch hundreds of millions, and upload hundreds of thousands, of videos daily. 
Other video sharing websites include Vimeo, Instagram and TikTok.[citation needed] Although many governments have attempted to restrict both Internet pornography and online gambling, this has generally failed to stop their widespread popularity. A number of advertising-funded ostensible video sharing websites known as "tube sites" have been created to host shared pornographic video content. Due to laws requiring the documentation of the origin of pornography, these websites now largely operate in conjunction with pornographic movie studios and their own independent creator networks, acting as de facto video streaming services. Major players in this field include the market leader Aylo, the operator of PornHub and numerous other branded sites, as well as other independent operators such as xHamster and Xvideos. As of 2023[update], Internet traffic to pornographic video sites rivalled that of mainstream video streaming and sharing services. Remote work is facilitated by tools such as groupware, virtual private networks, conference calling, videotelephony, and VoIP so that work may be performed from any location, such as the worker's home.[citation needed] The spread of low-cost Internet access in developing countries has opened up new possibilities for peer-to-peer charities, which allow individuals to contribute small amounts to charitable projects for other individuals. Websites such as DonorsChoose and GlobalGiving allow small-scale donors to direct funds to individual projects of their choice. A popular twist on Internet-based philanthropy is the use of peer-to-peer lending for charitable purposes. Kiva pioneered this concept in 2005, offering the first web-based service to publish individual loan profiles for funding. The low cost and nearly instantaneous sharing of ideas, knowledge, and skills have made collaborative work dramatically easier, with the help of collaborative software, which allows groups to form easily, communicate cheaply, and share ideas. 
An example of such collaboration is the free software movement, which has produced, among other things, Linux, Mozilla Firefox, and OpenOffice.org (later forked into LibreOffice).[citation needed] Content management systems allow collaborating teams to work on shared sets of documents simultaneously without accidentally destroying each other's work.[citation needed] The internet also allows for cloud computing, virtual private networks, remote desktops, and remote work.[citation needed] The online disinhibition effect describes the tendency of many individuals to behave more stridently or offensively online than they would in person. A significant number of feminist women have been the target of various forms of harassment, ranging from insults and hate speech to, in extreme cases, rape and death threats, in response to posts they have made on social media. Social media companies have been criticized in the past for not doing enough to aid victims of online abuse. Children also face dangers online such as cyberbullying and approaches by sexual predators, who sometimes pose as children themselves. Due to naivety, they may also post personal information about themselves online, which could put them or their families at risk unless warned not to do so. Many parents choose to enable Internet filtering or supervise their children's online activities in an attempt to protect their children from pornography or violent content on the Internet. The most popular social networking services commonly forbid users under the age of 13. However, these policies can be circumvented by registering an account with a false birth date, and a significant number of children aged under 13 join such sites.[citation needed] Social networking services for younger children, which claim to provide better levels of protection for children, also exist. Internet usage has been correlated with users' loneliness. 
Lonely people tend to use the Internet as an outlet for their feelings and to share their stories with others, such as in the "I am lonely will anyone speak to me" thread.[citation needed] Cyberslacking can become a drain on corporate resources; employees spend a significant amount of time surfing the Web while at work. Internet addiction disorder is excessive computer use that interferes with daily life. Nicholas G. Carr believes that Internet use has other effects on individuals, for instance improving skills of scan-reading and interfering with the deep thinking that leads to true creativity. Electronic business encompasses business processes spanning the entire value chain: purchasing, supply chain management, marketing, sales, customer service, and business relationships. E-commerce seeks to add revenue streams using the Internet to build and enhance relationships with clients and partners. According to International Data Corporation, the size of worldwide e-commerce, when global business-to-business and business-to-consumer transactions are combined, equated to $16 trillion in 2013. A report by Oxford Economics added those two together to estimate the total size of the digital economy at $20.4 trillion, equivalent to roughly 13.8% of global sales. While much has been written of the economic advantages of Internet-enabled commerce, there is also evidence that some aspects of the Internet, such as maps and location-aware services, may serve to reinforce economic inequality and the digital divide. Electronic commerce may be responsible for consolidation and the decline of mom-and-pop, brick-and-mortar businesses, resulting in increases in income inequality. A 2013 Institute for Local Self-Reliance report states that brick-and-mortar retailers employ 47 people for every $10 million in sales, while Amazon employs only 14. Similarly, the 700-employee room rental start-up Airbnb was valued at $10 billion in 2014, about half as much as Hilton Worldwide, which employs 152,000 people. 
At that time, Uber employed 1,000 full-time employees and was valued at $18.2 billion, about the same valuation as Avis Rent a Car and The Hertz Corporation combined, which together employed almost 60,000 people. Advertising on popular web pages can be lucrative, and e-commerce, the sale of products and services directly via the Web, continues to grow. Online advertising is a form of marketing and advertising which uses the Internet to deliver promotional marketing messages to consumers. It includes email marketing, search engine marketing (SEM), social media marketing, many types of display advertising (including web banner advertising), and mobile advertising. In 2011, Internet advertising revenues in the United States surpassed those of cable television and nearly exceeded those of broadcast television. Many common online advertising practices are controversial and increasingly subject to regulation. The Internet has achieved new relevance as a political tool. The presidential campaign of Howard Dean in 2004 in the United States was notable for its success in soliciting donations via the Internet. Many political groups use the Internet to achieve a new method of organizing for carrying out their mission, having given rise to Internet activism. Social media websites, such as Facebook and Twitter, helped people organize the Arab Spring by helping activists organize protests, communicate grievances, and disseminate information. Many have understood the Internet as an extension of the Habermasian notion of the public sphere, observing how network communication technologies provide something like a global civic forum. However, incidents of politically motivated Internet censorship have now been recorded in many countries, including western democracies. E-government is the use of technological communications devices, such as the Internet, to provide public services to citizens and other persons in a country or region. 
E-government offers opportunities for more direct and convenient citizen access to government and for government provision of services directly to citizens. Cybersectarianism is a new organizational form that involves: highly dispersed small groups of practitioners that may remain largely anonymous within the larger social context and operate in relative secrecy, while still linked remotely to a larger network of believers who share a set of practices and texts, and often a common devotion to a particular leader. Overseas supporters provide funding and support; domestic practitioners distribute tracts, participate in acts of resistance, and share information on the internal situation with outsiders. Collectively, members and practitioners of such sects construct viable virtual communities of faith, exchanging personal testimonies and engaging in the collective study via email, online chat rooms, and web-based message boards. In particular, the British government has raised concerns about the prospect of young British Muslims being indoctrinated into Islamic extremism by material on the Internet, being persuaded to join terrorist groups such as the so-called "Islamic State", and then potentially committing acts of terrorism on returning to Britain after fighting in Syria or Iraq.[citation needed] Applications and services The Internet carries many applications and services, most prominently the World Wide Web, including social media, electronic mail, mobile applications, multiplayer online games, Internet telephony, file sharing, and streaming media services. The World Wide Web is a global collection of documents, images, multimedia, applications, and other resources, logically interrelated by hyperlinks and referenced with Uniform Resource Identifiers (URIs), which provide a global system of named references. URIs symbolically identify services, web servers, databases, and the documents and resources that they can provide. 
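The URI-based referencing scheme described above can be inspected with Python's standard `urllib.parse`; the URL below uses the reserved example.com documentation domain purely for illustration:

```python
from urllib.parse import urlparse

# Split a URI into the components that locate a resource on the Web.
uri = urlparse("https://www.example.com/wiki/Internet?action=view#History")
print(uri.scheme)    # 'https'           -- the access protocol
print(uri.netloc)    # 'www.example.com' -- the named web server
print(uri.path)      # '/wiki/Internet'  -- the resource on that server
print(uri.query)     # 'action=view'
print(uri.fragment)  # 'History'
```

Each component plays a distinct role in the global naming system: the scheme selects a protocol, the network location names a server (resolved through DNS), and the path, query, and fragment identify a resource and a position within it.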
HyperText Transfer Protocol (HTTP) is the main access protocol of the World Wide Web. Web services also use HTTP for communication between software systems, for information transfer, and for sharing and exchanging business data and logistics; it is one of many languages or protocols that can be used for communication on the Internet. World Wide Web browser software, such as Microsoft Edge, Mozilla Firefox, Opera, Apple's Safari, and Google Chrome, enables users to navigate from one web page to another via the hyperlinks embedded in the documents. These documents may also contain computer data, including graphics, sounds, text, video, multimedia and interactive content. Client-side scripts can include animations, games, office applications and scientific demonstrations. Email is an important communications service available via the Internet. The concept of sending electronic text messages between parties, analogous to mailing letters or memos, predates the creation of the Internet. Internet telephony is a common communications service realized with the Internet. The name of the principal internetworking protocol, the Internet Protocol, lends its name to voice over Internet Protocol (VoIP).[citation needed] VoIP systems now dominate many markets, being as easy and convenient as a traditional telephone while offering substantial cost savings, especially over long distances. File sharing is the practice of transferring large amounts of data in the form of computer files across the Internet, for example via file servers. The load of bulk downloads to many users can be eased by the use of "mirror" servers or peer-to-peer networks. Access to a file may be controlled by user authentication, the transit of the file over the Internet may be obscured by encryption, and money may change hands for access to the file. The price can be paid by the remote charging of funds from, for example, a credit card whose details are also passed—usually fully encrypted—across the Internet. 
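Comparing a received file against a published digest is the usual first step in the kind of verification described here. A stdlib-only sketch using SHA-256; note that a full digital signature additionally signs the digest with a private key (e.g. via an asymmetric-crypto library), which this sketch does not show:

```python
import hashlib

def sha256_digest(data: bytes) -> str:
    """Hex SHA-256 digest, compared against the value the publisher announced."""
    return hashlib.sha256(data).hexdigest()

# The publisher announces the digest; the downloader recomputes and compares.
published = sha256_digest(b"release contents")
received = b"release contents"
print(sha256_digest(received) == published)   # True for an intact transfer
```

A matching digest shows the file arrived unmodified; only a signature over the digest additionally proves who published it.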
The origin and authenticity of the file received may be checked by a digital signature. Governance The Internet is a global network that comprises many voluntarily interconnected autonomous networks. It operates without a central governing body. The technical underpinning and standardization of the core protocols (IPv4 and IPv6) is an activity of the Internet Engineering Task Force (IETF), a non-profit organization of loosely affiliated international participants that anyone may associate with by contributing technical expertise. While the hardware components in the Internet infrastructure can often be used to support other software systems, it is the design and the standardization process of the software that characterizes the Internet and provides the foundation for its scalability and success. The responsibility for the architectural design of the Internet software systems has been assumed by the IETF. The IETF conducts standard-setting work groups, open to any individual, about the various aspects of Internet architecture. The resulting contributions and standards are published as Request for Comments (RFC) documents on the IETF web site. The principal methods of networking that enable the Internet are contained in specially designated RFCs that constitute the Internet Standards. Other less rigorous documents are simply informative, experimental, or historical, or document the best current practices when implementing Internet technologies. To maintain interoperability, the principal name spaces of the Internet are administered by the Internet Corporation for Assigned Names and Numbers (ICANN). ICANN is governed by an international board of directors drawn from across the Internet technical, business, academic, and other non-commercial communities. The organization coordinates the assignment of unique identifiers for use on the Internet, including domain names, IP addresses, application port numbers in the transport protocols, and many other parameters. 
Globally unified name spaces are essential for maintaining the global reach of the Internet. This role of ICANN distinguishes it as perhaps the only central coordinating body for the global Internet. The National Telecommunications and Information Administration, an agency of the United States Department of Commerce, had final approval over changes to the DNS root zone until the IANA stewardship transition on 1 October 2016. Regional Internet registries (RIRs) were established for five regions of the world to assign IP address blocks and other Internet parameters to local registries, such as Internet service providers, from a designated pool of addresses set aside for each region.[citation needed] The Internet Society (ISOC) was founded in 1992 with a mission to "assure the open development, evolution and use of the Internet for the benefit of all people throughout the world". Its members include individuals as well as corporations, organizations, governments, and universities. Among other activities, ISOC provides an administrative home for a number of less formally organized groups that are involved in developing and managing the Internet, including: the Internet Engineering Task Force (IETF), Internet Architecture Board (IAB), Internet Engineering Steering Group (IESG), Internet Research Task Force (IRTF), and Internet Research Steering Group (IRSG). On 16 November 2005, the United Nations-sponsored World Summit on the Information Society in Tunis established the Internet Governance Forum (IGF) to discuss Internet-related issues.[citation needed] Infrastructure The communications infrastructure of the Internet consists of its hardware components and a system of software layers that control various aspects of the architecture. As with any computer network, the Internet physically consists of routers, media (such as cabling and radio links), repeaters, and modems.
However, as an example of internetworking, many of the network nodes are not necessarily Internet equipment per se. Internet packets are carried by other full-fledged networking protocols, with the Internet acting as a homogeneous networking standard, running across heterogeneous hardware, with the packets guided to their destinations by IP routers.[citation needed] Internet service providers (ISPs) establish worldwide connectivity between individual networks at various levels of scope. At the top of the routing hierarchy are the tier 1 networks, large telecommunication companies that exchange traffic directly with each other via very high speed fiber-optic cables and are governed by peering agreements. Tier 2 and lower-level networks buy Internet transit from other providers to reach at least some parties on the global Internet, though they may also engage in peering. End-users who only access the Internet when needed to perform a function or obtain information represent the bottom of the routing hierarchy.[citation needed] An ISP may use a single upstream provider for connectivity, or implement multihoming to achieve redundancy and load balancing. Internet exchange points are major traffic exchanges with physical connections to multiple ISPs. Large organizations, such as academic institutions, large enterprises, and governments, may perform the same function as ISPs, engaging in peering and purchasing transit on behalf of their internal networks. Research networks tend to interconnect with large subnetworks such as GEANT, GLORIAD, Internet2, and the UK's national research and education network, JANET.[citation needed] Common methods of Internet access by users include broadband over coaxial cable, fiber optics or copper wires, Wi-Fi, satellite, and cellular telephone technology.[citation needed] Grassroots efforts have led to wireless community networks.
Commercial Wi-Fi services that cover large areas are available in many cities, such as New York, London, Vienna, Toronto, San Francisco, Philadelphia, Chicago and Pittsburgh. Most servers that provide internet services are today hosted in data centers, and content is often accessed through high-performance content delivery networks. Colocation centers often host private peering connections between their customers, internet transit providers, cloud providers, meet-me rooms for connecting customers together, Internet exchange points, and landing points and terminal equipment for fiber optic submarine communication cables, connecting the internet. Internet Protocol Suite The Internet standards describe a framework known as the Internet protocol suite (also called TCP/IP, based on its first two components). This is a suite of protocols that are ordered into a set of four conceptual layers by the scope of their operation, originally documented in RFC 1122 and RFC 1123.[citation needed] The most prominent component of the Internet model is the Internet Protocol. IP enables internetworking, essentially establishing the Internet itself. Two versions of the Internet Protocol exist, IPv4 and IPv6.[citation needed] Aside from the complex array of physical connections that make up its infrastructure, the Internet is facilitated by bi- or multi-lateral commercial contracts (e.g., peering agreements), and by technical specifications or protocols that describe the exchange of data over the network.[citation needed] For locating individual computers on the network, the Internet provides IP addresses. IP addresses are used by the Internet infrastructure to direct internet packets to their destinations. They consist of fixed-length numbers, which are found within the packet. IP addresses are generally assigned to equipment either automatically via the Dynamic Host Configuration Protocol, or configured manually.[citation needed] The Domain Name System (DNS) converts user-entered domain names (e.g.
"en.wikipedia.org") into IP addresses.[citation needed] Internet Protocol version 4 (IPv4) defines an IP address as a 32-bit number. IPv4 is the initial version used on the first generation of the Internet and is still in dominant use. It was designed in 1981 to address up to ≈4.3 billion (4.3×10⁹) hosts. However, the explosive growth of the Internet has led to IPv4 address exhaustion, which entered its final stage in 2011, when the global IPv4 address allocation pool was exhausted. Because of the growth of the Internet and the depletion of available IPv4 addresses, a new version of IP, IPv6, was developed in the mid-1990s, which provides vastly larger addressing capabilities and more efficient routing of Internet traffic. IPv6 uses 128 bits for the IP address and was standardized in 1998. IPv6 deployment has been ongoing since the mid-2000s and is currently in growing deployment around the world, since Internet address registries began to urge all resource managers to plan rapid adoption and conversion. By design, IPv6 is not directly interoperable with IPv4. Instead, it establishes a parallel version of the Internet not directly accessible with IPv4 software. Thus, translation facilities exist for internetworking, and some nodes have duplicate networking software for both networks. Essentially all modern computer operating systems support both versions of the Internet Protocol.[citation needed] Network infrastructure, however, has been lagging in this development.[citation needed] A subnet or subnetwork is a logical subdivision of an IP network. Computers that belong to a subnet are addressed with an identical most-significant bit-group in their IP addresses. This results in the logical division of an IP address into two fields, the network number or routing prefix and the rest field or host identifier.
The rest field is an identifier for a specific host or network interface.[citation needed] The routing prefix may be expressed in Classless Inter-Domain Routing (CIDR) notation written as the first address of a network, followed by a slash character (/), and ending with the bit-length of the prefix. For example, 198.51.100.0/24 is the prefix of the Internet Protocol version 4 network starting at the given address, having 24 bits allocated for the network prefix, and the remaining 8 bits reserved for host addressing. Addresses in the range 198.51.100.0 to 198.51.100.255 belong to this network. The IPv6 address specification 2001:db8::/32 is a large address block with 2⁹⁶ addresses, having a 32-bit routing prefix.[citation needed] For IPv4, a network may also be characterized by its subnet mask or netmask, which is the bitmask that when applied by a bitwise AND operation to any IP address in the network, yields the routing prefix. Subnet masks are also expressed in dot-decimal notation like an address. For example, 255.255.255.0 is the subnet mask for the prefix 198.51.100.0/24.[citation needed] Computers and routers use routing tables in their operating system to forward IP packets to reach a node on a different subnetwork. Routing tables are maintained by manual configuration or automatically by routing protocols. End-nodes typically use a default route that points toward an ISP providing transit, while ISP routers use the Border Gateway Protocol to establish the most efficient routing across the complex connections of the global Internet.[citation needed] The default gateway is the node that serves as the forwarding host (router) to other networks when no other route specification matches the destination IP address of a packet. Security Internet resources, hardware, and software components are the target of criminal or malicious attempts to gain unauthorized control to cause interruptions, commit fraud, engage in blackmail or access private information.
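The CIDR prefix and netmask arithmetic described above can be checked with Python's standard ipaddress module. A small sketch using the example network from the text (the specific host addresses picked for the membership test are arbitrary):

```python
# CIDR prefix, netmask, and membership for the example network 198.51.100.0/24.
import ipaddress

net = ipaddress.ip_network("198.51.100.0/24")

print(net.netmask)        # 255.255.255.0
print(net.num_addresses)  # 256: 198.51.100.0 through 198.51.100.255

# The netmask is the bitmask that, under bitwise AND, extracts the
# routing prefix from any address in the network.
addr = int(ipaddress.ip_address("198.51.100.200"))
prefix = ipaddress.ip_address(addr & int(net.netmask))
print(prefix)                                          # 198.51.100.0
print(ipaddress.ip_address("198.51.100.200") in net)   # True
print(ipaddress.ip_address("198.51.101.1") in net)     # False
```

Routers apply exactly this AND-and-compare step, against each entry of their routing table, to decide where to forward a packet.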
Malware is malicious software used and distributed via the Internet. It includes computer viruses which are copied with the help of humans, computer worms which copy themselves automatically, software for denial of service attacks, ransomware, botnets, and spyware that reports on the activity and typing of users.[citation needed] Usually, these activities constitute cybercrime. Defense theorists have also speculated about the possibility of hackers waging cyber warfare with similar methods on a large scale. Malware poses serious problems to individuals and businesses on the Internet. According to Symantec's 2018 Internet Security Threat Report (ISTR), the number of malware variants rose to 669,947,865 in 2017, twice as many as in 2016. Cybercrime, which includes malware attacks as well as other crimes committed by computer, was predicted to cost the world economy US$6 trillion in 2021, and is increasing at a rate of 15% per year. Since 2021, malware has been designed to target computer systems that run critical infrastructure such as the electricity distribution network. Malware can be designed to evade antivirus software detection algorithms. The vast majority of computer surveillance involves the monitoring of data and traffic on the Internet. In the United States for example, under the Communications Assistance For Law Enforcement Act, all phone calls and broadband Internet traffic (emails, web traffic, instant messaging, etc.) are required to be available for unimpeded real-time monitoring by Federal law enforcement agencies. Under the Act, all U.S.
telecommunications providers are required to install packet sniffing technology to allow Federal law enforcement and intelligence agencies to intercept all of their customers' broadband Internet and VoIP traffic.[d] The large amount of data gathered from packet capture requires surveillance software that filters and reports relevant information, such as the use of certain words or phrases, the access to certain types of web sites, or communicating via email or chat with certain parties. Agencies, such as the Information Awareness Office, NSA, GCHQ and the FBI, spend billions of dollars per year to develop, purchase, implement, and operate systems for interception and analysis of data. Similar systems are operated by Iranian secret police to identify and suppress dissidents. The required hardware and software were allegedly installed by German Siemens AG and Finnish Nokia. Some governments, such as those of Myanmar, Iran, North Korea, Mainland China, Saudi Arabia and the United Arab Emirates, restrict access to content on the Internet within their territories, especially to political and religious content, with domain name and keyword filters. In Norway, Denmark, Finland, and Sweden, major Internet service providers have voluntarily agreed to restrict access to sites listed by authorities. While this list of forbidden resources is supposed to contain only known child pornography sites, the content of the list is secret. Many countries, including the United States, have enacted laws against the possession or distribution of certain material, such as child pornography, via the Internet but do not mandate filter software. 
Many free or commercially available software programs, called content-control software, are available to users to block offensive content on individual computers or networks, in order to limit access by children to pornographic material or depictions of violence.[citation needed] Performance As the Internet is a heterogeneous network, its physical characteristics, including, for example, the data transfer rates of connections, vary widely. It exhibits emergent phenomena that depend on its large-scale organization. [Chart: global Internet traffic volume in petabytes per month, 1990–2015.] The volume of Internet traffic is difficult to measure because no single point of measurement exists in the multi-tiered, non-hierarchical topology. Traffic data may be estimated from the aggregate volume through the peering points of the Tier 1 network providers, but traffic that stays local in large provider networks may not be accounted for.[citation needed] An Internet blackout or outage can be caused by local signaling interruptions. Disruptions of submarine communications cables may cause blackouts or slowdowns to large areas, such as in the 2008 submarine cable disruption. Less-developed countries are more vulnerable due to the small number of high-capacity links. Land cables are also vulnerable, as in 2011 when a woman digging for scrap metal severed most connectivity for the nation of Armenia. Internet blackouts affecting almost entire countries can be achieved by governments as a form of Internet censorship, as in the blockage of the Internet in Egypt, whereby approximately 93% of networks were without access in 2011 in an attempt to stop mobilization for anti-government protests.
Estimates of the Internet's electricity usage have been the subject of controversy: a 2014 peer-reviewed research paper found claims differing by a factor of 20,000 published in the literature during the preceding decade, ranging from 0.0064 kilowatt hours per gigabyte transferred (kWh/GB) to 136 kWh/GB. The researchers attributed these discrepancies mainly to the year of reference (i.e. whether efficiency gains over time had been taken into account) and to whether "end devices such as personal computers and servers are included" in the analysis. In 2011, academic researchers estimated the overall energy used by the Internet to be between 170 and 307 GW, less than two percent of the energy used by humanity. This estimate included the energy needed to build, operate, and periodically replace the estimated 750 million laptops, a billion smart phones and 100 million servers worldwide as well as the energy that routers, cell towers, optical switches, Wi-Fi transmitters and cloud storage devices use when transmitting Internet traffic. According to a non-peer-reviewed study published in 2018 by The Shift Project (a French think tank funded by corporate sponsors), nearly 4% of global CO2 emissions could be attributed to global data transfer and the necessary infrastructure. The study also said that online video streaming alone accounted for 60% of this data transfer and therefore contributed to over 300 million tons of CO2 emission per year, and argued for new "digital sobriety" regulations restricting the use and size of video files.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Puzzle_video_game] | [TOKENS: 1249]
Puzzle video game Puzzle video games make up a broad genre of video games that emphasize puzzle solving. The types of puzzles can test problem-solving skills, including logic, pattern recognition, sequence solving, spatial recognition, and word completion. Many puzzle games involve a real-time element and require quick thinking, such as Tetris (1985) and Lemmings (1991). History Puzzle video games owe their origins to brain teasers and puzzles throughout human history. The mathematical strategy game Nim, and other traditional thinking games such as Hangman and Bulls and Cows (commercialized as Mastermind), were popular targets for computer implementation. In Universal Entertainment's Space Panic, released in arcades in 1980, the player digs holes in platforms to trap creatures. It is a precursor to puzzle-platform games such as Lode Runner (1983), Door Door (1983), and Doki Doki Penguin Land (1985). Blockbuster, by Alan Griesemer and Stephen Bradshaw (Atari 8-bit, 1981), is a computerized version of the Rubik's Cube puzzle. Snark Hunt (Atari 8-bit, 1982) is a single-player game of logical deduction, a clone of the 1970s Black Box board game. Elements of Konami's tile-sliding Loco-Motion (1982) were later seen in Pipe Mania from LucasArts (1989). In Boulder Dash (1984), the goal is to collect diamonds while avoiding or exploiting rocks that fall after digging out the dirt beneath them. Chain Shot! (1985) introduced the mechanic of removing groups of same-colored tiles on a grid, causing the remaining tiles to fall into the gap. Uncle Henry's Nuclear Waste Dump (1986) involves dropping colored shapes into a pit, but the goal is to keep tiles of the same color from touching. Tetris (1985) revolutionized and popularized the puzzle game genre. The game was created by Soviet game designer Alexey Pajitnov for the Electronika 60. Pajitnov was inspired by a traditional puzzle game, pentominoes, in which players arrange blocks into lines without any gaps.
The game was released by Spectrum Holobyte for MS-DOS in 1987, Atari Games in arcades in 1988, and sold 30 million copies for Game Boy. In Lemmings (1991), a series of creatures walk into deadly situations, and a player assigns jobs to specific lemmings to guide the swarm to a safe destination. The 1994 MS-DOS game Shariki, by Eugene Alemzhin, introduced the mechanic of swapping adjacent elements to tile matching games. It was little known at the time, but later had a major influence on the genre. Interest in Mahjong video games from Japan began to grow in 1994. In 2000, PopCap Games released Bejeweled, a direct clone of the 1994 tile-matching game Shariki with improved visuals. It sparked interest in the match-three mechanic which became the foundation for other popular games, including Puzzle Quest: Challenge of the Warlords (2007), Candy Crush Saga (2012), and Puzzle & Dragons (2012). More recently, Block Blast (2020s) exemplifies the continued evolution of the match-three and block-puzzle genre, alongside the emergence of AI-based solvers and analytical tools capable of automatically solving and optimizing gameplay in such puzzle games. After the release of Portal in 2007, there has been a rise in popularity of physics-based logic puzzle games. Sub-genres A physics game is a type of logical puzzle video game wherein the player must use the game's physics and environment to complete each puzzle. Physics games use consistent physics to make games more challenging. The genre is popular in online flash games and mobile games. Educators have used these games to demonstrate principles of physics. Physics-based logic puzzle games include The Incredible Machine, Portal, The Talos Principle, Braid, Fez, World of Goo, and Cut the Rope, as well as projectile collision games such as Angry Birds, Peggle, Monster Strike, and Crush the Castle. Programming games require writing code, either as text or using a visual system, to solve puzzles. 
Examples include Rocky's Boots (1982), Robot Odyssey (1984), SpaceChem (2011), and Infinifactory (2015). Exploration puzzle games include point-and-click games that often overlap with adventure games and walking simulators. Unlike logical puzzle games, these games generally require inductive reasoning to solve. The defining trait is that the player must experiment with mechanisms in each level before they can solve them. Exploration games include Myst, Limbo, and The Dig. Escape room games such as The Room involve detailed exploration of a single location. Sokoban games, named after the 1982 title Sokoban and also called block-pushing games, involve pushing or pulling blocks on a grid-like space to move them into designated positions without blocking the movement of other blocks. Similar games include Baba Is You and Patrick's Parabox. A hidden object game, sometimes called hidden picture or hidden object puzzle adventure (HOPA), is a genre of puzzle video game in which the player must find items from a list that are hidden within a scene. Hidden object games are a popular trend in casual gaming. In tile-matching video games, the player manipulates tiles in order to make them disappear according to a matching criterion. The genre began with 1985's Chain Shot! and has similarities to falling-block games such as Tetris. This genre includes games that require pieces to be swapped such as Bejeweled or Candy Crush Saga, games that adapt the classic tile-based game Mahjong such as Mahjong Trails, and games in which pieces are shot on the board such as Zuma. Puzzle games based on Tetris include tile-matching games where the matching criterion is to place a given number of tiles of the same type so that they adjoin each other. That number is often three, and the corresponding subset of tile-matching games is referred to as match-three games.
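The match-three criterion described above can be sketched in a few lines of Python. This is an illustrative toy, not the algorithm of any particular game: it only scans rows for horizontal runs of three or more equal tiles (a real game would also scan columns, remove matches, and let tiles fall).

```python
# Toy match-three scan: find horizontal runs of 3+ equal tiles in a grid.
def find_horizontal_matches(grid):
    """Return the set of (row, col) positions in runs of 3 or more equal tiles."""
    matched = set()
    for r, row in enumerate(grid):
        run_start = 0
        for c in range(1, len(row) + 1):
            # A run ends at the row boundary or where the tile value changes.
            if c == len(row) or row[c] != row[run_start]:
                if c - run_start >= 3:
                    matched.update((r, k) for k in range(run_start, c))
                run_start = c
    return matched

board = [
    ["R", "G", "G", "G", "B"],
    ["B", "R", "G", "B", "B"],
]
print(sorted(find_horizontal_matches(board)))  # [(0, 1), (0, 2), (0, 3)]
```

The same scan, run after every swap, is what distinguishes a legal move (one that creates a match) from an illegal one in swap-based games of this genre.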
========================================
[SOURCE: https://en.wikipedia.org/wiki/Oxyanion] | [TOKENS: 2270]
Oxyanion An oxyanion, or oxoanion, is an ion with the generic formula A_xO_y^z− (where A represents a chemical element and O represents an oxygen atom). Oxyanions are formed by a large majority of the chemical elements. The corresponding oxyacid of an oxyanion is the compound H_zA_xO_y. The structures of condensed oxyanions can be rationalized in terms of AO_n polyhedral units with sharing of corners or edges between polyhedra. The oxyanions (specifically, phosphate and polyphosphate esters) adenosine monophosphate (AMP), adenosine diphosphate (ADP) and adenosine triphosphate (ATP) are important in biology. Monomeric oxyanions The formula of monomeric oxyanions, AO_n^m−, is dictated by the oxidation state of the element A and its position in the periodic table. Elements of the first row are limited to a maximum coordination number of 4. However, none of the first row elements has a monomeric oxyanion with that coordination number. Instead, carbonate (CO₃²⁻) and nitrate (NO₃⁻) have a trigonal planar structure with π bonding between the central atom and the oxygen atoms. This π bonding is favoured by the similarity in size of the central atom and oxygen. The oxyanions of second-row elements in the group oxidation state are tetrahedral. Tetrahedral SiO₄ units are found in olivine minerals, (Mg,Fe)₂SiO₄, but the anion does not have a separate existence as the oxygen atoms are surrounded tetrahedrally by cations in the solid state. Phosphate (PO₄³⁻), sulfate (SO₄²⁻), and perchlorate (ClO₄⁻) ions can be found as such in various salts. Many oxyanions of elements in lower oxidation states obey the octet rule and this can be used to rationalize the formulae adopted. For example, chlorine(V) has two valence electrons so it can accommodate three electron pairs from bonds with oxide ions. The charge on the ion is +5 − 3 × 2 = −1, and so the formula is ClO₃⁻.
The structure of the ion is predicted by VSEPR theory to be pyramidal, with three bonding electron pairs and one lone pair. In a similar way, the oxyanion of chlorine(III) has the formula ClO₂⁻, and is bent with two lone pairs and two bonding pairs. In the third and subsequent rows of the periodic table, 6-coordination is possible, but isolated octahedral oxyanions are not known because they would carry an electrical charge that is too high and undergo hydrolysis. Thus molybdenum(VI) does not form MoO₆⁶⁻, but forms the tetrahedral molybdate anion, MoO₄²⁻. MoO₆ units are found in condensed molybdates. Fully protonated oxyanions with an octahedral structure are found in such species as Sn(OH)₆²⁻ and Sb(OH)₆⁻. In addition, orthoperiodate can be only partially deprotonated.[Note 1] The naming of monomeric oxyanions follows rules in which the halogen group (group 7A, 17) is referred to as group VII and the noble gases group (group 8A) is referred to as group VIII. Condensation reactions In aqueous solution, oxyanions with high charge can undergo condensation reactions, such as in the formation of the dichromate ion, Cr₂O₇²⁻: 2 CrO₄²⁻ + 2 H₃O⁺ ⇌ Cr₂O₇²⁻ + 3 H₂O. The driving force for this reaction is the reduction of electrical charge density on the anion and the elimination of the hydronium (H₃O⁺) ion. The amount of order in the solution is decreased, releasing a certain amount of entropy which makes the Gibbs free energy more negative and favors the forward reaction. It is an example of an acid–base reaction with the monomeric oxyanion acting as a base and the condensed oxyanion acting as its conjugate acid. The reverse reaction is a hydrolysis reaction, as a water molecule, acting as a base, is split. Further condensation may occur, particularly with anions of higher charge, as occurs with adenosine phosphates. The conversion of ATP to ADP is a hydrolysis reaction and is an important source of energy in biological systems.
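The octet-rule charge arithmetic used above (e.g. +5 − 3 × 2 = −1 for chlorate) generalizes to any monomeric oxyanion: the overall charge is the oxidation state of A plus −2 for each oxide ion. A tiny sketch; the function name and the worked examples are ours, for illustration only:

```python
# Charge of a monomeric oxyanion AO_n: oxidation state of A plus -2 per oxygen.
def oxyanion_charge(oxidation_state, n_oxygens):
    return oxidation_state - 2 * n_oxygens

print(oxyanion_charge(+5, 3))  # chlorine(V),   ClO3(-):  -1
print(oxyanion_charge(+3, 2))  # chlorine(III), ClO2(-):  -1
print(oxyanion_charge(+6, 4))  # sulfur(VI),    SO4(2-):  -2
print(oxyanion_charge(+5, 4))  # phosphorus(V), PO4(3-):  -3
```

Running the molybdenum case, oxyanion_charge(+6, 6) gives −6, illustrating why an isolated octahedral MoO₆⁶⁻ ion would carry an impractically high charge.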
The formation of most silicate minerals can be viewed as the result of a de-condensation reaction in which silica reacts with a basic oxide, an acid–base reaction in the Lux–Flood sense. Structures and formulae of polyoxyanions A polyoxyanion is a polymeric oxyanion in which multiple oxyanion monomers, usually regarded as MOₙ polyhedra, are joined by sharing corners or edges. When two corners of a polyhedron are shared the resulting structure may be a chain or a ring. Short chains occur, for example, in polyphosphates. Inosilicates, such as pyroxenes, have a long chain of SiO₄ tetrahedra each sharing two corners. The same structure occurs in so-called meta-vanadates, such as ammonium metavanadate, NH₄VO₃. The formula of the oxyanion SiO₃²⁻ is obtained as follows: each nominal silicon ion (Si⁴⁺) is attached to two nominal oxide ions (O²⁻) and has a half share in two others, so the stoichiometry is Si : O = 1 : (2 + 2 × ½) = 1 : 3 and the charge is +4 − 3 × 2 = −2. A ring can be viewed as a chain in which the two ends have been joined. Cyclic triphosphate, P₃O₉³⁻, is an example. When three corners are shared the structure extends into two dimensions. In amphiboles (of which asbestos is an example), two chains are linked together by sharing of a third corner at alternate places along the chain. This results in an ideal formula Si₄O₁₁⁶⁻ and a linear chain structure which explains the fibrous nature of these minerals. Sharing of all three corners can result in a sheet structure, as in mica, Si₂O₅²⁻, in which each silicon has one oxygen to itself and a half-share in three others. Crystalline mica can be cleaved into very thin sheets. The sharing of all four corners of the tetrahedra results in a 3-dimensional structure, such as in quartz. Aluminosilicates are minerals in which some silicon is replaced by aluminium. However, the oxidation state of aluminium is one less than that of silicon, so the replacement must be accompanied by the addition of another cation.
The number of possible combinations of such a structure is very large, which is, in part, the reason why there are so many aluminosilicates. Octahedral MO₆ units are common in oxyanions of the larger transition metals. Some compounds, such as salts of the chain-polymeric ion Mo₂O₇²⁻, even contain both tetrahedral and octahedral units. Edge-sharing is common in ions containing octahedral building blocks and the octahedra are usually distorted to reduce the strain at the bridging oxygen atoms. This results in 3-dimensional structures called polyoxometalates. Typical examples occur in the Keggin structure of the phosphomolybdate ion. Edge sharing is an effective means of reducing electrical charge density, as can be seen with a hypothetical condensation reaction involving two octahedra, in which the average charge on each M atom is reduced by 2. The efficacy of edge-sharing is demonstrated by the reaction that occurs when an alkaline aqueous solution of molybdate is acidified: 7 MoO₄²⁻ + 8 H⁺ → Mo₇O₂₄⁶⁻ + 4 H₂O. The tetrahedral molybdate ion is converted into a cluster of 7 edge-linked octahedra, giving an average charge on each molybdenum of 6⁄7. The heptamolybdate cluster is so stable that clusters with between 2 and 6 molybdate units have not been detected even though they must be formed as intermediates. Heuristic for acidity The pKa of the related acids can be guessed from the number of double bonds to oxygen. Thus perchloric acid is a very strong acid while hypochlorous acid is very weak. A simple rule usually works to within about 1 pH unit. Acid–base properties Most oxyanions are weak bases and can be protonated to give acids or acid salts. For example, the phosphate ion can be successively protonated to form phosphoric acid. The extent of protonation in aqueous solution will depend on the acid dissociation constants and pH. For example, AMP (adenosine monophosphate) has a pKa value of 6.21, so at pH 7 it will be about 10% protonated.
Charge neutralization is an important factor in these protonation reactions. By contrast, the univalent perchlorate and permanganate anions are very difficult to protonate and so the corresponding acids are strong acids. Although acids such as phosphoric acid are written as H₃PO₄, the protons are attached to oxygen atoms forming hydroxyl groups, so the formula can also be written as OP(OH)₃ to better reflect the structure. Sulfuric acid may be written as O₂S(OH)₂; this is the molecule observed in the gas phase. The phosphite ion, PO₃³⁻, is a strong base, and so always carries at least one proton. In this case the proton is attached directly to the phosphorus atom with the structure HPO₃²⁻. In forming this ion, the phosphite ion is behaving as a Lewis base and donating a pair of electrons to the Lewis acid, H⁺. As mentioned above, a condensation reaction is also an acid–base reaction. In many systems, both protonation and condensation reactions can occur. The case of the chromate ion provides a relatively simple example. In the predominance diagram for chromate, shown at the right, pCr stands for the negative logarithm of the chromium concentration and pH stands for the negative logarithm of the H⁺ ion concentration. There are two independent equilibria: protonation of chromate (CrO₄²⁻ + H⁺ ⇌ HCrO₄⁻) and condensation of the protonated ion to dichromate (2 HCrO₄⁻ ⇌ Cr₂O₇²⁻ + H₂O). The species H₂CrO₄ and HCr₂O₇⁻ are not shown in the diagram as they are formed only at very low pH. Predominance diagrams can become very complicated when many polymeric species can be formed, such as in vanadates, molybdates, and tungstates. Another complication is that many of the higher polymers are formed extremely slowly, such that equilibrium may not be attained even in months, leading to possible errors in the equilibrium constants and the predominance diagram.
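The acidity heuristic mentioned under "Heuristic for acidity" is commonly stated as Pauling's rule: for an oxyacid OₙE(OH)ₘ, pKa ≈ 8 − 5n, where n is the number of terminal (non-hydroxyl) oxygen atoms. A sketch of the rule, with the chlorine oxyacids from the text as examples; treat the outputs as rough estimates only, good to roughly one pH unit:

```python
# Pauling's rule of thumb for oxyacids O_n E (OH)_m: pKa ~ 8 - 5n,
# where n is the number of oxygen atoms not bonded to hydrogen.
def estimated_pka(n_terminal_oxygens):
    return 8 - 5 * n_terminal_oxygens

print(estimated_pka(0))  # HOCl,  hypochlorous acid: ~8   (very weak)
print(estimated_pka(1))  # HClO2, chlorous acid:     ~3
print(estimated_pka(2))  # HClO3, chloric acid:      ~-2  (strong)
print(estimated_pka(3))  # HClO4, perchloric acid:   ~-7  (very strong)
```

The estimates reproduce the trend stated in the text: each additional terminal oxygen withdraws electron density from the O–H bonds, making the acid markedly stronger.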
========================================
[SOURCE: https://en.wikipedia.org/wiki/Social_network#cite_ref-GaudeulGiannetti2013_83-0] | [TOKENS: 5247]
Contents Social network A social network is a social structure consisting of a set of social actors (such as individuals or organizations), networks of dyadic ties, and other social interactions between actors. The social network perspective provides a set of methods for analyzing the structure of whole social entities along with a variety of theories explaining the patterns observed in these structures. The study of these structures uses social network analysis to identify local and global patterns, locate influential entities, and examine dynamics of networks. For instance, social network analysis has been used in studying the spread of misinformation on social media platforms or analyzing the influence of key figures in social networks. Social networks and their analysis form an inherently interdisciplinary academic field which emerged from social psychology, sociology, statistics, and graph theory. Georg Simmel authored early structural theories in sociology emphasizing the dynamics of triads and "web of group affiliations". Jacob Moreno is credited with developing the first sociograms in the 1930s to study interpersonal relationships. These approaches were mathematically formalized in the 1950s and theories and methods of social networks became pervasive in the social and behavioral sciences by the 1980s. Social network analysis is now one of the major paradigms in contemporary sociology, and is also employed in a number of other social and formal sciences. Together with other complex networks, it forms part of the nascent field of network science. Overview The social network is a theoretical construct useful in the social sciences to study relationships between individuals, groups, organizations, or even entire societies (social units, see differentiation). 
The term is used to describe a social structure determined by such interactions. The ties through which any given social unit connects represent the convergence of the various social contacts of that unit. This theoretical approach is, necessarily, relational. An axiom of the social network approach to understanding social interaction is that social phenomena should be primarily conceived and investigated through the properties of relations between and within units, instead of the properties of these units themselves. Thus, one common criticism of social network theory is that individual agency is often ignored although this may not be the case in practice (see agent-based modeling). Precisely because many different types of relations, singular or in combination, form these network configurations, network analytics are useful to a broad range of research enterprises. In social science, these fields of study include, but are not limited to anthropology, biology, communication studies, economics, geography, information science, organizational studies, social psychology, sociology, and sociolinguistics. History In the late 1890s, both Émile Durkheim and Ferdinand Tönnies foreshadowed the idea of social networks in their theories and research of social groups. Tönnies argued that social groups can exist as personal and direct social ties that either link individuals who share values and belief (Gemeinschaft, German, commonly translated as "community") or impersonal, formal, and instrumental social links (Gesellschaft, German, commonly translated as "society"). Durkheim gave a non-individualistic explanation of social facts, arguing that social phenomena arise when interacting individuals constitute a reality that can no longer be accounted for in terms of the properties of individual actors. 
Georg Simmel, writing at the turn of the twentieth century, pointed to the nature of networks and the effect of network size on interaction and examined the likelihood of interaction in loosely knit networks rather than groups. Major developments in the field can be seen in the 1930s by several groups in psychology, anthropology, and mathematics working independently. In psychology, in the 1930s, Jacob L. Moreno began systematic recording and analysis of social interaction in small groups, especially classrooms and work groups (see sociometry). In anthropology, the foundation for social network theory is the theoretical and ethnographic work of Bronislaw Malinowski, Alfred Radcliffe-Brown, and Claude Lévi-Strauss. A group of social anthropologists associated with Max Gluckman and the Manchester School, including John A. Barnes, J. Clyde Mitchell and Elizabeth Bott Spillius, often are credited with performing some of the first fieldwork from which network analyses were performed, investigating community networks in southern Africa, India and the United Kingdom. Concomitantly, British anthropologist S. F. Nadel codified a theory of social structure that was influential in later network analysis. In sociology, the early (1930s) work of Talcott Parsons set the stage for taking a relational approach to understanding social structure. Later, drawing upon Parsons' theory, the work of sociologist Peter Blau provides a strong impetus for analyzing the relational ties of social units with his work on social exchange theory. By the 1970s, a growing number of scholars worked to combine the different tracks and traditions. One group consisted of sociologist Harrison White and his students at the Harvard University Department of Social Relations. 
Also independently active in the Harvard Social Relations department at the time were Charles Tilly, who focused on networks in political and community sociology and social movements, and Stanley Milgram, who developed the "six degrees of separation" thesis. Mark Granovetter and Barry Wellman are among the former students of White who elaborated and championed the analysis of social networks. Beginning in the late 1990s, sociologists, political scientists, and physicists such as Duncan J. Watts, Albert-László Barabási, Peter Bearman, Nicholas A. Christakis, James H. Fowler, and others advanced social network analysis, developing and applying new models and methods to emerging data available about online social networks, as well as "digital traces" regarding face-to-face networks. Levels of analysis In general, social networks are self-organizing, emergent, and complex, such that a globally coherent pattern appears from the local interaction of the elements that make up the system. These patterns become more apparent as network size increases. However, a global network analysis of, for example, all interpersonal relationships in the world is not feasible and is likely to contain so much information as to be uninformative. Practical limitations of computing power, ethics and participant recruitment and payment also limit the scope of a social network analysis. The nuances of a local system may be lost in a large network analysis, hence the quality of information may be more important than its scale for understanding network properties. Thus, social networks are analyzed at the scale relevant to the researcher's theoretical question. Although levels of analysis are not necessarily mutually exclusive, there are three general levels into which networks may fall: micro-level, meso-level, and macro-level. 
At the micro-level, social network research typically begins with an individual, snowballing as social relationships are traced, or may begin with a small group of individuals in a particular social context. Dyadic level: A dyad is a social relationship between two individuals. Network research on dyads may concentrate on structure of the relationship (e.g. multiplexity, strength), social equality, and tendencies toward reciprocity/mutuality. Triadic level: Add one individual to a dyad, and you have a triad. Research at this level may concentrate on factors such as balance and transitivity, as well as social equality and tendencies toward reciprocity/mutuality. In the balance theory of Fritz Heider the triad is the key to social dynamics. The discord in a rivalrous love triangle is an example of an unbalanced triad, likely to change to a balanced triad by a change in one of the relations. The dynamics of social friendships in society have been modeled by balancing triads. The study is carried forward with the theory of signed graphs. Actor level: The smallest unit of analysis in a social network is an individual in their social setting, i.e., an "actor" or "ego." Ego-network analysis focuses on network characteristics, such as size, relationship strength, density, centrality, prestige and roles such as isolates, liaisons, and bridges. Such analyses are most commonly used in the fields of psychology or social psychology, ethnographic kinship analysis or other genealogical studies of relationships between individuals. Subset level: Subset levels of network research problems begin at the micro-level, but may cross over into the meso-level of analysis. Subset level research may focus on distance and reachability, cliques, cohesive subgroups, or other group actions or behavior. In general, meso-level theories begin with a population size that falls between the micro- and macro-levels. 
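The ego-network density mentioned at the actor level is the fraction of possible ties among an ego's alters that are actually present. A minimal sketch, with invented alters and ties purely for illustration:

```python
# Ego-network density: ties actually present among an ego's alters,
# divided by the number of possible alter-alter ties.

from itertools import combinations

def ego_density(alters, ties):
    """alters: iterable of alter names;
    ties: set of frozensets, each an undirected tie between two alters."""
    alters = list(alters)
    possible = len(alters) * (len(alters) - 1) // 2
    if possible == 0:
        return 0.0
    present = sum(1 for pair in combinations(alters, 2)
                  if frozenset(pair) in ties)
    return present / possible

# Hypothetical ego with four alters; only two alter-alter ties exist.
alters = ["ana", "ben", "caro", "dev"]
ties = {frozenset(p) for p in [("ana", "ben"), ("caro", "dev")]}
print(ego_density(alters, ties))  # 2 ties out of 6 possible
```

A density near 1 indicates a tightly knit ego network; a density near 0 indicates that the ego bridges otherwise unconnected contacts.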
However, meso-level may also refer to analyses that are specifically designed to reveal connections between micro- and macro-levels. Meso-level networks are low density and may exhibit causal processes distinct from interpersonal micro-level networks. Organizations: Formal organizations are social groups that distribute tasks for a collective goal. Network research on organizations may focus on either intra-organizational or inter-organizational ties in terms of formal or informal relationships. Intra-organizational networks themselves often contain multiple levels of analysis, especially in larger organizations with multiple branches, franchises or semi-autonomous departments. In these cases, research is often conducted at a work group level and organization level, focusing on the interplay between the two structures. Experiments with networked groups online have documented ways to optimize group-level coordination through diverse interventions, including the addition of autonomous agents to the groups. Randomly distributed networks: Exponential random graph models of social networks became state-of-the-art methods of social network analysis in the 1980s. This framework has the capacity to represent social-structural effects commonly observed in many human social networks, including general degree-based structural effects as well as reciprocity and transitivity, and, at the node level, homophily and attribute-based activity and popularity effects, as derived from explicit hypotheses about dependencies among network ties. Parameters are given in terms of the prevalence of small subgraph configurations in the network and can be interpreted as describing the combinations of local social processes from which a given network emerges. 
These probability models for networks on a given set of actors allow generalization beyond the restrictive dyadic independence assumption of micro-networks, allowing models to be built from theoretical structural foundations of social behavior. Scale-free networks: A scale-free network is a network whose degree distribution follows a power law, at least asymptotically. In network theory a scale-free ideal network is a random network with a degree distribution that unravels the size distribution of social groups. Specific characteristics of scale-free networks vary with the theories and analytical tools used to create them; in general, however, scale-free networks have some common characteristics. One notable characteristic in a scale-free network is the relative commonness of vertices with a degree that greatly exceeds the average. The highest-degree nodes are often called "hubs", and may serve specific purposes in their networks, although this depends greatly on the social context. Another general characteristic of scale-free networks is the clustering coefficient distribution, which decreases as the node degree increases. This distribution also follows a power law. The Barabási model of network evolution is an example of a scale-free network. Rather than tracing interpersonal interactions, macro-level analyses generally trace the outcomes of interactions, such as economic or other resource transfer interactions over a large population. Large-scale networks: Large-scale network is a term somewhat synonymous with "macro-level." It is primarily used in social and behavioral sciences, and in economics. Originally, the term was used extensively in the computer sciences (see large-scale network mapping). 
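The Barabási model mentioned above grows a scale-free network by preferential attachment: each new node links to m existing nodes with probability proportional to their current degree. A minimal sketch of the generic process (not code from any cited study; the parameters are illustrative):

```python
# Preferential attachment in the Barabási-Albert style: sampling
# uniformly from an "urn" that contains each node label once per unit
# of degree gives degree-proportional target selection.

import random

def barabasi_albert(n, m, seed=None):
    """Grow an n-node graph; each new node attaches to up to m existing nodes."""
    rng = random.Random(seed)
    edges = []
    urn = []                      # node label repeated once per unit of degree
    targets = list(range(m))      # the first new node links to the seed core
    for new in range(m, n):
        for t in set(targets):    # deduplicate repeated picks
            edges.append((new, t))
            urn.extend([new, t])
        targets = [rng.choice(urn) for _ in range(m)]
    return edges

edges = barabasi_albert(200, 2, seed=1)
degree = {}
for u, v in edges:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1
# The hallmark of the model: a few hubs far above the mean degree.
print("max degree:", max(degree.values()),
      "mean degree:", round(sum(degree.values()) / len(degree), 2))
```

Early nodes accumulate degree and are therefore ever more likely to be chosen again, which produces the heavy-tailed degree distribution and the "hubs" described in the text.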
Complex networks: Most larger social networks display features of social complexity, which involves substantial non-trivial features of network topology, with patterns of complex connections between elements that are neither purely regular nor purely random (see complexity science, dynamical systems, and chaos theory), as do biological and technological networks. Such complex network features include a heavy tail in the degree distribution, a high clustering coefficient, assortativity or disassortativity among vertices, community structure (see stochastic block model), and hierarchical structure. In the case of agency-directed networks these features also include reciprocity, triad significance profile (TSP, see network motif), and other features. In contrast, many of the mathematical models of networks that have been studied in the past, such as lattices and random graphs, do not show these features. Theoretical links Various theoretical frameworks have been imported for the use of social network analysis. The most prominent of these are graph theory, balance theory, social comparison theory, and, more recently, the social identity approach. Few complete theories have been produced from social network analysis. Two that have are structural role theory and heterophily theory. The basis of heterophily theory was the finding in one study that more numerous weak ties can be important in seeking information and innovation, as cliques have a tendency to have more homogeneous opinions as well as share many common traits. This homophilic tendency was the reason for the members of the cliques to be attracted together in the first place. However, being similar, each member of the clique would also know more or less what the other members knew. To find new information or insights, members of the clique will have to look beyond the clique to its other friends and acquaintances. This is what Granovetter called "the strength of weak ties". 
Structural holes In the context of networks, social capital exists where people have an advantage because of their location in a network. Contacts in a network provide information, opportunities and perspectives that can be beneficial to the central player in the network. Most social structures tend to be characterized by dense clusters of strong connections. Information within these clusters tends to be rather homogeneous and redundant. Non-redundant information is most often obtained through contacts in different clusters. When two separate clusters possess non-redundant information, there is said to be a structural hole between them. Thus, a network that bridges structural holes will provide network benefits that are in some degree additive, rather than overlapping. An ideal network structure has a vine and cluster structure, providing access to many different clusters and structural holes. Networks rich in structural holes are a form of social capital in that they offer information benefits. The main player in a network that bridges structural holes is able to access information from diverse sources and clusters. For example, in business networks, this is beneficial to an individual's career because he is more likely to hear of job openings and opportunities if his network spans a wide range of contacts in different industries/sectors. This concept is similar to Mark Granovetter's theory of weak ties, which rests on the basis that having a broad range of contacts is most effective for job attainment. Structural holes have been widely applied in social network analysis, resulting in applications in a wide range of practical scenarios as well as machine learning-based social prediction. Research clusters Research has used network analysis to examine networks created when artists are exhibited together in museum exhibition. 
Such networks have been shown to affect an artist's recognition in history and historical narratives, even when controlling for individual accomplishments of the artist. Other work examines how network grouping of artists can affect an individual artist's auction performance. An artist's status has been shown to increase when associated with higher status networks, though this association has diminishing returns over an artist's career. In J.A. Barnes' day, a "community" referred to a specific geographic location and studies of community ties had to do with who talked, associated, traded, and attended church with whom. Today, however, there are extended "online" communities developed through telecommunications devices and social network services. Such devices and services require extensive and ongoing maintenance and analysis, often using network science methods. Community development studies, today, also make extensive use of such methods. Complex networks require methods specific to modelling and interpreting social complexity and complex adaptive systems, including techniques of dynamic network analysis. Mechanisms such as Dual-phase evolution explain how temporal changes in connectivity contribute to the formation of structure in social networks. The study of social networks is being used to examine the nature of interdependencies between actors and the ways in which these are related to outcomes of conflict and cooperation. Areas of study include cooperative behavior among participants in collective actions such as protests; promotion of peaceful behavior, social norms, and public goods within communities through networks of informal governance; the role of social networks in both intrastate conflict and interstate conflict; and social networking among politicians, constituents, and bureaucrats. In criminology and urban sociology, much attention has been paid to the social networks among criminal actors. 
For example, murders can be seen as a series of exchanges between gangs. Murders can be seen to diffuse outwards from a single source, because weaker gangs cannot afford to kill members of stronger gangs in retaliation, but must commit other violent acts to maintain their reputation for strength. Diffusion of ideas and innovations studies focus on the spread and use of ideas from one actor to another or from one culture to another. This line of research seeks to explain why some become "early adopters" of ideas and innovations, and links social network structure with facilitating or impeding the spread of an innovation. A case in point is the social diffusion of linguistic innovation such as neologisms. Experiments and large-scale field trials (e.g., by Nicholas Christakis and collaborators) have shown that cascades of desirable behaviors can be induced in social groups, in settings as diverse as Honduran villages, Indian slums, and the laboratory. Still other experiments have documented the experimental induction of social contagion of voting behavior, emotions, risk perception, and commercial products. In demography, the study of social networks has led to new sampling methods for estimating and reaching populations that are hard to enumerate (for example, homeless people or intravenous drug users). For example, respondent-driven sampling is a network-based sampling technique that relies on respondents to a survey recommending further respondents. The field of sociology focuses almost entirely on networks of outcomes of social interactions. More narrowly, economic sociology considers behavioral interactions of individuals and groups through social capital and social "markets". Sociologists, such as Mark Granovetter, have developed core principles about the interactions of social structure, information, ability to punish or reward, and trust that frequently recur in their analyses of political, economic and other institutions. 
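Respondent-driven sampling, mentioned above, can be sketched as a coupon-limited snowball over a contact network: seeds are sampled first, and each sampled person recruits up to a fixed number of their contacts. This is a simplified toy version, with an invented contact network:

```python
# A toy respondent-driven sampling pass: each sampled person hands out
# a limited number of "coupons" to contacts not yet in the sample.

import random
from collections import deque

def rds_sample(contacts, seeds, coupons=3, max_size=100, seed=None):
    """contacts: dict mapping person -> list of people they know."""
    rng = random.Random(seed)
    sampled, queue = set(seeds), deque(seeds)
    while queue and len(sampled) < max_size:
        person = queue.popleft()
        fresh = [p for p in contacts.get(person, []) if p not in sampled]
        for recruit in rng.sample(fresh, min(coupons, len(fresh))):
            sampled.add(recruit)
            queue.append(recruit)
    return sampled

# Hypothetical hidden-population contact network (names are invented).
network = {
    "s1": ["a", "b", "c"], "a": ["b", "d"], "b": ["e"],
    "c": [], "d": ["f"], "e": ["f"], "f": [],
}
result = rds_sample(network, ["s1"], coupons=2, seed=0)
print(sorted(result))
```

Real respondent-driven sampling additionally reweights the sample by each respondent's network degree to correct for the bias toward well-connected individuals; that estimation step is omitted here.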
Granovetter examines how social structures and social networks can affect economic outcomes like hiring, price, productivity and innovation and describes sociologists' contributions to analyzing the impact of social structure and networks on the economy. Analysis of social networks is increasingly incorporated into health care analytics, not only in epidemiological studies but also in models of patient communication and education, disease prevention, mental health diagnosis and treatment, and in the study of health care organizations and systems. Human ecology is an interdisciplinary and transdisciplinary study of the relationship between humans and their natural, social, and built environments. The scientific philosophy of human ecology has a diffuse history with connections to geography, sociology, psychology, anthropology, zoology, and natural ecology. In the study of literary systems, network analysis has been applied by Anheier, Gerhards and Romo, De Nooy, Senekal, and Lotker, to study various aspects of how literature functions. The basic premise is that polysystem theory, which has been around since the writings of Even-Zohar, can be integrated with network theory and the relationships between different actors in the literary network, e.g. writers, critics, publishers, literary histories, etc., can be mapped using visualization from SNA. Research in this area studies formal and informal organizational relationships, organizational communication, economics, economic sociology, and other resource transfers. Social networks have also been used to examine how organizations interact with each other, characterizing the many informal connections that link executives together, as well as associations and connections between individual employees at different organizations. Many organizational social network studies focus on teams. 
Within team network studies, research assesses, for example, the predictors and outcomes of centrality and power, density and centralization of team instrumental and expressive ties, and the role of between-team networks. Intra-organizational networks have been found to affect organizational commitment, organizational identification, and interpersonal citizenship behaviour. Social capital is a form of economic and cultural capital in which social networks are central, transactions are marked by reciprocity, trust, and cooperation, and market agents produce goods and services not mainly for themselves, but for a common good. Social capital is split into three dimensions: the structural, the relational and the cognitive dimension. The structural dimension describes how partners interact with each other and which specific partners meet in a social network. Also, the structural dimension of social capital indicates the level of ties among organizations. This dimension is highly connected to the relational dimension which refers to trustworthiness, norms, expectations and identifications of the bonds between partners. The relational dimension explains the nature of these ties which is mainly illustrated by the level of trust accorded to the network of organizations. The cognitive dimension analyses the extent to which organizations share common goals and objectives as a result of their ties and interactions. Social capital is a sociological concept about the value of social relations and the role of cooperation and confidence to achieve positive outcomes. The term refers to the value one can get from their social ties. For example, newly arrived immigrants can make use of their social ties to established migrants to acquire jobs they may otherwise have trouble getting (e.g., because of unfamiliarity with the local language). A positive relationship exists between social capital and the intensity of social network use. 
In a dynamic framework, higher activity in a network feeds into higher social capital which itself encourages more activity. This particular cluster focuses on brand-image and promotional strategy effectiveness, taking into account the impact of customer participation on sales and brand-image. This is gauged through techniques such as sentiment analysis which rely on mathematical areas of study such as data mining and analytics. This area of research produces vast numbers of commercial applications as the main goal of any study is to understand consumer behaviour and drive sales. In many organizations, members tend to focus their activities inside their own groups, which stifles creativity and restricts opportunities. A player whose network bridges structural holes has an advantage in detecting and developing rewarding opportunities. Such a player can mobilize social capital by acting as a "broker" of information between two clusters that otherwise would not have been in contact, thus providing access to new ideas, opinions and opportunities. British philosopher and political economist John Stuart Mill writes, "it is hardly possible to overrate the value of placing human beings in contact with persons dissimilar to themselves.... Such communication [is] one of the primary sources of progress." Thus, a player with a network rich in structural holes can add value to an organization through new ideas and opportunities. This, in turn, helps an individual's career development and advancement. A social capital broker also reaps control benefits of being the facilitator of information flow between contacts. Full communication with exploratory mindsets and information exchange generated by dynamically alternating positions in a social network promotes creative and deep thinking. 
In the case of consulting firm Eden McCallum, the founders were able to advance their careers by bridging their connections with former big three consulting firm consultants and mid-size industry firms. By bridging structural holes and mobilizing social capital, players can advance their careers by executing new opportunities between contacts. There has been research that both substantiates and refutes the benefits of information brokerage. A study of high tech Chinese firms by Zhixing Xiao found that the control benefits of structural holes are "dissonant to the dominant firm-wide spirit of cooperation and the information benefits cannot materialize due to the communal sharing values" of such organizations. However, this study only analyzed Chinese firms, which tend to have strong communal sharing values. Information and control benefits of structural holes are still valuable in firms that are not quite as inclusive and cooperative on the firm-wide level. In 2004, Ronald Burt studied 673 managers who ran the supply chain for one of America's largest electronics companies. He found that managers who often discussed issues with other groups were better paid, received more positive job evaluations and were more likely to be promoted. Thus, bridging structural holes can be beneficial to an organization, and in turn, to an individual's career. Computer networks combined with social networking software produce a new medium for social interaction. A relationship over a computerized social networking service can be characterized by context, direction, and strength. The content of a relation refers to the resource that is exchanged. In a computer-mediated communication context, social pairs exchange different kinds of information, including sending a data file or a computer program as well as providing emotional support or arranging a meeting. 
With the rise of electronic commerce, information exchanged may also correspond to exchanges of money, goods or services in the "real" world. Social network analysis methods have become essential to examining these types of computer mediated communication. In addition, the sheer size and the volatile nature of social media has given rise to new network metrics. A key concern with networks extracted from social media is the lack of robustness of network metrics given missing data. Under the pattern of homophily, ties are most likely to form between nodes that are similar to each other; likewise, under neighbourhood segregation, individuals are most likely to inhabit the same regional areas as other individuals who are like them. Therefore, social networks can be used as a tool to measure the degree of segregation or homophily within a social network. Social networks can be used both to simulate the process of homophily and to measure the level of exposure of different groups to each other within a current social network of individuals in a certain area.
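One simple way to quantify the segregation or homophily described above is Krackhardt and Stern's E-I index: the difference between external (between-group) and internal (within-group) ties, divided by all ties. The groups and ties below are invented purely for illustration:

```python
# E-I index: (external - internal) / (external + internal).
# -1 means every tie stays within a group (maximal homophily/segregation);
# +1 means every tie crosses group boundaries.

def ei_index(ties, group):
    """ties: iterable of (u, v) pairs; group: dict node -> group label."""
    ties = list(ties)
    internal = sum(1 for u, v in ties if group[u] == group[v])
    external = len(ties) - internal
    return (external - internal) / (external + internal)

# Hypothetical two-group network: four within-group ties, one bridge.
group = {"a": 1, "b": 1, "c": 1, "d": 2, "e": 2, "f": 2}
ties = [("a", "b"), ("b", "c"), ("d", "e"), ("e", "f"), ("c", "d")]
print(ei_index(ties, group))  # (1 - 4) / 5 = -0.6
```

The strongly negative value reflects a homophilous network in which most ties stay inside a group, matching the segregation pattern the text describes.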
========================================
[SOURCE: https://en.wikipedia.org/wiki/Simulation_video_game] | [TOKENS: 1291]
Contents Simulation video game Simulation video games are a diverse super-category of video games, generally designed to closely simulate real-world activities. A simulation game attempts to copy various activities from real life in the form of a game for various purposes such as training, analysis, prediction, or entertainment. Usually there are no strictly defined goals in the game, and the player is allowed to control a character or environment freely. Well-known examples are war games, business games, and role play simulation. From the three basic types of strategic, planning, and learning exercises (games, simulations, and case studies), a number of hybrids may be considered, including simulation games that are used as case studies. Comparisons of the merits of simulation games versus other teaching techniques have been carried out by many researchers and a number of comprehensive reviews have been published. Subgenres Construction and management simulation (CMS) is a type of simulation game in which players build, expand or manage fictional communities or projects with limited resources. Strategy games sometimes incorporate CMS aspects into their game economy, as players must manage resources while expanding their projects. Pure CMS games differ from strategy games in that "the player's goal is not to defeat an enemy, but to build something within the context of an ongoing process." Games in this category are sometimes also called "management games". Life simulation games (or artificial life games) are a subgenre of simulation video games in which the player lives or controls one or more artificial lifeforms. A life simulation game can revolve around "individuals and relationships, or it could be a simulation of an ecosystem". Social simulation games are one of its subgenres. Some video games simulate the playing of sports. Most sports have been recreated by video games, including team sports, athletics and extreme sports. 
Some games emphasize playing the sport (such as the Madden NFL series), whilst others emphasize strategy and organization (such as Football Manager). Some, such as Arch Rivals, satirize the sport for comic effect. This genre has been popular throughout the history of video games, and is competitive, just like real-world sports. A number of game series feature the names and characteristics of real teams and players, and are updated continuously to reflect real-world changes. Simulation games in education Because simulation games make learning a matter of direct experience, they may relieve the tedium associated with more conventional modes of instruction, as they demand increased participation rather than merely reading about or discussing concepts and ideas (like discrimination, culture, stratification, and norms). Students will experience them by actually "living" the experiences. Therefore, the use of simulation games may increase students' motivation and interest in learning.[needs update] Simulation games can provide increased insights into how the world is seen, like the moral and intellectual idiosyncrasies of others. They may also increase empathy for others and help develop awareness of personal and interpersonal values by allowing players to see moral and ethical implications of the choices they make. As such, they can be used to change and improve students' attitudes toward self, environment, and classroom learning.[needs update] Many games are designed to change and develop specific skills of decision making, problem solving and critical thinking (such as those involved in survey sampling, perception and communication).[needs update] History The Sumerian Game (1964), a text-based early mainframe game designed by Mabel Addis, based on the ancient Sumerian city-state of Lagash, was the first economic simulation game. In 1968, Cornell University funded several simulation games which were developed by Prof. Robert Chase and his students. 
These included Cornell Hotel Administration Simulation Exercise and Cornell Restaurant Administration Simulation Exercise. Notably, the restaurant game featured competitive play, with teams managing competing restaurants. The games drew attention from the relevant industries of the time and were made playable at national conventions for the American Hotel & Motel Association and the Club Managers Association of America in 1969. Another early economic sim, M.U.L.E. by Danielle Bunten Berry, was released in 1983. In the 1980s, it became a trend for arcade video games to use hydraulic motion simulator arcade cabinets. The trend was sparked by Sega's "taikan" games, with "taikan" meaning "body sensation" in Japanese. Sega's first game to use a motion simulator cabinet was Space Tactics (1981), a space combat simulator with a cockpit cabinet where the screen moved in sync with the on-screen action. The "taikan" trend itself began when Yu Suzuki's team at Sega (later known as Sega AM2) developed Hang-On (1985), a racing video game where the player sits on and moves a motorbike replica to control the in-game actions. Suzuki's team at Sega followed it with hydraulic motion simulator cockpit cabinets for rail shooters such as Space Harrier (1985), racing games such as Out Run (1986), and combat flight simulators such as After Burner (1987) and G-LOC: Air Battle (1990). One of the most sophisticated motion simulator cabinets in arcades was Sega's R360 (1990), which simulated the full 360-degree rotation of an aircraft. Sega have since continued to manufacture motion simulator cabinets for arcade games through to the 2010s. In the mid-1980s, Codemasters and the Oliver Twins released a number of games with "Simulator" in the title, including BMX Simulator (1986), Grand Prix Simulator (1986), and Pro Boxing Simulator (1988). 
Richard and David Darling of Codemasters were inspired by their earlier best-selling games based on real sports such as football and BMX racing, which had a pre-existing popularity. In a parody of the established "simulator" cliché, Your Sinclair released a game titled Advanced Lawnmower Simulator in 1988. The introduction of the city-building simulation subgenre is closely associated with the 1989 release of SimCity by developer Will Wright. However, earlier city-building titles had been published, including the 1984 ColecoVision title Fortune Builder. Later games published by Wright's company Maxis, including SimLife and SimEarth, simulated worlds at a broader scale, including recreations of genetics and global ecosystems. A study of adolescents who played SimCity 2000 found that those players had a greater appreciation and expectation of their government officials after playing.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Special:BookSources/0-465-00488-1] | [TOKENS: 380]
Book sources This page allows users to search multiple sources for a book given a 10- or 13-digit International Standard Book Number. Spaces and dashes in the ISBN do not matter. This page links to catalogs of libraries, booksellers, and other book sources where you will be able to search for the book by its International Standard Book Number (ISBN). Online text Google Books and other retail sources below may be helpful if you want to verify citations in Wikipedia articles, because they often let you search an online version of the book for specific words or phrases, or you can browse through the book (although for copyright reasons the entire book is usually not available). At the Open Library (part of the Internet Archive) you can borrow and read entire books online. Online databases Subscription eBook databases Libraries Alabama Alaska California Colorado Connecticut Delaware Florida Georgia Illinois Indiana Iowa Kansas Kentucky Massachusetts Michigan Minnesota Missouri Nebraska New Jersey New Mexico New York North Carolina Ohio Oklahoma Oregon Pennsylvania Rhode Island South Carolina South Dakota Tennessee Texas Utah Washington state Wisconsin Bookselling and swapping Find your book on a site that compiles results from other online sites: These sites allow you to search the catalogs of many individual booksellers: Non-English book sources If the book you are looking for is in a language other than English, you might find it helpful to look at the equivalent pages on other Wikipedias, linked below – they are more likely to have sources appropriate for that language. Find other editions The WorldCat xISBN tool for finding other editions is no longer available. However, there is often a "view all editions" link on the results page from an ISBN search. Google Books often lists other editions of a book and related books under the "about this book" link. 
You can convert between 10- and 13-digit ISBNs with these tools:
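The 10-to-13 conversion mentioned above is simple arithmetic: keep the first nine digits of the ISBN-10, prefix them with the EAN "Bookland" code 978, and recompute the check digit with alternating weights of 1 and 3. A minimal sketch in Python (the function name and the dash/space handling are illustrative, not part of any particular tool):

```python
def isbn10_to_isbn13(isbn10: str) -> str:
    """Convert a 10-digit ISBN to its 13-digit form.

    Spaces and dashes in the input do not matter; they are stripped first.
    """
    digits = isbn10.replace("-", "").replace(" ", "")
    if len(digits) != 10:
        raise ValueError("expected a 10-character ISBN")
    # Drop the old mod-11 check digit and prepend the EAN prefix 978.
    body = "978" + digits[:9]
    # ISBN-13 check digit: weight the 12 digits alternately 1, 3,
    # then add whatever brings the total up to a multiple of 10.
    total = sum(int(d) * (1 if i % 2 == 0 else 3) for i, d in enumerate(body))
    return body + str((10 - total % 10) % 10)

# The ISBN from this page's URL, 0-465-00488-1, becomes 978-0-465-00488-1:
print(isbn10_to_isbn13("0-465-00488-1"))  # 9780465004881
```

Going the other way (13 to 10) only works for ISBN-13s beginning with 978; the reverse step recomputes the mod-11 check digit over the middle nine digits.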
========================================
[SOURCE: https://en.wikipedia.org/wiki/Category:Video_games_scored_by_Gareth_Coker] | [TOKENS: 55]
Category:Video games scored by Gareth Coker Video games that were scored by Gareth Coker. Pages in category "Video games scored by Gareth Coker" The following 9 pages are in this category, out of 9 total.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Decolonization] | [TOKENS: 10242]
Decolonization Decolonization is the undoing of colonialism, the latter being the process whereby imperial nations establish and dominate foreign territories, often overseas. The meanings and applications of the term are disputed. Some scholars of decolonization focus especially on independence movements in the colonies and the collapse of global colonial empires. As a movement to establish independence for colonized territories from their respective metropoles, decolonization began in 1775 with the American Revolution in North America against the British Empire. The Napoleonic Wars in the 19th century saw the French colonial empire, the Spanish Empire, and Portugal face decolonization with the Haitian Revolution, the Spanish American wars of independence, and the independence of Brazil from Portugal. A major wave of decolonization occurred in the aftermath of the First World War, including in the United States and the Empire of Japan. Another wave of decolonization occurred after the Second World War, and many countries gained their independence in the following years. The last wave of decolonization occurred after the Cold War with the collapse of the Soviet Union, the independence of Palau, and the handovers of Hong Kong and Macau. Seventeen territories remain under the United Nations classification of non-self-governing territories. Scope According to David Strang, decolonization is achieved through the attainment of sovereign statehood with de jure recognition by the international community or through full incorporation into an existing sovereign state. The United Nations (UN) states that the fundamental right to self-determination is the core requirement for decolonization, and that this right can be exercised with or without political independence. A UN General Assembly Resolution in 1960 characterised colonial foreign rule as a violation of human rights. 
In states that have won independence, Indigenous people living under settler colonialism continue to make demands for decolonization and self-determination. Although discussions of hegemony and power, central to the concept of decolonization, can be found as early as the writings of Thucydides, there have been several particularly active periods of decolonization in modern times. These include the decolonization of Africa; the breakup of the Spanish Empire in the 19th century; of the German, Austro-Hungarian, Ottoman, and Russian Empires following World War I; of the British, French, Dutch, Portuguese, Belgian, Italian, and Japanese Empires following World War II; and of the Soviet Union at the end of the Cold War. Early studies of decolonisation appeared in the 1960s and 1970s. An important book from this period was The Wretched of the Earth (1961) by Martiniquan author Frantz Fanon, which established many aspects of decolonisation that would be considered in later works. Subsequent studies of decolonisation addressed economic disparities as a legacy of colonialism as well as the annihilation of people's cultures. Ngũgĩ wa Thiong'o explored the cultural and linguistic legacies of colonialism in the influential book Decolonising the Mind (1986). "Decolonization" has also been used to refer to the intellectual decolonization from the colonizers' ideas that made the colonized feel inferior. Issues of decolonization persist and continue to be raised today. In the Americas and South Africa, such issues are increasingly discussed under the term decoloniality. By area In the two hundred years following the American Revolutionary War in 1783, 165 colonies gained independence from Western imperial powers. Several analyses point to different reasons for the spread of anti-colonial political movements. 
Institutional arguments suggest that increasing levels of education in the colonies led to calls for popular sovereignty; Marxist analyses view decolonization as a result of economic shifts toward wage labor and an enlarged bourgeois class; yet another argument sees decolonization as a diffusion process wherein earlier revolutionary movements inspired later ones. Other explanations emphasize how the lower profitability of colonization and the costs associated with empire prompted decolonization. Some explanations emphasize how colonial powers struggled militarily against insurgents in the colonies due to a shift from 19th century conditions of "strong political will, a permissive international environment, access to local collaborators, and flexibility to pick their battles" to 20th century conditions of "apathetic publics, hostile superpowers, vanishing collaborators, and constrained options". In other words, colonial powers had more support from their own region in pursuing colonies in the 19th century than they did in the 20th century, where holding on to such colonies was often understood to be a burden. A great deal of scholarship attributes the ideological origins of national independence movements to the Age of Enlightenment. Enlightenment social and political theories such as individualism and liberalism were central to the debates about national constitutions for newly independent countries. Contemporary decolonial scholarship has critiqued the emancipatory potential of Enlightenment thought, highlighting its erasure of Indigenous epistemologies and failure to provide subaltern and Indigenous people with liberty, equality, and dignity. Great Britain's Thirteen North American colonies were the first to declare independence, forming the United States of America in 1776, and defeating Britain in the Revolutionary War. 
The Haitian Revolution was a revolt in 1789 and subsequent slave uprising in 1791 in the French colony of Saint-Domingue, on the Caribbean island of Hispaniola. In 1804, Haiti secured independence from France as the Empire of Haiti, which later became a republic. The chaos of the Napoleonic Wars in Europe cut the direct links between Spain and its American colonies, allowing for the process of decolonization to begin. With the invasion of Spain by Napoleon in 1808, the American colonies declared autonomy and loyalty to King Ferdinand VII. The contract was broken and each of the regions of the Spanish Empire had to decide whether to show allegiance to the Junta of Cadiz (the only territory in Spain free from Napoleon) or have a junta (assembly) of its own. The economic monopoly of the metropolis was the main reason why many countries decided to become independent from Spain. In 1809, the independence wars of Latin America began with a revolt in La Paz, Bolivia. In 1807 and 1808, the Viceroyalty of the River Plate was invaded by the British. After their second defeat, the French-born officer Santiago de Liniers was proclaimed the new Viceroy by the local population and later accepted by Spain. In May 1810 in Buenos Aires, a Junta was created, but in Montevideo it was not recognized by the local government, which followed the authority of the Junta of Cadiz. The rivalry between the two cities was the main reason for the distrust between them. During the next 15 years, the Spanish and Royalists on one side, and the rebels on the other, fought in South America and Mexico. Numerous countries declared their independence. In 1824, the Spanish forces were defeated in the Battle of Ayacucho. The mainland was free, and in 1898, Spain lost Cuba and Puerto Rico in the Spanish–American War. Puerto Rico became an unincorporated territory of the US, but Cuba became independent in 1902. 
The Napoleonic Wars also led to the severing of the direct links between Portugal and its only American colony, Brazil. Days before Napoleon invaded Portugal in 1807, the Portuguese royal court fled to Brazil. In 1820 there was a Constitutionalist Revolution in Portugal, which led to the return of the Portuguese court to Lisbon. This led to distrust between the Portuguese and the Brazilian colonists, and finally, in 1822, to the colony becoming independent as the Empire of Brazil, which later became a republic. The emergence of Indigenous political parties was especially characteristic of the British Empire, which seemed less ruthless than, for example, Belgium, in controlling political dissent. Driven by the pragmatic demands of budgets and manpower, the British made deals with the local politicians. Across the empire, the general protocol was to convene a constitutional conference in London to discuss the transition to greater self-government and then independence; to submit a report of the constitutional conference to Parliament; if approved, to submit a bill to Parliament at Westminster terminating the responsibility of the United Kingdom (with a copy of the new constitution annexed); and finally, if approved, to issue an Order in Council fixing the exact date of independence. After World War I, several former German and Ottoman territories in the Middle East, Africa, and the Pacific were governed by the UK as League of Nations mandates. Some were administered directly by the UK, and others by British dominions – Nauru and the Territory of New Guinea by Australia, South West Africa by the Union of South Africa, and Western Samoa by New Zealand. Egypt became independent in 1922, although the UK retained security prerogatives, control of the Suez Canal, and effective control of the Anglo-Egyptian Sudan. The Balfour Declaration of 1926 declared the British Empire dominions as equals, and the 1931 Statute of Westminster established full legislative independence for them. 
The equal dominions were six: Canada, Newfoundland, Australia, the Irish Free State, New Zealand, and the Union of South Africa; Ireland had been brought into a union with Great Britain in 1801, creating the United Kingdom of Great Britain and Ireland, until the formation of the Irish Free State in 1922. However, some of the Dominions were already independent de facto, and even de jure and recognized as such by the international community. Thus, Canada was a founding member of the League of Nations in 1919 and served on its council from 1927 to 1930. That country also negotiated on its own and signed bilateral and multilateral treaties and conventions from the early 1900s onward. Newfoundland ceded self-rule back to London in 1934. Iraq, a League of Nations mandate, became independent in 1932. In response to a growing Indian independence movement, the UK made successive reforms to the British Raj, culminating in the Government of India Act 1935. These reforms included creating elected legislative councils in some of the provinces of British India. Mohandas Karamchand Gandhi, India's independence movement leader, led a peaceful resistance to British rule. As Gandhi became a symbol of both peace and opposition to British imperialism, many Indians began to view the British as the cause of India's problems, leading to a newfound sense of nationalism among its population. With this new wave of Indian nationalism, Gandhi was eventually able to garner the support needed to push back the British and create an independent India in 1947. Africa was only fully drawn into the colonial system at the end of the 19th century. In the north-east the continued independence of the Ethiopian Empire remained a beacon of hope to pro-independence activists. 
However, with the anti-colonial wars of the first decade of the 20th century barely over, new modernizing forms of African nationalism began to gain strength in the early 20th century with the emergence of Pan-Africanism, as advocated by the Jamaican journalist Marcus Garvey (1887–1940), whose widely distributed newspapers demanded swift abolition of European imperialism, as well as republicanism in Egypt. Kwame Nkrumah (1909–1972), who was inspired by the works of Garvey, led Ghana to independence from colonial rule. Independence for the colonies in Africa began with the independence of Sudan in 1956, and Ghana in 1957. All of the British colonies on mainland Africa became independent by 1966, although Rhodesia's unilateral declaration of independence in 1965 was not recognized by the UK or internationally. Some of the British colonies in Asia were directly administered by British officials, while others were ruled by local monarchs as protectorates or in subsidiary alliance with the UK. In 1947, British India was partitioned into the independent dominions of India and Pakistan. Hundreds of princely states, states ruled by monarchs in a treaty of subsidiary alliance with Britain, were integrated into India and Pakistan. India and Pakistan fought several wars over the former princely state of Jammu and Kashmir. French India was integrated into India between 1950 and 1954, India annexed Portuguese India in 1961, and the Kingdom of Sikkim merged with India by popular vote in 1975. Significant violence was involved in several prominent cases of decolonization of the British Empire; partition was a frequent solution. In 1783, the North American colonies were divided between the independent United States and British North America, which later became Canada. The Indian Rebellion of 1857 was a major uprising in India against the British East India Company. It was characterized by massacres of civilians on both sides. 
It was not a movement for independence, however, and only a small part of India was involved. In the aftermath, the British pulled back from modernizing reforms of Indian society, and the level of organised violence under the British Raj was relatively small. Most of that was initiated by repressive British administrators, as in the Amritsar massacre of 1919, or the police assaults on the Salt March of 1930. Large-scale communal violence broke out between Hindus and Muslims and between Muslims and Sikhs after the British left in 1947 in the newly independent dominions of India and Pakistan. Much later, in 1970, further communal violence broke out within Pakistan in the detached eastern part of East Bengal, which became independent as Bangladesh in 1971. Cyprus, which came under full British control in 1914 from the Ottoman Empire, was culturally divided between the majority Greek element (which demanded "enosis" or union with Greece) and the minority Turks. London for decades assumed it needed the island to defend the Suez Canal; but after the Suez crisis of 1956, that became a minor factor, and Greek violence became a more serious issue. Cyprus became an independent country in 1960, but ethnic violence escalated until 1974 when Turkey invaded and partitioned the island. Each side rewrote its own history, blaming the other. Palestine became a British mandate from the League of Nations after World War I, initially including Transjordan. During that war, the British gained support from Arabs and Jews by making promises to both (see McMahon–Hussein Correspondence and Balfour Declaration). Decades of ethno-religious violence reached a climax with the UN Partition Plan and the ensuing war. The British eventually pulled out, and the former Mandate territory was divided between Israel, Jordan and Egypt. 
After World War I, the colonized people were frustrated at France's failure to recognize the effort provided by the French colonies (resources, but more importantly colonial troops – the famous tirailleurs). Although the Great Mosque of Paris was constructed as recognition of these efforts, the French state had no intention to allow self-rule, let alone grant independence to the colonized people. Thus, nationalism in the colonies became stronger between the two wars, leading to Abd el-Krim's Rif War (1921–1925) in Morocco and to the creation of Messali Hadj's Star of North Africa in Algeria in 1925. However, these movements would gain full potential only after World War II. After World War I, France administered the former Ottoman territories of Syria and Lebanon, and the former German colonies of Togoland and Cameroon, as League of Nations mandates. Lebanon declared its independence in 1943, and Syria in 1945. In some instances, decolonization efforts ran counter to other concerns, such as the rapid increase of antisemitism in Algeria in the course of the nation's resistance to French rule. Although France was ultimately a victor of World War II, Nazi Germany's occupation of France and its North African colonies during the war had disrupted colonial rule. On 27 October 1946, France adopted a new constitution creating the Fourth Republic, and substituted the French Union for the colonial empire. However, power over the colonies remained concentrated in France, and the power of local assemblies outside France was extremely limited. On the night of 29 March 1947, a Madagascar nationalist uprising led the French government, headed by Paul Ramadier (Socialist), to violent repression: during a year of bitter fighting, 11,000–40,000 Malagasy died. After the end of World War II, the Viet Minh launched the August Revolution and declared Vietnamese independence in September, although Allied troops reoccupied the territory afterwards. 
In late 1946, the Viet Minh attacked French troops in Hanoi, leading to the Indochina War (1946–54). In 1949, France recognized the independence of the State of Vietnam, the Kingdom of Laos, and the Kingdom of Cambodia, while also recognizing the unity of Vietnam (whose territory had been split into three separate regions under French colonial rule) and supporting the anti-communist faction there against the communists, who fought in the name of anti-colonialism. The war thus became part of the world-wide Cold War. Cambodia and Laos became fully independent in late 1953, Vietnam became fully independent on 4 June 1954, and the Geneva Accords of 21 July 1954 left Vietnam divided into North and South, with France recognizing communist control of the North. After the North Vietnamese military victory in April 1975, Vietnam was de jure united under a communist government on 2 July 1976. In 1956, Morocco and Tunisia gained their independence from France. In 1960, eight independent countries emerged from French West Africa, and five from French Equatorial Africa. The Algerian War of Independence raged from 1954 to 1962. To this day, the Algerian war – officially called a "public order operation" until the 1990s – remains a trauma for both France and Algeria. Philosopher Paul Ricœur has spoken of the necessity of a "decolonisation of memory", starting with the recognition of the 1961 Paris massacre during the Algerian war, and of the decisive role of African and especially North African immigrant manpower in the Trente Glorieuses post–World War II economic growth period. In the 1960s, due to economic needs for post-war reconstruction and rapid economic growth, French employers actively sought to recruit manpower from the colonies, explaining today's multiethnic population. A union of former colonies itself, the United States approached imperialism differently from the other Powers. 
Much of its energy and rapidly expanding population was directed westward across the North American continent against English and French claims, the Spanish Empire and Mexico. The Native Americans were sent to reservations, often unwillingly. With support from Britain, its Monroe Doctrine reserved the Americas as its sphere of interest, prohibiting other states (particularly Spain) from recolonizing the newly independent polities of Latin America. However, France, taking advantage of the American government's distraction during the Civil War, intervened militarily in Mexico and set up a French-protected monarchy. Spain took the step to occupy the Dominican Republic and restore colonial rule. The Union victory in the Civil War in 1865 forced both France and Spain to accede to American demands to evacuate those two countries. America's only African colony, Liberia, was formed privately and achieved independence early; Washington unofficially protected it. By 1900, the U.S. advocated an Open Door Policy and opposed the direct division of China. After 1898 direct intervention expanded in Latin America. The United States purchased Alaska from the Russian Empire in 1867 and annexed Hawaii in 1898. Following the Spanish–American War in 1898, the US added most of Spain's remaining colonies: Puerto Rico, the Philippines, and Guam. Deciding not to annex Cuba outright, the U.S. established it as a client state with obligations including the perpetual lease of Guantánamo Bay to the U.S. Navy. The attempt of the first governor to void the island's constitution and remain in power past the end of his term provoked a rebellion that led to a reoccupation between 1906 and 1909, but this was again followed by devolution. Similarly, the McKinley administration, despite prosecuting the Philippine–American War against a native republic, set out that the Territory of the Philippine Islands would eventually be granted independence. In 1917, the U.S. 
purchased the Danish West Indies (later renamed the US Virgin Islands) from Denmark, and Puerto Ricans became full U.S. citizens that same year. The US government declared that Puerto Rico was no longer a colony and stopped transmitting information about it to the United Nations Decolonization Committee. As a result, the UN General Assembly removed Puerto Rico from the U.N. list of non-self-governing territories. Four referendums showed little support for independence, but much interest in statehood such as Hawaii and Alaska received in 1959. The Monroe Doctrine was expanded by the Roosevelt Corollary in 1904, providing that the United States had a right and obligation to intervene "in flagrant cases of such wrongdoing or impotence" that a nation in the Western Hemisphere became vulnerable to European control. In practice, this meant that the United States was led to act as a collections agent for European creditors by administering customs duties in the Dominican Republic (1905–1941), Haiti (1915–1934), and elsewhere. The intrusiveness and bad relations this engendered were somewhat checked by the Clark Memorandum and renounced by President Franklin D. Roosevelt's "Good Neighbor Policy". The Fourteen Points were preconditions addressed by President Woodrow Wilson to the European powers at the Paris Peace Conference following World War I. In allowing its allies France and Britain to take over the former colonial possessions of the German and Ottoman Empires, the US demanded that they submit to the League of Nations mandate system, calling in Point V for "a free, open-minded, and absolutely impartial adjustment of all colonial claims, based upon a strict observance of the principle that in determining all such questions of sovereignty the interests of the populations concerned must have equal weight with the equitable claims of the government whose title is to be determined" (see also Point XII). After World War II, the U.S. 
poured tens of billions of dollars into the Marshall Plan, and other grants and loans to Europe and Asia to rebuild the world economy. At the same time, American military bases were established around the world and direct and indirect interventions continued in Korea, Indochina, Latin America (inter alia, the 1965 occupation of the Dominican Republic), Africa, and the Middle East to oppose Communist movements and insurgencies. Since the dissolution of the Soviet Union, the United States has been far less active in the Americas, but invaded Afghanistan and Iraq following the September 11 attacks in 2001, establishing army and air bases in Central Asia. Before World War I, Japan had gained several substantial colonial possessions in East Asia such as Taiwan (1895) and Korea (1910). Japan joined the Allies in World War I, and after the war acquired the South Seas Mandate, the former German colony in Micronesia, as a League of Nations Mandate. Pursuing a colonial policy comparable to those of European powers, Japan settled significant populations of ethnic Japanese in its colonies while simultaneously suppressing Indigenous ethnic populations by enforcing the learning and use of the Japanese language in schools. Other methods were also employed, such as public interaction and attempts to eradicate the use of Korean, Hokkien, and Hakka among the Indigenous peoples. Japan also set up the Imperial Universities in Korea (Keijō Imperial University) and Taiwan (Taihoku Imperial University) to compel education. In 1931, Japan seized Manchuria from the Republic of China, setting up a puppet state under Puyi, the last Manchu emperor of China. In 1933, Japan seized the Chinese province of Rehe and incorporated it into its Manchurian possessions. The Second Sino-Japanese War started in 1937, and Japan occupied much of eastern China, including the Republic's capital at Nanjing. An estimated 20 million Chinese died during the 1931–1945 war with Japan. 
In December 1941, the Empire of Japan joined World War II by invading the European and U.S. colonies in Southeast Asia and the Pacific, including French Indochina, Hong Kong, the Philippines, Burma, Malaya, Indonesia, Portuguese Timor, and others. Following its surrender to the Allies in 1945, Japan was deprived of all its colonies, with a number of them being returned to the original colonizing Western powers. The Soviet Union declared war on Japan in August 1945, and shortly after occupied and annexed the southern Kuril Islands, which Japan still claims. Decolonization was often not extensively planned, instead occurring as a response to politics in the colony, politics at home, and increasing international pressure. Immediately following the war there was a wave of decolonization throughout Asia. This was followed by the Middle East, and in the 1960s sub-Saharan Africa. These waves saw most large colonies become independent, with many remaining colonies being smaller islands. Many of the smallest colonies would not become independent, instead joining either with nearby colonies and countries or becoming full parts of their administering country. In the United States, the two major parties were divided on the acquisition of the Philippines, which became a major campaign issue in 1900. The Republicans, who favored permanent acquisition, won the election, but after a decade or so, Republicans turned their attention to the Caribbean, focusing on building the Panama Canal. President Woodrow Wilson, a Democrat in office from 1913 to 1921, ignored the Philippines, and focused his attention on Mexico and Caribbean nations. By the 1920s, the peaceful efforts by the Filipino leadership to pursue independence proved convincing. When the Democrats returned to power in 1933, they worked with the Filipinos to plan a smooth transition to independence. It was scheduled for 1946 by the Tydings–McDuffie Act of 1934. 
In 1935, the Philippines transitioned out of territorial status, controlled by an appointed governor, to the semi-independent status of the Commonwealth of the Philippines. Its constitutional convention wrote a new constitution, which was approved by Washington and went into effect, with an elected president, Manuel L. Quezon, and an elected legislature. Foreign affairs remained under American control. The Philippines built up a new army under General Douglas MacArthur, who took leave from his U.S. Army position to take command of the new army, reporting to Quezon. The Japanese occupation from 1942 to 1945 disrupted but did not delay the transition. It took place on schedule in 1946 as Manuel Roxas took office as president. As a result of its pioneering discoveries, Portugal had a large and particularly long-lasting colonial empire which had begun in 1415 with the conquest of Ceuta and ended only in 1999 with the handover of Portuguese Macau to China. In 1822, Portugal lost control of Brazil, its largest colony. From 1933 to 1974, Portugal was an authoritarian state (ruled by António de Oliveira Salazar). The regime was fiercely determined to maintain the country's colonial possessions at all costs and to aggressively suppress any insurgencies. In 1961, India annexed Goa, and by the same year nationalist forces had begun organizing in Portugal. Revolts (preceding the Portuguese Colonial War) spread to Angola, Guinea-Bissau and Mozambique. Lisbon escalated its effort in the war: for instance, it increased the number of natives in the colonial army and built strategic hamlets. Portugal sent another 300,000 European settlers into Angola and Mozambique before 1974. That year, a left-wing revolution inside Portugal overthrew the existing regime and encouraged pro-Soviet elements to attempt to seize control in the colonies. The result was a very long and extremely difficult multi-party civil war in Angola, and lesser insurrections in Mozambique. 
Belgium's empire began with the annexation of the Congo in 1908 in response to international pressure to bring an end to the terrible atrocities that had taken place under King Leopold's privately run Congo Free State. It added Rwanda and Burundi as League of Nations mandates from the former German Empire in 1919. The colonies remained under Belgian administration during the war, while Belgium itself was occupied by the Germans. There was no serious planning for independence, and exceedingly little training or education provided. The Belgian Congo was especially rich, and many Belgian businessmen lobbied hard to maintain control. Local revolts grew in power, and finally the Belgian king suddenly announced in 1959 that independence was on the agenda – and it was hurriedly arranged in 1960, for a country bitterly and deeply divided on social and economic grounds. The Netherlands had spent centuries building up its empire. By 1940 it consisted mostly of the Dutch East Indies, corresponding to what is now Indonesia. Its massive oil reserves provided about 14 percent of the Dutch national product and supported a large population of ethnic Dutch government officials and businessmen in Batavia (now Jakarta) and other major cities. The Netherlands was overrun and almost starved to death by the Nazis during the war, and Japan sank the Dutch fleet in seizing the East Indies. In 1945 the Netherlands could not regain these islands on its own; it did so by depending on British military help and American financial grants. By the time Dutch soldiers returned, an independent government under Sukarno was in power, originally set up by the Empire of Japan. The Dutch both abroad and at home generally agreed that Dutch power depended on an expensive war to regain the islands. Compromises were negotiated, but were trusted by neither side. 
When the Indonesian Republic successfully suppressed a large-scale communist revolt, the United States realized that it needed the nationalist government as an ally in the Cold War. Dutch possession was an obstacle to American Cold War goals, so Washington forced the Dutch to grant full independence. A few years later, Sukarno nationalized all Dutch East Indies properties and expelled all ethnic Dutch—over 300,000—as well as several hundred thousand ethnic Indonesians who supported the Dutch cause. In the aftermath, the Netherlands prospered greatly in the 1950s and 1960s, but public opinion was nevertheless bitterly hostile to the United States over what it saw as a betrayal. The Dutch government eventually gave up on claims to Indonesian sovereignty in 1949, after American pressure. The Netherlands also had one other major colony, Dutch Guiana in South America, which became independent as Suriname in 1975. When the United Nations was formed in 1945, it established trust territories. These territories included the League of Nations mandate territories which had not achieved independence by 1945, along with the former Italian Somaliland. The Trust Territory of the Pacific Islands was transferred from Japanese to US administration. By 1990 all but one of the trust territories had achieved independence, either as independent states or by merger with another independent state; the Northern Mariana Islands elected to become a commonwealth of the United States. Newly independent states organised themselves in order to oppose continued economic colonialism by former imperial powers. The Non-Aligned Movement constituted itself around the main figures of Jawaharlal Nehru, the first Prime Minister of India, Sukarno, the Indonesian president, Josip Broz Tito, the Communist leader of Yugoslavia, and Gamal Abdel Nasser, head of Egypt. In 1955 these leaders gathered at the Bandung Conference along with Zhou Enlai, Premier of the People's Republic of China. 
In 1960, the UN General Assembly voted on the Declaration on the Granting of Independence to Colonial Countries and Peoples. The next year, 1961, the first Non-Aligned Movement conference was held in Belgrade, and was followed in 1964 by the creation of the United Nations Conference on Trade and Development (UNCTAD), which tried to promote a New International Economic Order (NIEO). The NIEO was opposed to the 1944 Bretton Woods system, which had benefited the leading states which had created it, and remained in force until 1971 after the United States' suspension of convertibility from dollars to gold. The UNCTAD, however, was not very effective in implementing the NIEO, and social and economic inequalities between industrialized countries and the Third World grew throughout the 1960s until the 21st century. The 1973 oil crisis which followed the Yom Kippur War (October 1973) was triggered when OPEC declared an embargo against the US and other Western countries, causing a fourfold increase in the price of oil, which lasted five months, starting on 17 October 1973, and ending on 18 March 1974. OPEC nations then agreed, on 7 January 1975, to raise crude oil prices by 10%. At that time, OPEC nations – including many who had recently nationalized their oil industries – joined the call for a New International Economic Order to be initiated by coalitions of primary producers. Concluding the First OPEC Summit in Algiers, they called for stable and just commodity prices, an international food and agriculture program, technology transfer from North to South, and the democratization of the economic system. But industrialized countries quickly began to look for substitutes to OPEC petroleum, with the oil companies investing the majority of their research capital in the US, European countries, and other politically secure countries. OPEC lost more and more influence over world oil prices. 
The second oil crisis occurred in the wake of the 1979 Iranian Revolution. Then, the 1982 Latin American debt crisis exploded in Mexico first, then Argentina and Brazil, which proved unable to pay back their debts, jeopardizing the existence of the international economic system. The 1990s were characterized by the prevalence of the Washington consensus on neoliberal policies, "structural adjustment" and "shock therapies" for the former Communist states. The decolonization of North Africa and sub-Saharan Africa took place in the mid-to-late 1950s, very suddenly, with little preparation. There was widespread unrest and organized revolts, especially in French Algeria, Portuguese Angola, the Belgian Congo and British Kenya. In 1945, Africa had four independent countries – Egypt, Ethiopia, Liberia, and South Africa. After Italy's defeat in World War II, France and the UK occupied the former Italian colonies. Libya became an independent kingdom in 1951. Eritrea was merged with Ethiopia in 1952. Italian Somaliland was governed by the UK, and by Italy after 1954, until its independence in 1960. By 1977, European colonial rule in mainland Africa had ended. Most of Africa's island countries had also become independent, although Réunion and Mayotte remain part of France. However the black majorities in Rhodesia and South Africa were disenfranchised until 1979 in Rhodesia, which became Zimbabwe-Rhodesia that year and Zimbabwe the next, and until 1994 in South Africa. Namibia, Africa's last UN Trust Territory, became independent of South Africa in 1990. Most independent African countries exist within prior colonial borders. However Morocco merged French Morocco with Spanish Morocco, and Somalia formed from the merger of British Somaliland and Italian Somaliland. Eritrea merged with Ethiopia in 1952, but became an independent country in 1993. Most African countries became independent as republics. 
Morocco, Lesotho, and Eswatini remain monarchies under dynasties that predate colonial rule. Burundi, Egypt, Libya, and Tunisia gained independence as monarchies, but all four countries' monarchs were later deposed, and they became republics. African countries cooperate in various multi-state associations. The African Union includes all 55 African states. There are several regional associations of states, including the East African Community, Southern African Development Community, and Economic Community of West African States, some of which have overlapping membership. Japan expanded its occupation of Chinese territory during the 1930s, and occupied Southeast Asia during World War II. After the war, the Japanese colonial empire was dissolved, and national independence movements resisted the re-imposition of colonial control by European countries and the United States. The Republic of China regained control of Japanese-occupied territories in Manchuria and eastern China, as well as Taiwan. Only Hong Kong and Macau remained under outside control until both places were transferred to the People's Republic of China by the UK and Portugal in 1997 and 1999, respectively. The Allied powers divided Korea into two occupation zones, which became the states of North Korea and South Korea. The Philippines became independent of the U.S. in 1946. The Netherlands recognized Indonesia's independence in 1949, after a four-year independence struggle. Indonesia annexed Netherlands New Guinea in 1963, and Portuguese Timor in 1975. In 2002, former Portuguese Timor became independent as East Timor. The following list shows the colonial powers following the end of hostilities in 1945, and their colonial or administrative possessions. The year of decolonization is given chronologically in parentheses. Italy had occupied the Dodecanese islands in 1912, but Italian occupation ended after World War II, and the islands were integrated into Greece. 
British rule ended in Cyprus in 1960 and in Malta in 1964, and both islands became independent republics. Referring to the Revolutions of 1989, the historian Robert Daniels stated: "A special dimension that the anti-Communist revolutions shared with some of their predecessors was decolonization." During the Russo-Ukrainian war, Ukraine passed a law in 2023 that banned geographical names associated with Russia. This law in particular has been described by Volodymyr Viatrovych as providing "a legitimate framework for the ongoing decolonization processes in Ukraine and effective mechanisms". Scholars of Russian studies have also renewed awareness of Russian colonialism and interest in decolonizing scholarship in their field. The decolonization of Oceania occurred after World War II, as nations in Oceania transitioned from European colonial rule to full independence. Aspects Typical challenges of decolonization include state-building, nation-building, and economic development. After independence, the new states needed to establish or strengthen the institutions of a sovereign state, i.e. governments, laws, a military, schools, administrative systems, and so on. The amount of self-rule granted prior to independence, and assistance from the colonial power and/or international organizations after independence, varied greatly between colonial powers, and between individual colonies. Except for a few absolute monarchies, most post-colonial states are either republics or constitutional monarchies. These new states had to devise constitutions, electoral systems, and other institutions of representative democracy. Nation-building is the process of creating a sense of identification with, and loyalty to, the state. Nation-building projects seek to replace loyalty to the old colonial power, and/or tribal or regional loyalties, with loyalty to the new state. 
Elements of nation-building include creating and promoting symbols of the state like a flag, a coat of arms and an anthem, monuments, official histories, national sports teams, codifying one or more Indigenous official languages, and replacing colonial place-names with local ones. Nation-building after independence often continues the work begun by independence movements during the colonial period. From the perspective of language policy (or language politics), "linguistic decolonization" entails the replacement of a colonizing (imperial) power's language with a given colony's indigenous language in the function of official language. With the exception of colonies in Eurasia, linguistic decolonization did not take place in the former colonies-turned-independent states on the other continents ("Rest of the World"). Linguistic imperialism is the imposition and enforcement of one dominant language over other languages, and one response to this form of imperialism is linguistic decolonization. Kenyan writer Ngũgĩ wa Thiong'o has written about colonization and decolonization in the film universe. Born in Ethiopia, filmmaker Haile Gerima describes the "colonization of the unconscious" he recalls experiencing as a child: "...as kids, we tried to act out the things we had seen in the movies. We used to play cowboys and Indians in the mountains around Gondar...We acted out the roles of these heroes, identifying with the cowboys conquering the Indians. We didn't identify with the Indians at all and we never wanted the Indians to win. Even in Tarzan movies, we would become totally galvanized by the activities of the hero and follow the story from his point of view, completely caught up in the structure of the story. Whenever Africans sneaked up behind Tarzan, we would scream our heads off, trying to warn him that 'they' were coming". In Asia, kung fu cinema emerged at a time when Japan wanted to reach Asian populations in other countries by way of its cultural influence. 
The surge in popularity of kung fu movies began in the late 1960s through the 1970s. Local populations were depicted as protagonists opposing "imperialists" (foreigners) and their "Chinese collaborators". In a 2023 paper on the political theory of settler colonialism, Canadian academics Yann Allard-Tremblay and Elaine Coburn posit that: "In Africa, the Middle East, South America, and much of the rest of the world, decolonization often meant the expulsion or departure of most colonial settlers. In contrast, in settler colonial states like New Zealand, Australia, Canada, and the United States, settlers have not left, even as independence from the metropole was gained... The systemic oppression and domination of the colonized by the colonizer is not historical — firmly in the past — but ongoing and supported by radically unequal political, social, economic, and legal institutions." Decolonization is not an easy matter in colonies with large settler populations, particularly if they have been there for several generations. When settlers remain in former colonies after independence, colonialism is ongoing and takes the form of settler colonialism, which is highly resistant to decolonisation. Repatriation of existing colonizers or prevention of immigration of additional colonizers can be seen as return migration and opposition to immigration. In a few cases, settler populations have been repatriated. For instance, the decolonization of Algeria by France was particularly uneasy due to the large European population (see also pied noir), which largely evacuated to France when Algeria became independent. In Zimbabwe, former Rhodesia, Robert Mugabe seized property from white African farmers, killing several of them, and forcing the survivors to emigrate. A large Indian community lived in Uganda as a result of Britain colonizing both India and East Africa, and Idi Amin expelled them for domestic political gain. 
Newly independent states also had to develop independent economic institutions – a national currency, banks, companies, regulation, tax systems, etc. Many colonies were serving as resource colonies which produced raw materials and agricultural products, and as a captive market for goods manufactured in the colonizing country. Many decolonized countries created programs to promote industrialization. Some nationalized industries and infrastructure, and some engaged in land reform to redistribute land to individual farmers or create collective farms. Some decolonized countries maintain strong economic ties with the former colonial power. The CFA franc is a currency shared by 14 countries in West and Central Africa, mostly former French colonies. The CFA franc is guaranteed by the French treasury. After independence, many countries created regional economic associations to promote trade and economic development among neighboring countries, including the Association of Southeast Asian Nations (ASEAN), the Economic Community of West African States (ECOWAS), and the Gulf Cooperation Council. John Kenneth Galbraith argues that the post–World War II decolonization was brought about for economic reasons. In A Journey Through Economic Time, he writes: "The engine of economic well-being was now within and between the advanced industrial countries. Domestic economic growth – as now measured and much discussed – came to be seen as far more important than the erstwhile colonial trade.... The economic effect in the United States from the granting of independence to the Philippines was unnoticeable, partly due to the Bell Trade Act, which allowed American monopoly in the economy of the Philippines. The departure of India and Pakistan made small economic difference in the United Kingdom. Dutch economists calculated that the economic effect from the loss of the great Dutch empire in Indonesia was compensated for by a couple of years or so of domestic post-war economic growth. 
The end of the colonial era is celebrated in the history books as a triumph of national aspiration in the former colonies and of benign good sense on the part of the colonial powers. Lurking beneath, as so often happens, was a strong current of economic interest – or in this case, disinterest." In general, the release of the colonized caused little economic loss to the colonizers. Part of the reason for this was that major costs were eliminated while major benefits were obtained by alternate means. Decolonization allowed the colonizer to disclaim responsibility for the colonized. The colonizer no longer had the burden of obligation, financial or otherwise, to their colony. However, the colonizer continued to be able to obtain cheap goods and labor as well as economic benefits (see Suez Canal Crisis) from the former colonies. Financial, political and military pressure could still be used to achieve goals desired by the colonizer. Thus decolonization allowed the goals of colonization to be largely achieved, but without its burdens. Assassinated anti-colonialist leaders A non-exhaustive list of assassinated leaders would include: Current colonies The United Nations, under "Chapter XI: Declaration Regarding Non-Self-Governing Territories" of the Charter of the United Nations, defines Non-Self-Governing Territories (NSGTs) as "territories whose people have not yet attained a full measure of self-government"—the contemporary definition of colonialism. After the conclusion of World War II with the surrender of the Axis Powers in 1945, and two decades into the latter half of the 20th century, over three dozen "states in Asia and Africa achieved autonomy or outright independence" from European administering powers. 
As of 2020, 17 territories remain under Chapter XI distinction: "On 26 February 1976, Spain informed the Secretary-General that as of that date it had terminated its presence in the Territory of the Sahara and deemed it necessary to place on record that Spain considered itself thenceforth exempt from any responsibility of any international nature in connection with the administration of the Territory, in view of the cessation of its participation in the temporary administration established for the Territory. In 1990, the General Assembly reaffirmed that the question of Western Sahara was a question of decolonization which remained to be completed by the people of Western Sahara." On 10 December 2010, the United Nations published its official decree, announcing the Third International Decade for the Eradication of Colonialism wherein the United Nations declared its "renewal of the call to States Members of the United Nations to speed up the process of decolonization towards the complete elimination of colonialism". According to an article by scholar John Quintero, "given the modern emphasis on the equality of states and inalienable nature of their sovereignty, many people do not realize that these non-self-governing structures still exist". Some activists have claimed that the attention of the United Nations was "further diverted from the social and economic agenda [for decolonization] towards "firefighting and extinguishing" armed conflicts". Advocates have stressed that the United Nations "[remains] the last refuge of hope for peoples under the yolk [sic] of colonialism". Furthermore, on 19 May 2015, UN Secretary-General Ban Ki-moon addressed the attendants of the Caribbean Regional Seminar on Decolonization, urging international political leaders to "build on [the success of precedent decolonization efforts and] towards fully eradicating colonialism by 2020". 
The sovereignty of the Chagos Archipelago in the Indian Ocean is disputed between the United Kingdom and Mauritius. In February 2019, the International Court of Justice in The Hague ruled that the United Kingdom must transfer the islands to Mauritius as they were not legally separated from the latter in 1965. On 22 May 2019, the United Nations General Assembly debated and adopted a resolution that affirmed that the Chagos Archipelago "forms an integral part of the territory of Mauritius". The UK does not recognize Mauritius' sovereignty claim over the Chagos Archipelago. In October 2020, Mauritian Prime Minister Pravind Jugnauth described the British and American governments as "hypocrites" and "champions of double talk" over their response to the dispute. Effects of decolonization A 2019 study found that "democracy levels increased sharply as colonies gained internal autonomy in the period immediately before their independence. However, conflict, revenue growth, and economic growth did not systematically differ before and after independence." David Strang writes that the loss of their empires turned France and Britain into "second-rate powers". Criticism Some articles extend the meaning of decolonization beyond independence or equal rights for colonized peoples to include broader economic, cultural and psychological aspects of the colonial experience. Extending the meaning of decolonization beyond political independence has been disputed and received criticism. According to political theorist Kevin Duong, decolonization "may have been the century's greatest act of disenfranchisement", as numerous anti-colonial activists primarily pursued universal suffrage within empires rather than independence: "As dependent territories became nation-states, they lost their voice in metropolitan assemblies whose affairs affected them long after independence." See also Notes References Further reading External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Electron_scattering] | [TOKENS: 3877]
Electron scattering Electron scattering occurs when electrons are displaced from their original trajectory. This is due to electrostatic forces within matter or, if an external magnetic field is present, to deflection by the Lorentz force. This scattering typically happens with solids such as metals, semiconductors and insulators, and is a limiting factor in integrated circuits and transistors. Electron scattering has many applications, ranging from the use of swift electrons in electron microscopes to very high energies for hadronic systems, which allows the measurement of the distribution of charges for nucleons and nuclear structure. The scattering of electrons has allowed us to understand many details about atomic structure, from the ordering of atoms to the fact that protons and neutrons are made up of smaller elementary subatomic particles called quarks. Electrons may be scattered through a solid in several ways. The likelihood of an electron scattering and the degree of the scattering are functions of the specimen thickness and the mean free path. History The principle of the electron was first theorised in the period 1838–1851 by the natural philosopher Richard Laming, who speculated on the existence of sub-atomic, unit-charged particles; he also pictured the atom as an 'electrosphere' of concentric shells of electrical particles surrounding a material core.[note 3] It is generally accepted that J. J. Thomson first discovered the electron in 1897, although other notable contributors to the development of charged-particle theory are George Johnstone Stoney (who coined the term "electron"), Emil Wiechert (who was first to publish his independent discovery of the electron), Walter Kaufmann, Pieter Zeeman and Hendrik Lorentz. Compton scattering was first observed at Washington University in St. Louis in 1923 by Arthur Compton, who earned the 1927 Nobel Prize in Physics for the discovery; his graduate student Y. H. 
Woo, who further verified the results, is also worthy of mention. Compton scattering is usually cited in reference to the interaction involving the electrons of an atom; however, nuclear Compton scattering does exist.[citation needed] The first electron diffraction experiment was conducted in 1927 by Clinton Davisson and Lester Germer, using what would come to be a prototype for modern LEED systems. The experiment was able to demonstrate the wave-like properties of electrons,[note 4] thus confirming the de Broglie hypothesis that matter particles have a wave-like nature.[citation needed] However, after this the interest in LEED diminished in favour of high-energy electron diffraction until the early 1960s, when interest in LEED was revived; of notable mention during this period is H. E. Farnsworth, who continued to develop LEED techniques. The history of high-energy electron–electron colliding beams begins in 1956, when Gerard K. O'Neill of Princeton University became interested in high-energy collisions and introduced the idea of accelerators injecting into storage rings. While the idea of beam-beam collisions had been around since approximately the 1920s, it was not until 1953 that a German patent for a colliding beam apparatus was obtained by Rolf Widerøe. Phenomena Electrons can be scattered by other charged particles through the electrostatic Coulomb forces. Furthermore, if a magnetic field is present, a traveling electron will be deflected by the Lorentz force. An extremely accurate description of all electron scattering, including quantum and relativistic aspects, is given by the theory of quantum electrodynamics. The Lorentz force, named after Dutch physicist Hendrik Lorentz, for a charged particle q is given (in SI units) by the equation: F = qE + qv × B, where qE describes the electric force due to a present electric field, E, acting on q, and qv × B describes the magnetic force due to a present magnetic field, B, acting on q when q is moving with velocity v. 
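The force law F = q(E + v × B) can be checked numerically. Below is a minimal sketch with illustrative field and velocity values (assumptions, not figures from the text): E, B and v are chosen so that the electric force exactly cancels the magnetic force — the classic "velocity selector" condition v = E/B — so the net Lorentz force on the electron is (numerically) zero.

```python
import numpy as np

# Illustrative values (assumed, not from the text): an electron in
# crossed E and B fields satisfying the velocity-selector condition
# v = E/B, so qE and qv x B cancel.
q = -1.602176634e-19                 # electron charge, C
E = np.array([0.0, 1.0e3, 0.0])      # electric field, V/m
B = np.array([0.0, 0.0, 1.0e-3])     # magnetic field, T
v = np.array([1.0e6, 0.0, 0.0])      # velocity, m/s (= E/B)

# Lorentz force: F = q(E + v x B)
F = q * (E + np.cross(v, B))
print(F)   # ~ [0, 0, 0]: electric and magnetic forces cancel
```

Changing any one of the three magnitudes breaks the cancellation and the electron is deflected, which is exactly how such crossed fields select particles of a single speed.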
This can also be written as: F = q(−∇ϕ − ∂A/∂t + v × (∇ × A)), where ϕ is the electric potential and A is the magnetic vector potential. Oliver Heaviside is considered the first, in 1885 and 1889, to derive the correct expression for the qv × B term of the Lorentz force. Hendrik Lorentz derived and refined the concept in 1892 and gave it his name, incorporating forces due to electric fields. Rewriting this as the equation of motion for a free particle of charge q and mass m, this becomes: m dv/dt = q(E + v × B), or, in the relativistic case including the Lorentz contraction, d(γmv)/dt = q(E + v × B), where γ = 1/√(1 − v²/c²). This equation of motion was first verified in 1897 in J. J. Thomson's experiment investigating cathode rays, which confirmed, through bending of the rays in a magnetic field, that these rays were a stream of charged particles now known as electrons. Variations on this basic formula describe the magnetic force on a current-carrying wire (sometimes called the Laplace force), the electromotive force in a wire loop moving through a magnetic field (an aspect of Faraday's law of induction), and the force on a particle which might be traveling near the speed of light (relativistic form of the Lorentz force). The electrostatic Coulomb force, also known as the Coulomb interaction, named for Charles-Augustin de Coulomb, who published the result in 1785, describes the attraction or repulsion of particles due to their electric charge. Coulomb's law states that the magnitude of the electrostatic force is proportional to the product of the charge magnitudes, and inversely proportional to the square of the distance (i.e. 
inverse-square law), and is given by: |F| = (1/4πε0) |q1 q2| / r², or in vector notation: F = (1/4πε0) (q1 q2 / r²) r̂, where q1 and q2 are two point charges, r̂ is the unit vector along the direction of the distance r between the charges, and ε0 is the permittivity of free space, given in SI units by ε0 ≈ 8.854 × 10⁻¹² C² N⁻¹ m⁻². The directions of the forces exerted by the two charges on one another are always along the straight line joining them (the shortest distance); they are vector forces of infinite range, and they obey Newton's third law, being of equal magnitude and opposite direction. When both charges q1 and q2 have the same sign (either both positive or both negative) the forces between them are repulsive; if they are of opposite sign, the forces are attractive. These forces obey an important property called the principle of superposition of forces, which states that if a third charge were introduced, then the total force acting on that charge is the vector sum of the forces that would be exerted by the other charges individually; this holds for any number of charges. Coulomb's law has been stated for charges in a vacuum; if the space between point charges contains matter, then the permittivity of the matter between the charges must be accounted for as follows: F = (1/4πε0εr) (q1 q2 / r²) r̂, where εr is the relative permittivity of the space the force acts through, and is dimensionless. If two particles interact with one another in a scattering process, there are two results possible after the interaction: elastic and inelastic scattering. Elastic scattering is when the collisions between target and incident particles have total conservation of kinetic energy. This implies that there is no fragmentation of the particles or energy loss, that is to say that the internal states of each of the particles remain unchanged. Because there is no fragmentation, elastic collisions can, as a first approximation, be modeled as occurring between point-like particles, a principle that is very useful for an elementary particle such as the electron. 
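Coulomb's law and the principle of superposition described above can be sketched in a few lines of code. The charges and positions below are illustrative assumptions (microcoulomb charges at hypothetical coordinates, in metres), and the helper name `coulomb_force` is a label of convenience, not from the text: two equal charges placed symmetrically below a test charge exert forces whose horizontal components cancel, leaving a purely vertical (repulsive) net force.

```python
import math

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m

def coulomb_force(q1, r1, q2, r2):
    """Vector force (N) on charge q2 at r2 due to q1 at r1.

    Coulomb's law: F = (1 / 4*pi*eps0) * q1*q2 / r^2, directed along
    the line joining the charges (repulsive for like signs).
    """
    dx = [b - a for a, b in zip(r1, r2)]
    r = math.sqrt(sum(d * d for d in dx))
    mag = q1 * q2 / (4 * math.pi * EPS0 * r ** 2)
    return [mag * d / r for d in dx]

# Superposition (illustrative setup): two +1 uC charges at (-1, 0) and
# (+1, 0) act on a +1 uC test charge at (0, 1). The x-components of the
# two forces cancel; the net force points straight up.
uC = 1e-6
F1 = coulomb_force(uC, (-1.0, 0.0), uC, (0.0, 1.0))
F2 = coulomb_force(uC, (+1.0, 0.0), uC, (0.0, 1.0))
total = [a + b for a, b in zip(F1, F2)]
print(total)   # x-component ~ 0, y-component positive (net repulsion)
```

Adding further charges only extends the vector sum, which is all the superposition principle asserts.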
Inelastic scattering is when the collisions do not conserve kinetic energy, and as such the internal states of one or both of the particles have changed. This is due to energy being converted into heat, waves (sound), or vibrations between the constituent particles of either colliding party, or into other excitations such as light. Particles may also split apart, and energy can go into breaking the chemical bonds between components. Momentum is conserved in both elastic and inelastic scattering. Outcomes other than scattering are reactions, in which the structure of the interacting particles is changed, producing two or more generally complex particles, and the creation of new particles that are not constituent elementary particles of the interacting particles. Other types of scattering Electron scattering by isolated atoms and molecules occurs in the gas phase. It plays a key role in plasma physics and chemistry and is important for applications such as semiconductor physics. Electron–molecule/atom scattering is normally treated by means of quantum mechanics, and the leading approach to computing the cross sections is the R-matrix method. Compton scattering, named for Arthur Compton, who first observed the effect in 1922 and was awarded the 1927 Nobel Prize in Physics for it, is the inelastic scattering of a high-energy photon by a free charged particle.[note 6] This was demonstrated in 1923 by firing radiation of a given wavelength (X-rays in the given case) through a foil (a carbon target), which scattered the radiation in a manner inconsistent with classical radiation theory.[note 7] Compton published a paper in the Physical Review explaining the phenomenon: "A quantum theory of the scattering of X-rays by light elements".
The Compton effect can be understood as high-energy photons scattering inelastically off individual electrons: the incoming photon gives part of its energy to the electron, and the scattered photon then has lower energy, lower frequency and longer wavelength, according to the Planck relation E = hf, which gives the energy E of the photon in terms of frequency f or ν and the Planck constant h (6.626×10−34 J⋅s = 4.136×10−15 eV⋅s). The wavelength change in such scattering depends only upon the angle of scattering for a given target particle. This was an important discovery during the 1920s, when the particle (photon) nature of light suggested by the photoelectric effect was still being debated; the Compton experiment gave clear and independent evidence of particle-like behavior. The formula describing the Compton shift in the wavelength due to scattering is λf − λi = (h/mec)(1 − cos θ), where λf is the final wavelength of the photon after scattering, λi is the initial wavelength of the photon before scattering, h is the Planck constant, me is the rest mass of the electron, c is the speed of light and θ is the scattering angle of the photon. The coefficient of (1 − cos θ), h/mec, is known as the Compton wavelength, but is in fact a proportionality constant for the wavelength shift. The collision causes the photon wavelength to increase by somewhere between 0 (for a scattering angle of 0°) and twice the Compton wavelength (for a scattering angle of 180°). Thomson scattering is the classical elastic interpretation of the scattering process, which can be seen to happen with lower- and mid-energy photons; the classical theory of an electromagnetic wave scattered by charged particles cannot explain shifts in wavelength at low intensity. Inverse Compton scattering takes place when the electron is moving and has sufficient kinetic energy compared to the photon; in this case net energy may be transferred from the electron to the photon.
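The Compton shift formula above can be checked numerically. A minimal sketch using CODATA constants; the shift is evaluated at the two limiting angles stated in the text:

```python
# Numerical check of the Compton shift Δλ = λf − λi = (h / mₑc)(1 − cos θ).
import math

h = 6.62607015e-34       # Planck constant (J·s)
m_e = 9.1093837015e-31   # electron rest mass (kg)
c = 2.99792458e8         # speed of light (m/s)

compton_wavelength = h / (m_e * c)   # ≈ 2.43 pm

def compton_shift(theta_deg):
    """Wavelength increase of a photon scattered through theta_deg degrees."""
    return compton_wavelength * (1.0 - math.cos(math.radians(theta_deg)))

# The shift depends only on the scattering angle: zero at 0° and
# twice the Compton wavelength at 180°, as stated in the text.
print(compton_shift(0.0))      # 0.0
print(compton_shift(180.0))    # ≈ 4.85e-12 m
```

Because the shift is a few picometres, it is only measurable against comparably short wavelengths, which is why Compton's demonstration used X-rays.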
The inverse Compton effect is seen in astrophysics when a low energy photon (e.g. of the cosmic microwave background) bounces off a high energy (relativistic) electron. Such electrons are produced in supernovae and active galactic nuclei. If a charged particle such as an electron is accelerated – whether in a straight line or along a curved path – electromagnetic radiation is emitted by the particle. Within electron storage rings and circular particle accelerators known as synchrotrons, electrons are bent in a circular path and typically emit X-rays. This radially emitted (a ⊥ v) electromagnetic radiation produced when charged particles are accelerated is called synchrotron radiation. It is produced in synchrotrons using bending magnets, undulators and/or wigglers.[citation needed] The first observation came at the General Electric Research Laboratory in Schenectady, New York, on April 24, 1947, in the synchrotron built by a team led by Herb Pollock to test the phase-stability principle for RF accelerators.[note 8] When a technician was asked to look around the shielding with a large mirror to check for sparking in the tube, he saw a bright arc of light coming from the electron beam. Robert Langmuir is credited with recognizing it as synchrotron radiation or, as he called it, "Schwinger radiation", after Julian Schwinger. Classically, the radiated power P from an accelerated electron is P = e²a²/(6πε0c³); this is the Larmor formula, where ε0 is the vacuum permittivity, e is the elementary charge, c is the speed of light, and a is the acceleration. Within a circular orbit such as a storage ring, the acceleration in the non-relativistic case is simply the centripetal acceleration, v²/r.
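As a quick numerical illustration of the Larmor formula named above, the sketch below evaluates the classical radiated power for an electron on a small circular orbit. The orbit speed and radius are illustrative values, not taken from any experiment in the text.

```python
# Classical Larmor power P = e²a² / (6π ε0 c³) for an accelerated electron.
import math

e = 1.602176634e-19      # elementary charge (C)
eps0 = 8.8541878128e-12  # vacuum permittivity (F/m)
c = 2.99792458e8         # speed of light (m/s)

def larmor_power(a):
    """Radiated power (W) of an electron with acceleration a (m/s²)."""
    return e**2 * a**2 / (6.0 * math.pi * eps0 * c**3)

# Non-relativistic circular motion: a is the centripetal acceleration v²/r.
v, r = 1.0e6, 0.01       # 10^6 m/s on a 1 cm orbit (illustrative)
a = v**2 / r             # 10^14 m/s²
p = larmor_power(a)      # a few times 10^-26 W: negligible at these speeds

# The power scales with the square of the acceleration.
print(larmor_power(2 * a) / larmor_power(a))   # 4.0
```

The tiny result at non-relativistic speeds is why synchrotron losses only become significant for the highly relativistic beams discussed next.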
However, within a storage ring the motion is highly relativistic, and the acceleration can be obtained as follows: a = (1/m) dp/dτ = (γ/m) dp/dt = γ²v²/r, where v is the circular velocity, r is the radius of the circular accelerator, m is the rest mass of the charged particle, p is the momentum, τ is the proper time (t/γ), and γ is the Lorentz factor. The radiated power then becomes P = e²γ⁴v⁴/(6πε0c³r²). For highly relativistic particles, such that the velocity becomes nearly constant, the factor γ⁴ becomes the dominant variable in determining the loss rate, which means that the loss scales as the fourth power of the particle energy γmc²; and the inverse dependence of synchrotron radiation loss on radius argues for building the accelerator as large as possible. Facilities The Stanford Linear Accelerator Center is located near Stanford University, California. Construction of the 3-kilometre-long (2 mi) linear accelerator began in 1962 and was completed in 1967, and in 1968 the first experimental evidence of quarks was discovered, resulting in the 1990 Nobel Prize in Physics, shared by SLAC's Richard Taylor and Jerome I. Friedman and Henry Kendall of MIT. The accelerator could accelerate electrons to 20 GeV; while the approach was similar to Rutherford's scattering experiment, that experiment had operated with alpha particles at only 7 MeV. In the SLAC case the incident particle was an electron and the target a proton, and due to the short wavelength of the electron (owing to its high energy and momentum) it was able to probe into the proton. The Stanford Positron Electron Asymmetric Ring (SPEAR) addition to SLAC made further such discoveries possible, leading in 1974 to the discovery of the J/psi particle, which consists of a paired charm quark and anti-charm quark, and to another Nobel Prize in Physics in 1976. This was followed by Martin Perl's announcement of the discovery of the tau lepton, for which he shared the 1995 Nobel Prize in Physics.
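The γ⁴ scaling of synchrotron loss discussed above can be sketched numerically. The energy radiated per turn, U0 = e²γ⁴/(3ε0ρ), follows from multiplying the relativistic radiated power by the orbit period 2πρ/c (taking v ≈ c); the 3 GeV beam energy and 10 m bending radius below are illustrative, not the parameters of any particular facility.

```python
# Energy radiated per turn by an ultrarelativistic electron in a ring,
# U0 = e² γ⁴ / (3 ε0 ρ), illustrating the γ⁴ (fourth power of energy) scaling.
e = 1.602176634e-19       # elementary charge (C)
eps0 = 8.8541878128e-12   # vacuum permittivity (F/m)
M_E_C2 = 0.51099895e6     # electron rest energy (eV)

def loss_per_turn_eV(energy_eV, radius_m):
    """Energy radiated per turn, in eV, for an ultrarelativistic electron."""
    gamma = energy_eV / M_E_C2
    u0_joules = e**2 * gamma**4 / (3.0 * eps0 * radius_m)
    return u0_joules / e

u_3gev = loss_per_turn_eV(3.0e9, 10.0)   # ~0.7 MeV lost per turn
u_6gev = loss_per_turn_eV(6.0e9, 10.0)

# Gamma**4 dominance: doubling the beam energy raises the per-turn loss
# 16-fold, while doubling the bending radius would only halve it.
print(u_6gev / u_3gev)   # 16.0
```

This 16-fold penalty for doubling the energy, against only a factor of two saved by doubling the radius, is the quantitative form of the argument above for building accelerators as large as possible.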
SLAC aims to be a premier accelerator laboratory, pursuing strategic programs in particle physics, particle astrophysics and cosmology, as well as applications such as discovering new drugs for healing, new materials for electronics and new ways to produce clean energy and clean up the environment. Since November 2012 it has been led by Chi-Chang Kao, SLAC's fifth director, a noted X-ray scientist who came to SLAC in 2010 to serve as associate laboratory director for the Stanford Synchrotron Radiation Lightsource. A number of other scientific programs are also run at SLAC. RIKEN was founded in 1917 as a private research foundation in Tokyo, and is Japan's largest comprehensive research institution. Having grown rapidly in size and scope, it is today renowned for high-quality research in a diverse range of scientific disciplines, and encompasses a network of world-class research centers and institutes across Japan. The RIKEN RI Beam Factory, otherwise known as the RIKEN Nishina Centre (for Accelerator-Based Science), is a cyclotron-based research facility which began operating in 2007, 70 years after Japan's first cyclotron was built by Dr. Yoshio Nishina, after whom the facility is named. As of 2006, the facility has a world-class heavy-ion accelerator complex. This consists of a K540-MeV ring cyclotron (RRC) and two different injectors: a variable-frequency heavy-ion linac (RILAC) and a K70-MeV AVF cyclotron (AVF). It has a projectile-fragment separator (RIPS) which provides RI (Radioactive Isotope) beams of less than 60 amu, the world's most intense light-atomic-mass RI beams. Overseen by the Nishina Centre, the RI Beam Factory is utilized by users worldwide, promoting research in nuclear, particle and hadron physics. Promoting accelerator-applications research is an important mission of the Nishina Centre, which makes use of both domestic and overseas accelerator facilities.
The SCRIT (Self-Confining Radioactive isotope Ion Target) facility is currently under construction at the RIKEN RI Beam Factory (RIBF) in Japan. The project aims to investigate short-lived nuclei through elastic electron scattering tests of charge density distribution, with initial testing done with stable nuclei and the first electron scattering off unstable Sn isotopes planned for 2014. The investigation of short-lived radioactive nuclei (RI) by means of electron scattering had never been performed because of the inability to make these nuclei into a target; with the advent of a novel self-confining RI technique at the world's first facility dedicated to the study of the structure of short-lived nuclei by electron scattering, this research becomes possible. The principle of the technique is based on the ion-trapping phenomenon observed at electron storage ring facilities,[note 9] where it has an adverse effect on the performance of the rings. The novel idea to be employed at SCRIT is to use this ion trapping to make short-lived RIs into a target, as ions trapped on the electron beam, for the scattering experiments. This idea was first given a proof-of-principle study using the electron storage ring of Kyoto University, KSR, with a stable nucleus of 133Cs as a target, an electron beam energy of 120 MeV, a typical stored beam current of 75 mA, and a beam lifetime of 100 seconds. The results of this study were favorable, with elastically scattered electrons from the trapped Cs clearly visible. Notes Page 574: Il résulte donc de ces trois essais, que l'action répulsive que les deux balles électrifées de la même nature d'électricité exercent l'une sur l'autre, suit la raison inverse du carré des distances.
Translation: It follows therefore from these three tests, that the repulsive force that the two balls – [that were] electrified with the same kind of electricity – exert on each other, follows the inverse proportion of the square of the distance.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Middle_East#cite_ref-36]
Middle East The Middle East[b] is a geopolitical region encompassing the Arabian Peninsula, Egypt, Iran, Iraq, the Levant, and Turkey. The term came into widespread usage by Western European nations in the early 20th century as a replacement for the term Near East (both were in contrast to the Far East). The term "Middle East" has led to some confusion over its changing definitions. Since the late 20th century, it has been criticized as being too Eurocentric. The region includes the vast majority of the territories included in the closely associated definition of West Asia, but without the South Caucasus. It also includes all of Egypt (not just the Sinai region) and all of Turkey (including East Thrace). Most Middle Eastern countries (13 out of 18) are part of the Arab world. The three most populous countries in the region are Egypt, Iran, and Turkey, while Saudi Arabia is the largest Middle Eastern country by area. The history of the Middle East dates back to ancient times, and it was long considered the "cradle of civilization". The geopolitical importance of the region has been recognized, and competed over, for millennia. The Abrahamic religions (Judaism, Christianity, and Islam) have their origins in the Middle East. Arabs constitute the main ethnic group in the region, followed by Turks, Persians, Kurds, Jews, and Assyrians. The Middle East generally has a hot, arid climate, especially in the Arabian and Egyptian regions. Several major rivers provide irrigation to support agriculture in limited areas here, such as the Nile Delta in Egypt, the Tigris and Euphrates watersheds of Mesopotamia, and the basin of the Jordan River that spans most of the Levant. These regions are collectively known as the Fertile Crescent, and comprise the core of what historians had long referred to as the cradle of civilization; multiple regions of the world have since been classified as also having developed independent, original civilizations.
Conversely, the Levantine coast and most of Turkey have relatively temperate climates typical of the Mediterranean, with dry summers and cool, wet winters. Most of the countries that border the Persian Gulf have vast reserves of petroleum. Monarchs of the Arabian Peninsula in particular have benefitted economically from petroleum exports. Because of the arid climate and dependence on the fossil fuel industry, the Middle East is both a major contributor to climate change and a region that is expected to be severely adversely affected by it. Other concepts of the region exist, including the broader Middle East and North Africa (MENA), which includes states of the Maghreb and the Sudan. The term "Greater Middle East" also includes Afghanistan, Mauritania and Pakistan, as well as parts of East Africa, and sometimes Central Asia and the South Caucasus. Terminology The term "Middle East" may have originated in the 1850s in the British India Office. However, it became more widely known when United States naval strategist Alfred Thayer Mahan used the term in 1902 to "designate the area between Arabia and India". During this time the British and Russian empires were vying for influence in Central Asia, a rivalry that would become known as the Great Game. Mahan realized not only the strategic importance of the region, but also that of its center, the Persian Gulf. He labeled the area surrounding the Persian Gulf as the Middle East. He said that, beyond Egypt's Suez Canal, the Gulf was the most important passage for Britain to control in order to keep the Russians from advancing towards British India. Mahan first used the term in his article "The Persian Gulf and International Relations", published in September 1902 in the National Review, a British journal. The Middle East, if I may adopt a term which I have not seen, will some day need its Malta, as well as its Gibraltar; it does not follow that either will be in the Persian Gulf.
Naval force has the quality of mobility which carries with it the privilege of temporary absences; but it needs to find on every scene of operation established bases of refit, of supply, and in case of disaster, of security. The British Navy should have the facility to concentrate in force if occasion arise, about Aden, India, and the Persian Gulf. Mahan's article was reprinted in The Times and followed in October by a 20-article series entitled "The Middle Eastern Question", written by Sir Ignatius Valentine Chirol. During this series, Sir Ignatius expanded the definition of Middle East to include "those regions of Asia which extend to the borders of India or command the approaches to India." After the series ended in 1903, The Times removed quotation marks from subsequent uses of the term. Until World War II, it was customary to refer to areas centered on Turkey and the eastern shore of the Mediterranean as the "Near East", while the "Far East" centered on China, India and Japan. The Middle East was then defined as the area from Mesopotamia to Burma; namely, the area between the Near East and the Far East. This area broadly corresponds to South Asia. In the late 1930s, the British established the Middle East Command, which was based in Cairo, for its military forces in the region. After that time, the term "Middle East" gained broader usage in Europe and the United States. Following World War II, for example, the Middle East Institute was founded in Washington, D.C. in 1946. The corresponding adjective is Middle Eastern and the derived noun is Middle Easterner. While non-Eurocentric terms such as "Southwest Asia" or "Swasia" have been sparsely used, the classification of the African country, Egypt, among those counted in the Middle East challenges the usefulness of using such terms. The description Middle has also led to some confusion over changing definitions. 
Before the First World War, "Near East" was used in English to refer to the Balkans and the Ottoman Empire, while "Middle East" referred to the Caucasus, Persia, and Arabian lands, and sometimes Afghanistan, India and others. In contrast, "Far East" referred to the countries of East Asia (e.g. China, Japan, and Korea). With the collapse of the Ottoman Empire in 1918, "Near East" largely fell out of common use in English, while "Middle East" came to be applied to the emerging independent countries of the Islamic world. However, the usage "Near East" was retained by a variety of academic disciplines, including archaeology and ancient history. In their usage, the term describes an area identical to the term Middle East, which is not used by these disciplines (see ancient Near East).[citation needed] The first official use of the term "Middle East" by the United States government was in the 1957 Eisenhower Doctrine, which pertained to the Suez Crisis. Secretary of State John Foster Dulles defined the Middle East as "the area lying between and including Libya on the west and Pakistan on the east, Syria and Iraq on the North and the Arabian peninsula to the south, plus the Sudan and Ethiopia." In 1958, the State Department explained that the terms "Near East" and "Middle East" were interchangeable, and defined the region as including only Egypt, Syria, Israel, Lebanon, Jordan, Iraq, Saudi Arabia, Kuwait, Bahrain, and Qatar. Since the late 20th century, scholars and journalists from the region, such as journalist Louay Khraish and historian Hassan Hanafi, have criticized the use of "Middle East" as a Eurocentric and colonialist term. The Associated Press Stylebook of 2004 says that Near East formerly referred to the farther west countries while Middle East referred to the eastern ones, but that now they are synonymous. It instructs: Use Middle East unless Near East is used by a source in a story. Mideast is also acceptable, but Middle East is preferred.
European languages have adopted terms similar to Near East and Middle East. Since these are based on a relative description, the meanings depend on the country and are generally different from the English terms. In German the term Naher Osten (Near East) is still in common use (nowadays the term Mittlerer Osten is more and more common in press texts translated from English sources, albeit having a distinct meaning). In four Slavic languages, terms meaning Near East (Russian Ближний Восток or Blizhniy Vostok, Bulgarian Близкия Изток, Polish Bliski Wschód, Croatian Bliski istok) are the only appropriate ones for the region. However, some European languages do have "Middle East" equivalents, such as French Moyen-Orient, Swedish Mellanöstern, Spanish Oriente Medio or Medio Oriente, Greek Μέση Ανατολή (Mesi Anatoli), and Italian Medio Oriente.[c] Perhaps because of the political influence of the United States and Europe, and the prominence of the Western press, the Arabic equivalent of Middle East (Arabic: الشرق الأوسط ash-Sharq al-Awsaṭ) has become standard usage in the mainstream Arabic press. It carries the same meaning as the term "Middle East" in North American and Western European usage. The designation Mashriq, also from the Arabic root for East, denotes a variously defined region around the Levant, the eastern part of the Arabic-speaking world (as opposed to the Maghreb, the western part). Even though the term originated in the West, countries of the Middle East that use languages other than Arabic also use that term in translation. For instance, the Persian equivalent for Middle East is خاورمیانه (Khāvar-e miyāneh), the Hebrew is המזרח התיכון (hamizrach hatikhon), and the Turkish is Orta Doğu. Countries and territory Traditionally included within the Middle East are Arabia, Asia Minor, East Thrace, Egypt, Iran, the Levant, Mesopotamia, and the Socotra Archipelago. The region includes 17 UN-recognized countries and one British Overseas Territory.
Various concepts are often paralleled to the Middle East, most notably the Near East, Fertile Crescent, and Levant. These are geographical concepts, which refer to large sections of the modern-day Middle East, with the Near East being the closest to the Middle East in its geographical meaning. Because it is primarily Arabic-speaking, the Maghreb region of North Africa is sometimes included. "Greater Middle East" is a political term coined by the second Bush administration in the first decade of the 21st century to denote various countries pertaining to the Muslim world, specifically Afghanistan, Iran, Pakistan, and Turkey. Various Central Asian countries are sometimes also included. History The Middle East lies at the juncture of Africa and Eurasia and of the Indian Ocean and the Mediterranean Sea (see also: Indo-Mediterranean). It is the birthplace and spiritual center of religions such as Christianity, Islam, Judaism, Manichaeism, Yazidism, Druze, Yarsan, and Mandaeism, and in Iran, Mithraism, Zoroastrianism, Manichaeism, and the Baháʼí Faith. Throughout its history the Middle East has been a major center of world affairs; a strategically, economically, politically, culturally, and religiously sensitive area. It is one of the regions where agriculture was independently discovered, and from the Middle East it spread, during the Neolithic, to different regions of the world such as Europe, the Indus Valley and Eastern Africa. Prior to the formation of civilizations, advanced cultures formed all over the Middle East during the Stone Age. The search for agricultural lands by agriculturalists, and for pastoral lands by herdsmen, meant that different migrations took place within the region and shaped its ethnic and demographic makeup. The Middle East is widely and most famously known as the cradle of civilization.
The world's earliest civilizations, Mesopotamia (Sumer, Akkad, Assyria and Babylonia), ancient Egypt and Kish in the Levant, all originated in the Fertile Crescent and Nile Valley regions of the ancient Near East. These were followed by the Hittite, Greek, Hurrian and Urartian civilisations of Asia Minor; the Elamite, Persian and Median civilizations in Iran; and the civilizations of the Levant (such as Ebla, Mari, Nagar, Ugarit, Canaan, Aramea, Mitanni, Phoenicia and Israel) and the Arabian Peninsula (Magan, Sheba, Ubar). The Near East was first largely unified under the Neo-Assyrian Empire, then under the Achaemenid Empire, followed later by the Macedonian Empire and, after this, to some degree by the Iranian empires (namely the Parthian and Sassanid Empires), the Roman Empire and the Byzantine Empire. The region served as an intellectual and economic center of the Roman Empire and played an exceptionally important role due to its position on the frontier with the Sassanid Empire. Thus, the Romans stationed up to five or six of their legions in the region for the sole purpose of defending it from Sassanid and Bedouin raids and invasions. From the 4th century CE onwards, the Middle East became the center of the two main powers at the time, the Byzantine Empire and the Sassanid Empire. However, it would be the later Islamic caliphates of the Middle Ages, beginning with the Islamic conquest of the region in the 7th century AD and ushering in the Islamic Golden Age, that would first unify the entire Middle East as a distinct region and create the dominant Islamic Arab ethnic identity that largely (but not exclusively) persists today. The four caliphates that dominated the Middle East for more than 600 years were the Rashidun Caliphate, the Umayyad Caliphate, the Abbasid Caliphate and the Fatimid Caliphate.
Additionally, the Mongols would come to dominate the region, the Kingdom of Armenia would incorporate parts of the region into its domain, the Seljuks would rule the region and spread Turko-Persian culture, and the Franks would found the Crusader states that would stand for roughly two centuries. Josiah Russell estimates the population of what he calls "Islamic territory" as roughly 12.5 million in 1000 – Anatolia 8 million, Syria 2 million, and Egypt 1.5 million. From the 16th century onward, the Middle East came to be dominated, once again, by two main powers: the Ottoman Empire and the Safavid dynasty. The modern Middle East began after World War I, when the Ottoman Empire, which was allied with the Central Powers, was defeated by the Allies and partitioned into a number of separate nations, initially under British and French Mandates. Other defining events in this transformation included the establishment of Israel in 1948 and the eventual departure of European powers, notably Britain and France, by the end of the 1960s. They were supplanted in some part by the rising influence of the United States from the 1970s onwards. In the 20th century, the region's significant stocks of crude oil gave it new strategic and economic importance. Mass production of oil began around 1945, with Saudi Arabia, Iran, Kuwait, Iraq, and the United Arab Emirates having large quantities of oil. Estimated oil reserves, especially in Saudi Arabia and Iran, are some of the highest in the world, and the international oil cartel OPEC is dominated by Middle Eastern countries. During the Cold War, the Middle East was a theater of ideological struggle between the two superpowers and their allies: NATO and the United States on one side, and the Soviet Union and Warsaw Pact on the other, as they competed to influence regional allies. Besides the political reasons there was also the "ideological conflict" between the two systems.
Moreover, as Louise Fawcett argues, among many important areas of contention, or perhaps more accurately of anxiety, were, first, the desires of the superpowers to gain strategic advantage in the region, second, the fact that the region contained some two-thirds of the world's oil reserves in a context where oil was becoming increasingly vital to the economy of the Western world [...] Within this contextual framework, the United States sought to divert the Arab world from Soviet influence. Throughout the 20th and 21st centuries, the region has experienced both periods of relative peace and tolerance and periods of conflict, particularly between Sunnis and Shiites. Geography In 2018, the MENA region emitted 3.2 billion tonnes of carbon dioxide and produced 8.7% of global greenhouse gas emissions (GHG) despite making up only 6% of the global population. These emissions are mostly from the energy sector, an integral component of many Middle Eastern and North African economies due to the extensive oil and natural gas reserves that are found within the region. The Middle East region is one of the most vulnerable to climate change. The impacts include increases in drought conditions, aridity, heatwaves and sea level rise. Sharp global temperature and sea level changes, shifting precipitation patterns and increased frequency of extreme weather events are some of the main impacts of climate change as identified by the Intergovernmental Panel on Climate Change (IPCC). The MENA region is especially vulnerable to such impacts due to its arid and semi-arid environment, facing climatic challenges such as low rainfall, high temperatures and dry soil. The climatic conditions that foster such challenges for MENA are projected by the IPCC to worsen throughout the 21st century. If greenhouse gas emissions are not significantly reduced, part of the MENA region risks becoming uninhabitable before the year 2100.
Climate change is expected to put significant strain on already scarce water and agricultural resources within the MENA region, threatening the national security and political stability of all included countries. Over 60 percent of the region's population lives in high and very high water-stressed areas, compared to the global average of 35 percent. This has prompted some MENA countries to engage with the issue of climate change on an international level through environmental accords such as the Paris Agreement. Law and policy are also being established on a national level amongst MENA countries, with a focus on the development of renewable energies. Economy Middle Eastern economies range from very poor (such as Gaza and Yemen) to extremely wealthy (such as Qatar and the UAE). According to the International Monetary Fund, the three largest Middle Eastern economies in nominal GDP in 2023 were Saudi Arabia ($1.06 trillion), Turkey ($1.03 trillion), and Israel ($0.54 trillion). For nominal GDP per person, the highest ranking countries are Qatar ($83,891), Israel ($55,535), the United Arab Emirates ($49,451) and Cyprus ($33,807). Turkey ($3.6 trillion), Saudi Arabia ($2.3 trillion), and Iran ($1.7 trillion) had the largest economies in terms of GDP PPP. For GDP PPP per person, the highest-ranking countries are Qatar ($124,834), the United Arab Emirates ($88,221), Saudi Arabia ($64,836), Bahrain ($60,596) and Israel ($54,997). The lowest-ranking country in the Middle East, in terms of nominal GDP per capita, is Yemen ($573). The economic structures of Middle Eastern nations differ: while some are heavily dependent on the export of oil and oil-related products (Saudi Arabia, the UAE and Kuwait), others have a highly diverse economic base (such as Cyprus, Israel, Turkey and Egypt).
Industries of the Middle Eastern region include oil and oil-related products, agriculture, cotton, cattle, dairy, textiles, leather products, surgical instruments, and defence equipment (guns, ammunition, tanks, submarines, fighter jets, UAVs, and missiles). Banking is an important sector, especially for the UAE and Bahrain. With the exception of Cyprus, Turkey, Egypt, Lebanon and Israel, tourism has been a relatively undeveloped area of the economy, in part because of the socially conservative nature of the region as well as political turmoil in certain regions. Since the end of the COVID-19 pandemic, however, countries such as the UAE, Bahrain, and Jordan have begun attracting greater numbers of tourists because of improving tourist facilities and the relaxing of tourism-related restrictive policies. Unemployment is high in the Middle East and North Africa region, particularly among people aged 15–29, a demographic representing 30% of the region's population. The total regional unemployment rate in 2025 is 10.8%, and the youth rate is as high as 28%. Demographics Arabs constitute the largest ethnic group in the Middle East, followed by various Iranian peoples and then by Turkic peoples (Turkish, Azeris, Syrian Turkmen, and Iraqi Turkmen). Native ethnic groups of the region include, in addition to Arabs, Arameans, Assyrians, Baloch, Berbers, Copts, Druze, Greek Cypriots, Jews, Kurds, Lurs, Mandaeans, Persians, Samaritans, Shabaks, Tats, and Zazas. European ethnic groups that form a diaspora in the region include Albanians, Bosniaks, Circassians (including Kabardians), Crimean Tatars, Greeks, Franco-Levantines, Italo-Levantines, and Iraqi Turkmens. Among other migrant populations are Chinese, Filipinos, Indians, Indonesians, Pakistanis, Pashtuns, Romani, and Afro-Arabs. "Migration has always provided an important vent for labor market pressures in the Middle East.
For the period between the 1970s and 1990s, the Arab states of the Persian Gulf in particular provided a rich source of employment for workers from Egypt, Yemen and the countries of the Levant, while Europe had attracted young workers from North African countries due both to proximity and the legacy of colonial ties between France and the majority of North African states." According to the International Organization for Migration, there are 13 million first-generation migrants from Arab nations in the world, of which 5.8 million reside in other Arab countries. Expatriates from Arab countries contribute to the circulation of financial and human capital in the region and thus significantly promote regional development. In 2009 Arab countries received a total of US$35.1 billion in remittance in-flows, and remittances sent to Jordan, Egypt and Lebanon from other Arab countries are 40 to 190 per cent higher than trade revenues between these and other Arab countries. In Somalia, the Somali Civil War has greatly increased the size of the Somali diaspora, as many of the best educated Somalis left for Middle Eastern countries as well as Europe and North America. Non-Arab Middle Eastern countries such as Turkey, Israel and Iran are also subject to important migration dynamics. A fair proportion of those migrating from Arab nations are from ethnic and religious minorities facing persecution and are not necessarily ethnic Arabs, Iranians or Turks.[citation needed] Large numbers of Kurds, Jews, Assyrians, Greeks and Armenians, as well as many Mandeans, have left nations such as Iraq, Iran, Syria and Turkey for these reasons during the last century. In Iran, many religious minorities such as Christians, Baháʼís, Jews and Zoroastrians have left since the Islamic Revolution of 1979. The Middle East is very diverse when it comes to religions, many of which originated there. 
Islam is the largest religion in the Middle East, but other faiths that originated there, such as Judaism and Christianity, are also well represented. Christian communities have played a vital role in the Middle East, and they represent 78% of the population of Cyprus and 40.5% of that of Lebanon, where the Lebanese president, half of the cabinet, and half of the parliament follow one of the various Lebanese Christian rites. There are also important minority religions like the Baháʼí Faith, Yarsanism, Yazidism, Zoroastrianism, Mandaeism, Druze, and Shabakism, and in ancient times the region was home to Mesopotamian religions, Canaanite religions, Manichaeism, Mithraism and various monotheist gnostic sects. The top six languages, in terms of numbers of speakers, are Arabic, Persian, Turkish, Kurdish, Modern Hebrew and Greek. About 20 minority languages are also spoken in the Middle East. Arabic, with all its dialects, is the most widely spoken language in the Middle East, with Literary Arabic being official in all North African and in most West Asian countries. Arabic dialects are also spoken in some adjacent areas in neighbouring Middle Eastern non-Arab countries. It is a member of the Semitic branch of the Afro-Asiatic languages. Several Modern South Arabian languages such as Mehri and Soqotri are also spoken in Yemen and Oman. Another Semitic language is Aramaic, whose dialects are spoken mainly by Assyrians and Mandaeans, with Western Aramaic still spoken in two villages near Damascus, Syria. There is also an Oasis Berber-speaking community in Egypt, where the language is also known as Siwa. It is a non-Semitic Afro-Asiatic sister language. Persian is the second most spoken language. While it is primarily spoken in Iran and some border areas of neighbouring countries, Iran is one of the region's largest and most populous countries. It belongs to the Indo-Iranian branch of the family of Indo-European languages. 
Other Western Iranic languages spoken in the region include Achomi, Daylami, Kurdish dialects, Semnani and Luri, amongst many others. Turkish, a close third among the most widely spoken languages, is largely confined to Turkey, which is also one of the region's largest and most populous countries, but it is present in areas of neighbouring countries. It is a member of the Turkic languages, which have their origins in East Asia. Another Turkic language, Azerbaijani, is spoken by Azerbaijanis in Iran. The fourth most widely spoken language, Kurdish, is spoken in Iran, Iraq, Syria and Turkey; Sorani Kurdish is the second official language of Iraq (instated after the 2005 constitution) after Arabic. Hebrew is the official language of Israel, with Arabic given a special status after the 2018 Basic Law lowered it from the official-language status it held prior to 2018. Hebrew is spoken and used by over 80% of Israel's population, with the other 20% using Arabic. Modern Hebrew only began to be spoken in the 20th century, after being revived in the late 19th century by Eliezer Ben-Yehuda (Eliezer Perlman) and European Jewish settlers, with the first native Hebrew speaker being born in 1882. Greek is one of the two official languages of Cyprus, and the country's main language. Small communities of Greek speakers exist all around the Middle East; until the 20th century it was also widely spoken in Asia Minor (being the second most spoken language there, after Turkish) and Egypt. In antiquity, Ancient Greek was the lingua franca for many areas of the western Middle East, and until the Muslim expansion it was widely spoken there as well. Until the late 11th century, it was also the main spoken language in Asia Minor; after that it was gradually replaced by Turkish as the Anatolian Turks expanded and the local Greeks were assimilated, especially in the interior. English is one of the official languages of Akrotiri and Dhekelia. 
It is also commonly taught and used as a second language in countries such as Egypt, Jordan, Iran, Iraq, Qatar, Bahrain, the United Arab Emirates and Kuwait. It is also a main language in some emirates of the United Arab Emirates, and it is spoken as a native language by Jewish immigrants from Anglophone countries (the UK, the US, Australia) in Israel, where it is widely understood as a second language. French is taught and used in many government facilities and media in Lebanon, and is taught in some primary and secondary schools of Egypt and Syria; due to widespread immigration of French Jews to Israel, it is the native language of approximately 200,000 Jews in Israel. Maltese, a Semitic language mainly spoken in Europe, is used by the Franco-Maltese diaspora in Egypt. Armenian speakers are also to be found in the region, and Georgian is spoken by the Georgian diaspora. Russian is spoken by a large portion of the Israeli population because of emigration in the late 1990s; it is a popular unofficial language in Israel today, and news, radio and signage in Russian can be found around the country, after Hebrew and Arabic. Circassian is also spoken by the diaspora in the region and by almost all Circassians in Israel, who speak Hebrew and English as well. The largest Romanian-speaking community in the Middle East is found in Israel, where as of 1995[update] Romanian is spoken by 5% of the population.[d] Bengali, Hindi and Urdu are widely spoken by migrant communities in many Middle Eastern countries, such as Saudi Arabia (where 20–25% of the population is South Asian), the United Arab Emirates (where 50–55% of the population is South Asian), and Qatar, which have large numbers of Pakistani, Bangladeshi and Indian immigrants. Culture The Middle East has recently become more prominent in hosting global sport events due to its wealth and desire to diversify its economy. The South Asian diaspora is a major backer of cricket in the region. 
========================================
[SOURCE: https://en.wikipedia.org/wiki/Internet#cite_note-171] | [TOKENS: 9291]
Internet The Internet (or internet)[a] is the global system of interconnected computer networks that uses the Internet protocol suite (TCP/IP)[b] to communicate between networks and devices. It is a network of networks that comprises private, public, academic, business, and government networks of local to global scope, linked by electronic, wireless, and optical networking technologies. The Internet carries a vast range of information services and resources, such as the interlinked hypertext documents and applications of the World Wide Web (WWW), electronic mail, discussion groups, internet telephony, streaming media and file sharing. Most traditional communication media, including telephone, radio, television, paper mail, newspapers, and print publishing, have been transformed by the Internet, giving rise to new media such as email, online music, digital newspapers, news aggregators, and audio and video streaming websites. The Internet has enabled and accelerated new forms of personal interaction through instant messaging, Internet forums, and social networking services. Online shopping has also grown to occupy a significant market across industries, enabling firms to extend brick and mortar presences to serve larger markets. Business-to-business and financial services on the Internet affect supply chains across entire industries. The origins of the Internet date back to research that enabled the time-sharing of computer resources, the development of packet switching, and the design of computer networks for data communication. The set of communication protocols to enable internetworking on the Internet arose from research and development commissioned in the 1970s by the Defense Advanced Research Projects Agency (DARPA) of the United States Department of Defense in collaboration with universities and researchers across the United States and in the United Kingdom and France. 
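The protocol suite described above can be exercised directly from application code. As a minimal illustration (a sketch using Python's standard `socket` module, not part of the source text), the following opens a TCP connection over the loopback interface and passes bytes through the local TCP/IP stack:

```python
import socket

# Listening socket: the OS assigns a free port on the loopback interface.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

# Client socket connects through the local TCP/IP stack.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
conn, _addr = server.accept()

client.sendall(b"hello, internet")
data = conn.recv(1024)
print(data.decode())  # hello, internet

client.close()
conn.close()
server.close()
```

The same socket API underlies clients and servers on the public Internet; only the addresses differ.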
The Internet has no single centralized governance in either technological implementation or policies for access and usage. Each constituent network sets its own policies. The overarching definitions of the two principal name spaces on the Internet, the Internet Protocol address (IP address) space and the Domain Name System (DNS), are directed by a maintainer organization, the Internet Corporation for Assigned Names and Numbers (ICANN). The technical underpinning and standardization of the core protocols is an activity of the non-profit Internet Engineering Task Force (IETF). Terminology The word internetted was used as early as 1849, meaning interconnected or interwoven. The word Internet was used in 1945 by the United States War Department in a radio operator's manual, and in 1974 as the shorthand form of Internetwork. Today, the term Internet most commonly refers to the global system of interconnected computer networks, though it may also refer to any group of smaller networks. The word Internet may be capitalized as a proper noun, although this is becoming less common. This reflects the tendency in English to capitalize new terms and move them to lowercase as they become familiar. The word is sometimes still capitalized to distinguish the global internet from smaller networks, though many publications, including the AP Stylebook since 2016, recommend the lowercase form in every case. In 2016, the Oxford English Dictionary found that, based on a study of around 2.5 billion printed and online sources, "Internet" was capitalized in 54% of cases. The terms Internet and World Wide Web are often used interchangeably; it is common to speak of "going on the Internet" when using a web browser to view web pages. However, the World Wide Web, or the Web, is only one of a large number of Internet services. It is the global collection of web pages, documents and other web resources linked by hyperlinks and URLs. 
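The IP address space, one of the two principal name spaces noted above, can be inspected with Python's standard `ipaddress` module (a brief sketch; the addresses shown are illustrative):

```python
import ipaddress

# IPv4 addresses are 32 bits (~4.3 billion addresses);
# IPv6 addresses are 128 bits.
addr4 = ipaddress.ip_address("93.184.216.34")
addr6 = ipaddress.ip_address("2606:2800:220:1:248:1893:25c8:1946")
print(addr4.version, addr6.version)  # 4 6

# A /24 network spans 256 addresses.
net = ipaddress.ip_network("192.0.2.0/24")
print(net.num_addresses)  # 256
```

The Domain Name System maps human-readable names onto this numeric space; ICANN coordinates the allocation of both.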
History In the 1960s, computer scientists began developing systems for time-sharing of computer resources. J. C. R. Licklider proposed the idea of a universal network while working at Bolt Beranek & Newman and, later, leading the Information Processing Techniques Office at the Advanced Research Projects Agency (ARPA) of the United States Department of Defense. Research into packet switching,[c] one of the fundamental Internet technologies, started in the work of Paul Baran at RAND in the early 1960s and, independently, Donald Davies at the United Kingdom's National Physical Laboratory in 1965. After the Symposium on Operating Systems Principles in 1967, packet switching from the proposed NPL network was incorporated into the design of the ARPANET, an experimental resource sharing network proposed by ARPA. ARPANET development began with two network nodes which were interconnected between the University of California, Los Angeles and the Stanford Research Institute on 29 October 1969. The third site was at the University of California, Santa Barbara, followed by the University of Utah. By the end of 1971, 15 sites were connected to the young ARPANET. Thereafter, the ARPANET gradually developed into a decentralized communications network, connecting remote centers and military bases in the United States. Other user networks and research networks, such as the Merit Network and CYCLADES, were developed in the late 1960s and early 1970s. Early international collaborations for the ARPANET were rare. Connections were made in 1973 to Norway (NORSAR and, later, NDRE) and to Peter Kirstein's research group at University College London, which provided a gateway to British academic networks, the first internetwork for resource sharing. ARPA projects, the International Network Working Group and commercial initiatives led to the development of various protocols and standards by which multiple separate networks could become a single network, or a network of networks. 
In 1974, Vint Cerf at Stanford University and Bob Kahn at DARPA published a proposal for "A Protocol for Packet Network Intercommunication". Cerf and his graduate students used the term internet as a shorthand for internetwork in RFC 675. The Internet Experiment Notes and later RFCs repeated this use. The work of Louis Pouzin and Robert Metcalfe had important influences on the resulting TCP/IP design. National PTTs and commercial providers developed the X.25 standard and deployed it on public data networks. The ARPANET initially served as a backbone for the interconnection of regional academic and military networks in the United States to enable resource sharing. Access to the ARPANET was expanded in 1981 when the National Science Foundation (NSF) funded the Computer Science Network (CSNET). In 1982, the Internet Protocol Suite (TCP/IP) was standardized, which facilitated worldwide proliferation of interconnected networks. TCP/IP network access expanded again in 1986 when the National Science Foundation Network (NSFNet) provided access to supercomputer sites in the United States for researchers, first at speeds of 56 kbit/s and later at 1.5 Mbit/s and 45 Mbit/s. The NSFNet expanded into academic and research organizations in Europe, Australia, New Zealand and Japan in 1988–89. Although other network protocols such as UUCP and PTT public data networks had global reach well before this time, this marked the beginning of the Internet as an intercontinental network. Commercial Internet service providers emerged in 1989 in the United States and Australia. The ARPANET was decommissioned in 1990. The linking of commercial networks and enterprises by the early 1990s, as well as the advent of the World Wide Web, marked the beginning of the transition to the modern Internet. 
Steady advances in semiconductor technology and optical networking created new economic opportunities for commercial involvement in the expansion of the network in its core and for delivering services to the public. In mid-1989, MCI Mail and Compuserve established connections to the Internet, delivering email and public access products to the half million users of the Internet. Just months later, on 1 January 1990, PSInet launched an alternate Internet backbone for commercial use; one of the networks that added to the core of the commercial Internet of later years. In March 1990, the first high-speed T1 (1.5 Mbit/s) link between the NSFNET and Europe was installed between Cornell University and CERN, allowing much more robust communications than were possible with satellites. Later in 1990, Tim Berners-Lee began writing WorldWideWeb, the first web browser, after two years of lobbying CERN management. By Christmas 1990, Berners-Lee had built all the tools necessary for a working Web: the HyperText Transfer Protocol (HTTP) 0.9, the HyperText Markup Language (HTML), the first Web browser (which was also an HTML editor and could access Usenet newsgroups and FTP files), the first HTTP server software (later known as CERN httpd), the first web server, and the first Web pages that described the project itself. In 1991 the Commercial Internet eXchange was founded, allowing PSInet to communicate with the other commercial networks CERFnet and Alternet. Stanford Federal Credit Union was the first financial institution to offer online Internet banking services to all of its members in October 1994. In 1996, OP Financial Group, also a cooperative bank, became the second online bank in the world and the first in Europe. By 1995, the Internet was fully commercialized in the U.S. when the NSFNet was decommissioned, removing the last restrictions on use of the Internet to carry commercial traffic. 
As technology advanced and commercial opportunities fueled reciprocal growth, the volume of Internet traffic began to exhibit growth characteristics similar to those of the scaling of MOS transistors, exemplified by Moore's law: doubling every 18 months. This growth, formalized as Edholm's law, was catalyzed by advances in MOS technology, laser light wave systems, and noise performance. Since 1995, the Internet has tremendously impacted culture and commerce, including the rise of near-instant communication by email, instant messaging, telephony (Voice over Internet Protocol or VoIP), two-way interactive video calls, and the World Wide Web. Increasing amounts of data are transmitted at higher and higher speeds over fiber optic networks operating at 1 Gbit/s, 10 Gbit/s, or more. The Internet continues to grow, driven by ever-greater amounts of online information and knowledge, commerce, entertainment and social networking services. During the late 1990s, it was estimated that traffic on the public Internet grew by 100 percent per year, while the mean annual growth in the number of Internet users was thought to be between 20% and 50%. This growth is often attributed to the lack of central administration, which allows organic growth of the network, as well as the non-proprietary nature of the Internet protocols, which encourages vendor interoperability and prevents any one company from exerting too much control over the network. In November 2006, the Internet was included on USA Today's list of the New Seven Wonders. As of 31 March 2011[update], the estimated total number of Internet users was 2.095 billion (30% of world population). It is estimated that in 1993 the Internet carried only 1% of the information flowing through two-way telecommunication. By 2000 this figure had grown to 51%, and by 2007 more than 97% of all telecommunicated information was carried over the Internet. 
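A doubling every 18 months, as in the Moore's-law comparison above, corresponds to an exponential growth factor of 2^(t/1.5) over t years; a small illustrative sketch:

```python
def traffic_growth(years, doubling_period_years=1.5):
    """Growth factor for a quantity that doubles every `doubling_period_years`."""
    return 2 ** (years / doubling_period_years)

print(traffic_growth(3))    # 4.0  (two doublings in three years)
print(traffic_growth(7.5))  # 32.0 (five doublings)
```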
Modern smartphones can access the Internet through cellular carrier networks, and internet usage by mobile and tablet devices exceeded desktop worldwide for the first time in October 2016. As of 2018[update], 80% of the world's population were covered by a 4G network. The International Telecommunication Union (ITU) estimated that, by the end of 2017, 48% of individual users regularly connect to the Internet, up from 34% in 2012. Mobile Internet connectivity has played an important role in expanding access in recent years, especially in Asia and the Pacific and in Africa. The number of unique mobile cellular subscriptions increased from 3.9 billion in 2012 to 4.8 billion in 2016, two-thirds of the world's population, with more than half of subscriptions located in Asia and the Pacific. The limits that users face on accessing information via mobile applications coincide with a broader process of fragmentation of the Internet. Fragmentation restricts access to media content and tends to affect the poorest users the most. One solution, zero-rating, is the practice of Internet service providers allowing users free connectivity to access specific content or applications without cost. Social impact The Internet has enabled new forms of social interaction, activities, and social associations, giving rise to the scholarly study of the sociology of the Internet. Between 2000 and 2009, the number of Internet users globally rose from 390 million to 1.9 billion. By 2010, 22% of the world's population had access to computers with 1 billion Google searches every day, 300 million Internet users reading blogs, and 2 billion videos viewed daily on YouTube. In 2014 the world's Internet users surpassed 3 billion or 44 percent of world population, but two-thirds came from the richest countries, with 78 percent of Europeans using the Internet, followed by 57 percent of the Americas. 
However, by 2018, Asia alone accounted for 51% of all Internet users, with 2.2 billion out of the 4.3 billion Internet users in the world. China's Internet users surpassed a major milestone in 2018, when the country's Internet regulatory authority, China Internet Network Information Centre, announced that China had 802 million users. China was followed by India, with some 700 million users, with the United States third with 275 million users. However, in terms of penetration, in 2022, China had a 70% penetration rate compared to India's 60% and the United States's 90%. In 2022, 54% of the world's Internet users were based in Asia, 14% in Europe, 7% in North America, 10% in Latin America and the Caribbean, 11% in Africa, 4% in the Middle East and 1% in Oceania. In 2019, Kuwait, Qatar, the Falkland Islands, Bermuda and Iceland had the highest Internet penetration by the number of users, with 93% or more of the population with access. As of 2022, it was estimated that 5.4 billion people use the Internet, more than two-thirds of the world's population. Early computer systems were limited to the characters in the American Standard Code for Information Interchange (ASCII), a subset of the Latin alphabet. After English (27%), the most requested languages on the World Wide Web are Chinese (25%), Spanish (8%), Japanese (5%), Portuguese and German (4% each), Arabic, French and Russian (3% each), and Korean (2%). Modern character encoding standards, such as Unicode, allow for development and communication in the world's widely used languages. However, some glitches such as mojibake (incorrect display of some languages' characters) still remain. 
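Mojibake, mentioned above, arises when bytes are decoded with a different character encoding than the one used to produce them; a minimal Python sketch:

```python
# "café" encoded as UTF-8 but mistakenly decoded as Latin-1 yields mojibake.
text = "café"
garbled = text.encode("utf-8").decode("latin-1")
print(garbled)  # café

# Reversing the mismatch recovers the original text.
restored = garbled.encode("latin-1").decode("utf-8")
print(restored)  # café
```

Consistent use of a single standard such as UTF-8 end to end avoids the problem.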
Several neologisms exist that refer to Internet users: Netizen (as in "citizen of the net") refers to those actively involved in improving online communities, the Internet in general or surrounding political affairs and rights such as free speech; Internaut refers to operators or technically highly capable users of the Internet; digital citizen refers to a person using the Internet in order to engage in society, politics, and government participation. The Internet allows greater flexibility in working hours and location, especially with the spread of unmetered high-speed connections. The Internet can be accessed almost anywhere by numerous means, including through mobile Internet devices. Mobile phones, datacards, handheld game consoles and cellular routers allow users to connect to the Internet wirelessly.[citation needed] Educational material at all levels from pre-school (e.g. CBeebies) to post-doctoral (e.g. scholarly literature through Google Scholar) is available on websites. The internet has facilitated the development of virtual universities and distance education, enabling both formal and informal education. The Internet allows researchers to conduct research remotely via virtual laboratories, with profound changes in reach and generalizability of findings as well as in communication between scientists and in the publication of results. By the late 2010s the Internet had been described as "the main source of scientific information" for the majority of the global North population. Wikis have also been used in the academic community for sharing and dissemination of information across institutional and international boundaries. In those settings, they have been found useful for collaboration on grant writing, strategic planning, departmental documentation, and committee work. The United States Patent and Trademark Office uses a wiki to allow the public to collaborate on finding prior art relevant to examination of pending patent applications. 
Queens, New York has used a wiki to allow citizens to collaborate on the design and planning of a local park. The English Wikipedia has the largest user base among wikis on the World Wide Web and ranks in the top 10 among all sites in terms of traffic. The Internet has been a major outlet for leisure activity since its inception, with entertaining social experiments such as MUDs and MOOs being conducted on university servers, and humor-related Usenet groups receiving much traffic. Many Internet forums have sections devoted to games and funny videos. Another area of leisure activity on the Internet is multiplayer gaming. This form of recreation creates communities, where people of all ages and origins enjoy the fast-paced world of multiplayer games. These range from MMORPG to first-person shooters, from role-playing video games to online gambling. While online gaming has been around since the 1970s, modern modes of online gaming began with subscription services such as GameSpy and MPlayer. Streaming media is the real-time delivery of digital media for immediate consumption or enjoyment by end users. Streaming companies (such as Netflix, Disney+, Amazon's Prime Video, Mubi, Hulu, and Apple TV+) now dominate the entertainment industry, eclipsing traditional broadcasters. Audio streamers such as Spotify and Apple Music also have significant market share in the audio entertainment market. Video sharing websites are also a major factor in the entertainment ecosystem. YouTube was founded on 15 February 2005 and is now the leading website for free streaming video with more than two billion users. It uses a web player to stream and show video files. YouTube users watch hundreds of millions, and upload hundreds of thousands, of videos daily. 
Other video sharing websites include Vimeo, Instagram and TikTok.[citation needed] Although many governments have attempted to restrict both Internet pornography and online gambling, this has generally failed to stop their widespread popularity. A number of advertising-funded ostensible video sharing websites known as "tube sites" have been created to host shared pornographic video content. Due to laws requiring the documentation of the origin of pornography, these websites now largely operate in conjunction with pornographic movie studios and their own independent creator networks, acting as de-facto video streaming services. Major players in this field include the market leader Aylo, the operator of PornHub and numerous other branded sites, as well as other independent operators such as xHamster and Xvideos. As of 2023[update], Internet traffic to pornographic video sites rivalled that of mainstream video streaming and sharing services. Remote work is facilitated by tools such as groupware, virtual private networks, conference calling, videotelephony, and VoIP so that work may be performed from any location, such as the worker's home.[citation needed] The spread of low-cost Internet access in developing countries has opened up new possibilities for peer-to-peer charities, which allow individuals to contribute small amounts to charitable projects for other individuals. Websites, such as DonorsChoose and GlobalGiving, allow small-scale donors to direct funds to individual projects of their choice. A popular twist on Internet-based philanthropy is the use of peer-to-peer lending for charitable purposes. Kiva pioneered this concept in 2005, offering the first web-based service to publish individual loan profiles for funding. The low cost and nearly instantaneous sharing of ideas, knowledge, and skills have made collaborative work dramatically easier, with the help of collaborative software, which allow groups to easily form, cheaply communicate, and share ideas. 
An example of collaborative software is the free software movement, which has produced, among other things, Linux, Mozilla Firefox, and OpenOffice.org (later forked into LibreOffice).[citation needed] Content management systems allow collaborating teams to work on shared sets of documents simultaneously without accidentally destroying each other's work.[citation needed] The internet also allows for cloud computing, virtual private networks, remote desktops, and remote work.[citation needed] The online disinhibition effect describes the tendency of many individuals to behave more stridently or offensively online than they would in person. A significant number of feminist women have been the target of various forms of harassment, including insults and hate speech, to, in extreme cases, rape and death threats, in response to posts they have made on social media. Social media companies have been criticized in the past for not doing enough to aid victims of online abuse. Children also face dangers online such as cyberbullying and approaches by sexual predators, who sometimes pose as children themselves. Due to naivety, they may also post personal information about themselves online, which could put them or their families at risk unless warned not to do so. Many parents choose to enable Internet filtering or supervise their children's online activities in an attempt to protect their children from pornography or violent content on the Internet. The most popular social networking services commonly forbid users under the age of 13. However, these policies can be circumvented by registering an account with a false birth date, and a significant number of children aged under 13 join such sites.[citation needed] Social networking services for younger children, which claim to provide better levels of protection for children, also exist. Internet usage has been correlated to users' loneliness. 
Lonely people tend to use the Internet as an outlet for their feelings and to share their stories with others, such as in the "I am lonely will anyone speak to me" thread.[citation needed] Cyberslacking can become a drain on corporate resources; employees spend a significant amount of time surfing the Web while at work. Internet addiction disorder is excessive computer use that interferes with daily life. Nicholas G. Carr believes that Internet use has other effects on individuals, for instance improving skills of scan-reading and interfering with the deep thinking that leads to true creativity. Electronic business encompasses business processes spanning the entire value chain: purchasing, supply chain management, marketing, sales, customer service, and business relationships. E-commerce seeks to add revenue streams using the Internet to build and enhance relationships with clients and partners. According to International Data Corporation, the size of worldwide e-commerce, when global business-to-business and -consumer transactions are combined, equated to $16 trillion in 2013. A report by Oxford Economics added those two together to estimate the total size of the digital economy at $20.4 trillion, equivalent to roughly 13.8% of global sales. While much has been written of the economic advantages of Internet-enabled commerce, there is also evidence that some aspects of the Internet such as maps and location-aware services may serve to reinforce economic inequality and the digital divide. Electronic commerce may be responsible for consolidation and the decline of mom-and-pop, brick-and-mortar businesses, resulting in increases in income inequality. A 2013 Institute for Local Self-Reliance report states that brick-and-mortar retailers employ 47 people for every $10 million in sales, while Amazon employs only 14. Similarly, the 700-employee room rental start-up Airbnb was valued at $10 billion in 2014, about half as much as Hilton Worldwide, which employs 152,000 people. 
At that time, Uber employed 1,000 full-time employees and was valued at $18.2 billion, about the same valuation as Avis Rent a Car and The Hertz Corporation combined, which together employed almost 60,000 people. Advertising on popular web pages can be lucrative, and e-commerce, the sale of products and services directly via the Web, continues to grow. Online advertising is a form of marketing and advertising which uses the Internet to deliver promotional marketing messages to consumers. It includes email marketing, search engine marketing (SEM), social media marketing, many types of display advertising (including web banner advertising), and mobile advertising. In 2011, Internet advertising revenues in the United States surpassed those of cable television and nearly exceeded those of broadcast television. Many common online advertising practices are controversial and increasingly subject to regulation. The Internet has achieved new relevance as a political tool. The presidential campaign of Howard Dean in 2004 in the United States was notable for its success in soliciting donations via the Internet. Many political groups use the Internet as a new method of organizing to carry out their mission, giving rise to Internet activism. Social media websites, such as Facebook and Twitter, helped people organize the Arab Spring by helping activists organize protests, communicate grievances, and disseminate information. Many have understood the Internet as an extension of the Habermasian notion of the public sphere, observing how network communication technologies provide something like a global civic forum. However, incidents of politically motivated Internet censorship have now been recorded in many countries, including western democracies. E-government is the use of technological communications devices, such as the Internet, to provide public services to citizens and other persons in a country or region. 
E-government offers opportunities for more direct and convenient citizen access to government and for government provision of services directly to citizens. Cybersectarianism is a new organizational form that involves: highly dispersed small groups of practitioners that may remain largely anonymous within the larger social context and operate in relative secrecy, while still linked remotely to a larger network of believers who share a set of practices and texts, and often a common devotion to a particular leader. Overseas supporters provide funding and support; domestic practitioners distribute tracts, participate in acts of resistance, and share information on the internal situation with outsiders. Collectively, members and practitioners of such sects construct viable virtual communities of faith, exchanging personal testimonies and engaging in collective study via email, online chat rooms, and web-based message boards. In particular, the British government has raised concerns about the prospect of young British Muslims being indoctrinated into Islamic extremism by material on the Internet, being persuaded to join terrorist groups such as the so-called "Islamic State", and then potentially committing acts of terrorism on returning to Britain after fighting in Syria or Iraq.[citation needed] Applications and services The Internet carries many applications and services, most prominently the World Wide Web, including social media, electronic mail, mobile applications, multiplayer online games, Internet telephony, file sharing, and streaming media services. The World Wide Web is a global collection of documents, images, multimedia, applications, and other resources, logically interrelated by hyperlinks and referenced with Uniform Resource Identifiers (URIs), which provide a global system of named references. URIs symbolically identify services, web servers, databases, and the documents and resources that they can provide. 
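The way a URI decomposes into named parts can be sketched with Python's standard urllib.parse module; the URL below is an illustrative example, not one taken from the text:

```python
from urllib.parse import urlsplit

# Split a URI into the components that make it a global named reference:
# scheme, network location (host), path, query, and fragment.
parts = urlsplit("https://en.wikipedia.org/wiki/Internet?action=view#History")
print(parts.scheme)    # https
print(parts.netloc)    # en.wikipedia.org
print(parts.path)      # /wiki/Internet
print(parts.query)     # action=view
print(parts.fragment)  # History
```

The scheme selects the access protocol, while the remaining components locate the resource within it.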
HyperText Transfer Protocol (HTTP) is the main access protocol of the World Wide Web. Web services also use HTTP for communication between software systems, for transferring information and for sharing and exchanging business data and logistics; it is one of many protocols that can be used for communication on the Internet. World Wide Web browser software, such as Microsoft Edge, Mozilla Firefox, Opera, Apple's Safari, and Google Chrome, enables users to navigate from one web page to another via the hyperlinks embedded in the documents. These documents may also contain computer data, including graphics, sounds, text, video, multimedia and interactive content. Client-side scripts can include animations, games, office applications and scientific demonstrations. Email is an important communications service available via the Internet. The concept of sending electronic text messages between parties, analogous to mailing letters or memos, predates the creation of the Internet. Internet telephony is a common communications service realized with the Internet. The name of the principal internetworking protocol, the Internet Protocol, lends its name to voice over Internet Protocol (VoIP).[citation needed] VoIP systems now dominate many markets, being as easy and convenient as a traditional telephone, while having substantial cost savings, especially over long distances. File sharing is the practice of transferring large amounts of data in the form of computer files across the Internet, for example via file servers. The load of bulk downloads to many users can be eased by the use of "mirror" servers or peer-to-peer networks. Access to the file may be controlled by user authentication, the transit of the file over the Internet may be obscured by encryption, and money may change hands for access to the file. The price can be paid by the remote charging of funds from, for example, a credit card whose details are also passed—usually fully encrypted—across the Internet. 
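To make HTTP's role concrete, here is a minimal HTTP/1.1 GET request as it would appear on the wire; the host and path are placeholders, and the message is only constructed here, not sent:

```python
# A minimal HTTP/1.1 GET request (illustrative; host and path are
# placeholders). HTTP is a plain-text request/response protocol: a
# request line, then headers, then a blank line terminates the request.
request = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: www.example.com\r\n"
    "Connection: close\r\n"
    "\r\n"
)
# A server's response follows the same shape: a status line such as
# "HTTP/1.1 200 OK", headers, a blank line, and then the body.
print(request)
```

Browsers and web-service clients generate messages of exactly this shape on the user's behalf.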
The origin and authenticity of the file received may be checked by a digital signature. Governance The Internet is a global network that comprises many voluntarily interconnected autonomous networks. It operates without a central governing body. The technical underpinning and standardization of the core protocols (IPv4 and IPv6) is an activity of the Internet Engineering Task Force (IETF), a non-profit organization of loosely affiliated international participants that anyone may associate with by contributing technical expertise. While the hardware components in the Internet infrastructure can often be used to support other software systems, it is the design and the standardization process of the software that characterizes the Internet and provides the foundation for its scalability and success. The responsibility for the architectural design of the Internet software systems has been assumed by the IETF. The IETF conducts standard-setting working groups, open to any individual, about the various aspects of Internet architecture. The resulting contributions and standards are published as Request for Comments (RFC) documents on the IETF web site. The principal methods of networking that enable the Internet are contained in specially designated RFCs that constitute the Internet Standards. Other less rigorous documents are simply informative, experimental, or historical, or document the best current practices when implementing Internet technologies. To maintain interoperability, the principal name spaces of the Internet are administered by the Internet Corporation for Assigned Names and Numbers (ICANN). ICANN is governed by an international board of directors drawn from across the Internet technical, business, academic, and other non-commercial communities. The organization coordinates the assignment of unique identifiers for use on the Internet, including domain names, IP addresses, application port numbers in the transport protocols, and many other parameters. 
Globally unified name spaces are essential for maintaining the global reach of the Internet. This role of ICANN distinguishes it as perhaps the only central coordinating body for the global Internet. The National Telecommunications and Information Administration, an agency of the United States Department of Commerce, had final approval over changes to the DNS root zone until the IANA stewardship transition on 1 October 2016. Regional Internet registries (RIRs) were established for five regions of the world to assign IP address blocks and other Internet parameters to local registries, such as Internet service providers, from a designated pool of addresses set aside for each region: AFRINIC for Africa, ARIN for North America, APNIC for Asia and the Pacific, LACNIC for Latin America and the Caribbean, and RIPE NCC for Europe, the Middle East, and parts of Central Asia.[citation needed] The Internet Society (ISOC) was founded in 1992 with a mission to "assure the open development, evolution and use of the Internet for the benefit of all people throughout the world". Its members include individuals as well as corporations, organizations, governments, and universities. Among other activities ISOC provides an administrative home for a number of less formally organized groups that are involved in developing and managing the Internet, including: the Internet Engineering Task Force (IETF), Internet Architecture Board (IAB), Internet Engineering Steering Group (IESG), Internet Research Task Force (IRTF), and Internet Research Steering Group (IRSG). On 16 November 2005, the United Nations-sponsored World Summit on the Information Society in Tunis established the Internet Governance Forum (IGF) to discuss Internet-related issues.[citation needed] Infrastructure The communications infrastructure of the Internet consists of its hardware components and a system of software layers that control various aspects of the architecture. As with any computer network, the Internet physically consists of routers, media (such as cabling and radio links), repeaters, and modems. 
However, as an example of internetworking, many of the network nodes are not necessarily Internet equipment per se. Internet packets are carried by other full-fledged networking protocols, with the Internet acting as a homogeneous networking standard, running across heterogeneous hardware, with the packets guided to their destinations by IP routers.[citation needed] Internet service providers (ISPs) establish worldwide connectivity between individual networks at various levels of scope. At the top of the routing hierarchy are the tier 1 networks, large telecommunication companies that exchange traffic directly with each other via very high speed fiber-optic cables, governed by peering agreements. Tier 2 and lower-level networks buy Internet transit from other providers to reach at least some parties on the global Internet, though they may also engage in peering. End-users, who only access the Internet when needed to perform a function or obtain information, represent the bottom of the routing hierarchy.[citation needed] An ISP may use a single upstream provider for connectivity, or implement multihoming to achieve redundancy and load balancing. Internet exchange points are major traffic exchanges with physical connections to multiple ISPs. Large organizations, such as academic institutions, large enterprises, and governments, may perform the same function as ISPs, engaging in peering and purchasing transit on behalf of their internal networks. Research networks tend to interconnect with large subnetworks such as GEANT, GLORIAD, Internet2, and the UK's national research and education network, JANET.[citation needed] Common methods of Internet access by users include broadband over coaxial cable, fiber optics or copper wires, Wi-Fi, satellite, and cellular telephone technology.[citation needed] Grassroots efforts have led to wireless community networks. 
Commercial Wi-Fi services that cover large areas are available in many cities, such as New York, London, Vienna, Toronto, San Francisco, Philadelphia, Chicago and Pittsburgh. Most servers that provide internet services are today hosted in data centers, and content is often accessed through high-performance content delivery networks. Colocation centers often host private peering connections between their customers, internet transit providers, cloud providers, meet-me rooms for connecting customers together, Internet exchange points, and landing points and terminal equipment for fiber optic submarine communication cables, connecting the internet. Internet Protocol Suite The Internet standards describe a framework known as the Internet protocol suite (also called TCP/IP, based on its first two components). This is a suite of protocols that are ordered into a set of four conceptual layers by the scope of their operation, originally documented in RFC 1122 and RFC 1123: the link layer, the internet layer, the transport layer, and the application layer.[citation needed] The most prominent component of the Internet model is the Internet Protocol. IP enables internetworking, essentially establishing the Internet itself. Two versions of the Internet Protocol exist, IPv4 and IPv6.[citation needed] Aside from the complex array of physical connections that make up its infrastructure, the Internet is facilitated by bi- or multi-lateral commercial contracts (e.g., peering agreements), and by technical specifications or protocols that describe the exchange of data over the network.[citation needed] For locating individual computers on the network, the Internet provides IP addresses. IP addresses are used by the Internet infrastructure to direct internet packets to their destinations. They consist of fixed-length numbers, which are found within the packet. IP addresses are generally assigned to equipment either automatically via Dynamic Host Configuration Protocol, or are configured statically.[citation needed] The Domain Name System (DNS) converts user-entered domain names (e.g. 
"en.wikipedia.org") into IP addresses.[citation needed] Internet Protocol version 4 (IPv4) defines an IP address as a 32-bit number. IPv4 is the initial version used on the first generation of the Internet and is still in dominant use. It was designed in 1981 to address up to ≈4.3 billion (109) hosts. However, the explosive growth of the Internet has led to IPv4 address exhaustion, which entered its final stage in 2011, when the global IPv4 address allocation pool was exhausted. Because of the growth of the Internet and the depletion of available IPv4 addresses, a new version of IP IPv6, was developed in the mid-1990s, which provides vastly larger addressing capabilities and more efficient routing of Internet traffic. IPv6 uses 128 bits for the IP address and was standardized in 1998. IPv6 deployment has been ongoing since the mid-2000s and is currently in growing deployment around the world, since Internet address registries began to urge all resource managers to plan rapid adoption and conversion. By design, IPv6 is not directly interoperable with IPv4. Instead, it establishes a parallel version of the Internet not directly accessible with IPv4 software. Thus, translation facilities exist for internetworking, and some nodes have duplicate networking software for both networks. Essentially all modern computer operating systems support both versions of the Internet Protocol.[citation needed] Network infrastructure, however, has been lagging in this development.[citation needed] A subnet or subnetwork is a logical subdivision of an IP network.: 1, 16 Computers that belong to a subnet are addressed with an identical most-significant bit-group in their IP addresses. This results in the logical division of an IP address into two fields, the network number or routing prefix and the rest field or host identifier. 
The rest field is an identifier for a specific host or network interface.[citation needed] The routing prefix may be expressed in Classless Inter-Domain Routing (CIDR) notation written as the first address of a network, followed by a slash character (/), and ending with the bit-length of the prefix. For example, 198.51.100.0/24 is the prefix of the Internet Protocol version 4 network starting at the given address, having 24 bits allocated for the network prefix, and the remaining 8 bits reserved for host addressing. Addresses in the range 198.51.100.0 to 198.51.100.255 belong to this network. The IPv6 address specification 2001:db8::/32 is a large address block with 2⁹⁶ addresses, having a 32-bit routing prefix.[citation needed] For IPv4, a network may also be characterized by its subnet mask or netmask, which is the bitmask that when applied by a bitwise AND operation to any IP address in the network, yields the routing prefix. Subnet masks are also expressed in dot-decimal notation like an address. For example, 255.255.255.0 is the subnet mask for the prefix 198.51.100.0/24.[citation needed] Computers and routers use routing tables in their operating system to forward IP packets to reach a node on a different subnetwork. Routing tables are maintained by manual configuration or automatically by routing protocols. End-nodes typically use a default route that points toward an ISP providing transit, while ISP routers use the Border Gateway Protocol to establish the most efficient routing across the complex connections of the global Internet.[citation needed] The default gateway is the node that serves as the forwarding host (router) to other networks when no other route specification matches the destination IP address of a packet. Security Internet resources, hardware, and software components are the target of criminal or malicious attempts to gain unauthorized control to cause interruptions, commit fraud, engage in blackmail or access private information. 
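The CIDR prefix, netmask, and bitwise-AND relationship described above can be checked with Python's standard ipaddress module, reusing the example networks from the text:

```python
import ipaddress

# The CIDR example from the text: 198.51.100.0/24 has a 24-bit prefix
# and 8 host bits, i.e. 256 addresses in 198.51.100.0-198.51.100.255.
net = ipaddress.ip_network("198.51.100.0/24")
print(net.netmask)        # 255.255.255.0
print(net.num_addresses)  # 256
print(ipaddress.ip_address("198.51.100.200") in net)  # True

# The netmask recovers the routing prefix via a bitwise AND:
addr = int(ipaddress.ip_address("198.51.100.200"))
mask = int(net.netmask)
print(ipaddress.ip_address(addr & mask))  # 198.51.100.0

# The IPv6 block cited in the text: a /32 prefix leaves 128 - 32 = 96
# host bits, hence 2**96 addresses.
v6net = ipaddress.ip_network("2001:db8::/32")
print(v6net.num_addresses == 2**96)  # True
```

This AND-with-netmask step is exactly what a router performs on a packet's destination address when matching routing-table entries.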
Malware is malicious software used and distributed via the Internet. It includes computer viruses which are copied with the help of humans, computer worms which copy themselves automatically, software for denial of service attacks, ransomware, botnets, and spyware that reports on the activity and typing of users.[citation needed] Usually, these activities constitute cybercrime. Defense theorists have also speculated about the possibilities of hackers waging cyber warfare by similar methods on a large scale. Malware poses serious problems to individuals and businesses on the Internet. According to Symantec's 2018 Internet Security Threat Report (ISTR), the number of malware variants rose to 669,947,865 in 2017, twice as many as in 2016. Cybercrime, which includes malware attacks as well as other crimes committed by computer, was predicted to cost the world economy US$6 trillion in 2021, and is increasing at a rate of 15% per year. Since 2021, malware has been designed to target computer systems that run critical infrastructure such as the electricity distribution network. Malware can be designed to evade antivirus software detection algorithms. The vast majority of computer surveillance involves the monitoring of data and traffic on the Internet. In the United States for example, under the Communications Assistance For Law Enforcement Act, all phone calls and broadband Internet traffic (emails, web traffic, instant messaging, etc.) are required to be available for unimpeded real-time monitoring by Federal law enforcement agencies. Under the Act, all U.S. 
telecommunications providers are required to install packet sniffing technology to allow Federal law enforcement and intelligence agencies to intercept all of their customers' broadband Internet and VoIP traffic.[d] The large amount of data gathered from packet capture requires surveillance software that filters and reports relevant information, such as the use of certain words or phrases, the access to certain types of web sites, or communicating via email or chat with certain parties. Agencies, such as the Information Awareness Office, NSA, GCHQ and the FBI, spend billions of dollars per year to develop, purchase, implement, and operate systems for interception and analysis of data. Similar systems are operated by Iranian secret police to identify and suppress dissidents. The required hardware and software were allegedly installed by German Siemens AG and Finnish Nokia. Some governments, such as those of Myanmar, Iran, North Korea, Mainland China, Saudi Arabia and the United Arab Emirates, restrict access to content on the Internet within their territories, especially to political and religious content, with domain name and keyword filters. In Norway, Denmark, Finland, and Sweden, major Internet service providers have voluntarily agreed to restrict access to sites listed by authorities. While this list of forbidden resources is supposed to contain only known child pornography sites, the content of the list is secret. Many countries, including the United States, have enacted laws against the possession or distribution of certain material, such as child pornography, via the Internet but do not mandate filter software. 
Many free or commercially available software programs, called content-control software, are available to users to block specific offensive content on individual computers or networks, in order to limit access by children to pornographic material or depictions of violence.[citation needed] Performance As the Internet is a heterogeneous network, its physical characteristics, including, for example, the data transfer rates of connections, vary widely. It exhibits emergent phenomena that depend on its large-scale organization. [Figure: Global Internet traffic volume in petabytes per month, 1990–2015] The volume of Internet traffic is difficult to measure because no single point of measurement exists in the multi-tiered, non-hierarchical topology. Traffic data may be estimated from the aggregate volume through the peering points of the Tier 1 network providers, but traffic that stays local in large provider networks may not be accounted for.[citation needed] An Internet blackout or outage can be caused by local signaling interruptions. Disruptions of submarine communications cables may cause blackouts or slowdowns to large areas, such as in the 2008 submarine cable disruption. Less-developed countries are more vulnerable due to the small number of high-capacity links. Land cables are also vulnerable, as in 2011 when a woman digging for scrap metal severed most connectivity for the nation of Armenia. Internet blackouts affecting almost entire countries can be achieved by governments as a form of Internet censorship, as in the blockage of the Internet in Egypt, whereby approximately 93% of networks were without access in 2011 in an attempt to stop mobilization for anti-government protests. 
Estimates of the Internet's electricity usage have been the subject of controversy: a 2014 peer-reviewed research paper found claims published in the literature during the preceding decade differing by a factor of 20,000, ranging from 0.0064 kilowatt-hours per gigabyte transferred (kWh/GB) to 136 kWh/GB. The researchers attributed these discrepancies mainly to the year of reference (i.e. whether efficiency gains over time had been taken into account) and to whether "end devices such as personal computers and servers are included" in the analysis. In 2011, academic researchers estimated the overall energy used by the Internet to be between 170 and 307 GW, less than two percent of the energy used by humanity. This estimate included the energy needed to build, operate, and periodically replace the estimated 750 million laptops, a billion smart phones and 100 million servers worldwide as well as the energy that routers, cell towers, optical switches, Wi-Fi transmitters and cloud storage devices use when transmitting Internet traffic. According to a non-peer-reviewed study published in 2018 by The Shift Project (a French think tank funded by corporate sponsors), nearly 4% of global CO2 emissions could be attributed to global data transfer and the necessary infrastructure. The study also said that online video streaming alone accounted for 60% of this data transfer and therefore contributed to over 300 million tons of CO2 emission per year, and argued for new "digital sobriety" regulations restricting the use and size of video files.
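As a quick sanity check on the quoted spread, the two extreme intensity figures do differ by roughly the stated factor:

```python
# The two extreme estimates quoted above, in kWh per gigabyte transferred.
low_kwh_per_gb = 0.0064
high_kwh_per_gb = 136.0

ratio = high_kwh_per_gb / low_kwh_per_gb
print(round(ratio))  # 21250, i.e. roughly the "factor of 20,000" cited
```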
========================================
[SOURCE: https://en.wikipedia.org/wiki/Thirty-seventh_government_of_Israel#cite_note-98] | [TOKENS: 9915]
Contents Thirty-seventh government of Israel The thirty-seventh government of Israel is the current cabinet of Israel, formed on 29 December 2022, following the Knesset election the previous month. The coalition government currently consists of five parties — Likud, Shas, Otzma Yehudit, Religious Zionist Party and New Hope — and is led by Benjamin Netanyahu, who took office as the prime minister of Israel for the sixth time. The government is widely regarded as the most right-wing government in the country's history, and includes far-right politicians. Several of the government's policy proposals have led to controversies, both within Israel and abroad, with the government's attempts at reforming the judiciary leading to a wave of demonstrations across the country. Following the outbreak of the Gaza war, opposition leader Yair Lapid initiated discussions with Netanyahu on the formation of an emergency government. On 11 October 2023, National Unity MKs Benny Gantz, Gadi Eisenkot, Gideon Sa'ar, Hili Tropper, and Yifat Shasha-Biton joined the Security Cabinet of Israel to form an emergency national unity government. Their accession to the Security Cabinet and to the government (as ministers without portfolio) was approved by the Knesset the following day. Gantz, Netanyahu, and Defense Minister Yoav Gallant became part of the newly formed Israeli war cabinet, with Eisenkot and Ron Dermer serving as observers. National Unity left the government in June 2024. New Hope rejoined the government in September. Otzma Yehudit announced on 19 January 2025 that it had withdrawn from the government, which took effect on 21 January, following the cabinet's acceptance of the three-phase Gaza war ceasefire proposal, though it rejoined two months later. United Torah Judaism left the government in July 2025 over dissatisfaction with the government's draft conscription law. Shas left the government several days later, though it remains part of the coalition. 
Background The right-wing bloc of parties, led by Benjamin Netanyahu, known in Israel as the national camp, won 64 of the 120 seats in the elections for the Knesset, while the coalition led by the incumbent prime minister Yair Lapid won 51 seats. The new majority has been variously described as the most right-wing government in Israeli history, as well as Israel's most religious government. Shortly after the elections, Lapid conceded to Netanyahu, and congratulated him, wishing him luck "for the sake of the Israeli people". On 15 November, the swearing-in ceremony for the newly elected members of the 25th Knesset was held during the opening session. The vote to appoint a new Speaker of the Knesset, which is usually conducted at the opening session, as well as the swearing in of cabinet members, was postponed since ongoing coalition negotiations had not yet resulted in agreement on these positions. Government formation On 3 November 2022, Netanyahu told his aide Yariv Levin to begin informal coalition talks with allied parties, after 97% of the vote was counted. The leader of the Shas party Aryeh Deri met with Yitzhak Goldknopf, the leader of United Torah Judaism and its Agudat Yisrael faction, on 4 November. The two parties agreed to cooperate as members of the next government. The Degel HaTorah faction of United Torah Judaism stated on 5 November that it would maintain its ideological stance of not seeking any ministerial posts, as per the instruction of its spiritual leader Rabbi Gershon Edelstein, but would seek other senior posts such as Knesset committee chairmanships and deputy ministerships. Netanyahu himself started holding talks on 6 November. He first met with Moshe Gafni, the leader of Degel HaTorah, and then with Goldknopf. Meanwhile, the Religious Zionist Party leader Bezalel Smotrich and the leader of its Otzma Yehudit faction Itamar Ben-Gvir pledged that they would not enter the coalition without the other faction. 
Gafni later met with Smotrich for coalition talks. Smotrich then met with Netanyahu. On 7 November, Netanyahu met with Ben-Gvir who demanded the Ministry of Public Security with expanded powers for himself and the Ministry of Education or Transport and Road Safety for Yitzhak Wasserlauf. A major demand among all of Netanyahu's allies was that the Knesset be allowed to ignore the rulings of the Supreme Court. Netanyahu met with the Noam faction leader and its sole MK Avi Maoz on 8 November after he threatened to boycott the coalition. He demanded complete control of the Western Wall by the Haredi rabbinate and removal of what he considered as anti-Zionist and anti-Jewish content in schoolbooks. President Isaac Herzog began consultations with heads of all the political parties on 9 November after the election results were certified. During the consultations, he expressed his reservations about Ben-Gvir becoming a member in the next government. Shas met with Likud for coalition talks on 10 November. By 11 November, Netanyahu had secured recommendations from 64 MKs, which constituted a majority. He was given the mandate to form the thirty-seventh government of Israel by President Herzog on 13 November. Otzma Yehudit and Noam officially split from Religious Zionism on 20 November as per a pre-election agreement. On 25 November, Otzma Yehudit and Likud signed a coalition agreement, under which Ben-Gvir will assume the newly created position of National Security Minister, whose powers would be more expansive than that of the Minister of Public Security, including overseeing the Israel Police and the Israel Border Police in the West Bank, as well as giving powers to authorities to shoot thieves stealing from military bases. Yitzhak Wasserlauf was given the Ministry for the Development of the Negev and the Galilee with expanded powers to regulate new West Bank settlements, while separating it from the "Periphery" portfolio, which will be given to Shas. 
The deal also includes giving the Ministry of Heritage to Amihai Eliyahu, separating it from the "Jerusalem Affairs" portfolio, the chairmanship of the Knesset's Public Security Committee to Zvika Fogel and that of the Special Committee for the Israeli Citizens' Fund to Limor Son Har-Melech, the post of Deputy Economic Minister to Almog Cohen, establishment of a national guard, and expansion of mobilization of reservists in the Border Police. Netanyahu and Maoz signed a coalition agreement on 27 November, under which the latter would become a deputy minister, would head an agency on Jewish identity in the Prime Minister's Office, and would also head Nativ, which processes the aliyah from the former Soviet Union. The agency for Jewish identity would have authority over educational content taught outside the regular curriculum in schools, in addition to the department of the Ministry of Education overseeing external teaching and partnerships, which would bring nonofficial organisations permitted to teach and lecture at schools under its purview. Likud signed a coalition agreement with the Religious Zionist Party on 1 December. Under the deal, Smotrich would serve as the Minister of Finance in rotation with Aryeh Deri, and the party will receive the post of a minister within the Ministry of Defense with control over the departments administering settlement and open lands under the Coordinator of Government Activities in the Territories, in addition to another post of a deputy minister. The deal also includes giving the post of Minister of Aliyah and Integration to Ofir Sofer, the newly created National Missions Ministry to Orit Strook, and the chairmanship of the Knesset's Constitution, Law and Justice Committee to Simcha Rothman. Likud and United Torah Judaism signed a coalition agreement on 6 December, allowing a request for an extension of the deadline. 
Under it, the party would receive the Ministry of Construction and Housing; the chairmanship of the Knesset Finance Committee, which would be given to Moshe Gafni; and the Ministry of Jerusalem and Tradition (which would replace the Ministry of Jerusalem Affairs and Heritage), in addition to several posts of deputy ministers and chairmanships of Knesset committees. Likud also signed a deal with Shas by 8 December, securing interim coalition agreements with all of its allies. Under the deal, Deri would first serve as the Minister of Interior and Health before rotating posts with Smotrich after two years. The party would also receive the Ministries of Religious Services and Welfare, as well as posts of deputy ministers in the Ministries of Education and Interior. Netanyahu asked Herzog for a 14-day extension after the agreement with Shas to finalise the roles his allied parties would play; Herzog on 9 December extended the deadline to 21 December. The vote to replace then-incumbent Knesset speaker Mickey Levy was scheduled for 13 December, after Likud and its allies secured the necessary number of signatures for it. Yariv Levin of Likud was elected as an interim speaker by 64 votes, while his opponents Merav Ben-Ari of Yesh Atid and Ayman Odeh of Hadash received 45 and five votes respectively. On 21 December, Netanyahu informed Herzog that he had succeeded in forming a coalition, with the new government expected to be sworn in by 2 January 2023. The government was sworn in on 29 December 2022.

Timeline

Israeli law stated that people convicted of crimes could not serve in the government. An amendment to that law, known colloquially as the Deri Law, was made in late 2022 to allow those who had been convicted without prison time to serve. This allowed Deri to be appointed to the cabinet. Shas leader Aryeh Deri was appointed Minister of Health, Minister of the Interior, and Vice Prime Minister in December 2022.
He was fired in January 2023, following a Supreme Court decision that his appointment was unreasonable, since he had been convicted of fraud and had promised in a plea deal not to seek government roles. In March 2023, Defense Minister Yoav Gallant called on the government to delay legislation related to the judicial reform. Prime Minister Netanyahu announced that Gallant had been dismissed from his position, leading to the continuation of mass protests across the country (which had started in January in Tel Aviv). Gallant continued to serve as a minister as he had not received formal notice of dismissal, and two weeks later it was announced that Netanyahu had reversed his decision. National Security Minister Itamar Ben-Gvir (Otzma Yehudit leader) and Minister of Justice Yariv Levin (Likud) both threatened to resign if the judicial reform was delayed.[better source needed] After the outbreak of the Gaza war, five members of the National Unity party joined the government as ministers without portfolio, with leader Benny Gantz being made a member of the new Israeli war cabinet (along with Netanyahu and Gallant). As the war progressed, Minister of National Security Itamar Ben-Gvir threatened to leave the government if the war was ended. A month later, in mid-December, he again threatened to leave if the war did not continue at "full strength". Gideon Sa'ar stated on 16 March that his New Hope party would resign from the government and join the opposition if Prime Minister Benjamin Netanyahu did not appoint him to the Israeli war cabinet. Netanyahu did not do so, resulting in Sa'ar's New Hope party leaving the government nine days later, reducing the size of the coalition from 76 MKs to 72. Ben-Gvir and Bezalel Smotrich, of the National Religious Party–Religious Zionism party, indicated that they would withdraw their parties from the government if the January 2025 Gaza war ceasefire were adopted, which would bring down the government.
On 18 May, Gantz set an 8 June deadline for withdrawal from the coalition, which was delayed by a day following the 2024 Nuseirat rescue operation. Ben-Gvir announced on 5 June that the members of his party would be allowed to vote as they wished, though his party resumed support on 9 June. Gantz and his party left the government on 9 June, giving the government 64 seats in the Knesset. Sa'ar and his New Hope party rejoined the Netanyahu government on 30 September, increasing the number of seats held by the government to 68. The High Court of Justice ruled on 28 March 2024 that yeshiva funds would no longer be available for students who are "eligible for enlistment", effectively allowing ultra-Orthodox Jews to be drafted into the IDF. Attorney General Gali Baharav-Miara indicated on 31 March that the conscription process must begin on 1 April. The court ruled on 25 June that the IDF must begin to draft yeshiva students. Likud announced on 7 July that it would not put forward any legislation after Shas and United Torah Judaism said that they would boycott the plenary session over the lack of legislation dealing with the Haredi draft. The ultra-Orthodox boycott continued for a second day, with UTJ briefly ending its boycott on 9 July to vote, unsuccessfully, in favor of a bill which would have weakened the Law of Return. Yuli Edelstein, who was replaced by Boaz Bismuth on the Foreign Affairs and Defense Committee in early August, published a draft version of the conscription law shortly before his ouster. Bismuth cancelled the work on the draft law in September 2025, which Edelstein called "a shame." Bismuth released the official version of the draft law in late November 2025. It weakened penalties for draft evaders, with Edelstein saying it was "the exact opposite" of the bill which he had attempted to pass. Members of Otzma Yehudit resigned from the government on 19 January 2025 over the January 2025 Gaza war ceasefire, which took effect on 21 January.
The members rejoined in March, following the "resumption" of the war in Gaza. Avi Maoz of the Noam party left the government in March 2025. On 4 June 2025, senior rabbis for United Torah Judaism Dov Lando and Moshe Hillel Hirsch instructed the party's MKs to pass a bill which would dissolve the Knesset. Yesh Atid, Yisrael Beytenu and The Democrats announced that they would submit a bill for dissolution on 11 June, with Yesh Atid tabling the bill on 4 June. There were also reports that Shas would vote in favor of Knesset dissolution amidst division within the governing coalition on Haredi conscription. This jeopardized the coalition's majority and would have triggered new elections if the bill passed. The following day, Agudat Yisrael, one of the United Torah Judaism factions, confirmed that it would submit a bill to dissolve the Knesset. Asher Medina, a Shas spokesman, indicated on 9 June that the party would vote in favor of a preliminary bill to dissolve the Knesset. The rabbis of Degel HaTorah instructed the party's MKs on 12 June 2025 to oppose the dissolution of the Knesset, after which Yuli Edelstein and the Shas and Degel HaTorah parties announced that a deal had been reached, with rabbinical leaders telling their parties to delay the dissolution vote by a week. Shas and Degel HaTorah voted against the dissolution bill, which failed its preliminary reading in a vote of 61 against and 53 in favor. MKs Ya'akov Tessler and Moshe Roth of Agudat Yisrael voted in favor of dissolution. Another dissolution bill could not be brought forward for six months. If the bill had passed its preliminary reading, in addition to three more readings, an election would have been held in approximately three months; The Jerusalem Post posited it would have been held in October.
Degel HaTorah announced on 14 July 2025 that it would leave the government because members of the party were dissatisfied after viewing the proposed draft bill by Yuli Edelstein regarding Haredi exemptions from the Israeli draft. Several hours later, Agudat Yisrael announced that it would also leave the government. Deputy Transportation Minister Uri Maklev; Moshe Gafni, the head of the Knesset Finance Committee; Ya'akov Asher, the head of the Knesset Interior and Environment Protection Committee; and Jerusalem Affairs Minister Meir Porush all submitted their resignations, to take effect within 48 hours. Sports Minister Ya'akov Tessler and Special Committee for Public Petitions chair Yitzhak Pindrus also submitted resignations. Yisrael Eichler submitted his resignation as the head of the Knesset Labor and Welfare Committee the same day. The resignations left Netanyahu's government with 60 of the Knesset's 120 seats, as Avi Maoz, of the Noam party, had left the government in March 2025. Despite Edelstein's ouster in August, a spokesman for UTJ head Yitzhak Goldknopf remarked that it would not change the faction's withdrawal from the government. The religious council for Shas, called the Moetzet Chachmei HaTorah, instructed the party on 16 July to leave the government but stay in the coalition. The following day, various cabinet ministers submitted their resignations, including Interior Minister Moshe Arbel, Social Affairs Minister Ya'akov Margi and Religious Services Minister Michael Malchieli. Malchieli reportedly postponed his resignation so he could attend a 20 July meeting of the panel investigating whether Attorney General Gali Baharav-Miara should be dismissed. Deputy Minister of Agriculture Moshe Abutbul, Minister of Health Uriel Buso and Haim Biton, a minister in the Education Ministry, also submitted their resignation letters, while Arbel retracted his.
The last cabinet member from the party to submit a resignation was Labor Minister Yoav Ben-Tzur. The ministers who resigned would return to the Knesset, replacing MKs Moshe Roth, Yitzhak Pindrus and Eliyahu Baruchi.

Members of government

Listed below are the current ministers in the government:

Principles and priorities

According to the agreements signed between Likud and each of its coalition partners, and the incoming government's published guideline principles, its stated priorities are to combat the cost of living, further centralize Orthodox control over the state religious services, pass judicial reforms which include legislation to reduce judicial controls on executive and legislative power, expand settlements in the West Bank, and consider an annexation of the West Bank. Before the vote of confidence in his new government in the Knesset, Netanyahu presented three top priorities for the new government: internal security and governance, halting the nuclear program of Iran, and the development of infrastructure, with a focus on further connecting the center of the country with its periphery.

Policies

The government's flagship program, centered around reforms in the judicial branch, drew widespread criticism. Critics said it would have negative effects on the separation of powers, the office of the Attorney General, the economy, public health, women and minorities, workers' rights, scientific research, the overall strength of Israel's democracy and its foreign relations. After weeks of public protests on Israel's streets, joined by a growing number of military reservists, Minister of Defense Yoav Gallant spoke against the reform on 25 March, calling for a halt of the legislative process "for the sake of Israel's security". The next day, Netanyahu announced that Gallant would be removed from his post, sparking another wave of protests across Israel and ultimately leading to Netanyahu agreeing to pause the legislation.
On 10 April, Netanyahu announced that Gallant would keep his post. On 27 March 2023, after the public protests and general strikes, Netanyahu announced a pause in the reform process to allow for dialogue with opposition parties. However, negotiations aimed at reaching a compromise collapsed in June, and the government resumed its plans to unilaterally pass parts of the legislation. On 24 July 2023, the Knesset passed a bill that curbs the power of the Supreme Court to declare government decisions unreasonable; on 1 January 2024, the Supreme Court struck the bill down. The Knesset passed a "watered-down" version of the judicial reform package in late March 2025 which "changes the composition" of the judicial selection committee. In December 2022 Minister of National Security Itamar Ben-Gvir sought to amend the law that regulates the operations of the Israel Police, such that the ministry would have more direct control over its forces and policies, including its investigative priorities. Attorney General Gali Baharav-Miara objected to the draft proposal, raising concerns that the law would enable the politicization of police work, and the draft was amended to partially address those concerns. Nevertheless, in March 2023 Deputy Attorney General Gil Limon stated that the Attorney General's fears had been realized, referring to several instances of ministerial involvement in the day-to-day work of the otherwise independent police force – statements that were repeated by the Attorney General herself two days later. Separately, Police Commissioner Kobi Shabtai instructed deputy commissioners to avoid direct communication with the minister, later stating that "the Israel Police will remain apolitical, and act only according to law". Following appeals by the Association for Civil Rights in Israel and the Movement for Quality Government in Israel, the High Court of Justice instructed Ben-Gvir "to refrain from giving operational directions to the police...
[especially] as regards to protests and demonstrations against the government." As talks of halting the judicial reform gathered momentum during March 2023, Minister of National Security Itamar Ben-Gvir threatened to resign if the legislation implementing the changes was suspended. To appease Ben-Gvir, Prime Minister Netanyahu announced that the government would promote the creation of a new National Guard, to be headed by Ben-Gvir. On 29 March, thousands of Israelis demonstrated in Tel Aviv, Haifa and Jerusalem against this decision. On 1 April, the New York Times quoted Gadeer Nicola, head of the Arab department at the Association for Civil Rights in Israel, as saying "If this thing passes, it will be an imminent danger to the rights of Arab citizens in this country. This will create two separate systems of applying the law. The regular police which will operate against Jewish citizens — and a militarized militia to deal only with Arab citizens." The same day, while speaking on Israel's Channel 13 about those whom he would like to see enlist in the National Guard, Ben-Gvir specifically mentioned La Familia, the far-right fan club of the Beitar Jerusalem soccer team. On 2 April, Israel's cabinet approved the establishment of a law enforcement body that would operate independently of the police, under Ben-Gvir's authority. According to the decision, the Minister was to establish a committee chaired by the Director General of the Ministry of National Security, with representatives of the ministries of defense, justice and finance, as well as the police and the IDF, to outline the operations of the new organization. The committee's recommendations would be submitted to the government for consideration. Addressing a conference on 4 April, Police Commissioner Kobi Shabtai said that he was not opposed to the establishment of a security body which would answer to the police, but "a separate body? Absolutely not."
The police chief said he had warned Ben-Gvir that the establishment of a security body separate from the police is "unnecessary, with extremely high costs that may harm citizens' personal security." During a press conference on 10 April, Prime Minister Netanyahu said, in what was seen by some news outlets as a concession to the protesters, that "This will not be anyone's militia, it will be a security body, orderly, professional, that will be subordinate to one of the [existing] security bodies." The committee established by the government recommended that the establishment of the National Guard be ordered immediately, with budgets allocated. The National Guard, to be commanded by a police superintendent, would not be subordinate to Ben-Gvir; it would be subordinate to the police commissioner and form part of the Israel Border Police. The Ministries of Defense and Finance opposed the conclusions. The Israeli National Security Council called for further discussion on this. The coalition's efforts to expand the purview of Rabbinical courts; force some organizations, such as hospitals, to enforce certain religious practices; amend the Law Prohibiting Discrimination to allow gender segregation and discrimination on the grounds of religious belief; expand funding for religious causes; and put into law the exemption of yeshiva and kolel students from conscription have drawn criticism. According to a Haaretz op-ed of 7 March 2023, "the current coalition is interested... in modifying the public space so it suits the religious lifestyle. The legal coup is meant to castrate anyone who can prevent it, most of all the HCJ." Several banks and institutional investors, including the Israel Discount Bank and AIG, have committed to avoid investing in, or providing credit to, any organization that will discriminate against others on grounds of religion, race, gender or sexual orientation.
A series of technology companies and investment firms, including Wiz, Intel Israel, Salesforce and Microsoft Israel Research and Development, have criticized the proposed changes to the Law Prohibiting Discrimination, with Wiz stating that it will require its suppliers to commit to preventing discrimination. Over sixty prominent law firms pledged that they would neither represent nor do business with discriminating individuals and organizations. Insight Partners, a major private equity fund operating in Israel, released a statement warning against intolerance and any attempt to harm personal liberties. Orit Lahav, chief executive of the women's rights organization Mavoi Satum ("Dead End"), said that "the Rabbinical courts are the most discriminatory institution in the State of Israel... Limiting the HCJ while expanding the jurisdiction of the Rabbinical courts would... cause significant harm to women." Anat Thon Ashkenazy, Director of the Center for Democratic Values and Institutions at the Israel Democracy Institute, said that "almost every part of the reform could harm women... the meaning of an override clause is that even if the court says that the law on gender segregation is illegitimate, is harmful, the Knesset could say 'Okay, we say otherwise'". She added that "there is a very broad institutional framework here, after which there will come legislation that harms women's rights and we will have no way of protecting or stopping it." During July 2023, 20 professional medical associations signed a position letter warning against the ramifications to public health that would result from the exclusion of women from the public sphere. They cited, among other concerns, a rise in the prevalence of risk factors for cardiovascular disease, pregnancy-related ailments, psychological distress, and the risk of suicide.
On 30 July the Knesset passed an amendment to the penal law adding sexual offenses to those offenses whose penalty can be doubled if committed on grounds of "nationalistic terrorism, racism or hostility towards a certain community". According to MK Limor Son Har-Melech, the bill is meant to penalize any individual who "[intends to] harm a woman sexually based on her Jewishness". The law was criticized by MK Gilad Kariv as "populist, nationalistic, and dangerous towards the Arab citizens of Israel", and by MK Ahmad Tibi as a "race law", and was objected to by legal advisors at the Ministry of Justice and the Knesset Committee on National Security. Activist Orit Kamir wrote that "the amendment... is neither feminist, equal, nor progressive, but the opposite: it subordinates women's sexuality to the nationalistic, racist patriarchy. It hijacks the Law for Prevention of Sexual Harassment to serve a world view that tags women as sexual objects that personify the nation's honor." Yael Sherer, director of the Lobby to Combat Sexual Violence, criticized the law as being informed by dated ideas about sexual assault, and proposed that MKs "dedicate a session... to give victims of sexual assault an opportunity to come out of the darkness... instead of [submitting] declarative bills that change nothing and are not meant but for grabbing headlines". In Israel, during 2022, 24 women "were murdered because they were women," an increase of 50% compared to 2021. A law permitting courts to order men subject to a restraining order following domestic violence offenses to wear electronic tags was drafted during the previous Knesset and had passed its first reading unanimously. On 22 March 2023, the Knesset voted to reject the bill. It had been urged to do so by National Security Minister Itamar Ben-Gvir, who said that the bill was unfair to men. Earlier in the week, Ben-Gvir had blocked the measure from advancing in the ministerial legislative committee.
The MKs voting against the bill included Prime Minister Netanyahu. The Association of Families of Murder Victims said that by rejecting the law, National Security Minister Itamar Ben-Gvir "brings joy to violent men and abandons the women threatened with murder… unsupervised restraining orders endanger women's lives even more. They give women the illusion of being protected, and then they are murdered." MK Pnina Tamano-Shata, chairwoman of the Knesset Committee on the Status of Women and Gender Equality, said that "the coalition proved today that it despises women's lives." The NGO Amutat Bat Melech, which assists Orthodox and ultra-Orthodox women who suffer from domestic violence, said: "Rejecting the electronic bracelet bill is disconnected from the terrible reality of seven femicides since the beginning of the year. This is an effective tool of the first degree that could have saved lives and reduced the threat to women suffering from domestic violence. This is a matter of life and death, whose whole purpose is to provide a solution to defend women." The agreement signed by the coalition parties includes the setting up of a committee to draft changes to the Law of Return. Israeli religious parties have long demanded that the "grandchild clause" of the Law of Return be cancelled. This clause grants citizenship to anyone with at least one Jewish grandparent, as long as they do not practice another religion. If the grandchild clause were removed from the Law of Return, around 3 million people who are currently eligible for aliyah would no longer be eligible.
The heads of the Jewish Agency, the Jewish Federations of North America, the World Zionist Organization and Keren Hayesod sent a joint letter to Prime Minister Netanyahu, expressing their "deep concern" about any changes to the Law of Return, adding that "Any change in the delicate and sensitive status quo on issues such as the Law of Return or conversion could threaten to unravel the ties between us and keep us away from each other." The Executive Council of Australian Jewry and the Zionist Federation of Australia issued a joint statement saying "We… view with deep concern… proposals in relation to religious pluralism and the law of return that risk damaging Israel's… relationship with Diaspora Jewry." On 19 March 2023, Israeli Finance Minister Bezalel Smotrich spoke in Paris at a memorial service for a Likud activist. The lectern at which Smotrich spoke was covered with a flag depicting the 'Greater Land of Israel,' encompassing the whole of Mandatory Palestine, as well as Trans-Jordan. During his speech, Smotrich said that "there's no such thing as Palestinians because there's no such thing as a Palestinian people." He added that the Palestinian people are a fictitious nation invented only to fight the Zionist movement, asking "Is there a Palestinian history or culture? There isn't any." The event received widespread media coverage. On 21 March, a spokesman for the US State Department sharply criticized Smotrich's comments. "The comments, which were delivered at a podium adorned with an inaccurate and provocative map, are offensive, they are deeply concerning, and, candidly, they're dangerous. The Palestinians have a rich history and culture, and the United States greatly values our partnership with the Palestinian people," he said. 
The Jordanian Foreign Ministry also voiced disapproval: "The Israeli Minister of Finance's use, during his participation in an event held yesterday in Paris, of a map of Israel that includes the borders of the Hashemite Kingdom of Jordan and the occupied Palestinian territories represents a reckless inflammatory act, and a violation of international norms and the Jordanian-Israeli peace treaty." Additionally, a map encompassing Mandatory Palestine and Trans-Jordan with a Jordanian flag on it was placed on a central lectern in the Jordanian Parliament, and Jordan's parliament voted to expel the Israeli ambassador. Israel's Ministry of Foreign Affairs released a clarification relating to the matter, stating that "Israel is committed to the 1994 peace agreement with Jordan. There has been no change in the position of the State of Israel, which recognizes the territorial integrity of the Hashemite Kingdom of Jordan". Ahead of a Europe Day event due to take place on 9 May 2023, far-right National Security Minister Itamar Ben-Gvir was assigned as a representative of the government and a speaker at the event by the government secretariat, which assigns ministers to receptions marking the national days of foreign embassies. The European Union requested that Ben-Gvir not attend, but the government did not change the plan. On 8 May, the European delegation to Israel cancelled the reception, stating: "The EU Delegation to Israel is looking forward to celebrating Europe Day on May 9, as it does every year. Regrettably, this year we have decided to cancel the diplomatic reception, as we do not want to offer a platform to someone whose views contradict the values the European Union stands for. However, the Europe Day cultural event for the Israeli public will be maintained to celebrate with our friends and partners in Israel the strong and constructive bilateral relationship".
Israel's Opposition Leader Yair Lapid stated: "Sending Itamar Ben-Gvir to a gathering of EU ambassadors is a serious professional mistake. The government is embarrassing a large group of friendly countries, jeopardizing future votes in international institutions, and damaging our foreign relations. Last year, after a decade of efforts, we succeeded in signing an economic-political agreement with the European Union that will contribute to the Israeli economy and our foreign relations. Why risk it, and for what? Ben-Gvir is not a legitimate person in the international community (and not really in Israel either), and sometimes you have to be both wise and just and simply send someone else". On 23 February 2023, Defense Minister Gallant signed an agreement assigning governmental powers in the West Bank to a body to be headed by Minister Bezalel Smotrich, who would effectively become the governor of the West Bank, controlling almost all areas of life in the area, including planning, building and infrastructure. Israeli governments had hitherto been careful to administer the occupation as a military government. The temporary holding of power by an occupying military force, pending a negotiated settlement, is a principle of international law – an expression of the prohibition against obtaining sovereignty through conquest that was introduced in the wake of World War II. An editorial in Haaretz noted that the assignment of governmental powers in the West Bank to a civilian governor, alongside the plan to expand the dual justice system so that Israeli law would apply fully to settlers in the West Bank, constitutes de jure annexation of the West Bank.
On 26 February 2023, following the 2023 Huwara shooting in which two Israelis were killed by an unidentified attacker, hundreds of Israeli settlers attacked the Palestinian town of Huwara and three nearby villages, setting alight hundreds of Palestinian homes (some with people in them), businesses, a school, and numerous vehicles, killing one Palestinian man and injuring 100 others. Bezalel Smotrich subsequently called on Twitter for Huwara to be "wiped out" by the Israeli government. Zvika Fogel MK, of the ultra-nationalist Otzma Yehudit, which forms part of the governing coalition, said that he "looks very favorably upon" the results of the rampage. Members of the coalition proposed an amendment to the Disengagement Law, which would allow Israelis to resettle settlements vacated during the 2005 Israeli disengagement from Gaza and the northern West Bank. Most countries had considered the evacuated settlements illegal under international law. The proposal was approved for voting by the Foreign Affairs and Defense Committee on 9 March 2023, while the committee was still waiting for briefing materials from the NSS, IDF, MFA and Shin Bet, and was passed on 21 March. The US requested clarification from Israeli ambassador Michael Herzog. A US State Department spokesman stated that "The U.S. strongly urges Israel to refrain from allowing the return of settlers to the area covered by the legislation, consistent with both former Prime Minister Sharon and the current Israeli Government's commitment to the United States," noting that the actions represent a clear violation of undertakings given by the Sharon government to the Bush administration in 2005 and by Netanyahu's far-right coalition to the Biden administration the previous week.
Minister of Communications Shlomo Karhi had initially intended to cut the funding of the Israeli Public Broadcasting Corporation (also known by its blanket branding Kan) by 400 million shekels – roughly half of its total budget – closing several departments and privatizing content creation. In response, the Director-General of the European Broadcasting Union, Noel Curran, sent two urgent letters to Netanyahu, expressing his concerns and calling on the Israeli government to "safeguard the independence of our Member KAN and ensure it is allowed to operate in a sustainable way, with funding that is both stable, adequate, fair, and transparent." On 25 January 2023, nine journalist organizations representing some of Kan's competitors issued a statement of concern, acknowledging the "important contribution of public broadcasting in creating a worthy, unbiased and non-prejudicial journalistic platform", and noting that "the existence of the [broadcasting] corporation as a substantial public broadcast organization strengthens media as a whole, adding to the competition in the market rather than weakening it." They also expressed their concern that the "real reason" for the proposal was actually "an attempt to silence voices from which... [the Minister] doesn't always draw satisfaction". The same day, hundreds of journalists, actors and filmmakers protested in Tel Aviv. The proposal was eventually put on hold. On 22 February 2023 it was reported that Prime Minister Netanyahu was attempting to appoint his close associate Yossi Shelley as the deputy to the National Statistician, a highly sensitive position in charge of providing accurate data for decision makers. The appointment of Shelley, who did not possess the required qualifications for the role, was withdrawn following publication.
In its daily editorial, Haaretz tied this attempt to the judicial reform: "once they take control of the judiciary, law enforcement and public media, they wish to control the state's data base, the dry numerical data it uses to plan its future". Netanyahu also proposed Avi Simhon for the role, and eventually froze all appointments at the Israel Central Bureau of Statistics. Also on 22 February 2023, it was revealed that Yoav Kish, the Minister of Education, was promoting a draft government decision to change the National Library of Israel's board of directors in a way that would grant him more power over the institution. In response, the Hebrew University (which owned the library until 2008) announced that if the draft were accepted, it would withdraw its collections from the library. The university's collections, which according to the university constitute some 80% of the library's collection, include the Agnon archive, the original manuscript of Hatikvah, and the Rothschild Haggadah, the oldest known Haggadah. A group of 300 authors and poets signed an open letter against the move, further noting their objection to "political takeover" of public broadcasting, as well as "any legislation that will castrate the judiciary and damage the democratic foundations of the state of Israel". Several days later, it was reported that a series of donors had decided to withhold their donations to the library, totaling some 80 million shekels. On 3 March a petition against the move by 1,500 academics, including Israel Prize laureates, was sent to Kish. The proposal was seen by some as retribution against Shai Nitzan, the former State Attorney and the library's current rector. On 5 March it was reported that the Legal Advisor to the Ministry of Finance, Asi Messing, was withholding the proposal. According to Messing, the proposal – which was being promoted as part of the Economic Arrangements Law – "was not reviewed...
by the qualified personnel in the Ministry of Finance, does not align with any of the common goals of the economic plan, was not agreed to by myself and was not approved by the Attorney General." As of February 2023, the government has been debating several proposals that would significantly weaken the Ministry of Environmental Protection, including reducing the environmental regulation of planning and development, and of electricity production. One of the main proposals, the transfer of a 3 billion shekel fund meant to finance waste management plants from the Ministry of Environmental Protection to the Ministry of the Interior, was eventually withdrawn. The Minister of Environmental Protection, Idit Silman, has been criticized for meeting with climate change denialists, for wasteful and personally motivated travel at the ministry's expense, for politicizing the role, and for engaging in political activity on the ministry's time. The government has been noted for an unusually high number of dismissals and resignations of senior career civil servants, and for frequent attempts to replace them with candidates with known political associations, who are often less competent. According to sources, Netanyahu and people in his vicinity are seeking out civil servants who were appointed by the previous government, intent on replacing them with people loyal to him. Governmental nominees for various positions have been criticized for lack of expertise. In addition to the nominee to the position of Deputy National Statistician (see above), the Director General of the Ministry of Finance, Shlomi Heisler; the Director General of the Ministry of Justice, Itamar Donenfeld; and the Director General of the Ministry of Transport, Moshe Ben Zaken, have all been criticized for incompetence, lack of familiarity with their ministries' subject matter, lack of interest in the job, or lack of experience in managing large organizations. 
It has been reported that in some ministries, senior officials were enacting slowdowns as a means of dealing with the new ministers and directors general. On 28 July the director general of the Ministry of Education resigned, citing the societal "rift" as his reason. Asaf Zalel, a retired Air Force Brigadier General, had been appointed in January. When asked about attempts to appoint his personal friend and attorney to the board of directors of a state-owned company, Minister David Amsalem replied: "that is my job, due to my authority to appoint directors. I put forward people that I know and hold in esteem". Under Minister of Transport Miri Regev, the ministry has either dismissed or lost the heads of the National Public Transport Authority, Israel Airports Authority, National Road Safety Authority, Israel Railways, and several officials in Netivei Israel. The current chair of Netivei Israel is Likud member and Regev associate Yigal Amadi, and the legal counsel is Einav Abuhzira, daughter of a former Likud branch chair. Abuhzira was appointed in place of Elad Berdugo, nephew of Netanyahu surrogate Yaakov Bardugo, after Berdugo was disqualified for the role by the Israel Government Companies Authority. In July 2023 the Minister of Communications, Shlomo Karhi, and the minister in charge of the Israel Government Companies Authority, Dudi Amsalem, deposed the chair of the Israel Postal Company, Michael Vaknin. The chair, who was hired to lead the company's financial recovery after years of operational loss and towards privatization, had gained the support of officials at the Authority and at the Ministry of Finance; nevertheless, the ministers claimed that his performance was inadequate, and nominated in his place Yiftah Ron-Tal, who has known ties to Netanyahu and Smotrich. They also nominated four new directors, two of whom have known political associations, and a third who was a witness in Netanyahu's trial. 
The coalition is allowed to spend a portion of the state's budget on a discretionary basis, meant to coax member parties into reaching an agreement on the budget. As of May 2023, the government was pushing an allocation of over 13 billion shekels over two years – almost seven times the amount allocated by the previous government. Most of the funds were to be allocated for uses associated with the religious, Orthodox and settler communities. The head of the Budget Department at the Ministry of Finance, Yoav Gardos, objected to the allocations, claiming they would exacerbate unemployment in the Orthodox community, which is projected to cost the economy a total of 6.7 trillion shekels in lost output by 2065. At the onset of the Gaza war and the declaration of a state of national emergency, Minister of Finance Bezalel Smotrich instructed government agencies to continue with the planned distribution of discretionary funds. Corruption During March 2023, the government was promoting an amendment to the Law on Public Service (Gifts) that would allow Netanyahu to receive donations to fund his legal defense. The amendment followed a decision by the High Court of Justice (HCJ) that forced Netanyahu to refund US$270,000 given to him and his wife by his late cousin, Nathan Mileikowsky, for their legal defense. This stands in contrast to past statements by Minister of Justice Yariv Levin, who spoke against the possible conflict of interests that can result from such transactions. The bill was opposed by the Attorney General, Gali Baharav-Miara, who stressed that it could "create a real opportunity for governmental corruption", and was eventually withdrawn at the end of March. As of March 2023, the coalition was promoting a bill that would prevent judicial review of ministerial appointments. 
The bill is intended to prevent the HCJ from reviewing the appointment of the twice-convicted chairman of Shas, Aryeh Deri (convicted of bribery, fraud, and breach of trust), to a ministerial position, after his previous appointment was annulled on grounds of unreasonableness. The bill follows on the heels of another amendment that relaxed the ban on the appointment of convicted criminals, so that Deri – who was handed a suspended sentence after his second conviction – could be appointed. The bill is opposed by the Attorney General, as well as by the Knesset Legal Adviser, Sagit Afik. Israeli law allows for declaring a Prime Minister (as well as several other high-ranking public officials) to be temporarily or permanently incapacitated, but does not specify the conditions that can lead to a declaration of incapacitation. In the case of the Prime Minister, the authority to do so is given to the Attorney General. In March 2023, the coalition advanced a bill that transfers this authority from the Attorney General to the government, subject to the approval of the Knesset committee, and clarified that incapacitation can only result from medical or mental conditions. On 3 January 2024, the Supreme Court ruled by a majority of 6 out of 11 that the law's entry into force would be postponed to the next Knesset, because the bill in its immediate application is a personal law intended to serve a distinct personal purpose. Later, the court rejected a petition to declare Netanyahu an incapacitated prime minister due to his ongoing trial and conflict of interests. Notes References External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Social_network#cite_ref-84] | [TOKENS: 5247]
Contents Social network A social network is a social structure consisting of a set of social actors (such as individuals or organizations), networks of dyadic ties, and other social interactions between actors. The social network perspective provides a set of methods for analyzing the structure of whole social entities along with a variety of theories explaining the patterns observed in these structures. The study of these structures uses social network analysis to identify local and global patterns, locate influential entities, and examine dynamics of networks. For instance, social network analysis has been used in studying the spread of misinformation on social media platforms or analyzing the influence of key figures in social networks. Social networks and their analysis form an inherently interdisciplinary academic field that emerged from social psychology, sociology, statistics, and graph theory. Georg Simmel authored early structural theories in sociology emphasizing the dynamics of triads and "web of group affiliations". Jacob Moreno is credited with developing the first sociograms in the 1930s to study interpersonal relationships. These approaches were mathematically formalized in the 1950s and theories and methods of social networks became pervasive in the social and behavioral sciences by the 1980s. Social network analysis is now one of the major paradigms in contemporary sociology, and is also employed in a number of other social and formal sciences. Together with other complex networks, it forms part of the nascent field of network science. Overview The social network is a theoretical construct useful in the social sciences to study relationships between individuals, groups, organizations, or even entire societies (social units, see differentiation). 
The term is used to describe a social structure determined by such interactions. The ties through which any given social unit connects represent the convergence of the various social contacts of that unit. This theoretical approach is, necessarily, relational. An axiom of the social network approach to understanding social interaction is that social phenomena should be primarily conceived and investigated through the properties of relations between and within units, instead of the properties of these units themselves. Thus, one common criticism of social network theory is that individual agency is often ignored, although this may not be the case in practice (see agent-based modeling). Precisely because many different types of relations, singular or in combination, form these network configurations, network analytics are useful to a broad range of research enterprises. In social science, these fields of study include, but are not limited to, anthropology, biology, communication studies, economics, geography, information science, organizational studies, social psychology, sociology, and sociolinguistics. History In the late 1890s, both Émile Durkheim and Ferdinand Tönnies foreshadowed the idea of social networks in their theories and research of social groups. Tönnies argued that social groups can exist as personal and direct social ties that either link individuals who share values and beliefs (Gemeinschaft, German, commonly translated as "community") or impersonal, formal, and instrumental social links (Gesellschaft, German, commonly translated as "society"). Durkheim gave a non-individualistic explanation of social facts, arguing that social phenomena arise when interacting individuals constitute a reality that can no longer be accounted for in terms of the properties of individual actors. 
Georg Simmel, writing at the turn of the twentieth century, pointed to the nature of networks and the effect of network size on interaction and examined the likelihood of interaction in loosely knit networks rather than groups. Major developments in the field occurred in the 1930s, when several groups in psychology, anthropology, and mathematics were working independently. In psychology, in the 1930s, Jacob L. Moreno began systematic recording and analysis of social interaction in small groups, especially classrooms and work groups (see sociometry). In anthropology, the foundation for social network theory is the theoretical and ethnographic work of Bronislaw Malinowski, Alfred Radcliffe-Brown, and Claude Lévi-Strauss. A group of social anthropologists associated with Max Gluckman and the Manchester School, including John A. Barnes, J. Clyde Mitchell and Elizabeth Bott Spillius, is often credited with performing some of the first fieldwork from which network analyses were performed, investigating community networks in southern Africa, India and the United Kingdom. Concomitantly, British anthropologist S. F. Nadel codified a theory of social structure that was influential in later network analysis. In sociology, the early (1930s) work of Talcott Parsons set the stage for taking a relational approach to understanding social structure. Later, drawing upon Parsons' theory, the work of sociologist Peter Blau provided a strong impetus for analyzing the relational ties of social units with his work on social exchange theory. By the 1970s, a growing number of scholars worked to combine the different tracks and traditions. One group consisted of sociologist Harrison White and his students at the Harvard University Department of Social Relations. 
Also independently active in the Harvard Social Relations department at the time were Charles Tilly, who focused on networks in political and community sociology and social movements, and Stanley Milgram, who developed the "six degrees of separation" thesis. Mark Granovetter and Barry Wellman are among the former students of White who elaborated and championed the analysis of social networks. Beginning in the late 1990s, social network analysis saw a new wave of work by sociologists, political scientists, and physicists such as Duncan J. Watts, Albert-László Barabási, Peter Bearman, Nicholas A. Christakis, James H. Fowler, and others, developing and applying new models and methods to emerging data available about online social networks, as well as "digital traces" regarding face-to-face networks. Levels of analysis In general, social networks are self-organizing, emergent, and complex, such that a globally coherent pattern appears from the local interaction of the elements that make up the system. These patterns become more apparent as network size increases. However, a global network analysis of, for example, all interpersonal relationships in the world is not feasible and is likely to contain so much information as to be uninformative. Practical limitations of computing power, ethics and participant recruitment and payment also limit the scope of a social network analysis. The nuances of a local system may be lost in a large network analysis, hence the quality of information may be more important than its scale for understanding network properties. Thus, social networks are analyzed at the scale relevant to the researcher's theoretical question. Although levels of analysis are not necessarily mutually exclusive, there are three general levels into which networks may fall: micro-level, meso-level, and macro-level. 
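Milgram's "six degrees of separation" thesis is, formally, a claim about short average geodesic (shortest-path) distances in social networks. As an illustrative sketch, the average path length of a small undirected graph can be computed with breadth-first search; the toy graph below is an assumption for demonstration, not data from any study:

```python
from collections import deque

def shortest_path_lengths(graph, source):
    """BFS distances from `source` in an undirected graph given as an adjacency dict."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for nbr in graph[node]:
            if nbr not in dist:
                dist[nbr] = dist[node] + 1
                queue.append(nbr)
    return dist

def average_path_length(graph):
    """Mean geodesic distance over all ordered pairs of distinct, reachable nodes."""
    total, pairs = 0, 0
    for source in graph:
        for node, d in shortest_path_lengths(graph, source).items():
            if node != source:
                total += d
                pairs += 1
    return total / pairs

# A 6-node ring 0-1-2-3-4-5-0 with one "shortcut" chord 0-3.
ring = {0: [1, 5, 3], 1: [0, 2], 2: [1, 3], 3: [2, 4, 0], 4: [3, 5], 5: [4, 0]}
print(average_path_length(ring))  # 1.666...: every node is within a few steps
```

Even this tiny example shows the small-world intuition: a single shortcut edge pulls the average distance down relative to the bare ring.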
At the micro-level, social network research typically begins with an individual, snowballing as social relationships are traced, or may begin with a small group of individuals in a particular social context. Dyadic level: A dyad is a social relationship between two individuals. Network research on dyads may concentrate on the structure of the relationship (e.g. multiplexity, strength), social equality, and tendencies toward reciprocity/mutuality. Triadic level: Add one individual to a dyad, and you have a triad. Research at this level may concentrate on factors such as balance and transitivity, as well as social equality and tendencies toward reciprocity/mutuality. In the balance theory of Fritz Heider the triad is the key to social dynamics. The discord in a rivalrous love triangle is an example of an unbalanced triad, likely to change to a balanced triad by a change in one of the relations. The dynamics of social friendships in society has been modeled by balancing triads. The study is carried forward with the theory of signed graphs. Actor level: The smallest unit of analysis in a social network is an individual in their social setting, i.e., an "actor" or "ego." Ego network analysis focuses on network characteristics such as size, relationship strength, density, centrality, prestige, and roles such as isolates, liaisons, and bridges. Such analyses are most commonly used in the fields of psychology or social psychology, ethnographic kinship analysis, or other genealogical studies of relationships between individuals. Subset level: Subset levels of network research problems begin at the micro-level, but may cross over into the meso-level of analysis. Subset level research may focus on distance and reachability, cliques, cohesive subgroups, or other group actions or behavior. In general, meso-level theories begin with a population size that falls between the micro- and macro-levels. 
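Heider's balance condition for signed triads, described above, has a compact arithmetic form: labelling each tie +1 (positive) or -1 (negative), a triad is balanced exactly when the product of its three signs is positive. A minimal sketch, with the familiar folk readings as examples:

```python
def is_balanced(s_ab, s_bc, s_ca):
    """Heider balance for a signed triad: balanced iff the product of the
    three edge signs (+1 friendly, -1 hostile) is positive."""
    return s_ab * s_bc * s_ca > 0

# "The friend of my friend is my friend": all positive -> balanced.
print(is_balanced(+1, +1, +1))   # True
# A rivalrous love triangle: A likes B and C, but B and C are hostile.
print(is_balanced(+1, -1, +1))   # False: unbalanced, under pressure to change
# "The enemy of my enemy is my friend": two negatives -> balanced.
print(is_balanced(-1, -1, +1))   # True
```

Of the eight possible sign patterns on a triad, exactly four are balanced under this rule, which is the starting point for the signed-graph treatments mentioned above.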
However, meso-level may also refer to analyses that are specifically designed to reveal connections between micro- and macro-levels. Meso-level networks are low density and may exhibit causal processes distinct from interpersonal micro-level networks. Organizations: Formal organizations are social groups that distribute tasks for a collective goal. Network research on organizations may focus on either intra-organizational or inter-organizational ties in terms of formal or informal relationships. Intra-organizational networks themselves often contain multiple levels of analysis, especially in larger organizations with multiple branches, franchises or semi-autonomous departments. In these cases, research is often conducted at a work group level and organization level, focusing on the interplay between the two structures. Experiments with networked groups online have documented ways to optimize group-level coordination through diverse interventions, including the addition of autonomous agents to the groups. Randomly distributed networks: Exponential random graph models of social networks became state-of-the-art methods of social network analysis in the 1980s. This framework has the capacity to represent social-structural effects commonly observed in many human social networks, including general degree-based structural effects as well as reciprocity and transitivity, and, at the node level, homophily and attribute-based activity and popularity effects, as derived from explicit hypotheses about dependencies among network ties. Parameters are given in terms of the prevalence of small subgraph configurations in the network and can be interpreted as describing the combinations of local social processes from which a given network emerges. 
These probability models for networks on a given set of actors allow generalization beyond the restrictive dyadic independence assumption of micro-networks, allowing models to be built from theoretical structural foundations of social behavior. Scale-free networks: A scale-free network is a network whose degree distribution follows a power law, at least asymptotically. In network theory a scale-free ideal network is a random network with a degree distribution that unravels the size distribution of social groups. Specific characteristics of scale-free networks vary with the theories and analytical tools used to create them; in general, however, scale-free networks have some common characteristics. One notable characteristic of a scale-free network is the relative commonness of vertices with a degree that greatly exceeds the average. The highest-degree nodes are often called "hubs", and may serve specific purposes in their networks, although this depends greatly on the social context. Another general characteristic of scale-free networks is the clustering coefficient distribution, which decreases as the node degree increases. This distribution also follows a power law. The Barabási model of network evolution is an example of a scale-free network. Rather than tracing interpersonal interactions, macro-level analyses generally trace the outcomes of interactions, such as economic or other resource transfer interactions over a large population. Large-scale networks: Large-scale network is a term somewhat synonymous with "macro-level." It is primarily used in social and behavioral sciences, and in economics. Originally, the term was used extensively in the computer sciences (see large-scale network mapping). 
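The hub-dominated degree distributions of scale-free networks described above can be illustrated with a minimal preferential-attachment sketch, in the spirit of the Barabási model: each new node attaches to existing nodes with probability proportional to their degree. The parameters, random seed, and starting clique below are illustrative assumptions, not a faithful reimplementation of any published code:

```python
import random

def preferential_attachment(n, m, seed=0):
    """Grow a graph in which each new node attaches to m existing nodes
    chosen with probability proportional to their current degree."""
    rng = random.Random(seed)
    # Start from a small clique of m+1 nodes so every node has nonzero degree.
    edges = [(i, j) for i in range(m + 1) for j in range(i)]
    # `targets` lists each endpoint once per incident edge, so uniform
    # sampling from it is degree-proportional sampling.
    targets = [v for e in edges for v in e]
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(targets))
        for old in chosen:
            edges.append((new, old))
            targets += [new, old]
    return edges

edges = preferential_attachment(200, 2, seed=1)
degree = {}
for u, v in edges:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1
avg = sum(degree.values()) / len(degree)
# The largest hub's degree typically far exceeds the ~3.97 average here.
print(max(degree.values()), round(avg, 2))
```

The "rich get richer" loop is the whole mechanism: early and already-popular nodes keep accumulating ties, producing the heavy-tailed degree distribution the text describes.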
Complex networks: Most larger social networks display features of social complexity, which involves substantial non-trivial features of network topology, with patterns of complex connections between elements that are neither purely regular nor purely random (see complexity science, dynamical systems and chaos theory), as do biological and technological networks. Such complex network features include a heavy tail in the degree distribution, a high clustering coefficient, assortativity or disassortativity among vertices, community structure (see stochastic block model), and hierarchical structure. In the case of agency-directed networks these features also include reciprocity, triad significance profile (TSP, see network motif), and other features. In contrast, many of the mathematical models of networks that have been studied in the past, such as lattices and random graphs, do not show these features. Theoretical links Various theoretical frameworks have been imported for the use of social network analysis. The most prominent of these are graph theory, balance theory, social comparison theory and, more recently, the social identity approach. Few complete theories have been produced from social network analysis. Two that have are structural role theory and heterophily theory. The basis of heterophily theory was the finding in one study that more numerous weak ties can be important in seeking information and innovation, as cliques tend to have more homogeneous opinions and share many common traits. This homophilic tendency was the reason for the members of the cliques to be attracted together in the first place. However, being similar, each member of the clique would also know more or less what the other members knew. To find new information or insights, members of the clique will have to look beyond the clique to its other friends and acquaintances. This is what Granovetter called "the strength of weak ties". 
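Among the complex-network features listed above, the local clustering coefficient is the easiest to compute directly: it is the fraction of a node's neighbor pairs that are themselves tied. A sketch on an assumed toy graph:

```python
from itertools import combinations

def local_clustering(graph, node):
    """Fraction of a node's neighbor pairs that are themselves connected:
    C_i = 2*T_i / (k_i * (k_i - 1)) for an undirected adjacency dict,
    where T_i counts ties among the k_i neighbors."""
    nbrs = graph[node]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for u, v in combinations(nbrs, 2) if v in graph[u])
    return 2 * links / (k * (k - 1))

# A triangle {a, b, c} with a pendant node d attached to c.
g = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d"], "d": ["c"]}
print(local_clustering(g, "a"))  # 1.0: both of a's neighbors are tied
print(local_clustering(g, "c"))  # 1/3: one of c's three neighbor pairs is a tie
print(local_clustering(g, "d"))  # 0.0 by convention for degree < 2
```

In a tightly knit friendship group most values sit near 1; in the random graphs mentioned above they sit near the (low) overall edge density, which is why a high clustering coefficient is treated as a signature of social structure.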
Structural holes In the context of networks, social capital exists where people have an advantage because of their location in a network. Contacts in a network provide information, opportunities and perspectives that can be beneficial to the central player in the network. Most social structures tend to be characterized by dense clusters of strong connections. Information within these clusters tends to be rather homogeneous and redundant. Non-redundant information is most often obtained through contacts in different clusters. When two separate clusters possess non-redundant information, there is said to be a structural hole between them. Thus, a network that bridges structural holes will provide network benefits that are to some degree additive, rather than overlapping. An ideal network structure has a vine and cluster structure, providing access to many different clusters and structural holes. Networks rich in structural holes are a form of social capital in that they offer information benefits. The main player in a network that bridges structural holes is able to access information from diverse sources and clusters. For example, in business networks, this is beneficial to an individual's career because he is more likely to hear of job openings and opportunities if his network spans a wide range of contacts in different industries/sectors. This concept is similar to Mark Granovetter's theory of weak ties, which rests on the basis that having a broad range of contacts is most effective for job attainment. Structural holes have been widely applied in social network analysis, resulting in applications in a wide range of practical scenarios as well as machine learning-based social prediction. Research clusters Research has used network analysis to examine networks created when artists are exhibited together in museum exhibitions. 
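The structural-hole idea above is commonly operationalized with Burt's "effective size" of an ego network, which discounts contacts who are redundant because they already know each other. The sketch below uses Borgatti's simplification for binary, undirected ties (effective size = n - 2t/n, with n alters and t ties among them); the toy graphs are assumptions:

```python
def effective_size(graph, ego):
    """Borgatti's simplification of Burt's effective size for binary,
    undirected ego networks: n - 2t/n, where n is the number of alters
    and t the number of ties among them. Higher values mean more
    structural holes (less redundancy) around the ego."""
    alters = set(graph[ego])
    n = len(alters)
    if n == 0:
        return 0.0
    t = sum(1 for a in alters for b in graph[a] if b in alters) / 2
    return n - 2 * t / n

# A broker whose four contacts fall into two mutually unconnected pairs.
g = {
    "broker": ["w", "x", "y", "z"],
    "w": ["broker", "x"], "x": ["broker", "w"],
    "y": ["broker", "z"], "z": ["broker", "y"],
}
print(effective_size(g, "broker"))  # 3.0: contacts are largely non-redundant

# An ego embedded in a closed clique: every contact knows every other.
clique = {"ego": ["p", "q", "r"], "p": ["ego", "q", "r"],
          "q": ["ego", "p", "r"], "r": ["ego", "p", "q"]}
print(effective_size(clique, "ego"))  # 1.0: the contacts are fully redundant
```

The broker's four contacts are "worth" three non-redundant ones, while the clique member's three contacts collapse to one, which is the arithmetic behind the information-benefit argument in the text.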
Such networks have been shown to affect an artist's recognition in history and historical narratives, even when controlling for individual accomplishments of the artist. Other work examines how network grouping of artists can affect an individual artist's auction performance. An artist's status has been shown to increase when associated with higher status networks, though this association has diminishing returns over an artist's career. In J.A. Barnes' day, a "community" referred to a specific geographic location and studies of community ties had to do with who talked, associated, traded, and attended church with whom. Today, however, there are extended "online" communities developed through telecommunications devices and social network services. Such devices and services require extensive and ongoing maintenance and analysis, often using network science methods. Community development studies, today, also make extensive use of such methods. Complex networks require methods specific to modelling and interpreting social complexity and complex adaptive systems, including techniques of dynamic network analysis. Mechanisms such as Dual-phase evolution explain how temporal changes in connectivity contribute to the formation of structure in social networks. The study of social networks is being used to examine the nature of interdependencies between actors and the ways in which these are related to outcomes of conflict and cooperation. Areas of study include cooperative behavior among participants in collective actions such as protests; promotion of peaceful behavior, social norms, and public goods within communities through networks of informal governance; the role of social networks in both intrastate conflict and interstate conflict; and social networking among politicians, constituents, and bureaucrats. In criminology and urban sociology, much attention has been paid to the social networks among criminal actors. 
For example, murders can be seen as a series of exchanges between gangs. Murders can be seen to diffuse outwards from a single source, because weaker gangs cannot afford to kill members of stronger gangs in retaliation, but must commit other violent acts to maintain their reputation for strength. Diffusion of ideas and innovations studies focus on the spread and use of ideas from one actor to another, or from one culture to another. This line of research seeks to explain why some become "early adopters" of ideas and innovations, and links social network structure with facilitating or impeding the spread of an innovation. A case in point is the social diffusion of linguistic innovation such as neologisms. Experiments and large-scale field trials (e.g., by Nicholas Christakis and collaborators) have shown that cascades of desirable behaviors can be induced in social groups, in settings as diverse as Honduras villages, Indian slums, or in the lab. Still other experiments have documented the experimental induction of social contagion of voting behavior, emotions, risk perception, and commercial products. In demography, the study of social networks has led to new sampling methods for estimating and reaching populations that are hard to enumerate (for example, homeless people or intravenous drug users). For example, respondent driven sampling is a network-based sampling technique that relies on respondents to a survey recommending further respondents. The field of sociology focuses almost entirely on networks of outcomes of social interactions. More narrowly, economic sociology considers behavioral interactions of individuals and groups through social capital and social "markets". Sociologists, such as Mark Granovetter, have developed core principles about the interactions of social structure, information, ability to punish or reward, and trust that frequently recur in their analyses of political, economic and other institutions. 
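The cascade and diffusion findings above come from varied experimental designs; as a purely illustrative model of how network structure gates diffusion, here is a deterministic linear-threshold ("complex contagion") sketch, with an assumed toy graph, in which a node adopts a behavior only once at least k of its neighbors have adopted:

```python
def threshold_cascade(graph, seeds, k):
    """Deterministic complex-contagion sketch: a node adopts once at least
    k of its neighbors have adopted; iterate to a fixed point."""
    adopted = set(seeds)
    changed = True
    while changed:
        changed = False
        for node in graph:
            if node not in adopted and sum(nbr in adopted for nbr in graph[node]) >= k:
                adopted.add(node)
                changed = True
    return adopted

# Two triangles joined by a single bridge edge c-d.
g = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d"],
     "d": ["c", "e", "f"], "e": ["d", "f"], "f": ["d", "e"]}

# With reinforcement required (k = 2), the cascade fills the left triangle
# but cannot cross the single-tie bridge.
print(sorted(threshold_cascade(g, {"a", "b"}, k=2)))
```

With k = 1 (simple contagion) the same seeds reach the whole graph; with k = 2 the cascade stalls at the narrow bridge, illustrating why the width of bridges, not just their existence, matters for behaviors that need social reinforcement.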
Granovetter examines how social structures and social networks can affect economic outcomes like hiring, price, productivity and innovation and describes sociologists' contributions to analyzing the impact of social structure and networks on the economy. Analysis of social networks is increasingly incorporated into health care analytics, not only in epidemiological studies but also in models of patient communication and education, disease prevention, mental health diagnosis and treatment, and in the study of health care organizations and systems. Human ecology is an interdisciplinary and transdisciplinary study of the relationship between humans and their natural, social, and built environments. The scientific philosophy of human ecology has a diffuse history with connections to geography, sociology, psychology, anthropology, zoology, and natural ecology. In the study of literary systems, network analysis has been applied by Anheier, Gerhards and Romo, De Nooy, Senekal, and Lotker, to study various aspects of how literature functions. The basic premise is that polysystem theory, which has been around since the writings of Even-Zohar, can be integrated with network theory and the relationships between different actors in the literary network, e.g. writers, critics, publishers, literary histories, etc., can be mapped using visualization from SNA. Research in this area studies formal and informal organizational relationships, organizational communication, economics, economic sociology, and other resource transfers. Social networks have also been used to examine how organizations interact with each other, characterizing the many informal connections that link executives together, as well as associations and connections between individual employees at different organizations. Many organizational social network studies focus on teams. 
Within team network studies, research assesses, for example, the predictors and outcomes of centrality and power, density and centralization of team instrumental and expressive ties, and the role of between-team networks. Intra-organizational networks have been found to affect organizational commitment, organizational identification, interpersonal citizenship behaviour. Social capital is a form of economic and cultural capital in which social networks are central, transactions are marked by reciprocity, trust, and cooperation, and market agents produce goods and services not mainly for themselves, but for a common good. Social capital is split into three dimensions: the structural, the relational and the cognitive dimension. The structural dimension describes how partners interact with each other and which specific partners meet in a social network. Also, the structural dimension of social capital indicates the level of ties among organizations. This dimension is highly connected to the relational dimension which refers to trustworthiness, norms, expectations and identifications of the bonds between partners. The relational dimension explains the nature of these ties which is mainly illustrated by the level of trust accorded to the network of organizations. The cognitive dimension analyses the extent to which organizations share common goals and objectives as a result of their ties and interactions. Social capital is a sociological concept about the value of social relations and the role of cooperation and confidence to achieve positive outcomes. The term refers to the value one can get from their social ties. For example, newly arrived immigrants can make use of their social ties to established migrants to acquire jobs they may otherwise have trouble getting (e.g., because of unfamiliarity with the local language). A positive relationship exists between social capital and the intensity of social network use. 
In a dynamic framework, higher activity in a network feeds into higher social capital which itself encourages more activity. Another research cluster focuses on brand image and promotional-strategy effectiveness, taking into account the impact of customer participation on sales and brand image. This is gauged through techniques such as sentiment analysis which rely on mathematical areas of study such as data mining and analytics. This area of research produces vast numbers of commercial applications as the main goal of any study is to understand consumer behaviour and drive sales. In many organizations, members tend to focus their activities inside their own groups, which stifles creativity and restricts opportunities. A player whose network bridges structural holes has an advantage in detecting and developing rewarding opportunities. Such a player can mobilize social capital by acting as a "broker" of information between two clusters that otherwise would not have been in contact, thus providing access to new ideas, opinions and opportunities. British philosopher and political economist John Stuart Mill writes, "it is hardly possible to overrate the value of placing human beings in contact with persons dissimilar to themselves.... Such communication [is] one of the primary sources of progress." Thus, a player with a network rich in structural holes can add value to an organization through new ideas and opportunities. This in turn, helps an individual's career development and advancement. A social capital broker also reaps control benefits of being the facilitator of information flow between contacts. Full communication with exploratory mindsets and information exchange generated by dynamically alternating positions in a social network promotes creative and deep thinking. 
In the case of consulting firm Eden McCallum, the founders were able to advance their careers by bridging their connections with former big-three consulting firm consultants and mid-size industry firms. By bridging structural holes and mobilizing social capital, players can advance their careers by executing new opportunities between contacts. There has been research that both substantiates and refutes the benefits of information brokerage. A study of high-tech Chinese firms by Zhixing Xiao found that the control benefits of structural holes are "dissonant to the dominant firm-wide spirit of cooperation and the information benefits cannot materialize due to the communal sharing values" of such organizations. However, this study only analyzed Chinese firms, which tend to have strong communal sharing values. Information and control benefits of structural holes are still valuable in firms that are not quite as inclusive and cooperative on the firm-wide level. In 2004, Ronald Burt studied 673 managers who ran the supply chain for one of America's largest electronics companies. He found that managers who often discussed issues with other groups were better paid, received more positive job evaluations and were more likely to be promoted. Thus, bridging structural holes can be beneficial to an organization, and in turn, to an individual's career. Computer networks combined with social networking software produce a new medium for social interaction. A relationship over a computerized social networking service can be characterized by content, direction, and strength. The content of a relation refers to the resource that is exchanged. In a computer-mediated communication context, social pairs exchange different kinds of information, including sending a data file or a computer program as well as providing emotional support or arranging a meeting. 
With the rise of electronic commerce, information exchanged may also correspond to exchanges of money, goods or services in the "real" world. Social network analysis methods have become essential to examining these types of computer-mediated communication. In addition, the sheer size and the volatile nature of social media have given rise to new network metrics. A key concern with networks extracted from social media is the lack of robustness of network metrics given missing data. Following the pattern of homophily, ties between people are most likely to form between nodes that are similar to each other; likewise, under neighbourhood segregation, individuals are most likely to inhabit the same regional areas as other individuals who are like them. Social networks can therefore be used as a tool to measure the degree of segregation or homophily within a network. Social networks can be used both to simulate the process of homophily and to measure the level of exposure of different groups to each other within a current social network of individuals in a certain area.
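One simple way to quantify homophily, as described above, is to compare the observed share of same-group ties against the share expected if ties formed without regard to group membership. The tiny network and group labels below are invented for illustration.

```python
from collections import Counter

# Hypothetical six-person network: nodes a-c belong to group "x", d-f to "y".
group = {"a": "x", "b": "x", "c": "x", "d": "y", "e": "y", "f": "y"}
edges = [("a", "b"), ("b", "c"), ("a", "c"), ("d", "e"), ("e", "f"), ("c", "d")]

def same_group_fraction(edges, group):
    """Observed share of ties connecting nodes in the same group."""
    same = sum(group[u] == group[v] for u, v in edges)
    return same / len(edges)

def expected_fraction(group):
    """Share of all *possible* node pairs that are same-group -- the level
    expected if ties formed at random across groups."""
    sizes = Counter(group.values())
    n = len(group)
    same_pairs = sum(s * (s - 1) // 2 for s in sizes.values())
    return same_pairs / (n * (n - 1) // 2)

# 5 of 6 observed ties are same-group vs. 0.4 expected: a homophilous network.
print(same_group_fraction(edges, group), expected_fraction(group))
```

An observed fraction well above the expected baseline indicates homophily or segregation; a fraction near or below it indicates mixing between groups.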
========================================
[SOURCE: https://en.wikipedia.org/wiki/Rap] | [TOKENS: 9450]
Rapping

Rapping (also dropping, rhyming, flowing, spitting, emceeing, or MCing) is an artistic form of vocal delivery and emotive expression that incorporates "rhyme, rhythmic speech, and [commonly] street vernacular". It is usually performed over a backing beat or musical accompaniment. The components of rap include "content" (what is being said, e.g., lyrics), "flow" (rhythm, rhyme), and "delivery" (cadence, tone). Rap differs from spoken-word poetry in that it is usually performed in time to musical accompaniment. It also differs from singing, which varies in pitch and does not always include words. Because they do not rely on pitch inflection, some rap artists may play with timbre or other vocal qualities. Rap is a primary ingredient of hip-hop music, and so commonly associated with the genre that it is sometimes called "rap music". Precursors to modern rap music include the West African griot tradition, certain vocal styles of blues and jazz, an African-American insult game called playing the dozens (see Battle rap and Diss), and 1960s African-American poetry. Stemming from the hip-hop cultural movement, rap music originated in the Bronx, New York City, in the early 1970s and became part of popular music later that decade. Rapping developed from the announcements made over the microphone at parties by DJs and MCs, evolving into more complex lyrical performances. Rap is usually delivered over a beat, typically provided by a DJ, turntablist, or beatboxer when performing live. Much less commonly, a rapper can decide to perform a cappella. When a rap or hip-hop artist is creating a song, "track", or record, done primarily in a production studio, most frequently a producer provides the beat(s) for the MC to flow over. Stylistically, rap occupies a gray area between speech, prose, poetry, and singing. The word, which predates the musical form, originally meant "to lightly strike", and is now used to describe quick speech or repartee. 
The word has been used in the English language since the 16th century. In the 1960s the word became a slang term meaning "to converse" in African American vernacular, and very soon after that came to denote the musical style. Rap music has played a significant role in expressing social and political issues, addressing topics such as racism, poverty, and political oppression. By the 21st century, rap had become a global phenomenon, influencing music, fashion, and culture worldwide. History The English verb rap has various meanings; these include "to strike, especially with a quick, smart, or light blow", as well as "to utter sharply or vigorously: to rap out a command". The Shorter Oxford English Dictionary gives a date of 1541 for the first recorded use of the word with the meaning "to utter (esp. an oath) sharply, vigorously, or suddenly". Wentworth and Flexner's Dictionary of American Slang gives the meaning "to speak to, recognize, or acknowledge acquaintance with someone", dated 1932, and a later meaning of "to converse, esp. in an open and frank manner". It is these meanings from which the musical form of rapping derives, and this definition may be from a shortening of repartee. A rapper refers to a performer who "raps". By the late 1960s, when Hubert G. Brown changed his name to H. Rap Brown, rap was a slang term referring to an oration or speech, such as was common among the "hip" crowd in the protest movements, but it did not come to be associated with a musical style for another decade. Rap was used to describe talking on records as early as 1970 on Isaac Hayes' album ...To Be Continued with the track name "Monologue: Ike's Rap I". Hayes' "husky-voiced sexy spoken 'raps' became key components in his signature sound". Del the Funky Homosapien similarly states that rap was used to refer to talking in a stylistic manner in the early 1970s: "I was born in '72 ... 
back then what rapping meant, basically, was you trying to convey something—you're trying to convince somebody. That's what rapping is, it's in the way you talk." It is sometimes claimed that "rap" is an acronym for 'Rhythm And Poetry', but this does not reflect the history of the word and thus is best seen as a backronym. Similarities to rapping can be observed in West African chanting folk traditions. Centuries before hip-hop music existed, the griots of West Africa were delivering stories rhythmically, over drums and sparse instrumentation. Such resemblances have been noted by many modern artists, modern day "griots", spoken word artists, mainstream news sources, and academics. Rap lyrics and music are part of the "Black rhetorical continuum", continuing past traditions and expanding upon them through "creative use of language and rhetorical styles and strategies". Blues, rooted in the work songs and spirituals of slavery, was first played by black Americans around the time of the Emancipation Proclamation. This way of preaching, unique to African-Americans and called the Black sermonic tradition, influenced singers and musicians such as 1940s African-American gospel group The Jubalaires. The Jubalaires' songs "The Preacher and the Bear" (1941) and "Noah" (1946) are precursors to the genre of rap music. The Jubalaires and other African-American singing groups during the blues, jazz, and gospel era are examples of the origins and development of rap music. Grammy-winning blues musician/historian Elijah Wald and others have argued that the blues were being rapped as early as the 1920s. Wald went so far as to call hip-hop "the living blues". A notable recorded example of rapping in blues was the 1950 song "Gotta Let You Go" by Joe Hill Louis. Jazz, which developed from the blues and other African-American and European musical traditions and originated around the beginning of the 20th century, has also influenced hip-hop and has been cited as a precursor of hip-hop. 
This influence extends not just to jazz music and lyrics but also to jazz poetry. According to John Sobol, the jazz musician and poet who wrote Digitopia Blues, rap "bears a striking resemblance to the evolution of jazz both stylistically and formally". Boxer Muhammad Ali anticipated elements of rap, often using rhyme schemes and spoken word poetry, both when he was trash talking in boxing and as political poetry for his activism outside of boxing, paving the way for The Last Poets in 1968, Gil Scott-Heron in 1970, and the emergence of rap music in the 1970s. An editor of the newspaper The Fayetteville Observer interviewed Bill Curtis of the disco-funk music group the Fatback Band in 2020. Curtis noted that when he moved to the Bronx in the 1970s he heard people rapping over scratched records throughout the neighborhoods, and radio DJs were rapping before the genre was released on retail recordings. The Fatback Band released the first rap recording, "King Tim III (Personality Jock)", a few weeks before the Sugarhill Gang in 1979. In another interview Curtis said: "There was rapping in the Bronx and the cats there had been doing it for a while...Fatback certainly didn't invent rap or anything. I was just interested in it and I guess years later we were the first to record it. At the time you could already see cats rapping everywhere in the streets and doing stuff." With the decline of disco in the early 1980s, rap became a new form of expression. Rap arose from musical experimentation with rhyming, rhythmic speech. Rap was a departure from disco. Sherley Anne Williams refers to the development of rap as "anti-Disco" in style and means of reproduction. The early productions of rap after disco sought a more simplified manner of producing the tracks they were to sing over. 
Williams explains how rap composers and DJs rejected the heavily orchestrated and ritzy multi-tracks of disco in favor of "break beats", which were created by compiling different records from numerous genres and did not require the equipment of professional recording studios. Professional studios were not necessary, thereby opening the production of rap to youth who, as Williams explains, felt "locked out" because of the capital needed to produce disco records. More directly related to the African-American community were items like schoolyard chants and taunts, clapping games, jump-rope rhymes, some with unwritten folk histories going back hundreds of years across many nationalities. Sometimes these items contain racially offensive lyrics. In his narration between the tracks on George Russell's 1958 jazz album New York, N.Y., the singer Jon Hendricks recorded something close to modern rap, since it all rhymed and was delivered in a hip, rhythm-conscious manner. Art forms such as spoken word jazz poetry and comedy records had an influence on the first rappers. Coke La Rock, often credited as hip-hop's first MC, cites the Last Poets among his influences, as well as comedians such as Wild Man Steve and Richard Pryor. Comedian Rudy Ray Moore released under-the-counter albums in the 1960s and 1970s, such as This Pussy Belongs to Me (1970), which contained "raunchy, sexually explicit rhymes that often had to do with pimps, prostitutes, players, and hustlers", and which later led to him being called "The Godfather of Rap". Gil Scott-Heron, a jazz poet/musician, has been cited as an influence on rappers such as Chuck D and KRS-One. Scott-Heron himself was influenced by Melvin Van Peebles, whose first album was 1968's Brer Soul. Van Peebles describes his vocal style as "the old Southern style", which was influenced by singers he had heard growing up in South Chicago. Van Peebles also said that he was influenced by older forms of African-American music: "... 
people like Blind Lemon Jefferson and the field hollers. I was also influenced by spoken word song styles from Germany that I encountered when I lived in France." During the mid-20th century, the musical culture of the Caribbean was constantly influenced by the concurrent changes in American music. As early as 1956, deejays were toasting over dubbed Jamaican beats. It was called "rap", expanding the word's earlier meaning in the African-American community—"to discuss or debate informally." The early rapping of hip-hop developed out of DJ and master of ceremonies' announcements made over the microphone at parties, and later into more complex raps. Grandmaster Caz stated: "The microphone was just used for making announcements, like when the next party was gonna be, or people's moms would come to the party looking for them, and you have to announce it on the mic. Different DJs started embellishing what they were saying. I would make an announcement this way, and somebody would hear that and they add a little bit to it. I'd hear it again and take it a little step further 'til it turned from lines to sentences to paragraphs to verses to rhymes." One of the first rappers at the beginning of the hip-hop period, at the end of the 1970s, was also hip-hop's first DJ, DJ Kool Herc. Herc, a Jamaican immigrant, started delivering simple raps at his parties, which some claim were inspired by the Jamaican tradition of toasting. However, Kool Herc himself denies this link (in the 1984 book Hip Hop), saying, "Jamaican toasting? Naw, naw. No connection there. I couldn't play reggae in the Bronx. People wouldn't accept it. The inspiration for rap is James Brown and the album Hustler's Convention". Herc also suggests he was too young while in Jamaica to get into sound system parties: "I couldn't get in. Couldn't get in. 
I was ten, eleven years old," and that while in Jamaica, he was listening to James Brown: "I was listening to American music in Jamaica and my favorite artist was James Brown. That's who inspired me. A lot of the records I played were by James Brown." However, in terms of what was identified in the 2010s as "rap", the source came from Manhattan. Pete DJ Jones said the first person he heard rap was DJ Hollywood, a Harlem (not Bronx) native who was the house DJ at the Apollo Theater. Kurtis Blow also said the first person he heard rhyme was DJ Hollywood. In a 2014 interview, Hollywood said: "I used to like the way Frankie Crocker would ride a track, but he wasn't syncopated to the track though. I liked [WWRL DJ] Hank Spann too, but he wasn't on the one. Guys back then weren't concerned with being musical. I wanted to flow with the record". And in 1975, he ushered in what became known as the "hip hop" style by rhyming syncopated to the beat of an existing record uninterruptedly for nearly a minute. He adapted the lyrics of Isaac Hayes' "Good Love 6-9969" and rhymed it to the breakdown part of "Love Is the Message". His partner Kevin Smith, better known as Lovebug Starski, took this new style and introduced it to the Bronx hip-hop set that until then was composed of DJing and b-boying (or beatboxing), with traditional "shout out" style rapping. The style that Hollywood created and his partner introduced to the hip-hop set quickly became the standard. Before that time, most MC rhymes, based on radio DJs, consisted of short patters that were disconnected thematically; they were separate unto themselves. But by using song lyrics, Hollywood gave his rhyme an inherent flow and theme. This was quickly noticed, and the style spread. By the end of the 1970s, artists such as Kurtis Blow and the Sugarhill Gang were starting to receive radio airplay and make an impact far outside of New York City, on a national scale. 
Blondie's 1981 single, "Rapture", was the first number-one single on the United States Billboard Hot 100 chart to feature rap vocals. Old school rap (1979–84) was "easily identified by its relatively simple raps" according to AllMusic, "the emphasis was not on lyrical technique, but simply on good times", one notable exception being Melle Mel, who set the way for future rappers through his socio-political content and creative wordplay. Golden age hip-hop (the mid-1980s to early '90s) was the time period where hip-hop lyricism went through its most drastic transformation – writer William Jelani Cobb says "in these golden years, a critical mass of mic prodigies were literally creating themselves and their art form at the same time" and Allmusic writes, "rhymers like PE's Chuck D, Big Daddy Kane, KRS-One, and Rakim basically invented the complex wordplay and lyrical kung-fu of later hip-hop". The golden age is considered to have ended around 1993–94, marking the end of rap lyricism's most innovative period. Flow "Flow" is defined as "the rhythms and rhymes" of a hip-hop song's lyrics and how they interact – the book How to Rap breaks flow down into rhyme, rhyme schemes, and rhythm (also known as cadence). 'Flow' is also sometimes used to refer to elements of the delivery (pitch, timbre, volume) as well, though often a distinction is made between the flow and the delivery. Staying on the beat is central to rap's flow – many MCs note the importance of staying on-beat in How to Rap including Sean Price, Mighty Casey, Zion I, Vinnie Paz, Fredro Starr, Del the Funky Homosapien, Tech N9ne, People Under the Stairs, Twista, B-Real, Mr Lif, 2Mex, and Cage. MCs stay on beat by stressing syllables in time to the four beats of the musical backdrop. Poetry scholar Derek Attridge describes how this works in his book Poetic Rhythm – "rap lyrics are written to be performed to an accompaniment that emphasizes the metrical structure of the verse". 
He says rap lyrics are made up of, "lines with four stressed beats, separated by other syllables that may vary in number and may include other stressed syllables. The strong beat of the accompaniment coincides with the stressed beats of the verse, and the rapper organizes the rhythms of the intervening syllables to provide variety and surprise". The same technique is also noted in the book How to Rap, where diagrams are used to show how the lyrics line up with the beat – "stressing a syllable on each of the four beats gives the lyrics the same underlying rhythmic pulse as the music and keeps them in rhythm ... other syllables in the song may still be stressed, but the ones that fall in time with the four beats of a bar are the only ones that need to be emphasized in order to keep the lyrics in time with the music". In rap terminology, 16-bars is the amount of time that rappers are generally given to perform a guest verse on another artist's song; one bar is typically equal to four beats of music. Old school flows were relatively basic and used only few syllables per bar, simple rhythmic patterns, and basic rhyming techniques and rhyme schemes. Melle Mel is cited as an MC who epitomizes the old school flow – Kool Moe Dee says, "from 1970 to 1978 we rhymed one way [then] Melle Mel, in 1978, gave us the new cadence we would use from 1978 to 1986". "He's the first emcee to explode in a new rhyme cadence, and change the way every emcee rhymed forever. Rakim, The Notorious B.I.G., and Eminem have flipped the flow, but Melle Mel's downbeat on the two, four, kick to snare cadence is still the rhyme foundation all emcees are building on". Artists and critics often credit Rakim with creating the overall shift from the more simplistic old school flows to more complex flows near the beginning of hip-hop's new school – Kool Moe Dee says, "any emcee that came after 1986 had to study Rakim just to know what to be able to do. 
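The bar arithmetic mentioned above (one bar typically equal to four beats, sixteen bars for a guest verse) is easy to make concrete. A minimal sketch follows; the 90 BPM tempo is an arbitrary assumption for illustration, not a figure from the article.

```python
def verse_beats(bars, beats_per_bar=4):
    """Total beats in a verse, assuming the common 4/4 bar."""
    return bars * beats_per_bar

def verse_duration_seconds(bars, bpm, beats_per_bar=4):
    """Length of the verse in seconds at a given tempo (beats per minute)."""
    return verse_beats(bars, beats_per_bar) / bpm * 60

print(verse_beats(16))                 # a 16-bar guest verse spans 64 beats
print(verse_duration_seconds(16, 90))  # ~42.7 seconds at 90 BPM
```

At faster tempos the same sixteen bars pass more quickly, which is one reason flows at different tempos pack different syllable counts into a verse.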
Rakim, in 1986, gave us flow and that was the rhyme style from 1986 to 1994. From that point on, anybody emceeing was forced to focus on their flow". Kool Moe Dee explains that before Rakim, the term 'flow' was not widely used – "Rakim is basically the inventor of flow. We were not even using the word flow until Rakim came along. It was called rhyming, it was called cadence, but it wasn't called flow. Rakim created flow!" He adds that while Rakim upgraded and popularized the focus on flow, "he didn't invent the word". Kool Moe Dee states that Biggie introduced a newer flow which "dominated from 1994 to 2002", and also says that Method Man was "one of the emcees from the early to mid-'90s that ushered in the era of flow ... Rakim invented it, Big Daddy Kane, KRS-One, and Kool G Rap expanded it, but Biggie and Method Man made flow the single most important aspect of an emcee's game". He also cites Craig Mack as an artist who contributed to developing flow in the '90s. Music scholar Adam Krims says, "the flow of MCs is one of the profoundest changes that separates out new-sounding from older-sounding music ... it is widely recognized and remarked that rhythmic styles of many commercially successful MCs since roughly the beginning of the 1990s have progressively become faster and more 'complex'". He cites "members of the Wu-Tang Clan, Nas, AZ, Big Pun, and Ras Kass, just to name a few" as artists who exemplify this progression. Kool Moe Dee adds, "in 2002 Eminem created the song that got the first Oscar in Hip-Hop history [Lose Yourself] ... and I would have to say that his flow is the most dominant right now (2003)". There are many different styles of flow, described with different terminology by different people; stic.man of Dead Prez and music scholar Adam Krims each use their own sets of terms. MCs use many different rhyming techniques, including complex rhyme schemes, as Adam Krims points out – "the complexity ... 
involves multiple rhymes in the same rhyme complex (i.e. section with consistently rhyming words), internal rhymes, [and] offbeat rhymes". There is also widespread use of multisyllabic rhymes. It has been noted that rap's use of rhyme is some of the most advanced in all forms of poetry – music scholar Adam Bradley notes, "rap rhymes so much and with such variety that it is now the largest and richest contemporary archive of rhymed words. It has done more than any other art form in recent history to expand rhyme's formal range and expressive possibilities". In the book How to Rap, Masta Ace explains how Rakim and Big Daddy Kane caused a shift in the way MCs rhymed: "Up until Rakim, everybody who you heard rhyme, the last word in the sentence was the rhyming [word], the connection word. Then Rakim showed us that you could put rhymes within a rhyme ... now here comes Big Daddy Kane — instead of going three words, he's going multiple". How to Rap explains that "rhyme is often thought to be the most important factor in rap writing ... rhyme is what gives rap lyrics their musicality". Many of the rhythmic techniques used in rapping come from percussive techniques, and many rappers compare themselves to percussionists. How to Rap 2 identifies all the rhythmic techniques used in rapping, such as triplets, flams, 16th notes, 32nd notes, syncopation, extensive use of rests, and rhythmic techniques unique to rapping such as West Coast "lazy tails", coined by Shock G. Rapping has also been done in various time signatures, such as 3/4 time. Since the 2000s, rapping has evolved into a style of rap that spills over the boundaries of the beat, closely resembling spoken English. Rappers like MF Doom and Eminem have exhibited this style, and since then, rapping has been difficult to notate. The American hip-hop group Crime Mob exhibited a new rap flow in songs such as "Knuck If You Buck", heavily dependent on triplets. 
Rappers including Drake, Kanye West, Rick Ross, Young Jeezy and more have included this influence in their music. In 2014, an American hip-hop collective from Atlanta, Migos, popularized this flow, which is commonly referred to as the "Migos Flow" (a term that is contentious within the hip-hop community). Mitchell Ohriner, in Flow: The Rhythmic Voice in Rap Music, describes seven "groove classes" consisting of archetypal sixteen-step accent patterns generated by grouping notes in clusters of two and/or three. These groove classes are further distinguished from one another as "duple" and "nonduple". Groove classes without internal repetition can occur in any of sixteen rhythmic rotations, whereas groove classes with internal repetition have fewer meaningful rotations. The standard form of rap notation is the flow diagram, where rappers line up their lyrics underneath "beat numbers". Different rappers have slightly different forms of flow diagram that they use: Del the Funky Homosapien says, "I'm just writing out the rhythm of the flow, basically. Even if it's just slashes to represent the beats, that's enough to give me a visual path", Vinnie Paz states, "I've created my own sort of writing technique, like little marks and asterisks to show like a pause or emphasis on words in certain places", and Aesop Rock says, "I have a system of maybe 10 little symbols that I use on paper that tell me to do something when I'm recording." Hip-hop scholars also make use of the same flow diagrams: the books How to Rap and How to Rap 2 use the diagrams to explain rap's triplets, flams, rests, rhyme schemes, runs of rhyme, and breaking rhyme patterns, among other techniques. Similar systems are used by PhD musicologists Adam Krims in his book Rap Music and the Poetics of Identity and Kyle Adams in his academic work on flow. 
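Ohriner's groove classes group the sixteen steps of a bar into clusters of two and three. As a related but much simpler exercise, one can count the ordered ways of filling sixteen steps with such clusters; note that this counts raw groupings, not Ohriner's rotation-based equivalence classes, and is offered only as an illustration of the combinatorics involved.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def groupings(steps):
    """Ordered ways to fill `steps` sixteenth-note slots with clusters of 2 or 3."""
    if steps == 0:
        return 1
    total = 0
    if steps >= 2:
        total += groupings(steps - 2)  # start with a two-step cluster
    if steps >= 3:
        total += groupings(steps - 3)  # start with a three-step cluster
    return total

print(groupings(16))  # 37 ordered groupings of a sixteen-step bar
```

Collapsing these 37 orderings under rotation and repetition, as Ohriner does, yields the far smaller set of archetypal groove classes.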
Because rap revolves around a strong 4/4 beat, with certain syllables said in time to the beat, all the notational systems have a similar structure: they all have the same 4 beat numbers at the top of the diagram, so that syllables can be written in-line with the beat numbers. This allows devices such as rests, "lazy tails", flams, and other rhythmic techniques to be shown, as well as illustrating where different rhyming words fall in relation to the music. Performance To successfully deliver a rap, a rapper must also develop vocal presence, enunciation, and breath control. Vocal presence is the distinctiveness of a rapper's voice on record. Enunciation is essential to a flowing rap; some rappers choose also to exaggerate it for comic and artistic effect. Breath control, taking in air without interrupting one's delivery, is an important skill for a rapper to master, and a must for any MC. An MC with poor breath control cannot deliver difficult verses without making unintentional pauses. Raps are sometimes delivered with melody. West Coast rapper Egyptian Lover was the first notable MC to deliver "sing-raps". Popular rappers such as 50 Cent and Ja Rule add a slight melody to their otherwise purely percussive raps whereas some rappers such as Cee-Lo Green are able to harmonize their raps with the beat. The Midwestern group Bone Thugs-n-Harmony was one of the first groups to achieve nationwide recognition for using the fast-paced, melodic and harmonic raps that are also practiced by Do or Die, another Midwestern group. Another rapper that harmonized his rhymes was Nate Dogg, a rapper part of the group 213. Rakim experimented not only with following the beat, but also with complementing the song's melody with his own voice, making his flow sound like that of an instrument (a saxophone in particular). The ability to rap quickly and clearly is sometimes regarded as an important sign of skill. 
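The shared structure described above (beat numbers across the top, syllables aligned beneath the beats they fall on) can be mimicked in plain text. The lyric fragment and its beat placements below are invented for illustration.

```python
def flow_diagram(bar):
    """Render one 4/4 bar: a line of beat numbers over a line of syllables.

    `bar` is a list of (beat_number, syllables) pairs, one entry per beat."""
    width = max(len(syls) for _, syls in bar) + 2
    beat_line = "".join(str(beat).ljust(width) for beat, _ in bar)
    syl_line = "".join(syls.ljust(width) for _, syls in bar)
    return beat_line + "\n" + syl_line

# Syllables stressed on each of the four beats, as described in How to Rap.
print(flow_diagram([(1, "stay-ing"), (2, "on the"), (3, "beat is"), (4, "cen-tral")]))
```

Rests, flams, and off-beat syllables would be shown by adding intermediate columns between the beat numbers, which is exactly what the published diagrams do.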
In certain hip-hop subgenres such as chopped and screwed, slow-paced rapping is often considered optimal. The current record for fastest rapper is held by Spanish rapper Domingo Edjang Moreno, known by his alias Chojin, who rapped 921 syllables in one minute on December 23, 2008. In the late 1970s, the term emcee, MC or M.C., derived from "master of ceremonies", became an alternative title for a rapper, and for their role within hip-hop music and culture. An MC uses rhyming verses, pre-written or ad lib ('freestyled'), to introduce the DJ with whom they work, to keep the crowd entertained or to glorify themselves. As hip-hop progressed, the title MC acquired backronyms such as 'mike chanter', 'microphone controller', 'microphone checker', 'music commentator', and one who 'moves the crowd'. Some use this word interchangeably with the term rapper, while for others the term denotes a superior level of skill and connection to the wider culture. MC can often be used as a term of distinction, referring to an artist with good performance skills. As Kool G Rap notes, "masters of ceremony, where the word 'M.C.' comes from, means just keeping the party alive" [sic]. Many people in hip-hop, including DJ Premier and KRS-One, feel that James Brown was the first MC. James Brown had the lyrics, moves, and soul that greatly influenced a lot of rappers in hip-hop, and arguably even started the first MC rhyme. For some rappers, there was a distinction to the term, such as MC Hammer, who acquired the nickname "MC" for being a "Master of Ceremonies", which he used when he began performing at various clubs while on the road with the Oakland A's and eventually in the military (United States Navy). It was within the lyrics of a rap song called "This Wall" that Hammer first identified himself as M.C. Hammer and later marketed it on his debut album Feel My Power. The term MC has also been used in the genre of grime music to refer to a rapid style of rapping. 
Grime artist JME released an album titled Grime MC in 2019, which peaked at number 29 on the UK Albums Chart. Uncertainty over the acronym's expansion may be considered evidence for its ubiquity: the full term "Master of Ceremonies" is very rarely used in the hip-hop scene. This confusion prompted the hip-hop group A Tribe Called Quest to include this statement in the liner notes to their 1993 album Midnight Marauders: The use of the term MC when referring to a rhyming wordsmith originates from the dance halls of Jamaica. At each event, there would be a master of ceremonies who would introduce the different musical acts and would say a toast in the style of a rhyme, directed at the audience and to the performers. He would also make announcements such as the schedule of other events or advertisements from local sponsors. The term MC continued to be used by the children of women who moved to New York City to work as maids in the 1970s. These MCs eventually created a new style of music called hip-hop based on the rhyming they used to do in Jamaica and the breakbeats used in records. MC has also recently been accepted to refer to all who engineer music. Female rappers with mainstream success have included Lauryn Hill, Nicki Minaj, MC Lyte, Jean Grae, Foxy Brown, Lil' Kim, Missy Elliott, Queen Latifah, Da Brat, Trina, Megan Thee Stallion, Cardi B, M.I.A., CL from 2NE1, Iggy Azalea, Eve, and Lisa Lopes from TLC. Subject matter "Party rhymes", meant to excite the crowd at a party, were nearly the exclusive focus of old school hip-hop, and they remain a staple of hip-hop music to this day. In addition to party raps, rappers also tend to make references to love and sex. Love raps were first popularized by Spoonie Gee of the Treacherous Three, and later, in the golden age of hip-hop, Big Daddy Kane, Heavy D, and LL Cool J would continue this tradition. Hip-hop artists such as KRS-One, Hopsin, Public Enemy, Lupe Fiasco, Mos Def, Talib Kweli, Jay-Z, Nas, The Notorious B.I.G. 
(Biggie), and dead prez are known for their sociopolitical subject matter. Their West Coast counterparts include The Coup, Paris, and Michael Franti. Tupac Shakur was also known for rapping about social issues such as police brutality, teenage pregnancy, and racism. Other rappers take a less critical approach to urbanity, sometimes even embracing such aspects as crime. Schoolly D was the first notable MC to rap about crime. Early on KRS-One was accused of celebrating crime and a hedonistic lifestyle, but after the death of his DJ, Scott La Rock, KRS-One went on to speak out against violence in hip-hop and has spent the majority of his career condemning violence and writing on issues of race and class. Ice-T was one of the first rappers to call himself a "playa" and discuss guns on record, but his theme tune to the 1988 film Colors contained warnings against joining gangs. Gangsta rap, made popular largely because of N.W.A, brought rapping about crime and the gangster lifestyle into the musical mainstream. Materialism has also been a popular topic in hip-hop since at least the early 1990s, with rappers boasting about their own wealth and possessions, and name-dropping specific brands: liquor brands Cristal and Rémy Martin, car manufacturers Bentley and Mercedes-Benz and clothing brands Gucci and Versace have all been popular subjects for rappers. Various politicians, journalists, and religious leaders have accused rappers of fostering a culture of violence and hedonism among hip-hop listeners through their lyrics. However, there are also rappers whose messages may not be in line with these views, for example Christian hip-hop. Others have praised the "political critique, innuendo and sarcasm" of hip-hop music. In contrast to the more hedonistic approach of gangsta rappers, some rappers have a spiritual or religious focus. Christian rap is currently the most commercially successful form of religious rap. 
With Christian rappers like Lecrae, Thi'sl and Hostyle Gospel winning national awards and making regular appearances on television, Christian hip-hop seems to have found its way into the hip-hop family. Aside from Christianity, the Five Percent Nation, an Islamic esotericist religious/spiritual group, has been represented more than any other religious group in popular hip-hop. Artists such as Rakim, the members of the Wu-Tang Clan, Brand Nubian, X-Clan and Busta Rhymes have had success in spreading the theology of the Five Percenters. Rappers use the literary techniques of double entendres, alliteration, and forms of wordplay that are found in classical poetry. Similes and metaphors are used extensively in rap lyrics; rappers such as Fabolous and Lloyd Banks have written entire songs in which every line contains similes, whereas MCs like Rakim, GZA, and Jay-Z are known for the metaphorical content of their raps. Rappers such as Lupe Fiasco are known for the complexity of their songs that contain metaphors within extended metaphors. Many hip-hop listeners believe that a rapper's lyrics are enhanced by a complex vocabulary. Kool Moe Dee claims that he appealed to older audiences by using a complex vocabulary in his raps. Rap is famous, however, for having its own vocabulary—from international hip-hop slang to regional slang. Some artists, like the Wu-Tang Clan, develop an entire lexicon among their clique. African-American English has always had a significant effect on hip-hop slang and vice versa. Certain regions have introduced their unique regional slang to hip-hop culture, such as the Bay Area (Mac Dre, E-40), Houston (Chamillionaire, Paul Wall), Atlanta (Ludacris, Lil Jon, T.I.), and Kentucky (Cunninlynguists, Nappy Roots). The Nation of Gods and Earths, aka the Five Percenters, has influenced mainstream hip-hop slang with the introduction of phrases such as "word is bond" that have since lost much of their original spiritual meaning.
Preference toward one or the other has much to do with the individual; GZA, for example, prides himself on being very visual and metaphorical but also succinct, whereas underground rapper MF DOOM is known for heaping similes upon similes. In still another variation, 2Pac was known for saying exactly what he meant, literally and clearly. Rap music's development into popular culture began in the 1990s. The 1990s marked the beginning of an era of popular culture guided by the musical influences of hip-hop and rap itself, moving away from the influences of rock music. As rap continued to develop and further disseminate, it went on to influence clothing brands, movies, sports, and dancing through popular culture. As rap has developed to become more of a presence in popular culture, it has focused itself on a particular demographic: adolescents and young adults. As such, it has had a significant impact on the modern vernacular of this portion of the population, which has diffused throughout society. The effects of rap music on modern vernacular can be explored through the study of semiotics. Semiotics is the study of signs and symbols, or the study of language as a system. French literary theorist Roland Barthes furthers this study with his own theory of myth. He maintains that the first order of signification is language and that the second is "myth", arguing that a word has both its literal meaning and its mythical meaning, which is heavily dependent on socio-cultural context. To illustrate, Barthes uses the example of a rat: it has a literal meaning (a physical, objective description) and it has a greater socio-cultural understanding. This contextual meaning is subjective and is dynamic within society. Through Barthes' semiotic theory of language and myth, it can be shown that rap music has culturally influenced the language of its listeners, as rappers influence the connotative message of words that already exist.
As more people listen to rap, the words that are used in the lyrics become culturally bound to the song, and are then disseminated through the conversations that people have using these words. Most often, the terms that rappers use are pre-established words that have been prescribed new meaning through their music, and that are eventually disseminated through social spheres. This newly contextualized word is called a neosemanticism. Neosemanticisms are often forgotten words from subcultures that attract the attention of members of the reigning culture of their time, and are then brought forward by influential voices in society – in this case, rappers. To illustrate, the acronym YOLO was popularized by rapper, actor and R&B singer Drake in 2012 when he featured it in his own song, The Motto. That year the term YOLO was so popular that it was printed on T-shirts, became a trending hashtag on Twitter, and was even the inspiration for several tattoos. However, although the rapper may have come up with the acronym, the motto itself was in no way first established by Drake. Similar messages can be seen in many well-known sayings, or as early as 1896, in the English translation of La Comédie Humaine by Honoré de Balzac, where one of his free-spirited characters tells another, "You Only Live Once!". Another example of a neosemanticism is the word "broccoli". Rapper E-40 initially used the word "broccoli" to refer to marijuana on his hit track Broccoli in 1993. In contemporary society, artists D.R.A.M. and Lil Yachty are often credited with popularizing this slang through their hit song, also titled Broccoli. With the rise in technology and mass media, the dissemination of subcultural terms has only become easier. Dick Hebdige, author of Subculture: The Meaning of Style, argues that subcultures often use music to vocalize the struggles of their experiences.
As rap is also the culmination of a prevalent subculture in African-American social spheres, the personal cultures of rappers are often disseminated through their lyrics. It is here that lyrics can be categorized as either historically influenced or (more commonly) considered as slang. Vernon Andrews, the professor of the course American Studies 111: Hip-Hop Culture, suggests that many words, such as "hood", "homie", and "dope", are historically influenced. Most importantly, this also brings forward the anarchistic culture of rap music. Common themes in rap are anti-establishment sentiment and the promotion of Black excellence and diversity. It is here that rap can be seen to reclaim words, namely "nigga", a historical term used to subjugate and oppress Black people in America. This word has been reclaimed by Black Americans and is heavily used in rap music. Niggaz With Attitude embodied this notion by using it as the first word of their influential rap group name. Freestyle and battle There are two kinds of freestyle rap. The first is scripted (recitation) but has no particular overriding subject matter; since the late 2000s it has evolved to become the style most commonly referred to when the term "freestyle" is used. Its primary focus has morphed from making up a rap on the spot to being able to recite memorized or "written" lyrics over an "undisclosed" beat, not revealed until the performance actually begins. A variation is when a DJ or host will use multiple beats and rotate them dynamically; it is the freestyler's job to keep their flow and not appear to trip up when the beat switches. Alternatively, keeping the rhythm or flow going can be substituted by "switching styles". This involves the rapper varying their voice or tone, and/or the rhythm or flow, and potentially much more. However, this must be done smoothly, or else any notoriety or respect gained can very quickly be lost altogether.
Some rappers have multiple characters, egos, or styles in their repertoire. The second, more difficult and respected style has adopted the terms "off the dome" or "off (the) top", in addition to relatively less common older references like "spitting", "on the spot" and "unscripted". Oftentimes these terms are followed by "freestyle", e.g. an "Off Top Freestyle" by (Artist X). This type of rapping requires the artist not only to spit their lyrics over undisclosed and possibly rotating beats, but also to completely improvise the session's rapped lyrics. Many "off top" rappers inadvertently reuse old lines, or even "cheat" by preparing segments or entire verses in advance. Therefore, "off the dome" freestyles with proven spontaneity are valued above generic, always usable, or rehearsed lines or "bars". Rappers will often reference places or objects in their immediate setting, or specific (usually demeaning) characteristics of opponents, to prove their authenticity and originality. Battle rapping, which can be freestyled, is the competition between two or more rappers in front of an audience. The tradition of insulting one's friends or acquaintances in rhyme goes back to the dozens, and was employed famously by Muhammad Ali in his boxing matches. The winner of a battle is decided by the crowd and/or preselected judges. According to Kool Moe Dee, a successful battle rap focuses on an opponent's weaknesses, rather than one's own strengths. Television shows such as MTV's DFX and BET's 106 & Park host weekly freestyle battles live on the air. Battle rapping gained widespread public recognition outside of the African-American community with rapper Eminem's movie 8 Mile. The strongest battle rappers will generally perform their rap fully freestyled. This is the most effective form in a battle, as the rapper can comment on the other person, whether it be what they look like, how they talk, or what they wear.
It also allows the rapper to reverse a line used to "diss" him or her if they are the second rapper to battle. This is known as a "flip". MC Jin was considered "World Champion" battle rapper in the mid-2000s.[citation needed] Derivatives and influence Throughout hip-hop's history, new musical styles and genres have developed that contain rapping. Entire genres, such as rap rock and its derivatives rapcore and rap metal (rock/metal/punk with rapped vocals), or hip house, have resulted from the fusion of rap and other styles. Many popular music genres with a focus on percussion have contained rapping at some point; be it disco (DJ Hollywood), jazz (Gang Starr), new wave (Blondie), funk (Fatback Band), contemporary R&B (Mary J. Blige), reggaeton (Daddy Yankee), or even Japanese dance music (Soul'd Out). UK garage music has begun to focus increasingly on rappers in a subgenre called grime, which emerged in London in the early 2000s and was pioneered and popularized by the MC Dizzee Rascal. The music's increased popularity has seen more UK rappers travel to and tour in America, with artists such as Sway DaSafo reportedly in talks to sign with Akon's label Konvict. Hyphy is the latest of these spin-offs. It is typified by slowed-down atonal vocals with instrumentals that borrow heavily from the hip-hop scene and lyrics centered on illegal street racing and car culture. Another Oakland, California group, Beltaine's Fire, has recently gained attention for their Celtic fusion sound which blends hip-hop beats with Celtic melodies. Unlike the majority of hip-hop artists, all their music is performed live without samples, synths, or drum machines, drawing comparisons to The Roots and Rage Against the Machine. Bhangra, a widely popular style of music from Punjab, India, has been mixed numerous times with reggae and hip-hop music. The most popular song in this genre in the United States was "Mundian to Bach Ke" or "Beware the Boys" by Panjabi MC and Jay-Z.
Although "Mundian To Bach Ke" had been released previously, the mixing with Jay-Z popularized the genre further.
========================================
[SOURCE: https://en.wikipedia.org/wiki/OpenAI#cite_ref-176] | [TOKENS: 8773]
OpenAI OpenAI is an American artificial intelligence research organization comprising both a non-profit foundation and a controlled for-profit public benefit corporation (PBC), headquartered in San Francisco. It aims to develop "safe and beneficial" artificial general intelligence (AGI), which it defines as "highly autonomous systems that outperform humans at most economically valuable work". OpenAI is widely recognized for its development of the GPT family of large language models, the DALL-E series of text-to-image models, and the Sora series of text-to-video models, which have influenced industry research and commercial applications. Its release of ChatGPT in November 2022 has been credited with catalyzing widespread interest in generative AI. The organization was founded in 2015 in Delaware but later developed a complex corporate structure. As of October 2025, following a restructuring approved by California and Delaware regulators, the non-profit OpenAI Foundation holds 26% of the for-profit OpenAI Group PBC, with Microsoft holding 27% and employees and other investors holding 47%. Under its governance arrangements, the OpenAI Foundation holds the authority to appoint the board of the for-profit OpenAI Group PBC, a mechanism designed to align the entity's strategic direction with the Foundation's charter. Microsoft previously invested over $13 billion in OpenAI and provides Azure cloud computing resources. In October 2025, OpenAI conducted a $6.6 billion share sale that valued the company at $500 billion. In 2023 and 2024, OpenAI faced multiple lawsuits alleging copyright infringement, brought by authors and media companies whose work was used to train some of OpenAI's products. In November 2023, OpenAI's board removed Sam Altman as CEO, citing a lack of confidence in him, but reinstated him five days later following a reconstruction of the board.
Throughout 2024, roughly half of the AI safety researchers then employed at OpenAI left the company, citing its prominent role in an industry-wide problem. Founding In December 2015, OpenAI was founded as a not-for-profit organization by Sam Altman, Elon Musk, Ilya Sutskever, Greg Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, John Schulman, Pamela Vagata, and Wojciech Zaremba, with Sam Altman and Elon Musk as the co-chairs. A total of $1 billion in capital was pledged by Sam Altman, Greg Brockman, Elon Musk, Reid Hoffman, Jessica Livingston, Peter Thiel, Amazon Web Services (AWS), and Infosys. However, the capital actually collected significantly lagged behind the pledges. According to company disclosures, only $130 million had been received by 2019. In its founding charter, OpenAI stated an intention to collaborate openly with other institutions by making certain patents and research publicly available, but it later restricted access to its most capable models, citing competitive and safety concerns. OpenAI was initially run from Brockman's living room. It was later headquartered at the Pioneer Building in the Mission District, San Francisco. According to OpenAI's charter, its founding mission is "to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity." Musk and Altman stated in 2015 that they were partly motivated by concerns about AI safety and existential risk from artificial general intelligence. OpenAI stated that "it's hard to fathom how much human-level AI could benefit society", and that it is equally difficult to comprehend "how much it could damage society if built or used incorrectly".
The startup also wrote that AI "should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible", and that "because of AI's surprising history, it's hard to predict when human-level AI might come within reach. When it does, it'll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest." Co-chair Sam Altman expected a decades-long project that would eventually surpass human intelligence. Brockman met with Yoshua Bengio, one of the "founding fathers" of deep learning, and drew up a list of great AI researchers. Brockman was able to hire nine of them as the first employees in December 2015. OpenAI did not pay AI researchers salaries comparable to those of Facebook or Google. Nor did it offer the stock options that AI researchers typically receive. Nevertheless, OpenAI spent $7 million on its first 52 employees in 2016. OpenAI's potential and mission drew these researchers to the firm; a Google employee said he was willing to leave Google for OpenAI "partly because of the very strong group of people and, to a very large extent, because of its mission." OpenAI co-founder Wojciech Zaremba stated that he turned down "borderline crazy" offers of two to three times his market value to join OpenAI instead. In April 2016, OpenAI released a public beta of "OpenAI Gym", its platform for reinforcement learning research. Nvidia gifted its first DGX-1 supercomputer to OpenAI in August 2016 to help it train larger and more complex AI models, with the capability of reducing processing time from six days to two hours. In December 2016, OpenAI released "Universe", a software platform for measuring and training an AI's general intelligence across the world's supply of games, websites, and other applications. Corporate structure In 2019, OpenAI transitioned from non-profit to "capped" for-profit, with the profit being capped at 100 times any investment.
According to OpenAI, the capped-profit model allows OpenAI Global, LLC to legally attract investment from venture funds and, in addition, to grant employees stakes in the company. Many top researchers work for Google Brain, DeepMind, or Facebook, which offer equity that a nonprofit would be unable to match. Before the transition, OpenAI was legally required to publicly disclose the compensation of its top employees. The company then distributed equity to its employees and partnered with Microsoft, which announced a $1 billion investment in the company. Since then, OpenAI systems have run on an Azure-based supercomputing platform from Microsoft. OpenAI Global, LLC then announced its intention to commercially license its technologies. It planned to spend $1 billion "within five years, and possibly much faster". Altman stated that even a billion dollars may turn out to be insufficient, and that the lab may ultimately need "more capital than any non-profit has ever raised" to achieve artificial general intelligence. The nonprofit, OpenAI, Inc., is the sole controlling shareholder of OpenAI Global, LLC, which, despite being a for-profit company, retains a formal fiduciary responsibility to OpenAI, Inc.'s nonprofit charter. A majority of OpenAI, Inc.'s board is barred from having financial stakes in OpenAI Global, LLC. In addition, minority members with a stake in OpenAI Global, LLC are barred from certain votes due to conflict of interest. Some researchers have argued that OpenAI Global, LLC's switch to for-profit status is inconsistent with OpenAI's claims to be "democratizing" AI. On February 29, 2024, Elon Musk filed a lawsuit against OpenAI and CEO Sam Altman, accusing them of shifting focus from public benefit to profit maximization—a case OpenAI dismissed as "incoherent" and "frivolous," though Musk later revived legal action against Altman and others in August.
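The capped-profit mechanism described above can be sketched as simple arithmetic. The sketch below is illustrative only: the function name and the dollar figures are hypothetical, and the actual legal terms of OpenAI's structure are more complex than a single multiplier.

```python
def capped_return(investment: float, gross_return: float,
                  cap_multiple: float = 100.0) -> float:
    """Return an investor's payout, limited to cap_multiple times the investment.

    Under a capped-profit structure, any return above the cap would flow
    back to the controlling nonprofit rather than to the investor.
    """
    return min(gross_return, cap_multiple * investment)


# A hypothetical $1M investment is capped at $100M of returns,
# no matter how large the gross return grows.
print(capped_return(1_000_000, 50_000_000))    # below the cap: paid in full
print(capped_return(1_000_000, 250_000_000))   # above the cap: limited to 100x
```

With a 100x cap, the first call pays out the full $50 million, while the second is limited to $100 million, with the notional $150 million excess accruing to the nonprofit.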
On April 9, 2024, OpenAI countersued Musk in federal court, alleging that he had engaged in "bad-faith tactics" to slow the company's progress and seize its innovations for his personal benefit. OpenAI also argued that Musk had previously supported the creation of a for-profit structure and had expressed interest in controlling OpenAI himself. The countersuit seeks damages and legal measures to prevent further alleged interference. On February 10, 2025, a consortium of investors led by Elon Musk submitted a $97.4 billion unsolicited bid to buy the nonprofit that controls OpenAI, declaring willingness to match or exceed any better offer. The offer was rejected on February 14, 2025, with OpenAI stating that it was not for sale, but the offer complicated Altman's restructuring plan by suggesting a floor for how much the nonprofit's stake should be valued. OpenAI, Inc. was originally designed as a nonprofit in order to ensure that AGI "benefits all of humanity" rather than "the private gain of any person". In 2019, it created OpenAI Global, LLC, a capped-profit subsidiary controlled by the nonprofit. In December 2024, OpenAI proposed a restructuring plan to convert the capped-profit subsidiary into a Delaware-based public benefit corporation (PBC), and to release it from the control of the nonprofit. The nonprofit would sell its control and other assets, getting equity in return, and would use that equity to fund and pursue separate charitable projects, including in science and education. OpenAI's leadership described the change as necessary to secure additional investment, and claimed that the nonprofit's founding mission to ensure AGI "benefits all of humanity" would be better fulfilled. The plan was criticized by former employees. A legal letter named "Not For Private Gain" asked the attorneys general of California and Delaware to intervene, stating that the restructuring is illegal and would remove governance safeguards from the nonprofit and the attorneys general.
The letter argues that OpenAI's complex structure was deliberately designed to remain accountable to its mission, without the conflicting pressure of maximizing profits. It contends that the nonprofit is best positioned to advance its mission of ensuring AGI benefits all of humanity by continuing to control OpenAI Global, LLC, whatever the amount of equity it could get in exchange. PBCs can choose how they balance their mission with profit-making, and controlling shareholders have a large influence on how closely a PBC sticks to its mission. On October 28, 2025, OpenAI announced that it had adopted the new PBC corporate structure after receiving approval from the attorneys general of California and Delaware. Under the new structure, OpenAI's for-profit branch became a public benefit corporation known as OpenAI Group PBC, while the non-profit was renamed the OpenAI Foundation. The OpenAI Foundation holds a 26% stake in the PBC, while Microsoft holds a 27% stake and the remaining 47% is owned by employees and other investors. All members of the OpenAI Group PBC board of directors will be appointed by the OpenAI Foundation, which can remove them at any time. Members of the Foundation's board will also serve on the for-profit board. The new structure allows the for-profit PBC to raise investor funds like most traditional tech companies, including through an initial public offering, which Altman said was the most likely path forward. In January 2023, OpenAI Global, LLC was in talks for funding that would value the company at $29 billion, double its 2021 value. On January 23, 2023, Microsoft announced a new US$10 billion investment in OpenAI Global, LLC over multiple years, reportedly provided in part as credits for Microsoft's cloud-computing service Azure. From September to December 2023, Microsoft rebranded all variants of its Copilot to Microsoft Copilot, added Copilot to many installations of Windows, and released Microsoft Copilot mobile apps.
Following OpenAI's 2025 restructuring, Microsoft owns a 27% stake in the for-profit OpenAI Group PBC, valued at $135 billion. In a deal announced the same day, OpenAI agreed to purchase $250 billion of Azure services, with Microsoft ceding its right of first refusal over OpenAI's future cloud computing purchases. As part of the deal, OpenAI will continue to share 20% of its revenue with Microsoft until it achieves AGI, which must now be verified by an independent panel of experts. The deal also loosened restrictions on both companies working with third parties, allowing Microsoft to pursue AGI independently and allowing OpenAI to develop products with other companies. In 2017, OpenAI spent $7.9 million, a quarter of its functional expenses, on cloud computing alone. In comparison, DeepMind's total expenses in 2017 were $442 million. In the summer of 2018, training OpenAI's Dota 2 bots required renting 128,000 CPUs and 256 GPUs from Google for multiple weeks. In October 2024, OpenAI completed a $6.6 billion capital raise at a $157 billion valuation, including investments from Microsoft, Nvidia, and SoftBank. On January 21, 2025, Donald Trump announced The Stargate Project, a joint venture between OpenAI, Oracle, SoftBank and MGX to build an AI infrastructure system in conjunction with the US government. The project takes its name from OpenAI's existing "Stargate" supercomputer project and is estimated to cost $500 billion. The partners planned to fund the project over the following four years. In July 2025, the United States Department of Defense announced that OpenAI had received a $200 million contract for AI in the military, along with Anthropic, Google, and xAI. In the same month, the company made a deal with the UK Government to use ChatGPT and other AI tools in public services. OpenAI subsequently began a $50 million fund to support nonprofit and community organizations.
In April 2025, OpenAI raised $40 billion at a $300 billion post-money valuation, the highest-value private technology deal in history. The financing round was led by SoftBank, with other participants including Microsoft, Coatue, Altimeter and Thrive. In July 2025, the company reported annualized revenue of $12 billion. This was an increase from $3.7 billion in 2024, driven by ChatGPT subscriptions, which reached 20 million paid subscribers by April 2025 (up from 15.5 million at the end of 2024), alongside a rapidly expanding enterprise customer base that grew to five million business users. The company's cash burn remains high because of the intensive computational costs required to train and operate large language models, and it projects an $8 billion operating loss in 2025. OpenAI reports revised long-term spending projections totaling approximately $115 billion through 2029, with annual expenditures projected to escalate significantly, reaching $17 billion in 2026, $35 billion in 2027, and $45 billion in 2028. These expenditures are primarily allocated toward expanding compute infrastructure, developing proprietary AI chips, constructing data centers, and funding intensive model training programs, with more than half of the spending through the end of the decade expected to support research-intensive compute for model training and development. The company's financial strategy prioritizes market expansion and technological advancement over near-term profitability: OpenAI targets cash-flow-positive operations by 2029 and projects revenue of approximately $200 billion by 2030. This spending trajectory reflects both the enormous capital requirements of scaling cutting-edge AI technology and OpenAI's stated aim of remaining a leader in the artificial intelligence industry. In October 2025, OpenAI completed an employee share sale of up to $10 billion to existing investors, which valued the company at $500 billion.
The sale made OpenAI the world's most valuable privately held company, surpassing SpaceX. On November 17, 2023, Sam Altman was removed as CEO when the board of directors (composed of Helen Toner, Ilya Sutskever, Adam D'Angelo and Tasha McCauley) cited a lack of confidence in him. Chief Technology Officer Mira Murati took over as interim CEO. Greg Brockman, the president of OpenAI, was also removed as chairman of the board and resigned from the company's presidency shortly thereafter. Three senior OpenAI researchers subsequently resigned: director of research and GPT-4 lead Jakub Pachocki, head of AI risk Aleksander Mądry, and researcher Szymon Sidor. On November 18, 2023, there were reportedly talks of Altman returning as CEO amid pressure placed upon the board by investors such as Microsoft and Thrive Capital, who objected to Altman's departure. Although Altman himself spoke in favor of returning to OpenAI, he has since stated that he considered starting a new company and bringing former OpenAI employees with him if talks to reinstate him did not work out. The board members agreed "in principle" to resign if Altman returned. On November 19, 2023, negotiations with Altman to return failed and Murati was replaced by Emmett Shear as interim CEO. The board initially contacted Anthropic CEO Dario Amodei (a former OpenAI executive) about replacing Altman, and proposed a merger of the two companies, but both offers were declined. On November 20, 2023, Microsoft CEO Satya Nadella announced that Altman and Brockman would be joining Microsoft to lead a new advanced AI research team, but added that they were still committed to OpenAI despite recent events. Before the partnership with Microsoft was finalized, Altman gave the board another opportunity to negotiate with him.
About 738 of OpenAI's 770 employees, including Murati and Sutskever, signed an open letter stating they would quit their jobs and join Microsoft if the board did not rehire Altman and then resign. This prompted OpenAI investors to consider legal action against the board as well. In response, OpenAI management sent an internal memo to employees stating that negotiations with Altman and the board had resumed and would take some time. On November 21, 2023, after continued negotiations, Altman and Brockman returned to the company in their prior roles along with a reconstructed board made up of new members Bret Taylor (as chairman) and Lawrence Summers, with D'Angelo remaining. According to subsequent reporting, shortly before Altman’s firing, some employees raised concerns to the board about how he had handled the safety implications of a recent internal AI capability discovery. On November 29, 2023, OpenAI announced that an anonymous Microsoft employee had joined the board as a non-voting member to observe the company's operations; Microsoft resigned from the board in July 2024. In February 2024, the Securities and Exchange Commission subpoenaed OpenAI's internal communication to determine if Altman's alleged lack of candor misled investors. In 2024, following the temporary removal of Sam Altman and his return, many employees gradually left OpenAI, including most of the original leadership team and a significant number of AI safety researchers. In August 2023, it was announced that OpenAI had acquired the New York-based start-up Global Illumination, a company that deploys AI to develop digital infrastructure and creative tools. In June 2024, OpenAI acquired Multi, a startup focused on remote collaboration. In March 2025, OpenAI reached a deal with CoreWeave to acquire $350 million worth of CoreWeave shares and access to AI infrastructure, in return for $11.9 billion paid over five years. Microsoft was already CoreWeave's biggest customer in 2024. 
Alongside their other business dealings, OpenAI and Microsoft were renegotiating the terms of their partnership to facilitate a potential future initial public offering by OpenAI, while ensuring Microsoft's continued access to advanced AI models. On May 21, 2025, OpenAI announced the $6.5 billion acquisition of io, an AI hardware start-up founded in 2024 by former Apple designer Jony Ive. In September 2025, OpenAI agreed to acquire the product testing startup Statsig for $1.1 billion in an all-stock deal and appointed Statsig's founding CEO Vijaye Raji as OpenAI's chief technology officer of applications. The company also announced development of an AI-driven hiring service designed to rival LinkedIn. In October 2025, OpenAI acquired the personal finance app Roi, as well as Software Applications Incorporated, the developer of Sky, a macOS-based natural language interface designed to operate across desktop applications. The Sky team joined OpenAI, and the company announced plans to integrate Sky's capabilities into ChatGPT. In December 2025, it was announced that OpenAI had agreed to acquire Neptune, an AI tooling startup that helps companies track and manage model training, for an undisclosed amount. In January 2026, it was announced that OpenAI had acquired the healthcare technology startup Torch for approximately $60 million. The acquisition followed the launch of OpenAI's ChatGPT Health product and was intended to strengthen the company's medical data and healthcare artificial intelligence capabilities. OpenAI has been criticized for outsourcing the annotation of data sets to Sama, a company based in San Francisco that employed workers in Kenya. These annotations were used to train an AI model to detect toxicity, which could then be used to moderate toxic content, notably from ChatGPT's training data and outputs. However, these pieces of text usually contained detailed descriptions of various types of violence, including sexual violence.
A Time investigation found that OpenAI had begun sending snippets of data to Sama as early as November 2021. The four Sama employees interviewed by Time described themselves as mentally scarred. OpenAI paid Sama $12.50 per hour of work, while Sama paid its annotators the equivalent of between $1.32 and $2.00 per hour post-tax. Sama's spokesperson said that the $12.50 also covered other implicit costs, among which were infrastructure expenses, quality assurance and management. In 2024, OpenAI began collaborating with Broadcom to design a custom AI chip capable of both training and inference, targeted for mass production in 2026 and to be manufactured by TSMC on a 3 nm process node. The initiative was intended to reduce OpenAI's dependence on Nvidia GPUs, which are costly and face high demand in the market. In January 2024, Arizona State University purchased ChatGPT Enterprise in OpenAI's first deal with a university. In June 2024, Apple Inc. signed a contract with OpenAI to integrate ChatGPT features into its products as part of its new Apple Intelligence initiative. In June 2025, OpenAI began renting Google Cloud's Tensor Processing Units (TPUs) to support ChatGPT and related services, marking its first meaningful use of non-Nvidia AI chips. In September 2025, it was revealed that OpenAI had signed a contract with Oracle to purchase $300 billion in computing power over the next five years. That same month, OpenAI and NVIDIA announced a memorandum of understanding that included a potential deployment of at least 10 gigawatts of NVIDIA systems and a $100 billion investment from NVIDIA in OpenAI. OpenAI expected the negotiations to be completed within weeks; as of January 2026, the deal has not been finalized, and the two sides are rethinking the future of their partnership. In October 2025, OpenAI announced a multi-billion dollar deal with AMD, committing to purchase six gigawatts' worth of AMD chips, starting with the MI450.
OpenAI will have the option to buy up to 160 million shares of AMD, about 10% of the company, depending on development, performance and share price targets. In December 2025, Disney said it would make a $1 billion investment in OpenAI and signed a three-year licensing deal that will let users generate videos using Sora, OpenAI's short-form AI video platform. More than 200 Disney, Marvel, Star Wars and Pixar characters will be available to OpenAI users. In early 2026, Amazon entered advanced discussions to invest up to $50 billion in OpenAI as part of a potential artificial intelligence partnership. Under the proposed agreement, OpenAI's models could be integrated into Amazon's digital assistant Alexa and other internal projects. OpenAI provides LLMs to the Artificial Intelligence Cyber Challenge and to the Advanced Research Projects Agency for Health. In October 2024, The Intercept revealed that OpenAI's tools are considered "essential" for AFRICOM's mission and included in an "Exception to Fair Opportunity" contractual agreement between the United States Department of Defense and Microsoft. In December 2024, OpenAI said it would partner with defense-tech company Anduril to build drone defense technologies for the United States and its allies. In 2025, OpenAI's Chief Product Officer, Kevin Weil, was commissioned as a lieutenant colonel in the U.S. Army to join Detachment 201 as a senior advisor. In June 2025, the U.S. Department of Defense awarded OpenAI a $200 million one-year contract to develop AI tools for military and national security applications. OpenAI announced a new program, OpenAI for Government, to give federal, state, and local governments access to its models, including ChatGPT. Services In February 2019, GPT-2 was announced, which gained attention for its ability to generate human-like text. In 2020, OpenAI announced GPT-3, a language model trained on large internet datasets.
GPT-3 is aimed at answering questions posed in natural language, but it can also translate between languages and coherently generate improvised text. OpenAI also announced that an associated API, simply named "the API", would form the heart of its first commercial product. Eleven employees left OpenAI, mostly between December 2020 and January 2021, to establish Anthropic. In 2021, OpenAI introduced DALL-E, a specialized deep learning model adept at generating complex digital images from textual descriptions, utilizing a variant of the GPT-3 architecture. In December 2022, OpenAI received widespread media coverage after launching a free preview of ChatGPT, its new AI chatbot based on GPT-3.5. According to OpenAI, the preview received over a million signups within the first five days. According to anonymous sources cited by Reuters in December 2022, OpenAI Global, LLC was projecting $200 million of revenue in 2023 and $1 billion in revenue in 2024. After ChatGPT was launched, Google announced a similar chatbot, Bard, amid internal concerns that ChatGPT could threaten Google's position as a primary source of online information. On February 7, 2023, Microsoft announced that it was building AI technology based on the same foundation as ChatGPT into Microsoft Bing, Edge, Microsoft 365 and other products. On March 14, 2023, OpenAI released GPT-4, both as an API (with a waitlist) and as a feature of ChatGPT Plus. On November 6, 2023, OpenAI launched GPTs, allowing individuals to create customized versions of ChatGPT for specific purposes, further expanding the possibilities of AI applications across various industries. On November 14, 2023, OpenAI announced it had temporarily suspended new sign-ups for ChatGPT Plus due to high demand. Access for newer subscribers re-opened a month later, on December 13. In December 2024, the company launched the Sora model. It also launched OpenAI o1, an early reasoning model that was internally codenamed Strawberry.
Additionally, ChatGPT Pro, a $200/month subscription service offering unlimited o1 access and enhanced voice features, was introduced, and preliminary benchmark results for the upcoming OpenAI o3 models were shared. On January 23, 2025, OpenAI released Operator, an AI agent and web automation tool for accessing websites to execute goals defined by users. The feature was only available to Pro users in the United States. Nine days later, OpenAI released its deep research agent, which scored 27% accuracy on the benchmark Humanity's Last Exam (HLE). Altman later stated that GPT-4.5 would be the last model without full chain-of-thought reasoning. In July 2025, reports indicated that AI models by both OpenAI and Google DeepMind solved mathematics problems at the level of top-performing students in the International Mathematical Olympiad. OpenAI's large language model was able to achieve gold medal-level performance, reflecting significant progress in AI's reasoning abilities. On October 6, 2025, OpenAI unveiled its Agent Builder platform during the company's DevDay event. The platform includes a visual drag-and-drop interface that lets developers and businesses design, test, and deploy agentic workflows with limited coding. On October 21, 2025, OpenAI introduced ChatGPT Atlas, a browser integrating the ChatGPT assistant directly into web navigation, to compete with existing browsers such as Google Chrome and Apple Safari. On December 11, 2025, OpenAI announced GPT-5.2, which the company said would be better at creating spreadsheets, building presentations, perceiving images, writing code and understanding long context. On January 27, 2026, OpenAI introduced Prism, a LaTeX-native workspace meant to assist scientists with research and writing. The platform uses GPT-5.2 as a backend to automate the drafting of scientific papers, with features for managing citations, formatting complex equations, and real-time collaborative editing.
In March 2023, the company was criticized for disclosing particularly few technical details about products like GPT-4, contradicting its initial commitment to openness and making it harder for independent researchers to replicate its work and develop safeguards. OpenAI cited competitiveness and safety concerns to justify this shift. OpenAI's former chief scientist Ilya Sutskever argued in 2023 that open-sourcing increasingly capable models was increasingly risky, and that the safety reasons for not open-sourcing the most potent AI models would become "obvious" in a few years. In September 2025, OpenAI published a study on how people use ChatGPT for everyday tasks. The study found that "non-work tasks" (according to an LLM-based classifier) account for more than 72 percent of all ChatGPT usage, with only a minority of overall usage related to business productivity. In July 2023, OpenAI launched the superalignment project, aiming to determine within four years how to align future superintelligent systems. OpenAI promised to dedicate 20% of its computing resources to the project, although the team later said it had received nothing close to that share. OpenAI ended the project in May 2024 after its co-leaders Ilya Sutskever and Jan Leike left the company. In August 2025, OpenAI was criticized after thousands of private ChatGPT conversations were inadvertently exposed to public search engines like Google through an experimental "share with search engines" feature. The opt-in toggle, intended to allow users to make specific chats discoverable, resulted in some discussions, including personal details such as names, locations, and intimate topics, appearing in search results when users accidentally enabled it while sharing links. OpenAI announced the feature's permanent removal on August 1, 2025, and the company began coordinating with search providers to remove the exposed content, emphasizing that it was not a security breach but a design flaw that heightened privacy risks.
CEO Sam Altman acknowledged the issue in a podcast, noting that users often treat ChatGPT as a confidant for deeply personal matters, which amplified concerns about AI handling sensitive data. Management In 2018, Musk resigned his board seat, citing "a potential future conflict [of interest]" with his role as CEO of Tesla due to Tesla's AI development for self-driving cars. OpenAI stated that Musk's financial contributions were below $45 million. On March 3, 2023, Reid Hoffman resigned from his board seat, citing a desire to avoid conflicts of interest with his investments in AI companies via Greylock Partners and his co-founding of the AI startup Inflection AI. Hoffman remained on the board of Microsoft, a major investor in OpenAI. In May 2024, Chief Scientist Ilya Sutskever resigned and was succeeded by Jakub Pachocki. Co-leader Jan Leike also departed, citing concerns over safety and trust. OpenAI then signed deals with Reddit, News Corp, Axios, and Vox Media, and Paul Nakasone joined the board of OpenAI. In August 2024, cofounder John Schulman left OpenAI to join Anthropic, and OpenAI's president Greg Brockman took extended leave until November. In September 2024, CTO Mira Murati left the company. In November 2025, Lawrence Summers resigned from the board of directors. Governance and legal issues In May 2023, Sam Altman, Greg Brockman and Ilya Sutskever posted recommendations for the governance of superintelligence. They stated that superintelligence could arrive within the next 10 years, enabling a "dramatically more prosperous future", and that "given the possibility of existential risk, we can't just be reactive". They proposed creating an international watchdog organization similar to the IAEA to oversee AI systems above a certain capability threshold, suggesting that relatively weak AI systems below that threshold should not be overly regulated.
They also called for more technical safety research for superintelligences, and asked for more coordination, for example through governments launching a joint project which "many current efforts become part of". In July 2023, the FTC issued a civil investigative demand to OpenAI to investigate whether the company's data security and privacy practices to develop ChatGPT were unfair or harmed consumers (including by reputational harm) in violation of Section 5 of the Federal Trade Commission Act of 1914. These are typically preliminary investigative matters and are nonpublic, but the FTC's document was leaked. In July 2023, the FTC launched an investigation into OpenAI over allegations that the company scraped public data and published false and defamatory information. They asked OpenAI for comprehensive information about its technology and privacy safeguards, as well as any steps taken to prevent the recurrence of situations in which its chatbot generated false and derogatory content about people. The agency also raised concerns about ‘circular’ spending arrangements—for example, Microsoft extending Azure credits to OpenAI while both companies shared engineering talent—and warned that such structures could negatively affect the public. In September 2024, OpenAI's global affairs chief endorsed the UK's "smart" AI regulation during testimony to a House of Lords committee. In February 2025, OpenAI CEO Sam Altman stated that the company is interested in collaborating with the People's Republic of China, despite regulatory restrictions imposed by the U.S. government. This shift comes in response to the growing influence of the Chinese artificial intelligence company DeepSeek, which has disrupted the AI market with open models, including DeepSeek V3 and DeepSeek R1. Following DeepSeek's market emergence, OpenAI enhanced security protocols to protect proprietary development techniques from industrial espionage. 
Some industry observers noted similarities between DeepSeek's model distillation approach and OpenAI's methodology, though no formal intellectual property claim was filed. According to Oliver Roberts, as of March 2025 the United States had 781 state AI bills or laws. OpenAI advocated for preempting state AI laws with federal legislation. According to Scott Kohler, OpenAI has opposed California's AI legislation, arguing that the state bill encroaches on matters better handled at the federal level. Public Citizen opposed federal preemption on AI and pointed to OpenAI's growth and valuation as evidence that existing state laws have not hampered innovation. Before May 2024, OpenAI required departing employees to sign a lifelong non-disparagement agreement forbidding them from criticizing OpenAI or even acknowledging the existence of the agreement. Daniel Kokotajlo, a former employee, publicly stated that he forfeited his vested equity in OpenAI in order to leave without signing the agreement. Sam Altman stated that he was unaware of the equity cancellation provision and that OpenAI had never enforced it to cancel any employee's vested equity; however, leaked documents and emails contradicted this claim. On May 23, 2024, OpenAI sent a memo releasing former employees from the agreement. OpenAI was sued for copyright infringement by authors Sarah Silverman, Matthew Butterick, Paul Tremblay and Mona Awad in July 2023. In September 2023, 17 authors, including George R. R. Martin, John Grisham, Jodi Picoult and Jonathan Franzen, joined the Authors Guild in filing a class action lawsuit against OpenAI, alleging that the company's technology was illegally using their copyrighted work. The New York Times also sued the company in late December 2023. In May 2024 it was revealed that OpenAI had destroyed its Books1 and Books2 training datasets, which were used in the training of GPT-3, and which the Authors Guild believed to have contained over 100,000 copyrighted books.
In 2021, OpenAI developed a speech recognition tool called Whisper, which it used to transcribe more than one million hours of YouTube videos into text for training GPT-4. The automated transcription of YouTube videos raised concerns among OpenAI employees regarding potential violations of YouTube's terms of service, which prohibit the use of videos for applications independent of the platform, as well as any type of automated access to its videos. Despite these concerns, the project proceeded with notable involvement from OpenAI's president, Greg Brockman, and the resulting dataset proved instrumental in training GPT-4. In February 2024, The Intercept, Raw Story and Alternate Media Inc. filed a copyright infringement lawsuit against OpenAI. The lawsuit is said to have charted a new legal strategy for digital-only publishers to sue OpenAI. On April 30, 2024, eight newspapers filed a lawsuit in the Southern District of New York against OpenAI and Microsoft, claiming illegal harvesting of their copyrighted articles. The suing publications included The Mercury News, The Denver Post, The Orange County Register, St. Paul Pioneer Press, Chicago Tribune, Orlando Sentinel, Sun Sentinel, and New York Daily News. In June 2023, a lawsuit filed in San Francisco, California, by sixteen anonymous plaintiffs claimed that OpenAI had scraped 300 billion words online without consent and without registering as a data broker. The plaintiffs also claimed that OpenAI and its partner and customer Microsoft continued to unlawfully collect and use personal data from millions of consumers worldwide to train artificial intelligence models. On May 22, 2024, OpenAI entered into an agreement with News Corp to integrate news content from The Wall Street Journal, the New York Post, The Times, and The Sunday Times into its AI platform.
Meanwhile, other publications like The New York Times chose to sue OpenAI and Microsoft for copyright infringement over the use of their content to train AI models. In November 2024, a coalition of Canadian news outlets, including the Toronto Star, Metroland Media, Postmedia, The Globe and Mail, The Canadian Press and CBC, sued OpenAI for using their news articles to train its software without permission. In October 2024, during a New York Times interview, Suchir Balaji accused OpenAI of violating copyright law in developing the commercial LLMs he had helped engineer. He was a likely witness in a major copyright trial against the AI company, and was one of several of its current or former employees named in court filings as potentially having documents relevant to the case. On November 26, 2024, Balaji died by suicide. His death prompted the circulation of conspiracy theories alleging that he had been deliberately silenced; California Congressman Ro Khanna endorsed calls for an investigation. On April 24, 2025, Ziff Davis, known for publications such as ZDNet, PCMag, CNET, IGN and Lifehacker, sued OpenAI in Delaware federal court for copyright infringement. In April 2023, the EU's European Data Protection Board (EDPB) formed a dedicated task force on ChatGPT "to foster cooperation and to exchange information on possible enforcement actions conducted by data protection authorities" based on the "enforcement action undertaken by the Italian data protection authority against OpenAI about the ChatGPT service". In late April 2024, NOYB filed a complaint with the Austrian Datenschutzbehörde against OpenAI for violating the European General Data Protection Regulation: a text created with ChatGPT gave a false date of birth for a living person without giving the individual the option to see the personal data used in the process, and a request to correct the mistake was denied.
Additionally, OpenAI claimed that neither the recipients of ChatGPT's output nor the sources used could be disclosed. OpenAI was criticized for lifting its ban on using ChatGPT for "military and warfare". Up until January 10, 2024, its "usage policies" included a ban on "activity that has high risk of physical harm, including", specifically, "weapons development" and "military and warfare". Its new policies instead prohibit using the service "to harm yourself or others" and to "develop or use weapons". In August 2025, the parents of a 16-year-old boy who died by suicide filed a wrongful death lawsuit against OpenAI (and CEO Sam Altman), alleging that months of conversations with ChatGPT about mental health and methods of self-harm contributed to their son's death and that safeguards were inadequate for minors. OpenAI expressed condolences and said it was strengthening protections, including updated crisis response behavior and parental controls. Coverage described it as a first-of-its-kind wrongful death case targeting the company's chatbot. The complaint was filed in California state court in San Francisco. In November 2025, the Social Media Victims Law Center and Tech Justice Law Project filed seven lawsuits against OpenAI, of which four alleged wrongful death. The suits were filed on behalf of Zane Shamblin, 23, of Texas; Amaurie Lacey, 17, of Georgia; Joshua Enneking, 26, of Florida; and Joe Ceccanti, 48, of Oregon, each of whom died by suicide after prolonged ChatGPT usage. In December 2025, the estate of Suzanne Adams sued OpenAI over her killing, allegedly by her son Stein-Erik Soelberg, then 56, who in the months prior had often discussed his paranoid delusions with ChatGPT; the suit claimed that the company shared responsibility due to the risk of "chatbot psychosis", though this is not a recognized medical diagnosis. OpenAI responded that it would make ChatGPT safer for users disconnected from reality.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Python_(programming_language)#cite_ref-PepCite000_172-0] | [TOKENS: 4314]
Python (programming language) Python is a high-level, general-purpose programming language. Its design philosophy emphasizes code readability with the use of significant indentation. Python is dynamically type-checked and garbage-collected. It supports multiple programming paradigms, including structured (particularly procedural), object-oriented and functional programming. Guido van Rossum began working on Python in the late 1980s as a successor to the ABC programming language. Python 3.0, released in 2008, was a major revision and not completely backward-compatible with earlier versions. Beginning with Python 3.5, capabilities and keywords for typing were added to the language, allowing optional static typing. As of 2026, the Python Software Foundation supports Python 3.10, 3.11, 3.12, 3.13, and 3.14, following the project's annual release cycle and five-year support policy. Python 3.15 is currently in the alpha development phase, and the stable release is expected to come out in October 2026. Earlier versions in the 3.x series have reached end-of-life and no longer receive security updates. Python has gained widespread use in the machine learning community. It is widely taught as an introductory programming language. Since 2003, Python has consistently ranked in the top ten of the most popular programming languages in the TIOBE Programming Community Index, which ranks languages based on searches across 24 platforms. History Python was conceived in the late 1980s by Guido van Rossum at Centrum Wiskunde & Informatica (CWI) in the Netherlands. It was designed as a successor to the ABC programming language, which was inspired by SETL, capable of exception handling and interfacing with the Amoeba operating system. Python implementation began in December 1989. Van Rossum first released it in 1991 as Python 0.9.0.
Van Rossum assumed sole responsibility for the project, as the lead developer, until 12 July 2018, when he announced his "permanent vacation" from responsibilities as Python's "benevolent dictator for life" (BDFL); this title was bestowed on him by the Python community to reflect his long-term commitment as the project's chief decision-maker. (He has since come out of retirement and is self-titled "BDFL-emeritus".) In January 2019, active Python core developers elected a five-member Steering Council to lead the project. The name Python derives from the British comedy series Monty Python's Flying Circus. (See § Naming.) Python 2.0 was released on 16 October 2000, featuring many new features such as list comprehensions, cycle-detecting garbage collection, reference counting, and Unicode support. Python 2.7's end-of-life was initially set for 2015, and then postponed to 2020 out of concern that a large body of existing code could not easily be forward-ported to Python 3. It no longer receives security patches or updates. While Python 2.7 and older versions are officially unsupported, a different unofficial Python implementation, PyPy, continues to support Python 2, i.e., "2.7.18+" (plus 3.11), with the plus signifying (at least some) "backported security updates". Python 3.0 was released on 3 December 2008, and was a major revision and not completely backward-compatible with earlier versions, with some new semantics and changed syntax. Python 2.7.18, released in 2020, was the last release of Python 2. Several releases in the Python 3.x series have added new syntax to the language, and made a few (considered very minor) backward-incompatible changes. As of January 2026, Python 3.14.3 is the latest stable release. All older 3.x versions had a final security update down to Python 3.9.24, and then again with 3.9.25, the final version in the 3.9 series. Python 3.10 has been, since November 2025, the oldest supported branch.
Python 3.15 has had an alpha release, and Android has an official downloadable executable available for Python 3.14. Releases receive two years of full support followed by three years of security support. Design philosophy and features Python is a multi-paradigm programming language. Object-oriented programming and structured programming are fully supported, and many of their features support functional programming and aspect-oriented programming – including metaprogramming and metaobjects. Many other paradigms are supported via extensions, including design by contract and logic programming. Python is often referred to as a 'glue language' because it is purposely designed to be able to integrate components written in other languages. Python uses dynamic typing and a combination of reference counting and a cycle-detecting garbage collector for memory management. It uses dynamic name resolution (late binding), which binds method and variable names during program execution. Python's design offers some support for functional programming in the "Lisp tradition". It has filter, map, and reduce functions; list comprehensions, dictionaries, sets, and generator expressions. The standard library has two modules (itertools and functools) that implement functional tools borrowed from Haskell and Standard ML. Python's core philosophy is summarized in the Zen of Python (PEP 20) written by Tim Peters, which includes aphorisms such as "Beautiful is better than ugly" and "Explicit is better than implicit". However, Python has received criticism for violating these principles and adding unnecessary language bloat. Responses to these criticisms note that the Zen of Python is a guideline rather than a rule. The addition of some new features had been controversial: Guido van Rossum resigned as Benevolent Dictator for Life after conflict about adding the assignment expression operator in Python 3.8. Nevertheless, rather than building all functionality into its core, Python was designed to be highly extensible via modules.
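The functional tools mentioned above (map, filter, reduce, comprehensions, and helpers from itertools and functools) can be sketched briefly; the variable names here are purely illustrative:

```python
from functools import reduce
from itertools import count, islice

nums = [1, 2, 3, 4, 5]

# map/filter/reduce in the Lisp tradition
squares = list(map(lambda x: x * x, nums))        # [1, 4, 9, 16, 25]
evens = list(filter(lambda x: x % 2 == 0, nums))  # [2, 4]
total = reduce(lambda a, b: a + b, nums, 0)       # 15

# comprehension and generator-expression equivalents
squares_comp = [x * x for x in nums]
first_three_odds = list(islice((n for n in count(1) if n % 2), 3))  # [1, 3, 5]

print(squares, evens, total, first_three_odds)
```

In idiomatic code, comprehensions are usually preferred over map/filter with a lambda, since they express the same transformation more readably.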
This compact modularity has made it particularly popular as a means of adding programmable interfaces to existing applications. Van Rossum's vision of a small core language with a large standard library and easily extensible interpreter stemmed from his frustrations with ABC, which represented the opposite approach. Python claims to strive for a simpler, less-cluttered syntax and grammar, while giving developers a choice in their coding methodology. Python lacks do .. while loops, which Van Rossum considered harmful. In contrast to Perl's motto "there is more than one way to do it", Python advocates an approach where "there should be one – and preferably only one – obvious way to do it". In practice, however, Python provides many ways to achieve a given goal. There are at least three ways to format a string literal, with no certainty as to which one a programmer should use. Alex Martelli, a Fellow at the Python Software Foundation and Python book author, wrote that "To describe something as 'clever' is not considered a compliment in the Python culture." Python's developers typically prioritize readability over performance. For example, they reject patches to non-critical parts of the CPython reference implementation that would offer increases in speed that do not justify the cost of clarity and readability. Execution speed can be improved by moving speed-critical functions to extension modules written in languages such as C, or by using a just-in-time compiler like PyPy. It is also possible to transpile to other languages, but this approach either fails to achieve the expected speed-up, since Python is a very dynamic language, or only a restricted subset of Python is compiled (with potential minor semantic changes). Python is meant to be a fun language to use. This goal is reflected in the name – a tribute to the British comedy group Monty Python – and in playful approaches to some tutorials and reference materials.
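The three common ways to format a string literal can be shown side by side; the example values are arbitrary:

```python
name, score = "Ada", 99.5

s1 = "%s scored %.1f" % (name, score)        # printf-style (%) formatting
s2 = "{} scored {:.1f}".format(name, score)  # str.format method
s3 = f"{name} scored {score:.1f}"            # f-string literal (Python 3.6+)

print(s1)
print(s2)
print(s3)
```

All three produce the same text; current style guidance generally favors f-strings for readability, with the older forms kept for backward compatibility.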
For instance, some code examples use the terms "spam" and "eggs" (in reference to a Monty Python sketch), rather than the typical terms "foo" and "bar". A common neologism in the Python community is pythonic, which has a broad range of meanings related to program style: Pythonic code may use Python idioms well; be natural or show fluency in the language; or conform with Python's minimalist philosophy and emphasis on readability. Syntax and semantics Python is meant to be an easily readable language. Its formatting is visually uncluttered and often uses English keywords where other languages use punctuation. Unlike many other languages, it does not use curly brackets to delimit blocks, and semicolons after statements are allowed but rarely used. It has fewer syntactic exceptions and special cases than C or Pascal. Python uses whitespace indentation, rather than curly brackets or keywords, to delimit blocks. An increase in indentation comes after certain statements; a decrease in indentation signifies the end of the current block. Thus, the program's visual structure accurately represents its semantic structure. This feature is sometimes termed the off-side rule. Some other languages use indentation this way; but in most, indentation has no semantic meaning. The recommended indent size is four spaces. Python's statements include the following: The assignment statement (=) binds a name as a reference to a separate, dynamically allocated object. Variables may subsequently be rebound at any time to any object. In Python, a variable name is a generic reference holder without a fixed data type; however, it always refers to some object with a type. This is called dynamic typing—in contrast to statically-typed languages, where each variable may contain only a value of a certain type. Python does not support tail call optimization or first-class continuations; according to Van Rossum, the language never will. 
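The points above about indentation-delimited blocks and dynamic typing can be illustrated in a few lines; the names used here are illustrative only:

```python
# A name is a generic reference holder; the *object* it refers to carries the type.
x = 42
print(type(x).__name__)   # int
x = "spam"                # rebinding the same name to a str is allowed at any time
print(type(x).__name__)   # str

# Blocks are delimited by indentation rather than braces;
# the dedent after `return` ends each block.
def classify(value):
    if isinstance(value, int):
        return "integer"
    else:
        return "other"

print(classify(7), classify("eggs"))
```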
However, better support for coroutine-like functionality is provided by extending Python's generators. Before 2.5, generators were lazy iterators; data was passed unidirectionally out of the generator. From Python 2.5 on, it is possible to pass data back into a generator function; and from version 3.3, data can be passed through multiple stack levels. Python's expressions include the following: In Python, a distinction between expressions and statements is rigidly enforced, in contrast to languages such as Common Lisp, Scheme, or Ruby. This distinction leads to duplicating some functionality, for example: A statement cannot be part of an expression; because of this restriction, expressions such as list and dict comprehensions (and lambda expressions) cannot contain statements. As a particular case, an assignment statement such as a = 1 cannot be part of the conditional expression of a conditional statement. Python uses duck typing, and it has typed objects but untyped variable names. Type constraints are not checked at definition time; rather, operations on an object may fail at usage time, indicating that the object is not of an appropriate type. Despite being dynamically typed, Python is strongly typed, forbidding operations that are poorly defined (e.g., adding a number and a string) rather than quietly attempting to interpret them. Python allows programmers to define their own types using classes, most often for object-oriented programming. New instances of classes are constructed by calling the class, for example, SpamClass() or EggsClass(); the classes are instances of the metaclass type (which is an instance of itself), thereby allowing metaprogramming and reflection. Before version 3.0, Python had two kinds of classes, both using the same syntax: old-style and new-style. Current Python versions support the semantics of only the new style. Python supports optional type annotations. 
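The ability to "pass data back into a generator function" mentioned above refers to the generator `send()` method; a minimal sketch of a running-total coroutine built on it:

```python
def accumulator():
    """Running total: yields the current total and receives the next addend."""
    total = 0
    while True:
        value = yield total   # data sent in via send() appears here
        total += value

acc = accumulator()
next(acc)                 # advance to the first yield ("prime" the generator)
assert acc.send(5) == 5   # send 5 in; the generator yields the new total
assert acc.send(3) == 8
```

The initial `next()` call is required: `send()` can only deliver a value to a generator that is already suspended at a `yield` expression.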
These annotations are not enforced by the language, but may be used by external tools such as mypy to catch errors. The standard library includes a typing module that provides several type names for annotations. Also, mypy supports a Python compiler called mypyc, which leverages type annotations for optimization. Python includes conventional symbols for arithmetic operators (+, -, *, /), the floor-division operator //, and the modulo operator %. (With the modulo operator, a remainder can be negative, e.g., 4 % -3 == -2.) Also, Python offers the ** symbol for exponentiation, e.g. 5**3 == 125 and 9**0.5 == 3.0. Also, it offers the matrix-multiplication operator @. These operators work as in traditional mathematics; with the same precedence rules, the infix operators + and - can also be unary, to represent positive and negative numbers respectively. Division between integers produces floating-point results. The behavior of division has changed significantly over time: In Python terms, the / operator represents true division (or simply division), while the // operator represents floor division. Before version 3.0, the / operator represents classic division. Rounding towards negative infinity, though a different method than in most languages, adds consistency to Python. For instance, this rounding implies that the equation (a + b)//b == a//b + 1 is always true. Also, the rounding implies that the equation b*(a//b) + a%b == a is valid for both positive and negative values of a. As expected, the result of a%b lies in the half-open interval [0, b), where b is a positive integer; however, maintaining the validity of the equation requires that the result must lie in the interval (b, 0] when b is negative. Python provides a round function for rounding a float to the nearest integer. For tie-breaking, Python 3 uses the round to even method: round(1.5) and round(2.5) both produce 2. 
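The floor-division and modulo identities quoted above can be checked directly; a minimal sketch with a negative divisor:

```python
a, b = 7, -3
assert a // b == -3                    # floor division rounds toward -infinity
assert a % b == -2                     # the remainder takes the divisor's sign
assert b * (a // b) + a % b == a       # the identity holds for any sign of b
assert 4 % -3 == -2                    # the example from the text
assert round(1.5) == round(2.5) == 2   # Python 3 rounds ties to even
```

This is the behavior that distinguishes Python from C-family languages, where integer division truncates toward zero and `7 % -3` would be `1`.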
Python versions before 3 used the round-away-from-zero method: round(0.5) is 1.0, and round(-0.5) is −1.0. Python allows Boolean expressions that contain multiple equality relations to be consistent with general usage in mathematics. For example, the expression a < b < c tests whether a is less than b and b is less than c. C-derived languages interpret this expression differently: in C, the expression would first evaluate a < b, resulting in 0 or 1, and that result would then be compared with c. Python uses arbitrary-precision arithmetic for all integer operations. The Decimal type/class in the decimal module provides decimal floating-point numbers to a pre-defined arbitrary precision with several rounding modes. The Fraction class in the fractions module provides arbitrary precision for rational numbers. Due to Python's extensive mathematics library and the third-party library NumPy, the language is frequently used for scientific scripting in tasks such as numerical data processing and manipulation. Functions are created in Python by using the def keyword. A function is defined similarly to how it is called, by first providing the function name and then the required parameters. Here is an example of a function that prints its inputs: To assign a default value to a function parameter in case no actual value is provided at run time, variable-definition syntax can be used inside the function header. Code examples "Hello, World!" program: Program to calculate the factorial of a non-negative integer: Libraries Python's large standard library is commonly cited as one of its greatest strengths. For Internet-facing applications, many standard formats and protocols such as MIME and HTTP are supported. The language includes modules for creating graphical user interfaces, connecting to relational databases, generating pseudorandom numbers, arithmetic with arbitrary-precision decimals, manipulating regular expressions, and unit testing. 
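The "Code examples" passage above names two programs, but their listings did not survive extraction; minimal stand-in versions of the two programs it describes:

```python
# "Hello, World!" program:
print("Hello, World!")


# Program to calculate the factorial of a non-negative integer:
def factorial(n: int) -> int:
    """Return n! for a non-negative integer n, computed iteratively."""
    if n < 0:
        raise ValueError("n must be non-negative")
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

assert factorial(0) == 1
assert factorial(5) == 120
```

These are conventional textbook versions, not the exact listings from the original article.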
Some parts of the standard library are covered by specifications—for example, the Web Server Gateway Interface (WSGI) implementation wsgiref follows PEP 333—but most parts are specified by their code, internal documentation, and test suites. However, because most of the standard library is cross-platform Python code, only a few modules must be altered or rewritten for variant implementations. As of 13 March 2025,[update] the Python Package Index (PyPI), the official repository for third-party Python software, contains over 614,339 packages. Development environments Most[which?] Python implementations (including CPython) include a read–eval–print loop (REPL); this permits the environment to function as a command line interpreter, with which users enter statements sequentially and receive results immediately. Also, CPython is bundled with an integrated development environment (IDE) called IDLE, which is oriented toward beginners.[citation needed] Other shells, including IDLE and IPython, add additional capabilities such as improved auto-completion, session-state retention, and syntax highlighting. Standard desktop IDEs include PyCharm, Spyder, and Visual Studio Code; there are web browser-based IDEs, such as the following environments: Implementations CPython is the reference implementation of Python. This implementation is written in C, meeting the C11 standard since version 3.11. Older versions use the C89 standard with several select C99 features, but third-party extensions are not limited to older C versions—e.g., they can be implemented using C11 or C++. CPython compiles Python programs into an intermediate bytecode, which is then executed by a virtual machine. CPython is distributed with a large standard library written in a combination of C and native Python. CPython is available for many platforms, including Windows and most modern Unix-like systems, including macOS (and Apple M1 Macs, since Python 3.9.1, using an experimental installer). 
Starting with Python 3.9, the Python installer intentionally fails to install on Windows 7 and 8; Windows XP was supported until Python 3.5, with unofficial support for VMS. Platform portability was one of Python's earliest priorities. During development of Python 1 and 2, even OS/2 and Solaris were supported; since that time, support has been dropped for many platforms. All current Python versions (since 3.7) support only operating systems that feature multithreading; far fewer operating systems are supported now than in the past, as many outdated platforms have been dropped. All alternative implementations have at least slightly different semantics. For example, an alternative may include unordered dictionaries, in contrast to other current Python versions. As another example in the larger Python ecosystem, PyPy does not support the full C Python API. Creating an executable with Python is often done by bundling an entire Python interpreter into the executable, which makes binaries massive for small programs; however, some implementations are capable of truly compiling Python. Alternative implementations include the following: Stackless Python is a significant fork of CPython that implements microthreads. This implementation uses the call stack differently, thus allowing massively concurrent programs. PyPy also offers a stackless version. Just-in-time Python compilers have been developed, but are now unsupported: There are several compilers/transpilers to high-level object languages; the source language is unrestricted Python, a subset of Python, or a language similar to Python: There are also specialized compilers: Some older projects existed, as well as compilers not designed for use with Python 3.x and related syntax: A performance comparison among various Python implementations, using a non-numerical (combinatorial) workload, was presented at EuroSciPy '13. 
In addition, Python's performance relative to other programming languages is benchmarked by The Computer Language Benchmarks Game. There are several approaches to optimizing Python performance, despite the inherent slowness of an interpreted language. These approaches include the following strategies or tools: Development Python's development is conducted mostly through the Python Enhancement Proposal (PEP) process; this process is the primary mechanism for proposing major new features, collecting community input on issues, and documenting Python design decisions. Python coding style is covered in PEP 8. Outstanding PEPs are reviewed and commented on by the Python community and the steering council. Enhancement of the language corresponds with development of the CPython reference implementation. The mailing list python-dev is the primary forum for the language's development. Specific issues were originally discussed in the Roundup bug tracker hosted by the foundation. In 2022, all issues and discussions were migrated to GitHub. Development originally took place on a self-hosted source-code repository running Mercurial, until Python moved to GitHub in January 2017. CPython's public releases have three types, distinguished by which part of the version number is incremented: Many alpha, beta, and release-candidate versions are also released as previews and for testing before final releases. Although there is a rough schedule for releases, they are often delayed if the code is not ready yet. Python's development team monitors the state of the code by running a large unit test suite during development. The major academic conference on Python is PyCon. Also, there are special Python mentoring programs, such as PyLadies. Naming Python's name is inspired by the British comedy group Monty Python, whom Python creator Guido van Rossum enjoyed while developing the language. 
Monty Python references appear frequently in Python code and culture; for example, the metasyntactic variables often used in Python literature are spam and eggs, rather than the traditional foo and bar. Also, the official Python documentation contains various references to Monty Python routines. Python users are sometimes referred to as "Pythonistas". Languages influenced by Python See also Notes References Further reading External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Category:Video_games_scored_by_Kumi_Tanioka] | [TOKENS: 61]
Category:Video games scored by Kumi Tanioka Video games that were scored by Kumi Tanioka. Pages in category "Video games scored by Kumi Tanioka" The following 18 pages are in this category, out of 18 total. This list may not reflect recent changes.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Category:Video_games_scored_by_Lena_Raine] | [TOKENS: 55]
Category:Video games scored by Lena Raine Video games that were scored by Lena Raine. Pages in category "Video games scored by Lena Raine" The following 13 pages are in this category, out of 13 total. This list may not reflect recent changes.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Harambe] | [TOKENS: 2387]
Contents Harambe Harambe (/həˈrɑːmbeɪ/ hə-RAHM-bay; May 27, 1999 – May 28, 2016) was a western lowland gorilla who lived at the Cincinnati Zoo. On May 28, 2016, a three-year-old boy visiting the zoo climbed under a fence into an outdoor gorilla enclosure where he was grabbed and violently dragged and thrown by Harambe. Fearing for the boy's life, a zoo worker shot and killed Harambe. The incident was recorded on video and received broad international coverage and commentary, including controversy over the choice to use lethal force. Several primatologists and conservationists wrote later that the zoo had no other choice under the circumstances, and that it highlighted the danger of zoo animals near humans and the need for better standards of care. Harambe became the subject of Internet memes, a statue, songs, and other tributes and recognitions. History Harambe was born at Gladys Porter Zoo in Brownsville, Texas, on May 27, 1999. He was named by Dan Van Coppenolle, a local area counselor who won a naming contest sponsored by the zoo. He came up with the name after listening to the 1988 song "Harambe (Working Together for Freedom)" by Rita Marley, widow of Bob Marley. Harambee is a Swahili term for communal labor. On January 6, 2002, when Harambe was two years old, his mother, Kayla, his 11-month-old brother, Makoko, and his two-year-old half-sister, Uzuri, died of chlorine gas poisoning after chlorine tablets left too close to a space heater released gas into the gorilla enclosure. Harambe was also possibly injured in the accident. On September 18, 2014, Harambe was transferred to the Cincinnati Zoo and Botanical Garden, to learn adult gorilla behavior and join a new social group. On May 28, 2016, a 3-year-old boy visiting the Cincinnati Zoo fell into the moat at the Gorilla World habitat. Witnesses said they heard the child say he wanted to go into the gorilla enclosure. 
The boy then climbed a 3-foot-tall (0.9 m) fence, crawled through 4 feet (1.2 m) of bushes, and then fell 15 feet (4.6 m) into a moat of shallow water. Zoo officials immediately signaled for the three gorillas in the habitat to return inside, and two females did so. However, the third gorilla, the inquisitive 440-pound (200 kg) male silverback Harambe, climbed down into the moat to investigate the child splashing in the water. Over the next 10 minutes, Harambe became increasingly "agitated and disoriented" by the screams of onlookers. He carried the child through the water, occasionally propping him up when he sat, or pushing him down when he stood. Harambe exhibited "strutting" behavior—walking around with legs and arms stiffly extended to appear bigger—a bluffing move, though one with inherent danger should he throw or drag the boy around too roughly. Harambe then carried the boy up a ladder out of the moat onto dry land. Afraid for the boy's welfare, zoo officials decided to kill Harambe, doing so with a single rifle shot to the head. Cincinnati firefighters said the boy was between Harambe's legs when the shot was fired. Harambe was killed one day after his 17th birthday. The boy was given a trauma assessment and transported to Cincinnati Children's Hospital Medical Center; his injuries were non-life-threatening. Reactions The incident was recorded in a dramatic video by an anonymous bystander and uploaded to YouTube, where it went viral, sparking global publicity and controversy. Some observers said that it was unclear whether Harambe was likely to harm the child. Others called for the boy's parents or the zoo to be held accountable for the gorilla's death. Zoo director Thane Maynard stated, "The child was being dragged around ... His head was banging on concrete. This was not a gentle thing. The child was at risk." Police investigated possible criminal charges against the parents while the parents defended the zoo's actions. 
The boy's mother also became the target of online shaming. On June 6, 2016, Ohio prosecutor Joe Deters said that the mother would not face any charges of wrongdoing. The zoo was investigated by the Association of Zoos and Aquariums (AZA), which sets the standards for zoos, and the USDA. Several vigils took place to honor Harambe's death. A candlelight vigil was held at Hyde Park, Cincinnati. Animal rights activist Anthony Seta spoke at a vigil at Cincinnati Zoo, saying: "I'm not here to decide what was right and what was wrong; the fact is that a gorilla who just celebrated his birthday has been killed." The shooting was criticized by celebrities, including Ricky Gervais, Brian May, and Piers Morgan. Donald Trump defended the actions of the zoo during his 2016 presidential campaign, stating the zoo employees "probably had no choice", although he said "it was almost like a mother holding a baby". The incident sparked debate among biologists and primatologists on whether gorillas and other primates should be held in captivity at all. Primatologist Jane Goodall said that according to the video it seemed Harambe was trying to protect the child. She gave a longer explanation in an interview with the president of the International Fund for Animal Welfare, concluding that the zoo had no choice but to kill Harambe. She wrote, "It was awful for the child, the parents, Harambe, the zoo, the keepers and the public. But when people come into contact with wild animals, life and death decisions sometimes have to be made." Goodall said "we will never be able to be 100% sure that people and wildlife won't be injured when they are in such close proximity", and she believed that zoos "with the highest standards of care" could play an important role in the animals' well-being. Zookeeper Jack Hanna strongly defended the zoo's actions, noting that a tranquilizer dart might have taken five or ten minutes to take effect and would have further aggravated Harambe. 
Primatologist Frans de Waal said he saw few options for the zoo: "A gorilla is so immensely strong that even with the best of intentions—and we are not sure that Harambe had those—the child's death was a probable outcome." Ian Redmond of the Ape Alliance said other options were not tried, such as a show of force to get the gorilla to back down, or having someone known and trusted by Harambe try to calm him. Cultural impact Following the killing, Harambe became the subject of multiple viral memes. Vox wrote in November that Harambe has an "undeniable status as 2016's meme of the year." People magazine wrote that "Harambe continues to live on in the collective mind of the internet, entering into a rarefied state of venerated meme status." One of the most widespread memes was noted by The Washington Post and New York magazine, which observed a proliferation of over-the-top and fake tributes to Harambe. "The idea is, the more intense and more sincere-seeming the expression of mourning is, the funnier the joke." For example, the "Dicks out for Harambe" meme can be seen as a fake tribute to an incident that would normally engender sincere mourning. Aja Romano of Vox wrote that "If you were a progressive, the Harambe meme gave you a chance to mock what you viewed as the hypocritical haranguing of the mainstream while avoiding real issues of social justice; and if you were a conservative, the Harambe meme gave you a chance to mock liberal hysteria." One meme is a play on conspiracy theories, such as "Bush did Harambe", a reference to 9/11 conspiracy theories. In Australia, people joked about supporting Harambe's corpse as a write-in candidate on the ballot for the federal election. Public Policy Polling included Harambe in their polling for the U.S. presidential election. Harambe had 5% support in late July 2016 (ahead of Green Party nominee Jill Stein) and 2% in August 2016 (tied with Stein). 
Cincinnati Zoo director Thane Maynard reacted negatively: "We are not amused by the memes, petitions and signs about Harambe. Our zoo family is still healing, and the constant mention of Harambe makes moving forward more difficult for us. We are honoring Harambe by redoubling our gorilla conservation efforts and encouraging others to join us." In late August, the zoo deleted its Twitter account after being targeted daily by trolls mentioning Harambe. The zoo resumed its account two months later. As noted by Chris Rosales, "While the gorilla's death is tragic, the culture that has spawned around it is quite comedic." A self-described underground culture collective known as Otaku Gang released a computer parody fighting game known as Harambe vs. Capcom, with Harambe being able to fight characters from Capcom's Street Fighter franchise. American rappers Young Thug and Dumbfoundead each released songs entitled "Harambe". The former did so on his 2016 album Jeffery, each track of which is named after one of his "idols", although the lyrics do not reference the gorilla; the latter likens the fate of the ape to gang violence and police brutality. Canadian dubstep producer Excision included a song titled "Harambe" on his 2016 album Virus. On June 16, 2017, satire news site The Onion featured a parody article of professional wrestler Big Show being killed by WWE after a seven-year-old boy wandered into his fight cage. On March 30, 2019, Elon Musk released a two-minute rap song titled "RIP Harambe" onto his SoundCloud. The track was performed by Yung Jake, written by him and Caroline Polachek. Rolling Stone magazine called the track "a bouncy tribute to Harambe". On October 18, 2021, the 7-foot-tall (2.1 m) bronze statue Harambe was placed in Bowling Green Park in New York City, facing the Charging Bull statue, to promote Sapien Network. The statue of Harambe facing the bull, whose feet were surrounded by 10,000 bananas, was a statement about wealth disparity. 
In October 2023, a new pedestrian bridge in Mauldin, South Carolina, was listed as "Harambe Memorial Bridge" on Google Maps before the bridge had been officially named. The city was petitioned to keep the proposed name, but in February 2024 it was officially named the Mauldin Gateway Bridge. Later developments In September 2017, the zoo added Mshindi, a 29-year-old male western lowland gorilla transferred from the Louisville Zoo. He joined females Chewie, 21, and Mara, 22, who were present on the day of the killing. At the same time, the zoo created a new indoor habitat where the public can view the gorillas year-round from behind safety glass. On the 2023 World Gorilla Day (September 24), the feature-length documentary Harambe was released. The film is critical of Harambe's killing. It shows new photographs and video footage from the day, and claims new evidence that Harambe was trying to return the boy to his parents. See also References
========================================
[SOURCE: https://en.wikipedia.org/wiki/Al-Khayma] | [TOKENS: 549]
Contents Al-Khayma Al-Khayma (Arabic: الخيمة) was a Palestinian Arab village in the Ramle Subdistrict of Mandatory Palestine. It was depopulated during the 1948 Arab–Israeli War on July 9, 1948, by the Givati Brigade of Operation An-Far. It was located 18.5 km south of Ramla. History In 1863, Victor Guérin found that it had two hundred and fifty inhabitants. In 1882, the PEF's Survey of Western Palestine noted it as principally an adobe village on low ground, with a well to the east. In the 1922 census of Palestine, conducted by the British Mandate authorities, Khaimeh had a population of 132 Muslims, increasing in the 1931 census to 141 Muslims, in 30 houses. In the 1945 statistics, the village had a population of 190, all Muslim, and the total land area was 5,150 dunums. Of this, 4 dunams were irrigated or used for plantations, 5,007 were used for cereals, while 9 dunams were classified as built-up urban areas. Morris lists both the date and reason for depopulation as "not known". However, he also notes it in connection with Operation An-Far, in mid July 1948. Following the 1948 war, the area was incorporated into the State of Israel, and in August 1948 al-Khayma was one of 21 Palestinian villages whose land was proposed for resettlement with an Israeli village named Revadim. In November 1948, the proposal to establish Revadim on al-Khayma's land was passed. Revadim was eventually established close to village land, according to Morris; however, according to Khalidi, Revadim is located north of al-Khayma, on the land of the depopulated Palestinian village of al-Mukhayzin. In 1992 the village site was described: "All that remains of the village are three mounds to the east, west, and south of the site that contain the remnants of houses. A girder protrudes from the eastern mound and there is a large, deserted well at the mound's centre. 
A large artificial pond lies about 100 m northeast of the site, and there is a monument next to a well about 0.5 km to the north. An inscription on the monument reads: To the Memory of the Members of Kibbutz Revadim, who Settled on the Land in 1948." References Bibliography External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Chemical_compound] | [TOKENS: 2343]
Contents Chemical compound A chemical compound is a chemical substance composed of many identical molecules (or molecular entities) containing atoms from more than one chemical element held together by chemical bonds. A molecule consisting of atoms of only one element is therefore not a compound. A compound can be transformed into a different substance by a chemical reaction, which may involve interactions with other substances. In this process, bonds between atoms may be broken or new bonds formed or both. There are four major types of compounds, distinguished by how the constituent atoms are bonded together. Molecular compounds are held together by covalent bonds, ionic compounds are held together by ionic bonds, intermetallic compounds are held together by metallic bonds, and coordination complexes are held together by coordinate covalent bonds. Non-stoichiometric compounds form a disputed marginal case. A chemical formula specifies the number of atoms of each element in a compound molecule, using the standard chemical symbols with numerical subscripts. Many chemical compounds have a unique CAS number identifier assigned by the Chemical Abstracts Service. Globally, more than 350,000 chemical compounds (including mixtures of chemicals) have been registered for production and use. History of the concept The term "compound"—with a meaning similar to the modern—has been used at least since 1661 when Robert Boyle's The Sceptical Chymist was published. In this book, Boyle variously used the terms "compound", "compounded body", "perfectly mixt body", and "concrete". "Perfectly mixt bodies" included for example gold, lead, mercury, and wine. While the distinction between compound and mixture is not so clear, the distinction between element and compound is a central theme. Quicksilver ... with Aqua fortis will be brought into a ... white Powder ... with Sulphur it will compose a blood-red and volatile Cinaber. 
And yet out of all these exotick Compounds, we may recover the very same running Mercury. Boyle used the concept of "corpuscles"—or "atomes", as he also called them—to explain how a limited number of elements could combine into a vast number of compounds: If we assigne to the Corpuscles, whereof each Element consists, a peculiar size and shape ... such ... Corpuscles may be mingled in such various Proportions, and ... connected so many ... wayes, that an almost incredible number of ... Concretes may be compos’d of them. In his Logick, published in 1724, the English minister and logician Isaac Watts gave an early definition of chemical element, and contrasted element with chemical compound in clear, modern terms. Among Substances, some are called Simple, some are Compound ... Simple Substances ... are usually called Elements, of which all other Bodies are compounded: Elements are such Substances as cannot be resolved, or reduced, into two or more Substances of different Kinds. ... Followers of Aristotle made Fire, Air, Earth and Water to be the four Elements, of which all earthly Things were compounded; and they suppos'd the Heavens to be a Quintessence, or fifth sort of Body, distinct from all these : But, since experimental Philosophy ... have been better understood, this Doctrine has been abundantly refuted. The Chymists make Spirit, Salt, Sulphur, Water and Earth to be their five Elements, because they can reduce all terrestrial Things to these five : This seems to come nearer the Truth; tho' they are not all agreed ... Compound Substances are made up of two or more simple Substances ... So a Needle is simple Body, being made only of Steel; but a Sword or a Knife is a compound because its ... Handle is made of Materials different from the Blade. 
Definitions Any substance consisting of two or more different types of atoms (chemical elements) in a fixed stoichiometric proportion can be termed a chemical compound; the concept is most readily understood when considering pure chemical substances. It follows from their being composed of fixed proportions of two or more types of atoms that chemical compounds can be converted, via chemical reaction, into compounds or substances each having fewer atoms. A chemical formula is a way of expressing information about the proportions of atoms that constitute a particular chemical compound, using chemical symbols for the chemical elements, and subscripts to indicate the number of atoms involved. For example, water is composed of two hydrogen atoms bonded to one oxygen atom: the chemical formula is H2O. In the case of non-stoichiometric compounds, the proportions may be reproducible with regard to their preparation, and give fixed proportions of their component elements, but proportions that are not integral [e.g., for palladium hydride, PdHx (0.02 < x < 0.58)]. Chemical compounds have a unique and defined chemical structure held together in a defined spatial arrangement by chemical bonds. Chemical compounds can be molecular compounds held together by covalent bonds, salts held together by ionic bonds, intermetallic compounds held together by metallic bonds, or the subset of chemical complexes that are held together by coordinate covalent bonds. Pure chemical elements are generally not considered chemical compounds, failing the two or more atom requirement, though they often consist of molecules composed of multiple atoms (such as in the diatomic molecule H2, or the polyatomic molecule S8, etc.). Many chemical compounds have a unique numerical identifier assigned by the Chemical Abstracts Service (CAS): its CAS number. 
There is varying and sometimes inconsistent nomenclature differentiating substances, which include truly non-stoichiometric examples, from chemical compounds, which require the fixed ratios. Many solid chemical substances—for example many silicate minerals—are chemical substances, but do not have simple formulae reflecting chemical bonding of elements to one another in fixed ratios; even so, these crystalline substances are often called "non-stoichiometric compounds". It may be argued that they are related to, rather than being, chemical compounds, insofar as the variability in their compositions is often due to either the presence of foreign elements trapped within the crystal structure of an otherwise known true chemical compound, or due to perturbations in structure relative to the known compound that arise because of an excess or deficit of the constituent elements at places in its structure; such non-stoichiometric substances form most of the crust and mantle of the Earth. Other compounds regarded as chemically identical may have varying amounts of heavy or light isotopes of the constituent elements, which changes the ratio of elements by mass slightly. Types A molecule is an electrically neutral group of two or more atoms held together by chemical bonds. A molecule may be homonuclear, that is, it consists of atoms of one chemical element, as with two atoms in the oxygen molecule (O2); or it may be heteronuclear, a chemical compound composed of more than one element, as with water (two hydrogen atoms and one oxygen atom; H2O). A molecule is the smallest unit of a substance that still carries all the physical and chemical properties of that substance. An ionic compound is a chemical compound composed of ions held together by electrostatic forces; this is termed ionic bonding. The compound is neutral overall, but consists of positively charged ions, called cations, and negatively charged ions, called anions. 
These can be simple ions, such as sodium (Na+) and chloride (Cl−) in sodium chloride, or polyatomic species, such as the ammonium (NH4+) and carbonate (CO3(2−)) ions in ammonium carbonate. Individual ions within an ionic compound usually have multiple nearest neighbours, so they are not considered to be part of molecules, but instead part of a continuous three-dimensional network, usually in a crystalline structure. Ionic compounds containing basic ions, such as hydroxide (OH−) or oxide (O2−), are classified as bases. Ionic compounds without these ions are also known as salts and can be formed by acid–base reactions. Ionic compounds can also be produced from their constituent ions by evaporation of their solvent, precipitation, freezing, a solid-state reaction, or the electron-transfer reaction of reactive metals with reactive non-metals, such as halogen gases. Ionic compounds typically have high melting and boiling points, and are hard and brittle. As solids they are almost always electrically insulating, but when melted or dissolved they become highly conductive, because the ions are mobilized.

An intermetallic compound is a type of metallic alloy that forms an ordered solid-state compound between two or more metallic elements. Intermetallics are generally hard and brittle, with good high-temperature mechanical properties. They can be classified as stoichiometric or non-stoichiometric intermetallic compounds.

A coordination complex consists of a central atom or ion, which is usually metallic and is called the coordination centre, and a surrounding array of bound molecules or ions, which are in turn known as ligands or complexing agents. Many metal-containing compounds, especially those of transition metals, are coordination complexes.

Bonding and forces

Compounds are held together through a variety of different types of bonding and forces. The type of bonding in a compound depends on the types of elements present.
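The statement that an ionic compound is neutral overall can be checked with simple arithmetic: in one formula unit, the cation charges and anion charges must sum to zero. A minimal sketch (the helper name and data shapes are illustrative, not from the article):

```python
def is_charge_neutral(ions) -> bool:
    """Check charge balance for one formula unit.

    ions: list of (charge, count) pairs, one pair per ion species.
    A neutral formula unit has total charge exactly zero.
    """
    return sum(charge * count for charge, count in ions) == 0

# Ammonium carbonate, (NH4)2CO3: two NH4+ cations, one CO3(2-) anion
print(is_charge_neutral([(+1, 2), (-2, 1)]))  # True

# Sodium chloride, NaCl: one Na+ and one Cl-
print(is_charge_neutral([(+1, 1), (-1, 1)]))  # True
```

The same balance explains why ammonium carbonate needs two ammonium ions per carbonate ion: a single +1 cation cannot neutralize a −2 anion.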
London dispersion forces, a component of the van der Waals force, are the weakest of all intermolecular forces. They are temporary attractive forces that arise when the electrons in two adjacent atoms are positioned so that they create a temporary dipole. London dispersion forces can create van der Waals molecules; they are also responsible for condensing nonpolar substances into liquids and, at sufficiently low temperatures, freezing them into solids.

A covalent bond, also known as a molecular bond, involves the sharing of electrons between two atoms. This type of bond occurs primarily between elements that fall close to each other on the periodic table, though it is also observed between some metals and nonmetals, a consequence of how the bond forms: elements close to each other on the periodic table tend to have similar electronegativities, meaning they have a similar affinity for electrons. Since neither element has a markedly stronger tendency to donate or gain electrons, the atoms share electrons so that both attain a more stable octet.

Ionic bonding occurs when valence electrons are completely transferred between elements. In contrast to covalent bonding, this chemical bond creates two oppositely charged ions. The metal in an ionic bond usually loses its valence electrons, becoming a positively charged cation, while the nonmetal gains those electrons, becoming a negatively charged anion. As outlined, ionic bonds occur between an electron donor, usually a metal, and an electron acceptor, which tends to be a nonmetal.

Hydrogen bonding occurs when a hydrogen atom bonded to an electronegative atom forms an electrostatic connection with another electronegative atom through interacting dipoles or charges.

Reactions

A compound can be converted to a different chemical composition by interaction with a second chemical compound via a chemical reaction.
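The covalent-versus-ionic distinction above is often taught with a textbook heuristic (not stated in the article): classify a bond by the difference in Pauling electronegativity between its two elements. Cutoffs vary by textbook; 0.4 and 1.7 are common choices, and the electronegativity values below are standard Pauling-scale figures.

```python
# Pauling electronegativities for a few elements (standard tabulated values)
PAULING = {"H": 2.20, "C": 2.55, "O": 3.44, "Na": 0.93, "Cl": 3.16}

def bond_type(a: str, b: str) -> str:
    """Rough bond classification from electronegativity difference.

    Common textbook cutoffs: < 0.4 nonpolar covalent,
    0.4-1.7 polar covalent, > 1.7 ionic.
    """
    diff = abs(PAULING[a] - PAULING[b])
    if diff < 0.4:
        return "nonpolar covalent"
    if diff < 1.7:
        return "polar covalent"
    return "ionic"

print(bond_type("Na", "Cl"))  # large difference -> ionic
print(bond_type("O", "H"))    # moderate difference -> polar covalent
print(bond_type("C", "H"))    # small difference -> nonpolar covalent
```

This captures the text's point directly: similar electronegativities (elements near each other on the periodic table) give electron sharing, while a large mismatch, as between a metal and a halogen, gives complete electron transfer.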
In this process, bonds between atoms are broken in both of the interacting compounds, and new bonds are formed so that new associations are made between atoms. Schematically, this reaction could be described as AB + CD → AD + CB, where A, B, C, and D are each unique atoms and AB, AD, CD, and CB are each unique compounds.
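The schematic AB + CD → AD + CB makes the atoms change partners while conserving every atom. A tiny sketch (illustrative only; compounds are modelled as strings of single-letter atoms) checks that the atom counts on both sides match:

```python
from collections import Counter

def atom_count(compounds) -> Counter:
    """Total atom counts across a list of compounds (single-letter atoms)."""
    return Counter("".join(compounds))

# Double-displacement schematic: AB + CD -> AD + CB
reactants = ["AB", "CD"]
products = ["AD", "CB"]

# Every atom on the left appears exactly once on the right
print(atom_count(reactants) == atom_count(products))  # True
```

Only the pairings change, not the inventory of atoms, which is the essence of a chemical reaction as described above.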